
Brain Training: Placebo effects, publication bias, small sample sizes… and what we do next?

Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.

The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence, and the findings in this area have been notoriously inconsistent.


(Commercial brain training programmes are available to both academics and the general public)

I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though the two tasks are so similar – the training can be very content-specific (Harrison et al. 2013; see also Ericsson et al. 1980). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but at very different tasks too. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed; here are two of the most important:

  1. Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so it is pretty obvious that they are controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
  2. Sample size: The smaller the study (i.e. the fewer the participants), the more likely it is to show wider transfer effects. If a study includes lots of participants, then it is far more likely to estimate the true size of the transfer effect accurately – and that true effect is very small.

When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, this is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from a publication bias – it is difficult to publish null effects, and easier to publish positive results. This means that there are probably quite a few studies showing no training effects sitting unpublished in researchers' drawers. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
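To make the d statistic concrete, here is a minimal sketch of how Cohen's d is computed for two independent groups. The scores below are invented purely for illustration – they are not data from any of the studies cited here.

```python
# Illustrative only: Cohen's d for two independent groups,
# computed on made-up scores (not data from any study cited above).

import statistics

def cohens_d(group_a, group_b):
    """Difference in group means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical post-test scores: trained group vs. active controls.
trained = [102, 98, 105, 101, 99, 103, 100, 104]
controls = [100, 97, 103, 99, 98, 101, 99, 102]
print(round(cohens_d(trained, controls), 2))
```

A d of around 0.25 means the trained group's mean sits only a quarter of a standard deviation above the controls' mean – two distributions that overlap very heavily.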

So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly hard to credit. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double-blind. That is, the person rating the symptoms knows whether or not the individual (usually a child) received the training. This will produce a substantial placebo effect, which likely explains these supposedly enhanced benefits.

Where do we go from here? As a field we need to ensure that future studies have active control groups, double blinding and that we include enough participants to show the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.

Author information:

Dr. Duncan Astle, Medical Research Council Cognition and Brain Sciences Unit, Cambridge.

https://www.mrc-cbu.cam.ac.uk/people/duncan.astle/

References:

Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.

Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.

Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.

Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.

Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological Science 24:2409-2419.

Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.

Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.

Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child Neuropsychology 20:752-765.

Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American Journal of Psychiatry 170:275-289.

Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain Injury 21:21-29.

Does working memory training change neurophysiology in childhood?

The short answer to that question is ‘yes’.

We have known for some time that training particular cognitive skills, like working memory, can produce improvements in cognition. These improvements transfer to other untrained tasks, provided that they are similarly structured. However, we know very little about how these kinds of intensive cognitive training programmes change children's brains.

The study has been under embargo whilst it awaits publication in the Journal of Neuroscience, but the embargo has just been lifted, and we can now tell you all about it (the paper itself should be published very soon – open access, naturally). We used magnetoencephalography – a technique that measures the magnetic fields produced by the brain's electrical activity – to explore patterns of brain activity as children rested in the scanner with their eyes closed. We repeated this procedure before and after the children underwent working memory training. Importantly, only half of the children underwent an intensive version of the training, with the other half doing a low-intensity version. This latter group acted as a control, and the children were randomly allocated to the high or low intensity conditions.

After the training, children’s working memory performance improved. We used standardised assessments of working memory to show this. Importantly, these improvements were specific to the group of children who had undertaken the intensive training programme, with the control group showing little or no improvement. This pattern could not have resulted from us expecting that the children in the intensive group ought to show bigger improvements than the controls, because the researcher doing the assessments did not know which group the children were assigned to.

The magnetoencephalography data allowed us to explore whether there were any significant changes in children's brains, and whether these changes mirrored the improvements in performance that we observed. We used the spontaneous electrical activity in the children's brains to explore network connectivity – that is, how different brain areas are coordinated. After the training, connectivity within networks involved in attentional control was significantly enhanced. Furthermore, the bigger the change in connectivity, the bigger the improvement in the child's working memory.

We have a lot more data on these children, which we are slowly crunching our way through. So there will be more to come!

Tools of the Mind: An Effective Intervention for High Poverty Schools?

Early childhood interventions are believed to be a key step towards ending the cycle of poverty. This belief is based upon a large evidence base demonstrating that good childhood development (measured in various ways) is highly predictive of a wide variety of positive outcomes later in life. If we give children the emotional, social and cognitive support they need in their early years, it is hoped that this will produce a lasting improvement that persists for the rest of their lives and halts the transmission of social problems from one generation to the next.

However, high-quality intervention studies are rare, making it difficult to know which type of approach will work (if any). The lack of research is unsurprising, given that this kind of study is very hard to do well. Such studies usually require a huge investment of time, money and effort. Furthermore, they involve a number of complex factors that might affect both the validity and applicability of the results, such as individual teaching style and child demographics.

This is what makes the recent study into the effectiveness of Tools of the Mind [1], an early intervention programme, by Drs Blair and Raver of New York University [2], a particularly interesting piece of research. They argue that children receiving the Tools of the Mind program showed a significant improvement in learning in comparison to children in typical kindergarten classrooms. Importantly, the authors claim that some of these benefits continue into the first grade, after the program had finished, and that many of these effects were stronger in high-poverty schools.

In this post, I will be putting Tools of the Mind to the test, loosely using Dorothy Bishop’s framework for identifying red flags in interventions (It’s a good read. For those interested see [3]). So, what is Tools of the Mind? Is it credible? How solid is the scientific evidence behind it? Does it really work and, if so, are the effects worth the effort and cost involved?

What is Tools of the Mind?

Tools of the Mind is an educational program, developed over 18 years and now used in prekindergartens and kindergartens across the USA and Canada. I will be focussing on the kindergarten program here (that's 'reception' for the British). It is based on the Vygotskian approach: the idea that it is important to teach children to master 'mental tools', such as attention and emotion-regulation skills, that promote intentional and self-regulated learning. This is expected to develop their executive functions and their social and emotional competence at a greater rate. In practice, it involves 60 or more Vygotskian-based activities, including ones that require children to create their own learning plans, reflect on their learning and work in pairs, with a strong focus on intentional make-believe play tied to stories and literature. Tools of the Mind was named an exemplary education program by the International Bureau of Education at UNESCO in 2001. Maybe this sounds marvellous, but does it actually work?

Who is behind the program and what are their credentials?

The program has largely been developed by Drs Bodrova and Leong. Encouragingly, both appear to be experts in the field. Dr Elena Bodrova was, until recently, Principal Researcher at Mid-continent Research for Education and Learning, a non-profit, non-partisan education research and development corporation, and Dr Deborah Leong is Professor Emerita of Psychology at Metropolitan State College of Denver. They have both written a number of papers, articles and books and, to the best of my knowledge, have each authored 10 papers in peer-reviewed journals on topics related to the Tools of the Mind program. I can't find any red flags here!

Is there credible science behind the program?

The program claims to improve academic achievement and socio-emotional skills through improving executive functions and, in particular, the ability to self-regulate. Executive functions encompass many different processes involved in the management of cognition and actions: the ability to pay attention to and remember relevant details, to plan, to solve problems strategically and to regulate emotions and behaviour successfully. If you think of your brain as an orchestra, executive functions would be the conductor, organizing many different instruments to play together as a coherent whole, bringing some in and fading others out, and changing the pace and intensity of the music. Indeed, they seem to be involved in pretty much every higher-order cognitive process, which calls into question whether executive function is too vague and general a concept to be of any use; it is sometimes seen as a controversial topic amongst cognitive scientists.

Putting this issue aside for now, performance on tasks purporting to tap executive functions is an excellent predictor of educational progress. More and more research points to the development of executive functions as one of the most important domains in early childhood for positive short- and long-term outcomes. The argument goes something like this: in the same way that a bad conductor is likely to produce dreadful music no matter how excellent the individual musicians are, poor development of executive function is a very strong predictor of poor social and academic outcomes even if the rest of the cognitive system is well developed. Great claims are made about the extent to which these skills can be boosted by intervention. When you combine this with the fact that growing up in a deprived environment is frequently found to have a profound negative effect on executive functions, they seem a credible and sensible target for a program like Tools of the Mind.

The approach used to develop executive functions is based on the relatively well known and researched Vygotskian Approach. Unfortunately, I don’t have time and space to review this here but the Tools of the Mind website provides a lot of information on the science behind this.

Is there evidence that the intervention is effective from controlled trials?

Blair and Raver provide the first cluster-randomized controlled study of the effectiveness of the Tools of the Mind curriculum in comparison to current practices in kindergartens. They looked at its effects on a range of different cognitive and academic skills. During the two-year study, an impressive 795 children took part, from 79 different classrooms and 29 schools. The schools were randomly assigned either to the control group, who simply continued business as usual, or to the treatment group, who received training to implement the Tools of the Mind programme in their schools. Children were tested in the first term of kindergarten, with follow-up tests at a mean of 5 months and 1 year later. It is questionable whether this can be considered an active control group, given that teachers receiving the new and novel Tools training may have expected better results than those who continued as before, but overall I think this is quite a good study design.

They found that children in the Tools of the Mind classrooms were significantly better at keeping information in their working memory (effect size, ES = 0.14), maintaining attention in the face of distractions (ES = 0.12) and processing information (ES = 0.08) at the first follow-up. This was accompanied by a greater rate of academic improvement in mathematics (ES = 0.13), vocabulary (ES not given) and reading (ES = 0.07) in comparison to the control classrooms (although this was only conventionally significant for mathematics). The faster rate of improvement in reading (Figure 1, ES = 0.14) and vocabulary (ES = 0.10) extended into the first grade, becoming significant after the program had finished, which suggests that it has long-term effects.

Most of the effects seen were stronger in high-poverty schools where more than 75% of the pupils are eligible for free or reduced-price lunch. In particular, the effect of Tools of the Mind in comparison to the control group was significant in tests that measured the child’s ability to maintain attention despite emotionally arousing distractors (ES = 0.82), fluid IQ (the ability to solve problems in novel situations, ES = 0.46) and vocabulary (ES = 0.43) in high poverty schools.

Whilst this is the only study of the kindergarten program, it should be noted that both the National Institute for Early Education Research (NIEER) and the Peabody Research Institute (PRI) have been investigating the pre-kindergarten program. Despite the fact that both studies included only high-poverty classrooms (around 80%+ of children receiving free or reduced-price lunch), their results disagree substantially. In one study, NIEER compared 88 children in Tools classrooms with 122 children receiving the district's balanced literacy curriculum. They found that by the end of the first year, children in the Tools classrooms had significantly better classroom experiences, far fewer behavioural problems (an indicator of self-regulation) and some improvement in language performance (although not significant after correcting for multiple comparisons) in comparison to children in the control group. They found no improvement in reading or mathematics. In a subsequent study of the same program but with a slightly different sample (85 children in Tools, 62 in control classrooms), they found that children in the Tools classrooms performed significantly better on tests of executive functions (a Stroop battery and flanker tasks) than children in the control classrooms, and that this difference was greater the more taxing the task.

In contrast, PRI have been conducting a much larger study comparing 498 pupils receiving the Tools curriculum with 379 pupils receiving a range of curricula that would normally be found in pre-school classrooms. They found that the Tools curriculum had no significant effect on direct assessments of achievement or executive function or any teacher ratings of language, executive function, or social behaviour by the end of pre-kindergarten in comparison to the control classrooms. Remarkably, when the children were assessed a year later at the end of kindergarten, they found that the comparison children that had received the normal curricula actually had significantly greater gains in achievement and executive function composite scores and on many of their subtests in comparison to those that received Tools of the Mind!

Are these effects worth the effort, time and cost involved?

This is a subjective question, and not an easy one to answer. Note that the Tools curriculum can't simply be taken off the shelf and implemented as other curricula might be: it requires about two years of teacher training, including in-classroom coaching, and a shift in the teacher's role in the classroom. However, Blair and Raver appear to believe that 'teachers received typical levels of training and implemented the curriculum with materials that are well within the budget of the average kindergarten classroom'. Information about the actual cost is difficult to find, but in general sources place it at £5,000–£7,000 per classroom, which seems reasonable [4, 5], and the materials required appear to be simple, inexpensive and readily available.

However, I think the key to answering this question is to consider the effect sizes produced by the program – the difference between the mean scores of the treatment and control groups, expressed in standard deviations. In general, the effect sizes found for the kindergarten version of Tools of the Mind in Blair and Raver's study are relatively low: they all hover around the 0.1 mark, 5 months into the program. This is around half the average effect size for childhood interventions reported by Duncan et al. in 2006 [6]. In addition, whilst the effects in high-poverty schools look much more promising, it should be noted that the confidence intervals for these effects are also much larger.

To get a feel for what this means, we can make a rough estimate of how this effect size translates into months of development. Bloom et al. [7] found that children progress with an effect size of 1.52 in reading and 1.14 in maths over the first year of schooling. Averaging these and dividing by 12 gives us a rough expected-progress estimate of about 0.1 per month. This means that many of the effects of Tools of the Mind put children only around one month ahead of children not receiving the program. However, when considering vocabulary in high-poverty schools, for example, with an effect size of 0.46, the program offers nearly 5 months' advantage. Of course, this analysis is something of an oversimplification, but it provides a little context as to whether these effect sizes are meaningful.
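The back-of-the-envelope conversion above can be written out explicitly. This is only a rough sketch using the figures quoted in the text (Bloom et al.'s first-year gains and Blair and Raver's effect sizes), not a substitute for a proper benchmarking analysis:

```python
# Rough conversion from effect size to months of typical school progress.
# Inputs: Bloom et al.'s first-year gains in reading (1.52 SD) and maths
# (1.14 SD), averaged and spread over 12 months.

first_year_gain = (1.52 + 1.14) / 2    # average first-year gain, in SD units
gain_per_month = first_year_gain / 12  # roughly 0.11 SD of progress per month

def months_ahead(effect_size):
    """How many months of typical progress an effect size corresponds to."""
    return effect_size / gain_per_month

print(round(months_ahead(0.13), 1))  # maths ES of 0.13: about 1.2 months
print(round(months_ahead(0.46), 1))  # vocabulary ES of 0.46: about 4.2 months
```

Using the unrounded monthly rate gives slightly over 4 months for the 0.46 effect; the rounded 0.1-per-month figure used in the text gives around 4.6, hence "nearly 5 months".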

How do these effect sizes compare to the differences between children from disadvantaged and advantaged backgrounds? It is difficult to put numbers on these differences, as researchers use different measures of what counts as 'advantaged' and 'disadvantaged', so to get an idea I'll compare the effect sizes to a few studies that found differences within the usual range. I'll steer clear of the issues in measuring something as vague as 'executive functions' and concentrate on one aspect of them: working memory. Noble et al. [8] found a difference of 0.31 SD between high- and low-SES children in the first year of school. Given that Tools of the Mind was found to improve working memory by 0.14, this would be the equivalent of closing about half the income-related gap. Vocabulary also shows promising results. A study comparing children receiving free school meals (FSM) with those that didn't at the start of school found a gap of 0.62 SD between the groups [9]. The 0.46 ES for vocabulary in high-poverty schools would close this gap by about three quarters. Unfortunately, the authors don't report what this figure was in high-poverty schools one year later, which would be a particularly interesting result. Reading and maths are less promising, however. The same study found gaps of 0.69 SD for reading and 0.68 SD for mathematics between children receiving FSM and those that didn't. The effect sizes of 0.13 in maths and 0.14 in reading won't do much to close these gaps, which is a shame, because arguably these are the more valuable educational skills.
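The gap-closing comparisons above boil down to a simple ratio: the intervention's effect size divided by the size of the attainment gap, both in standard deviation units. A quick sketch using the figures quoted in the text:

```python
# Share of an attainment gap closed by an intervention, where both the
# intervention effect and the gap are expressed in standard deviation units.

def share_of_gap_closed(effect_size, gap):
    return effect_size / gap

# Working memory: 0.14 effect vs. the 0.31 SD high/low-SES gap (Noble et al.)
print(round(share_of_gap_closed(0.14, 0.31), 2))  # 0.45, i.e. roughly half
# Vocabulary in high-poverty schools: 0.46 effect vs. the 0.62 SD FSM gap
print(round(share_of_gap_closed(0.46, 0.62), 2))  # 0.74, roughly three quarters
# Maths: 0.13 effect vs. the 0.68 SD FSM gap
print(round(share_of_gap_closed(0.13, 0.68), 2))  # 0.19, under a fifth
```

This is of course the same oversimplification as before – it treats the gap and the intervention effect as directly comparable, measured on the same scale and in similar populations.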

In conclusion, Tools of the Mind appears to be a relatively well-grounded intervention program. A randomized controlled study showed that Tools of the Mind improved children's executive functions and academic skills in comparison to normal kindergarten classrooms. Although it is not one of the most effective interventions available, the costs, effort and time required seem reasonable, and the results suggest that some of the benefits of the program are long term. It holds particular potential for high-poverty schools, where the effects of the program appear to go some way towards closing income-related achievement gaps. However, it is questionable whether the control group used can be considered a full active control, and it would be better to see these results replicated in a study that used a similarly new and novel curriculum, or just part of the Tools curriculum, as the control condition. In addition, results from studies of the pre-kindergarten program by two highly distinguished research bodies are inconclusive, with a large study indicating that children receiving Tools of the Mind were actually at a disadvantage, in comparison to normal curricula, at the start of school. With this in mind, and given that we only have one study investigating the kindergarten program, I would suggest that more research needs to be done to establish the overall effectiveness of this curriculum (and ideally to identify which aspects of Tools have the greatest effect) before we can advocate it as an effective intervention for kindergarten classes in high-poverty schools.

[1] http://www.toolsofthemind.org/

[2] The full paper by Blair and Raver can be found here: http://dx.plos.org/10.1371/journal.pone.0112393

[3] http://deevybee.blogspot.co.uk/2012/02/neuroscientific-interventions-for.html

[4] http://www.washingtonpost.com/local/education/dc-school-reform-targets-early-lessons/2011/11/04/gIQAGZ2VCN_story.html

[5] http://economicdiscipleship.com/2010/12/23/profile-tools-of-the-mind/

[6] https://socialinnovation.usc.edu/files/2014/03/Duncan-Two-Policies-to-Boost-School-Readiness.pdf

[7] http://www.mdrc.org/sites/default/files/full_473.pdf

[8] http://onlinelibrary.wiley.com/doi/10.1111/j.1467-7687.2005.00394.x/pdf

[9] http://www.scotland.gov.uk/Publications/2005/02/20634/51605