
Catching up with the Internet Era: Online data collection for researchers

We humans spend a great deal of time connected to the internet, and this is especially true for younger people, who are growing up surrounded by this technology. You can see this huge change over time in the graph from Our World in Data below!

Source: http://data.worldbank.org/

Increasingly, researchers and companies are leveraging this remote access to behaviour to answer questions about how humans behave. Companies have been collecting ‘user data’ from online platforms for years, using the information inferred about people to improve user experience and, in some cases, to sell more products to the right people. The amount of behavioural data we are able to collect is expanding exponentially, and so are its quality and modality, as people connect ever more devices (like activity monitors, clocks and fridges) to the internet. Wearable sensors are becoming especially common, and the data they generate is often stored using internet-based services.

 

(Infographic: “The Predicted Wearables Boom Is All About The Wrist”, taken from Statista.)

Psychology and cognitive science are starting to catch up with this trend, as it offers the ability to carry out controlled experiments on a much larger scale. This creates the opportunity to characterise subtle differences that would be lost in the noise of small samples tested in a lab environment.

However, for many, the task of running an online experiment is daunting; there are so many choices, and dealing with building, hosting and data processing can be tricky!

Web Browsers

A good starting point, and often the most straightforward, is building experiments to work in a web browser. The primary advantage of this is that you can run experiments on the vast majority of computers, and even mobile devices, with no installation overhead. There are some limitations though:

Compatibility:

Internet Explorer - sigh

With multiple different web browsers, operating systems, and devices, the possible combinations number in the thousands. This can lead to unexpected bugs and errors in your experiment. A workaround is to restrict access to a few tested browsers or devices (the sketch below shows one way to do this) – but this trades off against how many participants you can reach.
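As a rough illustration only (this is not code from any of the tools mentioned in this post), one common approach is to inspect the browser’s user-agent string and politely turn away combinations you have not tested. The particular substrings and the message below are just examples, and user-agent sniffing is never completely reliable:

```typescript
// Illustrative sketch: restrict an online experiment to tested desktop browsers.
// The checks below are examples only - adapt them to the devices you have tested.
function isSupportedBrowser(): boolean {
  const ua = navigator.userAgent;
  const isMobile = /Android|iPhone|iPad/i.test(ua); // exclude phones and tablets
  const isLegacyIE = /MSIE|Trident/.test(ua);       // exclude old Internet Explorer
  return !isMobile && !isLegacyIE;
}

if (!isSupportedBrowser()) {
  // Show a friendly message instead of letting the experiment fail silently.
  document.body.textContent =
    "Sorry – this study needs a recent desktop browser such as Chrome or Firefox.";
}
```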

Accuracy:

Web browsers were not designed to run reaction time experiments or to present stimuli with millisecond precision. Despite this, some research has shown that browsers can achieve equivalent precision for both reaction time measurement and stimulus presentation.

If you are still concerned, you can make use of WebGL, a web graphics API, which allows presentation timing comparable to native programs and can even take advantage of the computer’s graphics card – although this will still be limited by the operating system and hardware of the user!
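Whatever graphics backend you use, browser timing usually follows the same pattern: stimulus onset is tied to the display refresh with requestAnimationFrame, and responses are timestamped with performance.now(). Here is a minimal, illustrative sketch – the element id and the response handling are made up for the example:

```typescript
// Minimal illustration of browser timing for a reaction-time trial.
// Assumes the page contains a (hidden) element with id "stimulus".
const stimulus = document.getElementById("stimulus")!;

let onsetTime = 0;

function showStimulus(): void {
  // The callback runs just before the next repaint; its timestamp is on the
  // same high-resolution clock as performance.now().
  requestAnimationFrame((frameTime) => {
    stimulus.style.visibility = "visible";
    onsetTime = frameTime;
  });
}

document.addEventListener("keydown", () => {
  const reactionTime = performance.now() - onsetTime; // in milliseconds
  console.log(`RT: ${reactionTime.toFixed(1)} ms`);
});

showStimulus();
```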

There are a number of tools that can help you with browser experiments. These range from fee-paying services like Gorilla, which handles task building, hosting and data management for you, to fully open-source projects like jsPsych and PsychoPy’s PsychoJS, which handle experiment building and data but not hosting (although there are plans to develop a hosting and data storage solution). All of these offer a graphical user interface, which allows experiments to be built without any prior knowledge of programming!

Unity


One intermediate tool – which we are currently using – is a cross-platform development environment called Unity. Whilst originally intended for creating video games, Unity can be repurposed for creating experimental apps. The big advantage is the ability to build for a vast variety of operating systems and platforms with minimal effort: a Unity project can be built for a web browser, iOS app, Android app, Windows, OSX, Linux… and so on. You can also gain access to sensor information on devices (heart rate monitors, step counters, microphone, camera), to start tapping into the richness of information these devices contain.

The utility of this tool for experimental research is huge, yet it appears to be under-utilised – it has an easy-to-learn interface and requires minimal programming knowledge.

Conclusion

Whilst this post is largely non-instructional, hopefully it has shed some light on the tools you can use to start running research online (without employing an expensive web or app developer), or at least piqued your interest a tiny bit.

If you would like to dive into the murky (but exciting) world of web development, you can also check out a few tips for improving the quality of your online data here.

 

 

This exciting post was written by Alexander Irvine, one of the newest members of our lab. Alex previously worked on developing web-based studies at Oxford before joining the lab and is experienced in an array of programming languages and tools. Check out his personal website if you want to read more in-depth about online data collection.


Are early interventions effective?

Early years interventions can seem a particularly powerful way to forestall developmental difficulties. The wide-ranging evidence that early cognitive and behavioural difficulties can predict lifelong outcomes[i],[ii] makes it seem obvious that if a child is struggling, intervening earlier is better. This has led to considerable interest in intervening within the first few years of a child’s life, and the temptation to seek out earlier indicators of children’s cognitive development and wellbeing.

However, there are several challenges with choosing effective early interventions. First, reliably identifying which children will require support is a substantial challenge. This is particularly evident in the case of language development. Children’s language is a rich area for research on early intervention, for several reasons: it provides a window onto learning at school entry, and weak language at school entry is a risk factor for poor educational, social and emotional outcomes in subsequent years. There is also a clear socio-economic gradient in children’s language abilities, in which children from lower socio-economic backgrounds tend to have weaker language skills than their peers before they start school.


(But is the story really so simple?)

Despite this, children’s language abilities are highly variable and volatile before they are 4-5 years old, which makes the early identification of children who may be in need of intervention non-trivial. This is demonstrated by several studies showing that whilst children’s early vocabulary around 2 years of age can predict their later vocabulary and reading skills at school entry, this predictive power is very small. Only about 10-20%[iii],[iv] of the variation in children’s abilities at school is typically predicted by their early language skills, which indicates that there are multiple other factors that shape children’s outcomes. This also suggests that finding a sufficiently sensitive marker of early difficulties is challenging because of how much a child’s abilities can shift and develop over time.

This relates to the second challenge: early interventions can struggle to have sustained impacts unless intervention is ongoing. A good example of this comes from a recent study by McGillion and colleagues (2017). The researchers were interested in whether a caregiver intervention to promote talking with their 11-month-old child would lead to changes in both parenting approach and children’s language development. Caregivers from a range of socio-economic backgrounds were randomly assigned to one of two conditions: watching a short video about talking with their child, or a control video about dental hygiene. One month later, the caregivers in the language video condition talked more with their children than those in the control video condition. This fed into vocabulary improvements in the low socio-economic group: children in the intervention condition had larger vocabularies at 15 and 18 months compared to children in the control condition. However, at 24 months these benefits had disappeared, and there was no effect of the intervention on children’s vocabulary.

These findings demonstrate several important points about early interventions. Interventions can have a positive impact: in this case a brief, low-intensity video intervention was able to positively change caregiver behaviour, and this may have helped gains in children’s vocabulary a few months later. However, these effects were not sustained over time: they had faded by the time children were 2 years old. This shows how critical longer-term follow-ups are in intervention studies to understand whether benefits are long lasting. Moreover, this result reminds us that without ongoing support it is difficult for early gains in one area of cognition to offset other challenges children might face. The promise of early interventions means there can be an assumption that they will permanently shift a child’s trajectory. In reality this is often not the case, and initial gains may often require ongoing support to have real long-term effects for children.

It is therefore important to strike a balance between the promises and limitations of early interventions. Whilst endeavouring to find the right areas to target is undoubtedly valuable, the challenge of finding reliable indicators of difficulties early in a child’s life might mean that searching for stable predictors at later ages (e.g. from school entry) is a more fruitful approach[v]. In addition, long-term follow-ups to interventions may be key to understanding the extent to which they are effective. The challenge of early interventions is that development is complex and shaped by multiple factors. Working within these constraints may help us better identify and help children in need of ongoing support.

This newest article was contributed by our beloved lab member Erin Hawkins. Erin’s research focuses on understanding the mechanisms underlying developmental difficulties in children, and on interventions to address them.

References:

[i] Caspi et al. (2016). Childhood forecast of a small segment of the population with large economic burden. Nature Human Behaviour, doi:10.1038/s41562-016-0005

[ii] The Allen Report (2011). Early Intervention: The Next Steps. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/284086/early-intervention-next-steps2.pdf

[iii] Duff, Reen, Plunkett, & Nation (2015). Do infant vocabulary skills predict school-age language and literacy outcomes? Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12378.

[iv] McGillion, Pine, Herbert & Matthews. (2017). A randomised controlled trial to test the effect of promoting caregiver contingent talk on language development in infants from diverse socioeconomic status backgrounds. Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12725

[v] Norbury, C. (2015). Editorial: Early intervention in response to language delays – is there a danger of putting too many eggs in the wrong basket? Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12446


The weather and the brain – using new methods to understand developmental disorders

 

The latest article was written by our brilliant lab member Danyal Akarca. It describes some of his MPhil research which aims to explore transient brain networks in individuals with a particular type of genetic mutation. Dan finished his degree in Pre-Clinical Medicine before joining our lab and has since been fascinated by the intersection of genetic disorders and the dynamics of brain networks.

The brain is a complex dynamic system. It can be very difficult to understand how specific differences within that system can be associated with the cognitive and behavioural difficulties that some children experience. This is because even if we group children together on the basis that they all have a particular developmental disorder, that group of children will likely have a heterogeneous aetiology. That is, even though they all fall into the same category, there may be a wide variety of different underlying brain causes. This makes these disorders notoriously difficult to study.

Developmental disorders that have a known genetic cause can be very useful for understanding these brain-cognition relationships, because by definition they all have the same causal mechanism (i.e. the same gene is responsible for the difficulties that each child experiences). We have been studying a language disorder caused by a mutation to a gene called ZDHHC9. Children with this mutation have broader cognitive difficulties and more specific difficulties with speech production, alongside a form of childhood epilepsy called rolandic epilepsy.

In our lab, we have explored how brain structure is organised differently in individuals with this mutation, relative to typically developing controls. Since then our attention has turned to applying new analysis methods to explore differences in dynamic brain function. We have done this by directly recording magnetic fields generated by the activity of neurons, through a device known as a magnetoencephalography (MEG) scanner. The scanner uses magnetic fields generated by the brain to infer electrical activity.

The typical way MEG data are interpreted is by comparing how electrical activity within the brain changes in response to a stimulus. These changes can take many forms, including how well synchronised different brain areas are, or how the size of the magnetic response differs across individuals. However, in our current work, we are trying to explore how the brain configures itself into different networks in a dynamic fashion. This is especially interesting to us, because we think that the ZDHHC9 gene has an impact on the excitability of neurons in particular parts of the brain, specifically in those areas that are associated with language. These changes in network dynamics might be linked to the kinds of cognitive difficulties that these individuals have.

We used an analysis method called “Group Level Exploratory Analysis of Networks” – or GLEAN for short – which was recently developed at the Oxford Centre for Human Brain Activity. The concept behind GLEAN is that the brain changes between different patterns of activation in a fashion that is probabilistic. This is much like the weather – just as the weather can change from day to day in some probabilistic way, so too may the brain change in its activation.
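To make the idea concrete, here is a toy sketch (not GLEAN itself) of what such transition probabilities look like: given a sequence of inferred state labels, one per time point, we can count how often each state tends to follow each other state. The states and the example sequence below are invented purely for illustration.

```typescript
// Illustrative only: estimate a transition-probability matrix from a sequence
// of state labels (0, 1, 2, ...), one label per time point.
function transitionMatrix(states: number[], nStates: number): number[][] {
  const counts = Array.from({ length: nStates }, () => new Array<number>(nStates).fill(0));
  for (let t = 1; t < states.length; t++) {
    counts[states[t - 1]][states[t]] += 1;
  }
  // Normalise each row so it sums to 1: entry (i, j) is the probability of
  // moving from state i to state j at the next time point.
  return counts.map((row) => {
    const total = row.reduce((sum, c) => sum + c, 0);
    return total > 0 ? row.map((c) => c / total) : row;
  });
}

// A made-up sequence over three states ("rain", "cloud", "sun" in the weather analogy).
const stateSequence = [0, 0, 1, 1, 1, 2, 1, 0, 0, 2, 2, 1];
console.log(transitionMatrix(stateSequence, 3));
```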


This analysis method not only allows us to observe which regions of the brain are active when the participants are in the MEG scanner; it also allows us to see the probabilistic way in which these activation patterns change between each other. For example, just as a transition from rain one day to cloudiness the next is more likely than a transition from rain to blistering sun, we find that brain activation patterns can be described in a very similar way over sub-second timescales. We can characterise those dynamic transitions in lots of different ways, such as how long you stay in a specific brain state, or how long it takes to return to a state once you’ve transitioned away. (A more theoretical account of this can be found in another recent blog post in our Methods section – “The resting brain… that never rests”.) We have found that a number of networks differ between individuals with the mutation and our control subjects.
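Continuing the toy example above (again, an illustration rather than GLEAN’s actual code), those two summary statistics can be computed directly from the same kind of state sequence:

```typescript
// Mean number of consecutive time points spent in `state` per visit ("dwell time").
function meanDwellTime(states: number[], state: number): number {
  const visits: number[] = [];
  let run = 0;
  for (const s of states) {
    if (s === state) {
      run += 1;
    } else if (run > 0) {
      visits.push(run);
      run = 0;
    }
  }
  if (run > 0) visits.push(run);
  return visits.length ? visits.reduce((a, b) => a + b, 0) / visits.length : 0;
}

// Mean gap (in time points) between leaving `state` and next re-entering it.
function meanReturnInterval(states: number[], state: number): number {
  const gaps: number[] = [];
  let leftAt: number | null = null;
  for (let t = 1; t < states.length; t++) {
    if (states[t - 1] === state && states[t] !== state) leftAt = t;
    if (states[t] === state && states[t - 1] !== state && leftAt !== null) {
      gaps.push(t - leftAt);
      leftAt = null;
    }
  }
  return gaps.length ? gaps.reduce((a, b) => a + b, 0) / gaps.length : 0;
}

const exampleSequence = [0, 0, 1, 1, 1, 2, 1, 0, 0, 2, 2, 1];
console.log(meanDwellTime(exampleSequence, 1));      // average visit length for state 1
console.log(meanReturnInterval(exampleSequence, 1)); // average time away before returning
```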


(These are two brain networks that show the most differences in activation – largely in the parietal and frontotemporal regions of the brain.)

Interestingly, these networks strongly overlap with areas of the brain that are known to express the gene (we found this out by using data from the Allen Atlas). This is the first time that we know of that researchers have been able to link a particular gene to differences in dynamic electrical brain networks, and in turn to a particular pattern of cognitive difficulties. And we are really excited!

 

Brain Training: Placebo effects, publication bias, small sample sizes… and what do we do next?

Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.

The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence and the findings in this area have been notoriously inconsistent.


(Commercial brain training programmes are available to both academics and the general public)

I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though those two tasks are so similar – the training can be very content specific (Harrison et al. (2013); see also Ericsson et al. (1980)). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but even at very different tasks. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed; here are two of the most important:

  1. Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so it is pretty obvious that they are controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
  2. Sample size: The smaller the study (i.e. the fewer the participants) the more likely it is to show wider transfer effects. If a study includes lots of participants then it is far more likely to accurately estimate the true size of the transfer effect, which is very small.

When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, this is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from a publication bias – it is difficult to publish null effects, and easier to publish positive results. This means that there are probably quite a few studies showing no training effects sitting in researchers’ drawers, unpublished. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
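To make that logic concrete, here is a small, purely illustrative simulation (a toy example, not taken from any of the papers cited): many studies are drawn from a true effect of d = 0.25 with 20 participants per group, but only those that happen to clear a rough significance cutoff get “published”. The published subset systematically overestimates the true effect.

```typescript
// Illustrative only: how selective publication can inflate a small true effect.
// The significance cutoff below is a rough normal approximation.

/** Standard normal random draw via the Box-Muller transform. */
function randNormal(): number {
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

/** Cohen's d for two independent groups, using the pooled standard deviation. */
function cohensD(a: number[], b: number[]): number {
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
  const variance = (xs: number[], m: number) =>
    xs.reduce((s, x) => s + (x - m) ** 2, 0) / (xs.length - 1);
  const ma = mean(a);
  const mb = mean(b);
  const pooled = Math.sqrt(
    ((a.length - 1) * variance(a, ma) + (b.length - 1) * variance(b, mb)) /
      (a.length + b.length - 2)
  );
  return (ma - mb) / pooled;
}

const trueD = 0.25; // assumed true transfer effect
const n = 20;       // participants per group in each simulated study
const studies = 5000;

// Rough cutoff: the observed d a study of this size needs to look "significant".
const dCritical = 1.96 * Math.sqrt(2 / n);

const observed: number[] = [];
const published: number[] = [];
for (let i = 0; i < studies; i++) {
  const trained = Array.from({ length: n }, () => randNormal() + trueD);
  const control = Array.from({ length: n }, () => randNormal());
  const d = cohensD(trained, control);
  observed.push(d);
  if (d > dCritical) published.push(d); // only "positive" studies get published
}

const avg = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / xs.length;
console.log(`mean d across all studies:      ${avg(observed).toFixed(2)}`);  // close to 0.25
console.log(`mean d across "published" only: ${avg(published).toFixed(2)}`); // inflated, well above 0.25
```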

So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly incredible. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double-blind. That is, the person rating those symptoms knows whether or not the individual (usually a child) received the training. This will result in a substantial placebo effect, and this likely explains these supposedly enhanced benefits.

Where do we go from here? As a field we need to ensure that future studies have active control groups, double blinding and that we include enough participants to show the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.

Author information:

Dr. Duncan Astle, Medical Research Council Cognition and Brain Sciences Unit, Cambridge.

https://www.mrc-cbu.cam.ac.uk/people/duncan.astle/

References:

Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.

Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.

Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.

Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.

Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological science 24:2409-2419.

Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.

Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.

Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child neuropsychology : a journal on normal and abnormal development in childhood and adolescence 20:752-765.

Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American journal of psychiatry 170:275-289.

Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain injury 21:21-29.