Think you’re your own harshest critic? Try peer review…

Our latest blog post is written by me with the wonderful Sue Fletcher-Watson, a colleague whose intellectual excellence is only exceeded by her wit and charm.

Peer review is a linchpin of the scientific process and bookends every scientific project. But despite the crucial importance of the peer review process in determining what research gets funded and published, in our experience PhD students and early career researchers are rarely, if ever, offered training on how to conduct a good review. Academics frequently find themselves complaining about the unreasonable content and tone of the anonymous reviews they receive, which we attribute partly to a lack of explicit guidance on the review process. In this post we offer some pointers on writing a good review of a journal article. We hope to help fledgling academics hone their craft, and also provide some insight into the mechanics of peer review for non-academic readers.

What’s my review for?

Before we launch into our list of things to avoid in the review process, let’s just agree what a review of an academic journal article is meant to do. You have one simple decision to make: does this paper add something to the sum of scientific knowledge? Remember of course that reporting a significant effect ≠ adding new knowledge, and similarly, a non-significant result can be highly informative. Don’t get sucked into too much detail – you are not a copy editor, proof-reader, or English-language critic. Beyond that, you will also want to consider whether the manuscript, in its current form, does the best job of presenting that new piece of knowledge. There are a few specific ways (not) to go about this, so it looks like it might be time for a list…

  1. Remember, this is not YOUR paper

Reviewer 2 walks into a bar and declares that this isn’t the joke they would have written.

First rule of writing a good peer review: remember that this is not your paper. Neither is this a hypothetical study that you wished the authors had conducted. Realising this will have a massive impact on your view of another’s manuscript. The job is not to sculpt it into a paper you could have written. Your job as a reviewer is two-fold: i) make a decision as to the value of this piece of work for your field; and ii) help the authors to present the clearest possible account of their science.

Misunderstanding the role of the reviewer is perhaps at the heart of many peer review horror stories. Duncan does a lot of studies on cognitive training. Primarily he’s interested in the neural mechanisms of change, and tries to be very clear about that. But reviewers almost always ask “where are your far transfer measures?” because they want to assess the potential therapeutic benefit of the training. This is incredibly infuriating. The studies are not designed or powered to look at this, but at something else of equal but different value.

Remember – you can’t ask them to report an imaginary study you wished they had conducted.

  2. Changing the framing, but not the predictions

In this current climate of concern over p-hacking and other nefarious, non-scientific procedures, a question we have to ask ourselves as reviewers is: are there some things I can’t ask them to change? We think the answer is yes – but there may be fewer than you think. For starters, you can ask authors to re-consider the framing of the study to make it more accurate. Let’s imagine they set out to investigate classroom practice, but used interviews not observations, and so ended up recording teacher attitudes instead. Their framing can end up a bit out of kilter with the methods and findings. As a reviewer, with a bit of distance from the work, you can be very helpful in highlighting this.

If you think there are findings which could be elucidated – for example by including a new covariate, or by running a test again with a specific sub-group excluded – you should feel free to ask.  At the same time, you need to respect that the authors might respond by saying that they think these analyses are not justified.  We all should avoid data-mining for significant results and reviewers should be aware of this risk.

What almost certainly shouldn’t be changed are any predictions being made at the outset. If these represent the authors’ honest, well-founded expectations then they need to be left alone.

However, there may be an exception to this rule… Imagine a paper (and we have both seen these) where the literature reviewed is relatively balanced, or sparse, such that it is impossible to make a concrete prediction about the expected pattern in the new data. And yet these authors have magically extracted hypotheses about the size and direction of effects which match up with their results. In this case, it may be legitimate to ask authors to re-inspect their lit review so that it provides a robust case to support their predictions. Another option is to say that, given the equivocal nature of the field, the study would be better set-up with exploratory research questions. This is a delicate business, and if in doubt, it might be a good place to write something specific to the editor explaining your quandary (more on this in number 5).

  3. Ensuring all the information is there for future readers

In the end the quality of a paper is not determined by the editor or the reviewers… but by those who read and cite it. As a reviewer imagine that you are a naïve reader and ask whether you have all the information you need to make an informed judgement. If you don’t, then request changes. This information could take many forms. In the Method Section, ask yourself whether someone could reasonably replicate the study on the basis of the information provided. In the Results ask whether there are potential confounds or complicating factors that readers are not told about. These kinds of changes are vital.

We also think it is totally legitimate to request that authors include possible alternative interpretations. The whole framing of a paper can sometimes reflect just one of multiple possible interpretations, which could somewhat mislead readers. As a reviewer be wise to this and cut through the spin. The bottom-line: readers should be presented with all information necessary for making up their own minds.

  4. Digging and showing off

There is nothing wrong with a short review. Sometimes papers are good. As an editor, Duncan sometimes feels like reviewers are really clutching at straws, desperate to identify things to comment on. Remember that as a reviewer you are not trying to impress either the authors or the editor. Don’t dig for dirt in order to pad the review or show how brainy you are.

Another pet hate is when reviewers introduce new criticisms in subsequent rounds of review. Certainly if the authors have introduced new analyses or data since the original submission, then yes, this deserves a fresh critique. But please please please don’t wait until they have addressed your initial concerns… and then introduce a fresh set on the same material. When reviewers start doing this it smacks of a desperate attempt to block a paper, thinly veiled by apparently legitimate concerns. Editors shouldn’t stand for that kind of nonsense, so don’t make them pull you up on it.

  5. Honesty about your expertise

You don’t know it all, and there is no point pretending that you do. You have been asked to review a paper because you have relevant expertise, but it isn’t necessarily the case that you are an expert in all aspects of the paper. Make that clear to the authors or the editor (the confidential editor comments box is quite useful for this).

It is increasingly the case that our science is interdisciplinary – we have found this is especially the case where we are developing new neuroimaging methods and applying them to novel populations (e.g. typically and atypically developing children). The papers are usually reviewed by either methods specialists or developmental psychologists, and the reviews can be radically different. This likely reflects the different expertise of the reviewers, and it helps both authors and editor where this is made explicit.

Is it ok to ask authors to cite your work? Controversial. Duncan never has, but Sue (shameless self-publicist) has done. We both agree that it is important to point out areas of the literature that are relevant but have not been covered by the paper – and this might include your own work. After all, there’s a reason why you’ve been selected as a relevant reviewer for this paper.

Now we know what not to do, what should you put in a review?

Start your review with one or two sentences summarising the main purpose of the paper: “This manuscript reports on a study with [population X] using [method Y] to address whether [this thing] affects [this other thing].”  It is also good to highlight one or two key strengths of the paper – interesting topic, clear writing style, novel method, robust analysis etc. The text of your review will be sent, in full and unedited, to the authors. Always remember that someone has slaved over the work being reported, and the article writing itself, and recognise these efforts.

Then follow with your verdict, in a nutshell.  You don’t need to say anything specific about whether the paper should / should not be published (and some journals actively don’t want you to be explicit about this) but you should try to draw out the main themes of your comments to help orient the authors to the specific items which follow.

The next section of your review should be split into two lists – major and minor comments. Major comments are often cross-cutting, e.g. if you don’t think the conclusions are legitimate based on the results presented. The major comments should also include anything requiring substantial work on the part of the authors, like a return to the original data. You might also want to highlight pervasive issues with the writing here – such as poor grammar – but don’t get sucked into noting each individual example.

Minor comments should require less effort on the part of the authors, such as some re-phrasing of key sentences, or addition of extra detail (e.g. “please report confidence intervals as well as p-values”). In each case it is helpful to attach your comments to a specific page and paragraph, and sometimes a numbered line reference too.

At the bottom of the review, you might like to add your signature. Increasing numbers of reviewers are doing this as part of a movement towards more open science practices. But don’t feel obliged – especially if you are relatively junior in your field, it may be difficult to write an honest review without the safety of anonymity.

Ready to review?

So, hopefully any early career researchers reading this might feel a bit more confident about reviewing now. Our key advice is to ensure that your comments are constructive, and framed sensitively. Remember that you and the original authors are both on the same side – trying to get important science out into a public domain where it can have a positive influence on research and practice. Think about what the field needs, and what readers can learn from this paper.

Be kind. Be reasonable. Be a good scientist.

Brain Training: Placebo effects, publication bias, small sample sizes… and what do we do next?

Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.

The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence and the findings in this area have been notoriously inconsistent.


(Commercial brain training programmes are available to both academics and the general public)

I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though those two tasks are so similar – the training can be very content specific (Harrison et al. (2013); see also Ericsson et al. (1980)). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but even at very different tasks. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed; here are two of the most important:

  1. Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so it is pretty obvious that they are controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
  2. Sample size: The smaller the study (i.e. the fewer the participants) the more likely it is to show wider transfer effects. If a study includes lots of participants then it is far more likely to accurately estimate the true size of the transfer effect, which is very small.

When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, this is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from a publication bias – it is difficult to publish null effects, and easier to publish positive results – meaning that there are probably quite a few studies showing no training effects sitting, unpublished, in researchers’ drawers. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
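To make the publication-bias point concrete, here is a rough simulation (ours, purely for illustration – the numbers are not taken from any of the cited papers). Many small studies of a tiny true effect are run, but only the ones that happen to cross the significance threshold get "published"; the average published effect then comes out far larger than the truth.

```python
import numpy as np

# Toy simulation of publication bias: assume a tiny true transfer effect
# (d = 0.1) and many small studies (n = 20 per group), of which only the
# "significant" ones are published. All numbers are illustrative.
rng = np.random.default_rng(1)
true_d, n, n_studies = 0.1, 20, 5000
published = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    trained = rng.normal(true_d, 1.0, n)
    diff = trained.mean() - control.mean()
    pooled_sd = np.sqrt((control.var(ddof=1) + trained.var(ddof=1)) / 2)
    d = diff / pooled_sd                                      # observed Cohen's d
    t = diff / np.sqrt((control.var(ddof=1) + trained.var(ddof=1)) / n)
    if t > 1.69:                                              # roughly p < .05, one-tailed
        published.append(d)                                   # only "positive" studies reach print

print(f"true effect: d = {true_d}")
print(f"mean published effect: d = {np.mean(published):.2f}")  # substantially larger
```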

So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly incredible. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double blind. That is, the person rating those symptoms knows whether or not the individual (usually a child) received the training. This will result in a substantial placebo effect, and this likely explains these supposed enhanced benefits.

Where do we go from here? As a field we need to ensure that future studies have active control groups, double blinding and that we include enough participants to show the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.

Author information:

Dr. Duncan Astle, Medical Research Council Cognition and Brain Sciences Unit, Cambridge.

https://www.mrc-cbu.cam.ac.uk/people/duncan.astle/

References:

Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.

Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.

Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.

Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.

Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological science 24:2409-2419.

Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.

Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.

Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child neuropsychology : a journal on normal and abnormal development in childhood and adolescence 20:752-765.

Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American journal of psychiatry 170:275-289.

Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain injury 21:21-29.

The connectome goes to school

Children learn an incredible amount whilst at school. Many fundamental skills that typical adults perform effortlessly, like reading and maths, have to be acquired during childhood. Childhood and adolescence are also a period of important brain development. In particular, the structural connections of the brain show a prolonged maturation that extends throughout childhood and adolescence into the third decade of life. We are beginning to explore how changes in brain structure over this time support the acquisition of these skills, but also how brain changes may give rise to difficulty developing these skills for some children.

Most research to date has focussed on comparisons of individuals with specific deficits, like very low reading performance despite typical performance in other areas. The logic behind this approach is that anatomical structures specifically associated with this skill can be isolated. However, learning disorders are rarely that specific. Most children struggling in one aspect of learning also have difficulties in other areas. Furthermore, recent advances in neuroscience suggest that the brain is not a collection of modules that each perform a particular task; instead, the brain functions as an integrated network.

In our recent study, we wanted to investigate how the white matter brain network may be related to maths and reading performance. The study revealed that children’s reading and maths scores were closely associated with the efficiency of their white matter network. The results further suggested that highly connected regions of the brain were particularly important. These findings indicate that the overall organisation may be more important for reading and maths than differences in very specific areas. This potentially provides a clue to understanding why problems in maths and reading often co-occur.
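For readers curious what "network efficiency" means in practice, here is a minimal sketch using the networkx library on a toy graph standing in for a real connectome – one node per brain region, edges for white matter connections. This is only an illustration; the actual analysis code for the study is in the GitHub repository linked below.

```python
import networkx as nx

# Toy stand-in for a structural connectome: 90 "brain regions" wired into a
# small-world network. A real analysis would build the graph from tractography.
connectome = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=42)

global_eff = nx.global_efficiency(connectome)              # average inverse shortest path length
degrees = dict(connectome.degree())
hubs = sorted(degrees, key=degrees.get, reverse=True)[:5]  # most highly connected regions

print(f"global efficiency: {global_eff:.3f}")
print(f"candidate hub nodes: {hubs}")
```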

You can read a pre-print of the article here: https://osf.io/preprints/psyarxiv/jk6yb

The code for the analysis is also available: https://github.com/joebathelt/Learning_Connectome

Science Podcasts

Downloadable radio programmes, called podcasts, have been around since the first bricklike iPods in the early 2000s. Thanks to global sensations like ‘Serial’, podcasts are more popular than ever. But they can provide more than the next true-crime fix: podcasts are also a great way to stay up-to-date with the latest developments in science and learn about new topics, while blocking out noisy commuters, going for a run in the park, or doing the dishes. Here is a selection of some fantastic science podcasts alongside a few episodes relevant to developmental cognitive neuroscience. Happy listening!

 

ABC All in the Mind

Excellent show about brain science and psychology. Each episode is centred around a particular topic and includes interviews with researchers as well as affected people.

http://www.abc.net.au/radionational/programs/allinthemind/

Interesting episodes:
http://www.abc.net.au/radionational/programs/allinthemind/the-neuroscience-of-learning/7781442
http://www.abc.net.au/radionational/programs/allinthemind/apps-for-autism/7701834
http://www.abc.net.au/radionational/programs/allinthemind/eating-disorders-families-and-technology/7438440

 

BBC All in the Mind

This podcast presents various current items from psychology and neuroscience. There is also a focus on mental health with the All in the Mind Awards.

http://www.bbc.co.uk/programmes/b006qxx9/episodes/player

Interesting episodes:

http://www.bbc.co.uk/programmes/b07bzdjy

 

Brain Matters

Interviews with speakers, mostly covering molecular and systems neuroscience.

http://brainpodcast.com

Interesting episodes:
Enhancing cognition with video games: https://tmblr.co/Zxdhyr28KX4Jh

 

Invisibilia


A spin-off from the makers of Radiolab. This show focuses on ‘the invisible forces that shape our lives’ with stories around sociology, anthropology, psychology, and neuroscience.

http://www.npr.org/podcasts/510307/invisibilia

 

Nature Podcast


The Nature podcast provides a great overview of the latest developments in science. In addition to brief summaries of the main articles in the current issue of Nature, the podcast contains interviews and comments from the main authors of these studies. The News & Views segment presents quick summaries of what’s happening all across science.

http://www.nature.com/nature/podcast/

 

Radiolab

This podcast features a variety of stories that cater to various interests – think of long-form articles in the New Yorker, but for listening. Radiolab often features episodes on science around current and special interest topics.

http://www.radiolab.org

Interesting episode:
http://www.radiolab.org/story/235337-how-grow-your-brain/

 

The Brain Science Podcast

This podcast contains interviews with eminent researchers in neuroscience and neurology.

http://brainsciencepodcast.com

Interesting episodes:
http://brainsciencepodcast.com/bsp/review-of-the-great-brain-debate-bsp-4.html
http://brainsciencepodcast.com/bsp/brain-science-podcasts-first-six-months-bsp-14.html
http://brainsciencepodcast.com/bsp/review-proust-and-the-squid-the-story-and-science-of-the-rea.html

 

Picture credits: Podcast pictures were taken from the websites of each podcast as referenced. The feature image was taken from https://www.scienceworld.ca/sites/default/files/styles/featured/public/images/brain_headphones_image.jpg?itok=g-1kCvgV

The developmental cognitive neuroscience of the Great British Bake-off – Part II

The days are getting shorter, leaves are starting to fall, and a new season of the Great British Bake Off is upon us. We watched as this year’s contestants battled with batter, broke down over bread, crumbled before biscuits, and were torn by torte. One of the most difficult parts of the programme is the technical challenge. In order to succeed, the contestants have to create a perfect bake given the ingredients and basic instructions. The instructions can be extremely sparse. For example, the instructions for the batter week challenge just read ‘make laced pancakes’. This illustrates one of the fundamental challenges that face us in many situations in everyday life. We often have an abstract higher goal, a metaphorical laced pancake, and have to break it down into the necessary steps that get us to that goal, e.g. weigh flour, sift flour, crack eggs and mix with flour, etc.

The ability to plan is also important outside the Bake-off tent. Anyone who has tried getting a four-year-old to bake will know that it is also an ability that we are not born with but develop over time. Unfortunately, planning is not usually tested using baking challenges in developmental psych labs, due to health & safety concerns among other reasons. Instead, clever games like the Tower of London task [1] are used (Shallice, 1982). In this test, the participant is presented with three pegs and a number of disks of varying sizes. The participant has to recreate the tower of disks shown in a template, in the fewest moves possible, while keeping all disks on the pegs and never placing a larger disk on top of a smaller one.
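For the computationally minded, the "fewest moves" criterion can be made concrete with a small search over configurations. The sketch below is a minimal breadth-first solver following the rules just described (move one disk at a time, never put a larger disk on a smaller one); it is an illustration, not the scoring procedure used in any particular study.

```python
from collections import deque

def min_moves(start, goal):
    """Breadth-first search for the fewest legal moves between two disk
    configurations. States are tuples of tuples: each inner tuple lists the
    disk sizes on one peg, from bottom to top (larger numbers below smaller)."""
    start, goal = tuple(map(tuple, start)), tuple(map(tuple, goal))
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, moves = frontier.popleft()
        if state == goal:
            return moves
        for src, src_peg in enumerate(state):
            if not src_peg:
                continue                      # nothing to pick up from an empty peg
            disk = src_peg[-1]
            for dst, dst_peg in enumerate(state):
                if dst == src or (dst_peg and dst_peg[-1] < disk):
                    continue                  # never place a larger disk on a smaller one
                nxt = [list(p) for p in state]
                nxt[dst].append(nxt[src].pop())
                nxt = tuple(map(tuple, nxt))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, moves + 1))
    return None                               # template not reachable from the start

# Example: rebuild a three-disk tower (3 = largest) on the third peg
print(min_moves([(3, 2, 1), (), ()], [(), (), (3, 2, 1)]))  # -> 7
```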

Figure: Illustration of the Tower of London task. Please contact us if you are interested in implementing this task using macarons and sponge fingers

Studies in typical development found that planning ability measured by this task develops continuously throughout childhood and adolescence until stable performance levels are reached in early adulthood (Huizinga, Dolan, & van der Molen, 2006; Luciana, Conklin, Hooper, & Yarger, 2005) – a possible reason for the absence of pre-schoolers in the GBBO hall of fame. There is also an important lesson for teenage bakers: while general cognitive development contributes to performance improvements between childhood and adolescence, increased scores between late adolescence and adulthood are mostly due to better impulse control (Albert & Steinberg, 2011). So, in baking as in life, think how you will combat moisture before mixing the dough.

You may ask yourself if there are other factors beyond growing up and controlling impulses to get the edge in planning ability. Enthusiastic bakers with little concern about personal safety may find transcranial magnetic stimulation an appealing option. A 2012 study in the journal Experimental Brain Research found that magnetic stimulation of the right dorso-lateral prefrontal cortex significantly increased performance in the Tower of London task in patients with Parkinson’s disease (Srovnalova, Marecek, Kubikova, & Rektorova, 2012). However, the application to the field of fine baking remains to be investigated and the use of TMS in baking tents is not recommended.

 

Footnotes:

[1] This is a version of the classic Tower of Hanoi puzzle that has been adapted for neuropsychological testing.

 

References:

Albert, D., & Steinberg, L. (2011). Age Differences in Strategic Planning as Indexed by the Tower of London. Child Development, 82(5), 1501–1517. http://doi.org/10.1111/j.1467-8624.2011.01613.x

Huizinga, M., Dolan, C. V., & van der Molen, M. W. (2006). Age-related change in executive function: Developmental trends and a latent variable analysis. Neuropsychologia, 44(11), 2017–2036. http://doi.org/10.1016/j.neuropsychologia.2006.01.010

Luciana, M., Conklin, H. M., Hooper, C. J., & Yarger, R. S. (2005). The Development of Nonverbal Working Memory and Executive Control Processes in Adolescents. Child Development, 76(3), 697–712. http://doi.org/10.1111/j.1467-8624.2005.00872.x

Shallice, T. (1982). Specific Impairments of Planning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 298(1089), 199–209. http://doi.org/10.1098/rstb.1982.0082

Srovnalova, H., Marecek, R., Kubikova, R., & Rektorova, I. (2012). The role of the right dorsolateral prefrontal cortex in the Tower of London task performance: repetitive transcranial magnetic stimulation study in patients with Parkinson’s disease. Experimental Brain Research, 223(2), 251–257. http://doi.org/10.1007/s00221-012-3255-9

Darling, the kids are out of RAM!

Working memory, the ability to hold things in mind and manipulate them, is very important for children and is closely linked to their success in school. For instance, limited working memory leads to difficulties with following instructions and paying attention in class (also see our previous post https://forgingconnectionsblog.wordpress.com/2015/02/05/adhd-and-low-working-memory-a-comparison/). A major research aim is to understand why some children’s working memory capacity is limited. All children start with lower working memory capacity that increases as they grow up.
We also know that working memory, like all mental functions, is supported by the brain. The brain undergoes considerable growth and reorganisation as children grow up. Most studies so far have looked at the brain structures that support working memory across development. However, some structures may be more important in younger children and others in older children.

Our new study investigates for the first time how the contribution of brain structures to working memory may change with age. To do this, we tested a large number of children between 6 and 16 years on different working memory tasks. We looked at aspects of working memory concerned with storing information (locations, words) and with manipulating it. The children also completed MRI scans to image their brain structure.

We found that white matter connecting the two hemispheres, and white matter connecting occipital and temporal areas, is more important for manipulating information held in mind in younger children, but less important in older ones. In contrast, the thickness of an area in the left posterior temporal lobe was more important in older children. We think that these findings reflect increasing specialisation of the working memory system as it develops: from a distributed system in younger children that requires good wiring between different brain areas, to a more localised system supported by high-quality local machinery.

By analogy, imagine you were completing a work project. If you were collaborating with other people, quality and speed would largely be determined by how well the team communicates – this would be very difficult if you were trying to coordinate via mobile phones in an area with poor reception. On the other hand, if one person completed the project alone, the outcome would depend on the ability of that worker.

The insights from this study will help us to better understand how working memory is constrained at different ages, which may allow us to design better interventions in the future to help children who struggle with working memory.

A preprint of this paper is available on bioRxiv: http://biorxiv.org/content/early/2016/08/15/069617

The analysis code is available on GitHub: https://github.com/joebathelt/WorkingMemory_and_BrainStructure_Code

OHBM 2016 – Impressions and Perspectives

I had the immense privilege to attend the annual meeting of the Organization for Human Brain Mapping (OHBM) in Geneva last week. OHBM is a fantastic venue to see the latest and greatest developments in the field of human neuroimaging, and this year was no exception. The program was jam-packed with keynote lectures, symposia, poster presentations, educational courses, and many informal discussions. It is almost impossible to describe the full breadth of the meeting, but I will try to summarize the ideas and developments that were most interesting to me.

Three broad themes emerged for me: big data, methodological rigor, and new approaches to science. Big data was pervasive throughout the meeting, with a large number of posters making use of huge databases and special interest talks focussing on the practicalities and promises of data sharing and meta-analysis. It seems like the field is reacting to the widely discussed reproducibility crisis (http://www.apa.org/monitor/2015/10/share-reproducibility.aspx) and prominent review articles about the problems with the low sample sizes that have so far been common practice in neuroimaging (http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html). These efforts seem well suited to firmly establish many features of brain structure and function, especially around typical brain development. In the coming years, this is likely to influence publishing standards, education, and funding priorities on a wide scale. I hope that this will not lead to the infant being ejected with the proverbial lavational liquid. There is still a need to study small samples of rare populations that give a better insight into biological mechanisms, e.g. rare genetic disorders. Further, highly specific questions about cognitive and brain mechanisms that require custom assessments will probably continue to be addressed in smaller scale studies, before being rolled out to large samples.

A related issue that probably also arose from the replication discussions is methodological rigor. Symposia at OHBM2016 discussed many issues that had been raised in the literature, like the effect of head motion on structural and functional imaging, comparison of post-mortem anatomy with diffusion imaging, and procedures to move beyond statistical association. Efforts to move towards higher transparency of analysis strategies were also prominently discussed. This includes sharing of more complete statistical maps (more info here: http://nidm.nidash.org/specs/nidm-overview.html – soon to be available in SPM and FSL), tools for easier reporting and visualisation of analysis pipelines, and access to well described standard datasets. I can imagine a future in which analyses are published in an interactive format that allows for access to the data and the possibility to tweak parameters to assess the robustness of the results.

These exciting developments also pose some challenges. The trend towards large datasets requires a new kind of analytic and theoretical approach, and this leads to a clash between the traditional scientific approach and big data science. Let me expand: the keynote lectures presented impressive work that was carried out in the traditional hypothesis-test-refine-hypothesis fashion. For instance, keynote speaker Nora Volkow, of the National Institute on Drug Abuse, presented a comprehensive account of dopamine receptors in human addiction based on a series of elegant, but conceptually simple, PET experiments. In contrast to the traditional approach of collecting a few measurements to test a specific hypothesis, big data covers a lot of different measurements with a very broad aim. This creates the problem of high-dimensional data that need to be reduced to reach meaningful conclusions. Machine learning approaches have emerged as a relatively new addition to the human neuroscience toolkit to tackle the pervasive problems associated with this. There is great promise in these methods, but standards for reliability still need to be established and new theoretical developments are needed to integrate these findings with current knowledge. Hopefully, there will be closer communication between method developers and scientists applying these tools to human neuroscience as these methods mature.
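As a deliberately simplified illustration of the dimensionality problem: with far more imaging features than participants, some form of reduction plus out-of-sample validation is needed before making claims about prediction. The sketch below uses scikit-learn on pure noise, so the cross-validated accuracy should hover around chance; the pipeline is just an example, not a recommendation from the meeting.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy "dataset": many more imaging features than participants, random labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))        # 100 participants, 5000 imaging features
y = rng.integers(0, 2, size=100)        # e.g. two groups

# Reduce dimensionality first, then validate the classifier out of sample.
pipeline = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")   # should be near 0.5 (chance)
```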

A single-gene disorder affects brain organisation

Many children struggle with learning particular skills despite good access to learning opportunities. For instance, some children need more time and considerably more effort to learn language compared to their peers. In some cases, this is associated with other problems, e.g. in certain types of childhood epilepsy, in others, these difficulties occur in isolation, e.g. specific language impairment. It is vital to understand the processes that lead to these difficulties so that problems can be identified early and treated with the most effective interventions. Yet, understanding these developmental difficulties still poses a scientific challenge. We know that genetic predisposition plays a role based on heritability studies and intermediate difficulties in family members. However, learning difficulties are associated with a large number of genes that individually have only very small effects. A possible reason for this could be that developmental disorders that are defined on the basis of behaviour reflect a mixture of underlying biology. This heterogeneity makes it very difficult to establish any causative mechanism.

One way of getting around this conundrum is to study known genetic disorders that share some similarity with more common developmental syndromes. To this end, we investigated a case group of individuals with mutations in a particular gene. We established that this mutation is associated with disproportionate deficits in attention, language, and oro-motor control (Baker et al., 2015). In the current study, we explored the effect of this genetic mutation on the organisation of the brain network to understand how a genetic difference may lead to differences in thinking and behaviour. Brain regions with typically high expression of this gene showed the highest connectivity, which may indicate that it is important for the development of structural connections. Further, cases with mutations in this gene displayed reduced efficiency of information transfer in the brain network. These findings suggest that brain organisation may provide an important intermediate level of description that could help to reveal how genetic differences give rise to learning difficulties. We hope to extend this work to compare brain organisation between genetic groups and developmental disorders directly in the future.

 

The article about the study is now available as a preprint:  http://dx.doi.org/10.1101/057687

Interested readers can also retrieve the analysis scripts for this study here:  https://github.com/joebathelt/ZDHHC9_connectome

 

 

Reference:

Baker, K., Astle, D. E., Scerif, G., Barnes, J., Smith, J., Moffat, G., et al. (2015). Epilepsy, cognitive deficits and neuroanatomy in males with ZDHHC9 mutations. Annals of Clinical and Translational Neurology, n/a–n/a. http://doi.org/10.1002/acn3.196

The resting brain… that never rests

“Spend 5-10 minutes lying down, make yourself comfortable, and keep your eyes open. Be still. Don’t think of anything specific”

These are the typical instructions given to participants in a ‘resting state’ study: the study of brain activity with neuroimaging while the subject is literally told to do nothing. This approach is very popular in our field… but why is it worth putting such effort into understanding a brain that isn’t doing anything? In reality, the brain is never doing nothing, and studying the ongoing spontaneous activity that it produces can provide key insights into how the brain is organised.

Traditionally, a technique called functional Magnetic Resonance Imaging (fMRI) is used to study the resting brain. This uses changes in metabolism to chart brain activity. It turns out that the patterns of activity across the brain are not random, but are highly consistent across many studies. Some brain areas – in some cases anatomically distant from one another – have very similar patterns of activity to each other. These are referred to as resting state networks (RSNs).

A problem with this imaging method is that it is slow. It measures changes in metabolism in the order of seconds, even though electrical brain activity really occurs on a millisecond scale.  A landmark paper by Baker et al. (2014) instead used an electrophysiological technique, MEG, which can capture this incredibly rapid brain activity. They combined this technique with a statistical model, called a Hidden Markov Model (HMM).

They showed that, contrary to previous thinking, these networks are not stable and consistent over time. Even when the brain is at rest, the networks change in a rapid and dynamic way – the resting brain is never actually resting.

For more details of how they did it, read on.

 

What is a Hidden Markov Model (HMM)?

A model, such as an HMM, is a representation of reality, built around predictions based on elementary observations and a set of rules that aim to find the best solution to a problem. Let’s think about Santa Claus: he has to carry a present to all the nice kids. The problem is that he doesn’t know how to fill his sack, which has limited capacity, with the toys. The input in this case will be the toys, which each have a certain weight and volume. Santa could try lots of solutions until he finds the optimal configuration of toys. In essence, he is using a ‘stochastic model’ that tries multiple solutions. Santa knows the inputs, and can see how varying them results in a more optimal solution.

An HMM is also a stochastic model. But this time the input is hidden, that is, we cannot observe it – in our case, these are the brain states that produce the network patterns. Instead, the output – the brain recordings – is visible. To better understand how the model works, imagine a prisoner locked in a windowless cell for a long time. They want to know about the weather outside. The only source of information is whether the guard in front of their cell is carrying an umbrella (🌂) or not (x🌂x). In this case the states are the weather: sunny (☀), cloudy (☁), or rainy (☔), and they are hidden. The observation is the presence or absence of an umbrella. Imagine now that after a few days the prisoner has recorded a sequence of observations, so they can turn to science and use an HMM to predict what the weather is like. The prisoner needs to know just three things to set up the model on the basis of their observations:

  • Prior probabilities: the initial probabilities of any particular type of weather, e.g. if the prisoner lives in a country where it is just as likely to be sunny, cloudy or rainy, then the three prior probabilities are equal.
  • Emission probabilities: the probability of the guard bringing an umbrella or not given a weather condition.
  • Transition probabilities: the probabilities that a state is influenced by the past states, e.g. if it’s raining today, what is the probability of it being sunny, cloudy or rainy tomorrow.

What is the probability that the next day will be ☔ given that the guard is carrying an umbrella 🌂? After many days of observations, let’s say 50, what is the probability that day 51 will be ☀? In this case the calculation is really hard. The prisoner needs to integrate the various sources of information in order to establish the most likely weather condition on the following day – there is actually an algorithm for doing this, called the Viterbi algorithm.
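To make the prisoner’s problem concrete, here is a minimal numerical sketch with made-up probabilities. It implements the filtering (“forward”) recursion that answers exactly the question posed above; the Viterbi algorithm mentioned in the text is a close relative that replaces the sum with a max to recover the single most likely sequence of weather states.

```python
import numpy as np

# Prisoner-and-umbrella HMM. All probability values are made up for illustration.
states = ["sunny", "cloudy", "rainy"]
prior = np.array([1/3, 1/3, 1/3])                 # prior probabilities
emission = np.array([0.1, 0.4, 0.9])              # P(umbrella | each weather state)
transition = np.array([[0.7, 0.2, 0.1],           # transition[i, j] =
                       [0.3, 0.4, 0.3],           # P(tomorrow = j | today = i)
                       [0.2, 0.3, 0.5]])

def predict_tomorrow(observations):
    """Forward filter: given a list of umbrella observations (1 = umbrella,
    0 = none), return the predicted distribution over tomorrow's weather."""
    belief = prior.copy()
    for obs in observations:
        likelihood = emission if obs else (1 - emission)  # evidence for today
        belief = likelihood * belief                      # update with the observation
        belief /= belief.sum()                            # normalise to a distribution
        belief = transition.T @ belief                    # step forward one day
    return belief

# After three umbrella days and one dry day, what will tomorrow most likely be?
posterior = predict_tomorrow([1, 1, 1, 0])
print(dict(zip(states, posterior.round(3))))
```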

How HMMs are used in the resting-state paradigm

Using HMMs, Baker et al. (2014) identified 8 different brain states that match the networks typically found using fMRI. More importantly, they revealed that the transitions between the different RSNs are much faster than previously suggested. Because they used MEG and not fMRI, it was possible to calculate when a state is active or not, that is, the temporal characteristics of the states.

The authors additionally mapped where each state was active. They used the temporal information of the states to identify only the neural activity that is unique to each state, and combined this information with the localisation of neuronal activity to build the network maps. This procedure identifies the brain areas associated with each state.

This study provides evidence that within-network functional connectivity is sustained by temporal dynamics that fluctuate over 200-400 ms. These dynamics are generated by brain states that match the classic RSNs, but which are constituted in a much more dynamic way than previously thought. The fact that each state remains active for only 100-200 ms suggests that these brain states are underpinned by fast and transient mechanisms. This is important, because it has previously been unclear how these so-called ‘resting’ networks are related to rapid psychological processes. This new approach provides an important step in bridging this gap: at last we have a method capable of exploring these networks on a time-scale that allows us to ask how they meaningfully support cognition.

 

Reference:

Baker, A. P., Brookes, M. J., Rezek, I. A., Smith, S. M., Behrens, T., Probert Smith, P. J., & Woolrich, M. (2014). Fast transient networks in spontaneous human brain activity. eLife, 3, e01867–18. http://doi.org/10.7554/eLife.01867

How being overweight may cause children difficulties

Obesity is a growing problem in developed countries. In the UK, a recent WHO study indicated that one in four adults is obese (WHO obesity in the UK) and up to one in five children (WHO childhood obesity in the UK). Health minister Jeremy Hunt referred to the increasing number of children with severe weight problems as a national emergency (Interview with Jeremy Hunt on childhood obesity). Worryingly, obesity in children may not only be associated with increased risk for cardiovascular conditions, but may also hinder children’s academic progress. For instance, a cohort study of 5,000 children in Australia found that obesity was related to lower school performance for boys. That relationship persisted even when the researchers took other factors like family wealth into account (Black, Johnston, & Peeters, 2015; Taras & Potts Datema, 2005). Similarly, a study of 600 high-school students in the UK found that higher body-mass index (BMI) – body weight in kilograms divided by height in metres squared – was negatively associated with school performance (Arora et al., 2013).

 

Obesity in childhood and adolescence is associated with lower cognitive performance

Lower school performance may be caused by differences in cognitive skills. Several studies have investigated differences in cognitive performance in children and adolescents with obesity. Most of these studies focussed on executive functions (EF), an umbrella term for a set of inter-related abilities including goal planning, attention, working memory, inhibition, and cognitive flexibility (Anderson, 2002; Diamond, 2013). Wirt and colleagues found a negative association of body weight with inhibitory control and cognitive flexibility in a community sample of nearly 500 children (Cserjési, Molnár, Luminet, & Lénárd, 2007; Wirt, Schreiber, Kesztyüs, & Steinacker, 2015), even when controlling for family and lifestyle factors. Higher BMI was also associated with worse performance on executive function assessments in a sample of children with attention deficit hyperactivity disorder (ADHD) (Graziano et al., 2012). Adolescents show a similar association between obesity and cognitive performance: a study by Lokken and colleagues found impairments in attention and executive function in adolescents with obesity (Lokken, Boeka, Austin, Gunstad, & Harmon, 2009).

Together, these studies suggest that obesity in children and adolescents is associated with poorer performance on executive function tests (Liang, Matheson, Kaye, & Boutelle, 2014). These findings indicate that children and adolescents with higher body weight may find it more difficult to control their behaviour.

 

Obesity in childhood and adolescence is associated with structural and functional brain differences

Cognitive differences associated with childhood and adolescent obesity are also linked to structural and functional differences in the brain. Yau and colleagues compared 30 adolescents with obesity to a control group of 30 adolescents with normal weight matched for age, gender, and socio-economic status. The study found lower academic achievement and lower working memory, attention, and mental flexibility in the obese group, which were associated with reduced cortical thickness in the orbitofrontal and anterior cingulate cortex (Bauer et al., 2015; Ou, Andres, Pivik, Cleves, & Badger, 2015; Yau, Kang, Javier, & Convit, 2014). These brain areas are generally associated with behavioural control. Further, the authors reported reductions in the microstructural integrity of several major white matter tracts, which may indicate that obesity is associated with differences in the efficiency of communication between brain areas (Stanek et al., 2011; Yau et al., 2014). Schwartz and colleagues took a closer look at the relationship between body fat and white matter composition in a sample of 970 adolescents (Schwartz et al., 2014). The findings suggested that white matter differences associated with obesity are linked to differences in the fatty acid composition of brain white matter. Further research is needed to interpret these results, but they could indicate that differences in diet associated with obesity impact on cognitive performance by influencing the insulation of the brain’s wiring.

Brain function may also be affected by obesity. A series of studies by Kamijo and colleagues investigated functional differences in children with obesity using event-related potentials (ERP) (Kamijo, Khan, et al., 2012a; Kamijo, Pontifex, et al., 2012b; Kamijo et al., 2014). To obtain ERPs, the electro-encephalogram (EEG) is recorded while participants perform a cognitive task. The signal is then averaged to derive the electrophysiological response that is directly linked to a particular cognitive event. Children with obesity were found to perform worse on tasks that required inhibition of prepotent responses. The lower performance was associated with lower amplitude of an ERP response related to error monitoring and a less frontal distribution of an attention-related ERP. The ERP results may indicate a less efficient conflict monitoring system and differences in the neural organisation of the attention system in children with obesity.
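For readers unfamiliar with the averaging step, the toy sketch below shows the basic idea with simulated numbers: a small event-locked wave that is invisible on single trials emerges clearly once many time-locked epochs are averaged. Every value here (sampling rate, amplitude, latency) is arbitrary and not taken from the studies discussed.

```python
import numpy as np

# Toy illustration of ERP averaging: a small event-locked response buried in
# noise on single trials becomes visible once many epochs are averaged.
rng = np.random.default_rng(0)
srate, n_trials, epoch_len = 250, 120, 200            # Hz, trials, samples per epoch
times = np.arange(epoch_len) / srate                  # 0 to 0.8 s after the event

evoked = 2.0 * np.exp(-((times - 0.3) ** 2) / 0.002)  # simulated wave peaking ~300 ms
epochs = evoked + rng.normal(0, 5, size=(n_trials, epoch_len))  # add large trial noise

erp = epochs.mean(axis=0)                              # averaging cancels the noise
peak_ms = times[erp.argmax()] * 1000
print(f"average (ERP) peaks at about {peak_ms:.0f} ms after the event")
```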

 

Limitations: the chicken, the egg, and the confounding effect of the rooster

As in many other areas of human cognitive neuroscience, studies of children and adolescents with obesity are based on correlations. This places some limits on the conclusions that can be drawn from such work. For one, it is not possible to draw any firm conclusion about the causal relationship between the variables. In other words, it is not clear if obesity leads to differences in cognition or if cognitive differences predispose individuals to become obese. Secondly, the relationship between two variables may be influenced by a third variable that has not been assessed. Some of these variables are environmental influences that may be associated with both obesity and cognitive differences; these likely include measures like family wealth and education (O’Dea & Wilson, 2006), among other influences that have not yet been investigated. Other physiological factors that are associated with obesity may also confound the relationship between obesity and cognition. For instance, differences in cardiovascular health or insulin metabolism in children with obesity may influence brain function and cognitive performance. However, some of the studies took these factors into account by matching control groups on cardiovascular health (Kamijo et al., 2014; Kamijo, Pontifex, et al., 2012b) or studying obese groups with a typical insulin response (Stanek et al., 2011). These studies found that differences in cognitive performance in the obese group were still observed when controlling for the influence of these factors.

Another factor that is rarely assessed is sleep apnea. A study by Tau found that differences in school achievement in children and adolescents with obesity were associated with obstructive sleep apnea. While this result does not invalidate other findings about cognitive performance deficits in children and adolescents with obesity, it highlights a potential mechanism by which obesity may impact on cognitive performance.

 

Conclusion

The current literature suggests that obesity in childhood and adolescence is associated with differences in executive function and in the organisation of the brain systems related to executive function and cognitive control. Future research will be needed to identify the mechanisms by which body fat content, brain physiology, and cognitive performance are linked, in order to address the unprecedented scale of weight problems in children and adolescents and their consequences.

 

References

Anderson, P. (2002). Assessment and development of executive function (EF) during childhood. Child Neuropsychology, 8(2), 71–82. http://doi.org/10.1076/chin.8.2.71.8724

Arora, T., Hosseini Araghi, M., Bishop, J., Yao, G. L., Thomas, G. N., & Taheri, S. (2013). The complexity of obesity in UK adolescents: relationships with quantity and type of technology, sleep duration and quality, academic performance and aspiration. Pediatric Obesity, 8(5), 358–366. http://doi.org/10.1111/j.2047-6310.2012.00119.x

Bauer, C. C. C., Moreno, B., González-Santos, L., Concha, L., Barquera, S., & Barrios, F. A. (2015). Child overweight and obesity are associated with reduced executive cognitive performance and brain alterations: a magnetic resonance imaging study in Mexican children. Pediatric Obesity, 10(3), 196–204. http://doi.org/10.1111/ijpo.241

Black, N., Johnston, D. W., & Peeters, A. (2015). Childhood obesity and cognitive achievement. Health Economics, 24, 1082–1100. http://doi.org/10.1002/hec.3211

Cserjési, R., Molnár, D., Luminet, O., & Lénárd, L. (2007). Is there any relationship between obesity and mental flexibility in children? Appetite, 49(3), 675–678. http://doi.org/10.1016/j.appet.2007.04.001

Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64(1), 135–168. http://doi.org/10.1146/annurev-psych-113011-143750

Graziano, P. A., Bagner, D. M., Waxmonsky, J. G., Reid, A., McNamara, J. P., & Geffken, G. R. (2012). Co-occurring weight problems among children with attention deficit/hyperactivity disorder: The role of executive functioning. International Journal of Obesity, 36(4), 567–572. http://doi.org/10.1038/ijo.2011.245

Kamijo, K., Khan, N. A., Pontifex, M. B., Scudder, M. R., Drollette, E. S., Raine, L. B., et al. (2012a). The relation of adiposity to cognitive control and scholastic achievement in preadolescent children. Obesity, 20(12), 2406–2411. http://doi.org/10.1038/oby.2012.112

Kamijo, K., Pontifex, M. B., Khan, N. A., Raine, L. B., Scudder, M. R., Drollette, E. S., et al. (2012b). The association of childhood obesity to neuroelectric indices of inhibition. Psychophysiology, 49(10), 1361–1371. http://doi.org/10.1111/j.1469-8986.2012.01459.x

Kamijo, K., Pontifex, M. B., Khan, N. A., Raine, L. B., Scudder, M. R., Drollette, E. S., et al. (2014). The negative association of childhood obesity to cognitive control of action monitoring. Cerebral Cortex, 24(3), 654–662. http://doi.org/10.1093/cercor/bhs349

Liang, J., Matheson, B. E., Kaye, W. H., & Boutelle, K. N. (2014). Neurocognitive correlates of obesity and obesity-related behaviors in children and adolescents. International Journal of Obesity, 38(4), 494–506. http://doi.org/10.1038/ijo.2013.142

Lokken, K. L., Boeka, A. G., Austin, H. M., Gunstad, J., & Harmon, C. M. (2009). Evidence of executive dysfunction in extremely obese adolescents: a pilot study. Surgery for Obesity and Related Diseases : Official Journal of the American Society for Bariatric Surgery, 5(5), 547–552. http://doi.org/10.1016/j.soard.2009.05.008

O’Dea, J. A., & Wilson, R. (2006). Socio-cognitive and nutritional factors associated with body mass index in children and adolescents: Possibilities for childhood obesity prevention. Health Education Research, 21(6), 796–805. http://doi.org/10.1093/her/cyl125

Ou, X., Andres, A., Pivik, R. T., Cleves, M. A., & Badger, T. M. (2015). Brain gray and white matter differences in healthy normal weight and obese children. Journal of Magnetic Resonance Imaging, 42(5), 1205–1213. http://doi.org/10.1002/jmri.24912

Schwartz, D. H., Dickie, E., Pangelinan, M. M., Leonard, G., Perron, M., Pike, G. B., et al. (2014). Adiposity is associated with structural properties of the adolescent brain. NeuroImage, 103, 192–201. http://doi.org/10.1016/j.neuroimage.2014.09.030

Stanek, K. M., Grieve, S. M., Brickman, A. M., Korgaonkar, M. S., Paul, R. H., Cohen, R. A., & Gunstad, J. J. (2011). Obesity is associated with reduced white matter integrity in otherwise healthy adults. Obesity, 19(3), 500–504. http://doi.org/10.1038/oby.2010.312

Taras, H., & Potts Datema, W. (2005). Obesity and Student Performance at School. Journal of School Health, 75(8), 291–295. http://doi.org/10.1111/j.1746-1561.2005.00040.x

Wirt, T., Schreiber, A., Kesztyüs, D., & Steinacker, J. M. (2015). Early Life cognitive abilities and body weight: Cross-sectional study of the association of inhibitory control, cognitive flexibility, and sustained attention with BMI percentiles in primary school children. Journal of Obesity, 2015(3), 1–10. http://doi.org/10.1155/2015/534651

Yau, P. L., Kang, E. H., Javier, D. C., & Convit, A. (2014). Preliminary evidence of cognitive and brain abnormalities in uncomplicated adolescent obesity. Obesity, 22(8), 1865–1871. http://doi.org/10.1002/oby.20801

 

Picture credit: Charlie and the Chocolate Factory, Warner Bros Pictures, 2005