The latest article was written by our brilliant lab member Danyal Akarca. It describes some of his MPhil research which aims to explore transient brain networks in individuals with a particular type of genetic mutation. Dan finished his degree in Pre-Clinical Medicine before joining our lab and has since been fascinated by the intersection of genetic disorders and the dynamics of brain networks.
The brain is a complex dynamic system. It can be very difficult to understand how specific differences within that system can be associated with the cognitive and behavioural difficulties that some children experience. This is because even if we group children together on the basis that they all have a particular developmental disorder, that group of children will likely have a heterogeneous aetiology. That is, even though they all fall into the same category, there may be a wide variety of different underlying brain causes. This makes these disorders notoriously difficult to study.
Developmental disorders that have a known genetic cause can be very useful for understanding these brain-cognition relationships, because by definition they all have the same causal mechanism (i.e. the same gene is responsible for the difficulties that each child experiences). We have been studying a language disorder caused by a mutation to a gene called ZDHHC9. These children have broader cognitive difficulties, and more specific difficulties with speech production, alongside a form of childhood epilepsy called rolandic epilepsy.
In our lab, we have explored how brain structure is organised differently in individuals with this mutation, relative to typically developing controls. Since then our attention has turned to applying new analysis methods to explore differences in dynamic brain function. We have done this by directly recording the magnetic fields generated by the activity of neurons, using a device known as a magnetoencephalography (MEG) scanner; from these fields, the underlying electrical activity of the brain can be inferred.
The typical way that MEG data are interpreted is by comparing how electrical activity within the brain changes in response to a stimulus. These changes can take many forms, including how well synchronised different brain areas are, or how the size of the magnetic response differs across individuals. However, in our current work, we are trying to explore how the brain configures itself into different networks, in a dynamic fashion. This is especially interesting to us, because we think that the ZDHHC9 gene has an impact on the excitability of neurons in particular parts of the brain, specifically in those areas that are associated with language. These changes in network dynamics might be linked to the kinds of cognitive difficulties that these individuals have.
We used an analysis method called “Group Level Exploratory Analysis of Networks” – or GLEAN for short – which has recently been developed at the Oxford Centre for Human Brain Activity. The concept behind GLEAN is that the brain moves between different patterns of activation in a probabilistic fashion. This is much like the weather – just as the weather can change from day to day in some probabilistic way, so too may the brain change in its activation.
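The weather analogy maps neatly onto a Markov chain: each state carries a probability of transitioning to every other state. Here is a minimal sketch of that idea in Python – the three states and all of the transition probabilities below are invented purely for illustration, and this is not GLEAN's actual model or output:

```python
import random

# Hypothetical transition probabilities between three weather-like states
# (rain -> cloud is likelier than rain -> sun, echoing the analogy).
TRANSITIONS = {
    "rain":  {"rain": 0.5, "cloud": 0.4, "sun": 0.1},
    "cloud": {"rain": 0.3, "cloud": 0.4, "sun": 0.3},
    "sun":   {"rain": 0.1, "cloud": 0.3, "sun": 0.6},
}

def simulate(start, steps, seed=42):
    """Sample a sequence of states from the transition probabilities."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        options = list(TRANSITIONS[state])
        weights = [TRANSITIONS[state][s] for s in options]
        state = rng.choices(options, weights=weights)[0]
        path.append(state)
    return path

def mean_dwell_times(path):
    """Average run length per state -- analogous to asking how long
    the brain stays in an activation pattern before moving on."""
    runs = {}
    i = 0
    while i < len(path):
        j = i
        while j < len(path) and path[j] == path[i]:
            j += 1
        runs.setdefault(path[i], []).append(j - i)
        i = j
    return {s: sum(r) / len(r) for s, r in runs.items()}

print(mean_dwell_times(simulate("rain", 1000)))
```

The same machinery, scaled up and estimated from real data rather than invented, is what lets you ask questions like “how long does the brain dwell in a given state?” or “how often does one state hand over to another?”.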
This analysis method not only allows us to observe which regions of the brain are active while participants are in the MEG scanner; it also allows us to see the probabilistic way in which the brain transitions between these patterns. For example, just as it is more likely to transition from rain one day to cloudiness the next day than from rain to blistering sun, we find that brain activation patterns can be described in a very similar way over sub-second timescales. We can characterise those dynamic transitions in lots of different ways, such as how long you stay in a specific brain state or how long it takes to return to a state once you’ve transitioned away. (A more theoretical account of this can be found in another recent blog post in our Methods section – “The resting brain… that never rests”.) We have found that a number of networks differ between individuals with the mutation and our control subjects.
(These are two brain networks that show the most differences in activation – largely in the parietal and frontotemporal regions of the brain.)
Interestingly, these networks strongly overlap with areas of the brain that are known to express the gene (we found this out by using data from the Allen Atlas). This is the first time that we know of that researchers have been able to link a particular gene, to differences in dynamic electrical brain networks, to a particular pattern of cognitive difficulties. And we are really excited!
This blog piece is written with Sue Fletcher-Watson, a colleague of supreme wisdom and tact, ideally qualified for this particular post. It is a follow-up to our previous joint post about peer review. We now turn our attention to the response to reviewers.
As with the role of reviewer, junior scientists submitting their work as authors are given little (if any) guidance on how to interact with their reviewers. Interactions with reviewers are an incredibly valuable opportunity to improve your manuscript and find the best way of presenting your science. However, all too often responding to reviewers is seen as an onerous chore, which partly reflects the attitude we take into the process. These exchanges largely happen in private and even though they play a critical role in academia, we rarely talk about them in public. We think this needs to change – here are some pointers for how to interact with your reviewers.
- Engage with the spirit of the review
Your reviewers will be representative of a portion of your intended readership. Sometimes when reading reviewers’ comments we can find ourselves asking “have they even read the paper?!”. But if the reviewer has misunderstood some critical aspect of the paper then it is entirely possible that a proportion of the broader readership will also. An apparently misguided review, whilst admittedly frustrating, should be taken as a warning sign. Give yourself a day or two to settle your temper, and then recognise that this is your opportunity to make your argument clearer and more convincing.
Similarly, resist the temptation to undertake the minimal possible revisions in order to get your paper past the reviewers. If a reviewer makes a good point and you can think of ways of using your data to address it, then go for it, even if this goes beyond what they specified. Remember – this is your last chance to make this manuscript as good as it can be.
- Be grateful and respectful. But don’t be afraid to disagree with your reviewers.
Writing a good review takes time. Thank the reviewers for their efforts. Be polite and respectful, even if you think a review is not particularly constructive. But don’t be afraid to disagree with reviewers. Sometimes reviewers ask you to do things that you don’t think are valid or wise, and it’s important to defend your work. No one wants a dog’s dinner of a paper… a sort of patchwork of awkwardly combined paragraphs designed to appease various reviewer comments. As the author you need to retain ownership of the work. This will mean that sometimes you need to explain why a recommendation has not been actioned. You can acknowledge the importance of a reviewer’s point, without including it in your manuscript.
We have both experienced reviewers who have requested changes we don’t feel are legitimate. Examples include the reviewer who requested a correlational analysis on a sub-group with a sample size of n=17. Or the reviewer who asked Sue to describe how her results, from a study with participants aged 18 and over, might relate to early signs of autism in infancy (answer: they have no bearing whatsoever and I’m not prepared to speculate in print). Or the reviewer who asked for inclusion of variables in a regression analysis which did not correlate with the outcome (despite that being a clearly-stated criterion for inclusion in the analysis), on the basis of their personal hunch. In these cases, politely but firmly refusing to make a change may be the right thing to do, though you can nearly always provide some form of concession. For example, in the last case, you might include an extra justification, with a supporting citation, for your chosen regression method.
- Give your response a clear and transparent structure
With any luck, your revised manuscript will go out to the same people who reviewed it the first time. If you do a particularly good job of addressing their comments – and if the original comments themselves were largely minor – your editor may even decide your manuscript doesn’t need peer review a second time. In any case, to maximise the chances of a good result it is essential that you present your response clearly, concisely and fluently.
Start by copying and pasting the reviewer comments into a document. Organise them into numbered lists, one for each reviewer. This might mean breaking down multi-part comments into separate items, and you may also wish to paraphrase to make your response a bit more succinct. However, beware of changing the reviewer’s intended meaning!
Then provide your responses under each numbered point, addressed to the editor (“The reviewer makes an excellent point and…”). In each case, try to: acknowledge the validity of what the reviewer is saying; briefly mention how you have addressed the point; give a page reference. This ‘response to reviewers’ document should be accompanied by an updated manuscript in which any significant areas of new or heavily edited text are highlighted in some way. Don’t submit a revised manuscript with tracked changes – these are too detailed and messy for a reviewer to have to navigate – and don’t feel the need to highlight every changed word.
If it’s an especially complicated or lengthy response, then it is sometimes a good idea to include a (very) pithy summary up top for the Editor, before you get to the reviewer-specific response. A handful of bullet points can help orient the Editor to the major changes that they can expect to find in the new version of your manuscript.
- The response letter can be a great place to include additional analyses that didn’t make it into the paper
Often when exploring the impact of various design choices, or testing how robust your analysis is to its assumptions, additional comparisons can be very useful. We both often include additional analyses in our ‘response to reviewer’ letters. This aids transparency and can also be a useful way of showing reviewers that your findings are solid. Sometimes these will be analyses that have been explicitly asked for, but on other occasions you may well want to do this on your own initiative. As reviewers we are both greatly impressed when authors use their own data to address a point, even if we didn’t explicitly ask them to do this.
One word of warning here, however. Remember that you don’t want to put an important piece of information or line of reasoning only in your response letter, if it ought also to be in the final manuscript. If you’ve completed an extra analysis as part of your consideration of a reviewer point, consider whether this might also have relevance to your readership when the paper is published. It might be important to leave it out – you don’t want to include ‘red herring’ analyses or look like you are scraping the statistical barrel by testing ‘til the cows come home. But on the other hand, if the analysis directly answers a question which is likely to be in your reader’s mind, consider including it. This could be as a supplement, linked online data set, or a simple statement: e.g. “we repeated all analyses excluding n=2 participants with epilepsy and results were the same”.
- Sometimes you may need the nuclear option
We have both had experiences where we have been forced to make direct contact with the Action Editor. A caveat to all the points above is that there are occasions where reviewers attempt to block the publication of a manuscript unreasonably. Duncan had an experience of a reviewer who simply cut and pasted their original review and reused it across multiple subsequent rounds of revision. Duncan thought that his team had done a good job of addressing the reviewer’s concerns, where possible, but without any specific guidance from the reviewer they were at a loss to identify what they should do next. Having already satisfied two other reviewers, he decided to contact the Action Editor and explain the situation. They accepted the paper. Sue has blogged before about a paper reporting on a small RCT which was rejected for the simple reason that it reported a null result. She approached the Editor with her concern and it was agreed that the paper should be re-submitted as a new manuscript and sent out again for a fresh set of reviews. This shouldn’t be necessary, but sadly sometimes it is.
Editors will not be happy with authors trying to circumvent the proper review process, but in our experience they are sympathetic to authors when papers are blocked by unreasonable reviewers. After all, we have all been there. If this is the situation you find yourself in, be as diplomatic as possible and outline your concerns to the Editor.
In conclusion, much of what we want to say can probably be summed up with the following: This is not a tick-box exercise, but the last opportunity to improve your paper before it reaches your audience. Engage with your reviewers, be open-minded, and don’t be afraid to rethink.
Really, when it comes to responding to reviewers, the clue is in the name. It’s a response, not a reaction – so be thoughtful, be engaged and be a good scientist.
Peer review is a linchpin of the scientific process and bookends every scientific project. But despite the crucial importance of the peer review process in determining what research gets funded and published, in our experience PhD students and early career researchers are rarely if ever offered training on how to conduct a good review. Academics frequently find themselves complaining about the unreasonable content and tone of the anonymous reviews they receive, which we attribute partly to a lack of explicit guidance on the review process. In this post we offer some pointers on writing a good review of a journal article. We hope to help fledgling academics hone their craft, and also provide some insight into the mechanics of peer review for non-academic readers.
What’s my review for?
Before we launch into our list of things to avoid in the review process, let’s just agree what a review of an academic journal article is meant to do. You have one simple decision to make: does this paper add something to the sum of scientific knowledge? Remember of course that reporting a significant effect ≠ adding new knowledge, and similarly, a non-significant result can be highly informative. Don’t get sucked into too much detail – you are not a copy editor, proof-reader, or English-language critic. Beyond that, you will also want to consider whether the manuscript, in its current form, does the best job of presenting that new piece of knowledge. There are a few specific ways (not) to go about this, so it looks like it might be time for a list…
- Remember, this is not YOUR paper
First rule of writing a good peer review: remember that this is not your paper. Neither is this a hypothetical study that you wished the authors had conducted. Realising this will have a massive impact on your view of another’s manuscript. The job is not to sculpt it into a paper you could have written. Your job as a reviewer is two-fold: i) make a decision as to the value of this piece of work for your field; and ii) help the authors to present the clearest possible account of their science.
Misunderstanding the role of the reviewer is perhaps at the heart of many peer review horror stories. Duncan does a lot of studies on cognitive training. Primarily he’s interested in the neural mechanisms of change, and tries to be very clear about that. But reviewers almost always ask “where are your far transfer measures?” because they want to assess the potential therapeutic benefit of the training. This is incredibly infuriating. The studies are not designed or powered for looking at this, but instead at something else of equal but different value.
Remember – you can’t ask them to report an imaginary study you wished they had conducted.
- Changing the framing, but not the predictions
In this current climate of concern over p-hacking and other nefarious, non-scientific procedures, a question we have to ask ourselves as reviewers is: are there some things I can’t ask them to change? We think the answer is yes – but it may be less than you think. For starters, you can ask authors to re-consider the framing of the study to make it more accurate. Let’s imagine they set out to investigate classroom practice, but used interviews not observations, and so ended up recording teacher attitudes instead. Their framing can end up a bit out of kilter with the methods and findings. As a reviewer, with a bit of distance from the work, you can be very helpful in highlighting this.
If you think there are findings which could be elucidated – for example by including a new covariate, or by running a test again with a specific sub-group excluded – you should feel free to ask. At the same time, you need to respect that the authors might respond by saying that they think these analyses are not justified. We all should avoid data-mining for significant results and reviewers should be aware of this risk.
What almost certainly shouldn’t be changed are any predictions being made at the outset. If these represent the authors’ honest, well-founded expectations then they need to be left alone.
However, there may be an exception to this rule… Imagine a paper (and we have both seen these) where the literature reviewed is relatively balanced, or sparse, such that it is impossible to make a concrete prediction about the expected pattern in the new data. And yet these authors have magically extracted hypotheses about the size and direction of effects which match up with their results. In this case, it may be legitimate to ask authors to re-inspect their lit review so that it provides a robust case to support their predictions. Another option is to say that, given the equivocal nature of the field, the study would be better set-up with exploratory research questions. This is a delicate business, and if in doubt, it might be a good place to write something specific to the editor explaining your quandary (more on this in number 5).
- Ensuring all the information is there for future readers
In the end the quality of a paper is not determined by the editor or the reviewers… but by those who read and cite it. As a reviewer imagine that you are a naïve reader and ask whether you have all the information you need to make an informed judgement. If you don’t, then request changes. This information could take many forms. In the Method Section, ask yourself whether someone could reasonably replicate the study on the basis of the information provided. In the Results ask whether there are potential confounds or complicating factors that readers are not told about. These kinds of changes are vital.
We also think it is totally legitimate to request that authors include possible alternative interpretations. The whole framing of a paper can sometimes reflect just one of multiple possible interpretations, which could somewhat mislead readers. As a reviewer be wise to this and cut through the spin. The bottom-line: readers should be presented with all information necessary for making up their own minds.
- Digging and showing off
There is nothing wrong with a short review. Sometimes papers are good. As an editor, Duncan sometimes feels like reviewers are really clutching at straws, desperate to identify things to comment on. Remember that as a reviewer you are not trying to impress either the authors or the editor. Don’t dig for dirt in order to pad the review or show how brainy you are.
Another pet hate is when reviewers introduce new criticisms in subsequent rounds of review. Certainly if the authors have introduced new analyses or data since the original submission, then yes, this deserves a fresh critique. But please please please don’t wait until they have addressed your initial concerns… and then introduce a fresh set on the same material. When reviewers start doing this it smacks of a desperate attempt to block a paper, thinly veiled by apparently legitimate concerns. Editors shouldn’t stand for that kind of nonsense, so don’t make them pull you up on it.
- Honesty about your expertise
You don’t know it all, and there is no point pretending that you do. You have been asked to review a paper because you have relevant expertise, but it isn’t necessarily the case that you are an expert in all aspects of the paper. Make that clear to the authors or the editor (the confidential editor comments box is quite useful for this).
It is increasingly the case that our science is interdisciplinary – we have found this is especially the case where we are developing new neuroimaging methods and applying them to novel populations (e.g. typically and atypically developing children). The papers are usually reviewed by either methods specialists or developmental psychologists, and the reviews can be radically different. This likely reflects the different expertise of the reviewers, and it helps both authors and editor where this is made explicit.
Is it ok to ask authors to cite your work? Controversial. Duncan never has, but Sue (shameless self-publicist) has done. We both agree that it is important to point out areas of the literature that are relevant but have not been covered by the paper – and this might include your own work. After all, there’s a reason why you’ve been selected as a relevant reviewer for this paper.
Now we know what not to do, what should you put in a review?
Start your review with one or two sentences summarising the main purpose of the paper: “This manuscript reports on a study with [population X] using [method Y] to address whether [this thing] affects [this other thing].” It is also good to highlight one or two key strengths of the paper – interesting topic, clear writing style, novel method, robust analysis etc. The text of your review will be sent, in full and unedited, to the authors. Always remember that someone has slaved over the work being reported, and the article writing itself, and recognise these efforts.
Then follow with your verdict, in a nutshell. You don’t need to say anything specific about whether the paper should / should not be published (and some journals actively don’t want you to be explicit about this) but you should try to draw out the main themes of your comments to help orient the authors to the specific items which follow.
The next section of your review should be split into two lists – major and minor comments. Major comments are often cross-cutting, e.g. if you don’t think the conclusions are legitimate based on the results presented. Also in the major comments include anything requiring substantial work on the part of the authors, like a return to the original data. You might also want to highlight pervasive issues with the writing here – such as poor grammar – but don’t get sucked into noting each individual example.
Minor comments should require less effort on the part of the authors, such as some re-phrasing of key sentences, or addition of extra detail (e.g. “please report confidence intervals as well as p-values”). In each case it is helpful to attach your comments to a specific page and paragraph, and sometimes a numbered line reference too.
At the bottom of the review, you might like to add your signature. Increasing numbers of reviewers are doing this as part of a movement towards more open science practices. But don’t feel obliged – especially if you are relatively junior in your field, it may be difficult to write an honest review without the safety of anonymity.
Ready to review?
So, hopefully any early career researchers reading this might feel a bit more confident about reviewing now. Our key advice is to ensure that your comments are constructive, and framed sensitively. Remember that you and the original authors are both on the same side – trying to get important science out into the public domain where it can have a positive influence on research and practice. Think about what the field needs, and what readers can learn from this paper.
Be kind. Be reasonable. Be a good scientist.
Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.
The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence and the findings in this area have been notoriously inconsistent.
(Commercial brain training programmes are available to both academics and the general public)
I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though those two tasks are so similar – the training can be very content specific (Harrison et al. (2013); see also Ericsson et al. (1980)). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but even at very different tasks. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed; here are two of the most important:
- Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so it is pretty obvious that they are controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
- Sample size: The smaller the study (i.e. the fewer the participants) the more likely it is to show wider transfer effects. If a study includes lots of participants then it is far more likely to accurately estimate the true size of the transfer effect, which is very small.
When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, this is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from a publication bias – it is difficult to publish null effects, and easier to publish positive results. This means that there are probably quite a few studies showing no training effects sitting unpublished in researchers’ drawers. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
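For readers unfamiliar with Cohen’s d: it is simply the difference between the two group means divided by the pooled standard deviation, so d=0.25 means the trained group outperforms controls by a quarter of a standard deviation. A short sketch of the calculation – the scores below are invented purely to illustrate the arithmetic, not taken from any study:

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Invented scores: the trained group is shifted up by 2 points against a
# within-group SD of about 7.9 -- roughly a quarter of a standard deviation.
trained  = [102.0 + x for x in (-10, -5, 0, 5, 10)]
controls = [100.0 + x for x in (-10, -5, 0, 5, 10)]
print(round(cohens_d(trained, controls), 2))  # -> 0.25
```

An effect this small is essentially invisible at the level of an individual participant, which is why only well-powered studies can reliably detect it at all.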
So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly incredible. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double blind. That is, the person rating those symptoms knows whether or not the individual (usually a child) received the training. This will result in a substantial placebo effect, and this likely explains these supposed enhanced benefits.
Where do we go from here? As a field we need to ensure that future studies have active control groups, double blinding and that we include enough participants to show the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.
Dr. Duncan Astle, Medical Research Council Cognition and Brain Science Unit, Cambridge.
Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.
Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.
Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.
Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.
Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological science 24:2409-2419.
Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.
Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.
Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child Neuropsychology 20:752-765.
Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American journal of psychiatry 170:275-289.
Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain injury 21:21-29.
Children learn an incredible amount whilst at school. Many fundamental skills that typical adults perform effortlessly, like reading and maths, have to be acquired during childhood. Childhood and adolescence are also a period of important brain development. In particular, the structural connections of the brain show a prolonged maturation that extends throughout childhood and adolescence into the third decade of life. We are beginning to explore how changes in brain structure over this time support the acquisition of these skills, but also how brain changes may give rise to difficulties developing these skills for some children.
Most research to date has focussed on comparisons of individuals with specific deficits, like very low reading performance despite typical performance in other areas. The logic behind this approach is that anatomical structures specifically associated with this skill can be isolated. However, learning disorders are rarely that specific: most children struggling in one aspect of learning also have difficulties in other areas. Furthermore, recent advances in neuroscience suggest that the brain is not a collection of modules that each perform a particular task, but instead functions as an integrated network.
In our recent study, we wanted to investigate how the white matter brain network may be related to maths and reading performance. The study revealed that children’s reading and maths scores were closely associated with the efficiency of their white matter network. The results further suggested that highly connected regions of the brain were particularly important. These findings indicate that the overall organisation of the brain network may be more important for reading and maths than differences in very specific areas. This potentially provides a clue to understanding why problems in maths and reading often co-occur.
You can read a pre-print of the article here: https://osf.io/preprints/psyarxiv/jk6yb
The code for the analysis is also available: https://github.com/joebathelt/Learning_Connectome
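Loosely speaking, the “efficiency” of a network is the average inverse shortest path length between pairs of regions: the fewer hops it takes to get from any region to any other, the more efficient the network. Below is a minimal plain-Python sketch of this global efficiency measure on a toy graph; the published analysis (linked above) works on full white-matter connectomes, so the node names and the tiny network here are purely illustrative.

```python
from collections import deque

def shortest_path_lengths(adj, source):
    # Breadth-first search giving hop counts from the source node to
    # every reachable node in an unweighted graph.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adj[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def global_efficiency(adj):
    # Mean inverse shortest path length over all ordered node pairs;
    # unreachable pairs contribute zero.
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for source in nodes:
        dist = shortest_path_lengths(adj, source)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1))

# A toy 4-node "connectome": a triangle of regions plus one region
# hanging off it. Region names are hypothetical.
network = {
    'A': ['B', 'C'],
    'B': ['A', 'C'],
    'C': ['A', 'B', 'D'],
    'D': ['C'],
}
print(round(global_efficiency(network), 3))  # → 0.833
```

Adding a direct A–D connection would raise the efficiency, which is the intuition behind the finding: better-wired networks move information between regions in fewer steps.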
Downloadable radio programmes, called podcasts, have been around since the first brick-like iPods in the early 2000s. Thanks to global sensations like ‘Serial’, podcasts are more popular than ever. But they can provide more than the next true-crime fix: podcasts are also a great way to stay up to date with the latest developments in science and learn about new topics, while blocking out noisy commuters, going for a run in the park, or doing the dishes. Here is a selection of fantastic science podcasts, alongside a few episodes relevant to developmental cognitive neuroscience. Happy listening!
ABC All in the Mind
Excellent show about brain science and psychology. Each episode is centred around a particular topic and includes interviews with researchers as well as affected people.
BBC All in the Mind
This podcast presents various current items from psychology and neuroscience. There is also a focus on mental health with the All in the Mind Awards.
Interviews with speakers, mostly covering molecular and systems neuroscience.
Enhancing cognition with video games: https://tmblr.co/Zxdhyr28KX4Jh
A spin-off from the makers of Radiolab. This show focuses on ‘the invisible forces that shape our lives’ with stories around sociology, anthropology, psychology, and neuroscience.
The Nature podcast provides a great overview of the latest developments in science. In addition to brief summaries of the main articles in the current issue of Nature, the podcast contains interviews and comments from the main authors of these studies. The News & Views segment presents quick summaries of what’s happening all across science.
This podcast features a variety of stories that cater to various interests – think of long-form articles in the New Yorker, but for listening. Radiolab often features episodes on science around current and special interest topics.
The Brain Science Podcast
This podcast contains interviews with eminent researchers in neuroscience and neurology.
Picture credits: Podcast pictures were taken from the websites of each podcast as referenced. The feature image was taken from https://www.scienceworld.ca/sites/default/files/styles/featured/public/images/brain_headphones_image.jpg?itok=g-1kCvgV
The days are getting shorter, leaves are starting to fall, and a new season of the Great British Bake Off is upon us. We watched as this year’s contestants battled with batter, broke down over bread, crumbled before biscuits, and were torn by torte. One of the most difficult parts of the programme is the technical challenge, in which the contestants have to create a perfect bake given only the ingredients and basic instructions. The instructions can be extremely sparse; for example, the instructions for the batter-week challenge just read ‘make laced pancakes’. This illustrates one of the fundamental challenges that face us in many everyday situations. We often have an abstract higher goal, a metaphorical laced pancake, and have to break it down into the necessary steps that get us to that goal, e.g. weigh flour, sift flour, crack eggs and mix with flour, etc.
The ability to plan is also important outside the Bake Off tent. Anyone who has tried getting a four-year-old to bake will know that planning is also an ability that we are not born with but develop over time. Unfortunately, planning is not usually tested using baking challenges in developmental psychology labs, due to health and safety concerns among other reasons. Instead, clever games like the Tower of London task are used (Shallice, 1982). In this test, the participant is presented with three pegs and a number of disks of varying sizes. The participant has to recreate a tower of disks according to a template in the fewest moves possible, while keeping all disks on the pegs and never placing a larger disk on top of a smaller one.
Figure: Illustration of the Tower of London task. Please contact us if you are interested in implementing this task using macarons and sponge fingers
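As the footnote below explains, the task is an adaptation of the classic Tower of Hanoi puzzle. To make “the fewest moves possible” concrete, the minimum move count under the Hanoi-style rules (only top disks move, never a larger disk onto a smaller one) can be found with a breadth-first search over board states. This is a toy sketch of the puzzle’s optimal solution, not the clinical scoring procedure; all function names are our own.

```python
from collections import deque

def legal_moves(state):
    # Yield every state reachable in one move: take the top disk of one
    # peg and place it on another peg that is empty or topped by a
    # larger disk. Each peg is a tuple of disk sizes, bottom to top.
    for i, src in enumerate(state):
        if not src:
            continue
        disk = src[-1]
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > disk):
                new = list(state)
                new[i] = src[:-1]
                new[j] = dst + (disk,)
                yield tuple(new)

def min_moves(start, goal):
    # Breadth-first search over board states returns the minimum number
    # of legal moves needed to transform start into goal.
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        state, depth = queue.popleft()
        if state == goal:
            return depth
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None

# Three disks stacked on the first peg, to be rebuilt on the third:
print(min_moves(((3, 2, 1), (), ()), ((), (), (3, 2, 1))))  # → 7
```

The participant’s score is then a matter of how close their move count comes to this optimum.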
Studies in typical development found that planning ability measured by this task develops continuously throughout childhood and adolescence until stable performance levels are reached in early adulthood (Huizinga, Dolan, & van der Molen, 2006; Luciana, Conklin, Hooper, & Yarger, 2005) – a possible reason for the absence of pre-schoolers in the GBBO hall of fame. There is also an important lesson for teenage bakers: while general cognitive development contributes to performance improvements between childhood and adolescence, increased scores between late adolescence and adulthood are mostly due to better impulse control (Albert & Steinberg, 2011). So, in baking as in life, think about how you will combat moisture before mixing the dough.
You may ask yourself whether there are other factors, beyond growing up and controlling impulses, that could give you the edge in planning ability. Enthusiastic bakers with little concern for personal safety may find transcranial magnetic stimulation an appealing option. A 2012 study in the journal Experimental Brain Research found that magnetic stimulation of the right dorsolateral prefrontal cortex significantly increased performance on the Tower of London task in patients with Parkinson’s disease (Srovnalova, Marecek, Kubikova, & Rektorova, 2012). However, the application to the field of fine baking remains to be investigated, and the use of TMS in baking tents is not recommended.
(This is a version of the classic Tower of Hanoi puzzle that has been adapted for neuropsychological testing.)
Albert, D., & Steinberg, L. (2011). Age Differences in Strategic Planning as Indexed by the Tower of London. Child Development, 82(5), 1501–1517. http://doi.org/10.1111/j.1467-8624.2011.01613.x
Huizinga, M., Dolan, C. V., & van der Molen, M. W. (2006). Age-related change in executive function: Developmental trends and a latent variable analysis. Neuropsychologia, 44(11), 2017–2036. http://doi.org/10.1016/j.neuropsychologia.2006.01.010
Luciana, M., Conklin, H. M., Hooper, C. J., & Yarger, R. S. (2005). The Development of Nonverbal Working Memory and Executive Control Processes in Adolescents. Child Development, 76(3), 697–712. http://doi.org/10.1111/j.1467-8624.2005.00872.x
Shallice, T. (1982). Specific Impairments of Planning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 298(1089), 199–209. http://doi.org/10.1098/rstb.1982.0082
Srovnalova, H., Marecek, R., Kubikova, R., & Rektorova, I. (2012). The role of the right dorsolateral prefrontal cortex in the Tower of London task performance: repetitive transcranial magnetic stimulation study in patients with Parkinson’s disease. Experimental Brain Research, 223(2), 251–257. http://doi.org/10.1007/s00221-012-3255-9
Working memory, the ability to hold things in mind and manipulate them, is very important for children and is closely linked to their success at school. For instance, limited working memory leads to difficulties with following instructions and paying attention in class (see also our previous post: https://forgingconnectionsblog.wordpress.com/2015/02/05/adhd-and-low-working-memory-a-comparison/). A major research aim is to understand why some children’s working memory capacity is limited. All children start out with a low working memory capacity that increases as they grow up.
We also know that working memory, like all mental functions, is supported by the brain, which undergoes considerable growth and reorganisation as children grow up. Most studies so far have looked at the brain structures that support working memory across development. However, some structures may be more important in younger children and others in older children.
Our new study investigates for the first time how the contribution of brain structures to working memory may change with age. For that, we tested a large number of children between 6 and 16 years on different working memory tasks. We looked at aspects of working memory concerned with storing information (locations, words) and with manipulating it. The children also completed MRI scans to image their brain structure. We found that white matter connecting the two hemispheres, and white matter connecting occipital and temporal areas, is more important for manipulating information held in mind in younger children, but less important in older ones. In contrast, the thickness of an area in the left posterior temporal lobe was more important in older children.

We think that these findings reflect increasing specialisation of the working memory system as it develops: from a distributed system in younger children that requires good wiring between different brain areas, to a more localised system that is supported by high-quality local machinery. By analogy, imagine you were completing a work project. If you were collaborating with other people, quality and speed would largely be determined by how well the team communicates – this would be very difficult if you were trying to coordinate via mobile phones in an area with poor reception. If, on the other hand, one person completed the project alone, the outcome would depend on the ability of that one worker. The insights from this study will help us to better understand how working memory is constrained at different ages, which may allow us to design better interventions in the future to help children who struggle with working memory.
A preprint of this paper is available on bioRxiv: http://biorxiv.org/content/early/2016/08/15/069617
The analysis code is available on GitHub: https://github.com/joebathelt/WorkingMemory_and_BrainStructure_Code
I had the immense privilege to attend the annual meeting of the Organization for Human Brain Mapping (OHBM) in Geneva last week. OHBM is a fantastic venue to see the latest and greatest developments in the field of human neuroimaging, and this year was no exception. The programme was jam-packed with keynote lectures, symposia, poster presentations, educational courses, and many informal discussions. It is almost impossible to describe the full breadth of the meeting, but I will try to summarise the ideas and developments that were most interesting to me.
Three broad themes emerged for me: big data, methodological rigour, and new approaches to science. Big data was pervasive throughout the meeting, with a large number of posters making use of huge databases and special-interest talks focussing on the practicalities and promise of data sharing and meta-analysis. It seems that the field is reacting to the widely discussed reproducibility crisis (http://www.apa.org/monitor/2015/10/share-reproducibility.aspx) and to prominent review articles about the problems with the small sample sizes that have so far been common practice in neuroimaging (http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html). These efforts seem well suited to firmly establishing many features of brain structure and function, especially around typical brain development. In the coming years, this is likely to influence publishing standards, education, and funding priorities on a wide scale. I hope that this will not lead to the infant being ejected with the proverbial lavational liquid: there is still a need to study small samples of rare populations that give a better insight into biological mechanisms, e.g. rare genetic disorders. Further, highly specific questions about cognitive and brain mechanisms that require custom assessments will probably continue to be addressed in smaller-scale studies before being rolled out to large samples.
A related issue, which probably also arose from the replication discussions, is methodological rigour. Symposia at OHBM2016 discussed many issues that had been raised in the literature, like the effect of head motion on structural and functional imaging, the comparison of post-mortem anatomy with diffusion imaging, and procedures to move beyond statistical association. Efforts to move towards greater transparency of analysis strategies were also prominently discussed. This includes the sharing of more complete statistical maps (more info here: http://nidm.nidash.org/specs/nidm-overview.html – soon to be available in SPM and FSL), tools for easier reporting and visualisation of analysis pipelines, and access to well-described standard datasets. I can imagine a future in which analyses are published in an interactive format that allows access to the data and the possibility to tweak parameters to assess the robustness of the results.
These exciting developments also pose some challenges. The trend towards large datasets requires a new kind of analytic and theoretical approach, which leads to a clash between the traditional scientific approach and big-data science. Let me expand: the keynote lectures presented impressive work that was carried out in the traditional hypothesis-test-refine-hypothesis fashion. For instance, keynote speaker Nora Volkow, of the National Institute on Drug Abuse, presented a comprehensive account of dopamine receptors in human addiction based on a series of elegant but conceptually simple PET experiments. In contrast to the traditional approach of collecting a few measurements to test a specific hypothesis, big data covers a lot of different measurements with a very broad aim. This creates the problem of high-dimensional data that need to be reduced to reach meaningful conclusions. Machine learning approaches emerged as a relatively new addition to the human neuroscience toolkit to tackle the pervasive problems associated with this. There is great promise in these methods, but standards for reliability still need to be established, and new theoretical developments are needed to integrate these findings with current knowledge. Hopefully, there will be closer communication between method developers and the scientists applying these tools to human neuroscience as the methods mature.