Don’t rule the lab with an iron fist: tips for effective lab management

This is the latest in our semi-regular (trying to be regular) blog series with the indomitable Sue Fletcher-Watson. The practical skills of being a good scientist are rarely taught but are vital, and they are the subject of this series. This time we turn our attention to managing the lab.

You have your University position, you have research funding of some kind (whoop!), and the lab – you, your students, researchers and technicians – is starting to take shape. Managing the team effectively will make for happy and productive group members. This can turn into a virtuous cycle – lab members who enjoy what they are doing are a joy to manage, and make your life so much easier.

What follows is stuff we have learnt over the past few years. The points are broadly organised into two sections – firstly we discuss issues of organisation and management style, then we move onto the non-science elements of a successful lab.

  • Balance lab identity with flexibility

It might seem obvious, but a group needs a shared identity that their projects can hang off. This could be a series of over-arching questions, a particular sample of interest, a set of new methods you are advancing, or a core set of principles. A clear group identity helps lab members understand the overall picture, how their science is contributing, and how/where expertise can be shared. When Sue joined the Patrick Wild Centre and was given a branded PhD mug it made her feel part of the team. This kind of thing makes a real difference for new staff and students.

But be careful that your lab identity doesn’t kill off flexibility. It’s important that individual lab members are able to take intellectual ownership of their work, even if they are junior. Duncan’s lab focuses on cognitive and brain development in childhood. When a potential student approached him with an interest in socio-economic status he was initially very reluctant. He could see it could easily turn into an intractable mush – that literature can be a real mess. But he went with it, and together they put together a project they were both happy with. Turns out, the student was amazing, the project highly successful, and this line of research now forms a main arm of his current research programme. Giving lab members intellectual ownership and being flexible about projects gives your students and staff permission to innovate. In science this is the most valuable thing of all, and an essential component of how we make progress.

So have a clear identity and make sure this is explicit, and understood by all members – but don’t sacrifice flexibility.

  • Balance easy wins and long-term goals

When starting your lab you will have various potential projects. In the early days when the lab is small you need to be strategic. Investing all of your energies (and those of your lab) in a single demanding, high-risk project, which may take several years to come to fruition, is unwise. When Duncan started his lab in Cambridge a very wise mentor told him to balance easy wins and long-term goals – alongside a large project he needed simpler experiments. This advice was invaluable. That big project took Duncan’s embryonic team 18 months just to collect the data, and another 18 months or so to analyse fully. All the lab members were trying to build their CVs and could not afford an empty patch of 3 years. So along the way they published several standalone experimental papers, which made everyone feel more relaxed and meant the group could establish itself as a lab.

(In the end that big project came to fruition and generated multiple big papers, so their patience was rewarded!)

Meanwhile, during Sue’s postdoc she designed a novel intervention and then evaluated it in a small randomised controlled trial – a process that took years from start to finish. That’s a long time to wait for your big paper. To make matters worse, she took maternity leave twice during the project! It was a tough time, waiting for that research to get out into the world, though writing a few review papers and getting the remainder of her PhD studies into journals helped pass the time!

  • Don’t rule your lab with an iron fist

We have all heard about those labs where the staff are not so much managed as subjugated (maybe there will be a blog on that, if we are feeling brave enough). Lab members are micromanaged, their outgoing emails vetted and they are subjected to surprise inspections. If you are tempted to run your lab in this way, or if this is your natural management style, our advice: just don’t.  It results in very unhappy lab members, creates needless extra work for you, and is ultimately pointless. If your lab members are not enjoying working within your group then they will not be productive. Running them like a sweatshop will do nothing to improve that.

A better approach is to have a clear role-setting process when each member joins the lab. Explain your role as the PI and their role as the post-doc (or whatever). This role-setting includes outlining expectations – mandatory meetings, duties within the group, initial goals, the schedule for your individual meetings etc. If you feel this has not been respected, then of course, pull them up and gently tell them that you want to change things. But don’t micro-manage your group.

  • Have regular meetings with minutes

Regular group meetings (we both have them fortnightly) provide a space for sharing experience, best practice, technical expertise, troubleshooting and peer support. They’re also a quick and easy way to get a snapshot of where everyone is up to on their projects. Good lab meetings will save you time, and build a happy team. Make sure you reserve a timeslot and don’t let them run over. If you fall into the habit of having needlessly long lab meetings they can easily become onerous. Keep them to time. When you meet, get members of the lab to take turns writing minutes. Sue’s lab stores these in a shared folder which is also crammed with resources to help lab members work independently – model ethics documents and cover letters, useful logos and campus maps.

Sometimes subgroups within your lab may need to meet without you – let them. This could be a really valuable way of them tackling various problems without you needing to expend any time. Don’t insist on being there for every discussion or meeting.

 

So far we have focussed on elements of lab organisation, strategy and management style. For the next set of points we turn our attention to the more interpersonal elements of running a successful lab.

  • Encourage your lab to have a social life

Within Duncan’s lab there is a designated Social Secretary, with the job of orchestrating lab social events (checking availability, canvassing opinions on what people would like to do, making bookings etc.). Make sure that any social events are not enforced, and remember that alcohol is not an essential component of social occasions! It is important not to make anyone feel excluded. N.B. some social activities like life drawing classes might be an unwise choice (sorry Joe), so try to stick to simple events that allow everyone an opportunity to chat without feeling uncomfortable.

Things to keep in mind: what is your role within these social situations? Remember that you still have to be these people’s boss come Monday morning. Have fun but don’t be unprofessional. No-nos include gossiping about your colleagues or asking intrusive questions about people’s personal lives. Taking to the dancefloor to bust a groove, on the other hand, is a big YES.

  • Be a good mentor and keep an eye on wellbeing

Running a successful lab is also about keeping an eye on the wellbeing of those you manage. It’s important to let them know that you care. Both Sue and Duncan make a point of asking, even though everyone seems pretty happy most of the time. This makes lab members mindful of their own wellbeing and flags the fact that you are a person they can speak to if / when they struggle. If you’re doing an annual review, ask them if they’re happy with their work-life balance. Remind your lab group to take time off – especially PhD students who can fall for the myth that they ought to be working 7-day weeks. It can also be a good idea for lab members to have mentors outside the group. With their mentor they can discuss matters like job opportunities, or difficulties they are having with their PI (i.e. you!). It can also provide a valuable alternate perspective on their work.

There is always a slight tension about how much of your own personal life you expose lab members to. Both Sue and Duncan think it is important to show your lab members that you too are a human being, with your own stresses, frustrations and problems. In our experience, when you choose to be yourself with your lab members, and be honest about how you are doing, you give lab members permission to do likewise. It creates a safe space in which everyone feels that they can be themselves, and this is important in creating a supportive work environment. Don’t overshare, but do be yourself.

So there we have it, our tips on running an effective lab. Be flexible, be a human being, be a good scientist.

Are early interventions effective?

Early years interventions can seem a particularly powerful way to forestall developmental difficulties. The wide-ranging evidence that early cognitive and behavioural difficulties can predict lifelong outcomes[i],[ii] makes it seem obvious that if a child is struggling, intervening earlier is better. This has led to considerable interest in intervening within the first few years of a child’s life, and the temptation to seek out earlier indicators of children’s cognitive development and wellbeing.

However, there are several challenges with choosing effective early interventions. First, reliably identifying which children will require support is a substantial challenge. This is particularly evident in the case of language development. Children’s language is a rich area for research on early intervention, for several reasons: it provides a window onto learning at school entry, and weak language at school entry is a risk factor for poor educational, social and emotional outcomes in subsequent years. There is also a clear socio-economic gradient in children’s language abilities, in which children from lower socio-economic backgrounds tend to have weaker language skills than their peers before they start school.

(But is the story really so simple?)

Despite this, children’s language abilities are highly variable and volatile before they are 4-5 years old, which makes the early identification of children who may be in need of intervention non-trivial. This is demonstrated by several studies showing that whilst children’s early vocabulary around 2 years of age can predict their later vocabulary and reading skills at school entry, this predictive power is small. Only about 10-20%[iii],[iv] of the variation in children’s abilities at school is typically predicted by their early language skills (equivalent to correlations of roughly .3 to .45), which indicates that there are multiple other factors shaping children’s outcomes. This also suggests that finding a sufficiently sensitive marker of early difficulties is challenging because of how much a child’s abilities can shift and develop over time.

This relates to the second challenge: early interventions can struggle to have sustained impacts unless intervention is ongoing. A good example of this comes from a recent study by McGillion and colleagues (2017). The researchers were interested in whether a caregiver intervention to promote talking with an 11-month-old child would lead to changes in both parenting approach and children’s language development. Caregivers from a range of socio-economic backgrounds were randomly assigned to one of two conditions: watching a short video about talking with their child, or a control video about dental hygiene. One month later the caregivers in the language video condition talked more with their children than those in the control video condition. This fed into vocabulary improvements in the low socio-economic group: children in the intervention condition had larger vocabularies at 15 and 18 months compared to children in the control condition. However, at 24 months these benefits had disappeared, and there was no effect of the intervention on children’s vocabulary.

These findings demonstrate several important points about early interventions. Interventions can have a positive impact: in this case a brief, low-intensity video intervention was able to positively change caregiver behaviour, and this may have supported gains in children’s vocabulary a few months later. However, these effects were not sustained over time: they had faded by the time children were 2 years old. This shows how critical longer-term follow-ups are in intervention studies for understanding whether benefits are long-lasting. Moreover, this result reminds us that without ongoing support it is difficult for early gains in one area of cognition to offset other challenges children might face. The promise of early interventions means there can be an assumption that they will permanently shift a child’s trajectory. In reality this is often not the case, and initial gains may require ongoing support to have real long-term effects for children.

It is therefore important to strike a balance between the promise and the limitations of early interventions. Whilst endeavouring to find the right areas to target is undoubtedly valuable, the challenge of finding reliable indicators of difficulties early in a child’s life might mean that searching for stable predictors at later ages (e.g. from school entry) is a more fruitful approach[v]. In addition, long-term follow-ups to interventions may be key to understanding the extent to which they are effective. The challenge of early interventions is that development is complex and shaped by multiple factors. Working within these constraints may help us better identify and help children in need of ongoing support.

This article was contributed by our beloved lab member Erin Hawkins. Erin’s research focuses on understanding the mechanisms of developmental difficulties in children, and on interventions to address them.

References:

[i] Caspi et al. (2016). Childhood forecast of a small segment of the population with large economic burden. Nature Human Behaviour, doi:10.1038/s41562-016-0005

[ii] The Allen Report (2011). Early Intervention: The Next Steps. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/284086/early-intervention-next-steps2.pdf

[iii] Duff, Reen, Plunkett, & Nation (2015). Do infant vocabulary skills predict school-age language and literacy outcomes? Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12378.

[iv] McGillion, Pine, Herbert & Matthews. (2017). A randomised controlled trial to test the effect of promoting caregiver contingent talk on language development in infants from diverse socioeconomic status backgrounds. Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12725

[v] Norbury, C. (2015). Editorial: Early intervention in response to language delays – is there a danger of putting too many eggs in the wrong basket? Journal of Child Psychology and Psychiatry, doi:10.1111/jcpp.12446

The Dos and Don’ts of PhD supervision

Sue Fletcher-Watson (you know, the fabulously clever one…) and I have been putting our minds to a series of blog posts, attempting to help the fledgling academic get to grips with some of their new professional duties. This week it is a real classic – how to supervise a PhD student.

Being a good supervisor is not easy, and tricky relationships between students and supervisors are all too common (you may have direct experience yourself). Understanding and mutually agreeing upon the role of the student and the supervisor is a crucial starting point. Establishing these expectations is an important early step and will make navigating the PhD easier for all concerned. With a shared idea of where you are both starting from, and where you want to get to, together you can chart the path forward.  Hopefully these DOs and DON’Ts will help you get off on the right foot as a PhD supervisor.

  1. Managing your intellectual contribution

DO challenge their thinking…

A good PhD supervisor should question their student’s decision making – some part of your supervision meetings should be viva-like. Why are you doing this? How does this method answer this question? What do these data mean? Make sure your student understands what you’re doing and why – be explicit that you expect them to defend their own interpretation of the literature and research plans, NOT adhere to yours. It is important that they don’t feel that this is a test with just one right answer.

When questioning your student, strike a balance between exploring alternatives, and making forward progress. Probing assumptions is important but don’t become a pedant; you need to recognise when is the time to question the fundamentals, and when is the time to move the debate on to the next decision.

DON’T make decisions for them…

Help students determine the next decision to be made (“now that we have selected our measures we need to consider what analysis framework we will use”) and also the parameters that constrain this decision… but remember that it is not your place to make the decisions for them. Flagging the consequences of the various choices is an excellent way to bring your experience to bear, without eclipsing the student. You may wish to highlight budget constraints, discuss the likely recruitment rate, or consider the consequences of a chosen data type in terms of analysis time and required skills. Help them see how each decision feeds back. Sue recently worked with a student to move from research question, to methods, to power calculation, to ideal sample size, and then – finding that the project budget was inadequate to support the sample target – back to RQ again. It’s tough for a student to have to re-visit earlier decisions but important to consider all the relevant factors before committing to data collection.

  2. Who’s in charge?

DO give them project management responsibility

Both Sue and Duncan run fortnightly lab group meetings with all their students and this is highly recommended as a way to check in and ensure no student gets left hanging. But don’t make the mistake of becoming your student’s project manager. Whether you prefer frequent informal check-ins or more formal supervisions that are spaced apart, your student should be in charge of monitoring actual progress against plans, and recording decisions. For formal meetings, they can provide an agenda, attachments and minutes, assigning action points to you and other supervisors, and chasing these if needed. They should monitor the budget, and develop excellent version control and data management skills.

This achieves a number of different goals simultaneously. First, it gives your student a chance to learn the generic project management skills which will be essential if they continue in an academic career, and useful if they leave academia too. Second, it helps to reinforce the sense that this is their project, since power and responsibility go hand in hand. Finally, it means that your role in the project is as an intellectual contributor, shaping and challenging the research, rather than wasting your skills and time on bureaucratic tasks.

DON’T make them a lackey to serve your professional goals

Graduate students are not cheap research assistants. They are highly talented early career researchers with independent funding (in the majority of cases) that has been awarded to them on merit. They have chosen to place their graduate research project in your lab. They are not there to conduct your research or write your papers. In any case, attempting this approach is self-defeating. Students soon realise that they are being taken advantage of, especially when they chat with friends in other labs. When students become unhappy and the trust in their supervisor breaks down, the whole process can become ineffective for everyone concerned. As the supervisor you are there to help and guide their project… not the other way around.

This can be really challenging when graduate projects are embedded within a larger funded project. How do you balance the commitment you’ve made to the funder alongside the need for students to have sufficient ownership? Firstly, think carefully about whether your project really lends itself to a graduate student. Highly specified and rigid projects need great research assistants rather than graduate students. Secondly, build in sufficient scope for individual projects and analyses, for example by collecting flexible data types (e.g. parent-child play samples) which invite novel analyses, and make sure that students are aware of any constraints before they start.

 

  3. What are they here to learn?

 

DO provide opportunities to learn skills which extend beyond the project goals

A successful graduate project is not just measured in terms of the thesis or papers, but in the professional skills the student acquires and whether these help them launch a career in whichever direction they choose. This will mean allowing your students to learn skills necessary for their research, but also giving them broader opportunities: formal training courses, giving and hearing talks, visiting other labs or attending conferences. This is all to be encouraged, though be careful that it happens alongside research progress, rather than as a displacement activity. Towards the end of the PhD, as the student prepares to fly the nest, these activities can be an important way of building the connections that are necessary to be part of a scientific community and make the next step in their career.

 

DON’T expect them to achieve technical marvels without support

All too often supervisors see exciting new analyses in papers or in talks and want to bring those skills to their lab. But remember, if you cannot teach the students to do something, then who will? Duncan still tries to get his hands dirty with the code (and is humoured in this effort by his lab members), but often manages to underestimate how difficult some technical steps can be. Meanwhile, Sue is coming to terms with the fact that she will never fully understand R.

If you recommend new technical analyses, or development of innovative stimuli, then make sure you fully understand what you are asking of your student – allow sufficient time for this and provide appropriate support. When thinking of co-supervisors, don’t always go for the bigwig you are trying to cosy up to… often a more junior member of staff whose technical skills are up-to-date will be far more useful to your student. Also try to create a lab culture with good peer support. Just as ‘it takes a village to raise a child’, so ‘it takes a lab to supervise a PhD student’. Proper peer support mechanisms, shared problem solving and an open lab culture are important ingredients for giving students the support they need.

  4. Being reasonable

DO set clear expectations            

Step one of any PhD should be to create a project timeline.  Sue normally recommends a detailed 6-month plan alongside a sketched 3-year plan, both of which should be updated about quarterly. For a study which is broken down into a series of short experiments, a different model might be better. Whatever planning system you adopt, you should work with your student to set realistic deadlines for specific tasks, assuming a full time 9-5 working day and no more, and adhere to these. Model best practice by providing timely feedback at an appropriate level of detail – remember, you can give too much as well as too little feedback.

Think carefully about what sort of outputs to ask for – make them reasonable and appropriate to the task. You might want to propose a lengthy piece of writing in the first six months, to demonstrate that your student has grasped the literature and can pull it together coherently to make a case for their chosen research. But, depending on the project, it might also be a good idea to get them to contrast some differing theoretical models in a mini-presentation to your lab, create a table summarising key methodological details and findings from previous work, or publish a web page to prioritise community engagement with the project topic.

DON’T forget their mental health and work-life balance

PhD students might have recently moved from another country or city – they may be tackling cultural differences at work and socially. Major life changes often happen at the same time as a PhD – meeting a life partner, marriage, children. Your students might be living in rented and shared accommodation, with all the stresses that can bring.

Make sure your students go home at a reasonable hour. If you must work outside office hours, be clear you don’t expect the same from them. Remind them to take a holiday – especially if they have had a period of working hard (e.g. meeting a deadline, collecting data at weekends). Ensure that information about the University support services is prominently available in the lab.

Remember to take into account their personal lives when you are discussing project progress. Being honest about your own personal situation (without over-sharing) creates an environment where it is OK to talk about stuff happening outside the office. Looking after your students’ mental health doesn’t mean being soft, or turning a blind eye to missed deadlines or poor quality work – it means being proactive and taking action when you can see things aren’t going well.

What about the research??

We are planning another blog in the future about MSc and undergrad supervision which might address some other questions more closely tied to the research itself – how to design an encapsulated, realistic but also worthwhile piece of research, and how to structure this over time. But the success (or otherwise) of a PhD hangs on a lot more than just having a great idea and delivering it on time. We hope this post will help readers to reflect on their personal management style and think about how that impacts their students.

So, be humble. Be supportive. Be a good supervisor.

Data-driven subtyping of behavioural problems associated with ADHD in children

Over 20% of children will experience some problems with learning during their schooling that can have a negative impact on their academic attainment, wellbeing, and longer-term prospects. Traditionally, difficulties are grouped into diagnostic categories like attention deficit hyperactivity disorder (ADHD), dyslexia, or conduct disorder. Each diagnostic category is associated with certain behaviours that need to be present for the label to be applied. However, children’s difficulties often span multiple domains. For instance, a child could have difficulties with paying attention in the classroom (ADHD) and could also have a deficit in reading (dyslexia). This makes it difficult to decide which diagnostic label is the most appropriate and, consequently, what support to provide. Further, there can be a mixture of difficulties within a diagnostic category. For instance, children with ADHD can have difficulties mostly with inattention, mostly with hyperactivity, or with both. This heterogeneity makes research on the mechanisms behind these disorders very difficult, as each subtype may be associated with specific mechanisms. However, there is currently no agreement about the presence, number, or nature of subtypes in most developmental disorders.

In our latest study, we applied a new approach to group behavioural difficulties in a large sample of children who attended the Centre for Attention, Learning, and Memory (CALM). This sample is a mixed group of children who were referred because of any difficulty relating to attention, learning, and/or memory. The parents of the children filled in a questionnaire that is often part of the assessment that informs the advice of education specialists on the most appropriate support for a child. The questionnaire contained items like “Forgets to turn in completed work”, “Does not understand what he/she reads”, or “Does not get invited to play or go out with others”. In our analysis, we grouped children by the similarity of their ratings on this questionnaire using a data-driven algorithm. The algorithm had two aims: (a) form groups that are maximally different from one another, and (b) ensure each group contains cases that are as similar as possible. The results suggested three groups of children: (1) those with primary difficulties with attention, hyperactivity, and self-management; (2) those with primary difficulties with school learning, e.g. reading and maths deficits; and (3) those with primary difficulties with making and maintaining friendships.
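
To give a flavour of how a grouping algorithm of this kind works, here is a toy sketch using k-means clustering – a stand-in chosen purely for illustration, not the algorithm used in the study (the actual analysis code is linked at the end of this post). The ratings data, the number of groups, and all names below are invented.

```javascript
// Toy illustration: k-means grouping of (invented) questionnaire ratings.
// Not the study's pipeline – see the linked repository for the real analysis.
function distance(a, b) {
  // Euclidean distance between two questionnaire profiles.
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

function kMeans(data, k, iterations = 100) {
  // Start from the first k profiles as initial group centres.
  let centres = data.slice(0, k).map((row) => row.slice());
  let labels = new Array(data.length).fill(0);

  for (let iter = 0; iter < iterations; iter++) {
    // Aim (b): assign each child to the most similar group centre.
    labels = data.map((row) =>
      centres.reduce(
        (best, c, j) => (distance(row, c) < distance(row, centres[best]) ? j : best),
        0
      )
    );
    // Aim (a): move each centre to the mean of its members, separating the groups.
    centres = centres.map((centre, j) => {
      const members = data.filter((_, i) => labels[i] === j);
      if (members.length === 0) return centre;
      return centre.map(
        (_, d) => members.reduce((sum, row) => sum + row[d], 0) / members.length
      );
    });
  }
  return labels;
}

// Each row: one child's ratings on four made-up questionnaire items.
const ratings = [
  [3, 3, 0, 0], [3, 2, 1, 0], // attention/hyperactivity-type profile
  [0, 1, 3, 0], [1, 0, 3, 1], // learning-type profile
  [0, 0, 1, 3], [1, 0, 0, 3], // peer-relationship-type profile
];
console.log(kMeans(ratings, 3)); // → [0, 0, 2, 2, 1, 1]: three profiles recovered
```
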
Next, we investigated whether the behavioural profiles identified through the algorithm also show differences in brain anatomy. We found that white matter connections of the prefrontal and anterior cingulate cortex most strongly distinguished the groups. These areas are implicated in cognitive control, decision making, and behavioural regulation. This indicates that the data-driven grouping aligns well with biological differences that we can investigate in more detail in future studies.

The preprint of the study can be found here: http://www.biorxiv.org/content/early/2017/07/05/158949
The code used for the analysis can be found here: https://github.com/joebathelt/Conners_analysis

Catching up with the Internet Era: Online data collection for researchers

We humans spend a great deal of time connected to the internet – especially younger people, who are growing up surrounded by this technology. You can see this huge change over time in the graph from Our World in Data below!

(Graph source: http://data.worldbank.org/)

Increasingly, researchers and companies are leveraging this remote access to behaviour to answer questions about how humans behave. Companies have been collecting ‘user data’ from online platforms for years, using this inferred information about people to improve user experience and, in some cases, to sell more products to the right people. The amount of data we are able to collect on behaviour is expanding exponentially, and at the same time so are the quality and modality of these data – as people connect different devices (like activity monitors, clocks, fridges). Wearable sensors are becoming particularly common, and the data they produce are often stored using internet-based services.

 

(Infographic: ‘The Predicted Wearables Boom Is All About The Wrist’, taken from Statista.)

Psychology and cognitive science are starting to catch up with this trend, as online testing offers the ability to carry out controlled experiments on a much larger scale. This offers the opportunity to characterise subtle differences that would be lost in the noise of small samples tested in a lab environment.

However, for many the task of running an online experiment is daunting; there are so many choices, and dealing with building, hosting and data processing can be tricky!

Web Browsers

A good starting point, and often the most straightforward, is building experiments to work in a web browser. The primary advantage of this is that you can run experiments on the vast majority of computers, and even mobile devices, with no installation overhead. There are some limitations though:

Compatibility:

(Internet Explorer – sigh.)

With multiple different web browsers, operating systems, and devices, the possible combinations number in the thousands. This can lead to unexpected bugs and errors in your experiment. A workaround is to restrict access to a few devices (see below for tips on how to do this in JavaScript) – but this is traded off against how many participants you can reach.
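
As a minimal sketch of what such a restriction might look like, the snippet below checks the browser’s user-agent string before letting a participant start. User-agent sniffing is imperfect, and which browsers you allow is up to you – treat this as a coarse filter, not a guarantee of a particular environment.

```javascript
// Hedged example: only admit participants on desktop Chrome or Firefox.
function isSupportedBrowser() {
  const ua = navigator.userAgent;
  const isMobile = /Mobi|Android|iPhone|iPad/i.test(ua);
  const isChrome = /Chrome\//.test(ua) && !/Edge\/|OPR\//.test(ua);
  const isFirefox = /Firefox\//.test(ua);
  return !isMobile && (isChrome || isFirefox);
}

if (!isSupportedBrowser()) {
  // Turn unsupported set-ups away politely, before they invest time in the task.
  document.body.innerHTML =
    '<p>Sorry, this study requires Chrome or Firefox on a desktop computer.</p>';
}
```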

Accuracy:

Web browsers were not designed for running reaction time experiments or for presenting stimuli with millisecond precision. Despite this, some research has shown equivalent precision for reaction times and stimulus presentation.

If you are still concerned, you can utilise WebGL, a web graphics engine, which allows you to achieve presentation times analogous to native programs, and even make use of a computer’s graphics card – although this will still be limited by the operating system and hardware of the user!
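
Whether or not you go the WebGL route, presentation in the browser is ultimately tied to the display’s refresh cycle. Here is a minimal sketch of frame-locked stimulus timing using requestAnimationFrame (the element ID and duration are invented for the example):

```javascript
// Show a stimulus for ~500 ms, locked to the display refresh via
// requestAnimationFrame. Timestamps from the callback are sub-millisecond,
// but true onset is still bounded by the monitor's refresh rate.
const stimulus = document.getElementById('stimulus'); // assumed to exist
const DURATION_MS = 500;

function showStimulus() {
  requestAnimationFrame((onsetTime) => {
    stimulus.style.visibility = 'visible';

    function checkOffset(now) {
      if (now - onsetTime >= DURATION_MS) {
        stimulus.style.visibility = 'hidden';
        console.log(`Stimulus shown for ~${(now - onsetTime).toFixed(1)} ms`);
      } else {
        requestAnimationFrame(checkOffset); // keep checking, frame by frame
      }
    }
    requestAnimationFrame(checkOffset);
  });
}

showStimulus();
```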

There are a number of tools that can help you with browser experiments. These range from fee-paying services like Gorilla, which deals with task building, hosting and data management for you, to fully open source projects like jsPsych and PsychoPy’s PsychoJS – which deal with building experiments and handling data, but not hosting (although there are plans to develop a hosting and data storage solution). Several of these offer a graphical user interface, which allows experiments to be built without any prior knowledge of programming!

Unity

One intermediate tool – which we are currently using – is the cross-platform development environment Unity. Whilst originally intended for creating video games, Unity can be repurposed for creating experimental apps. The big advantage is the ability to build for a vast variety of operating systems and platforms with minimal effort: a Unity project can be built for a web browser, iOS app, Android app, Windows, OSX, Linux… and so on. You can also gain access to sensor information on devices (heart rate monitors, step counting, microphone, camera), to start to tap the richness of information contained in these devices.

The utility of this tool for experimental research is huge, and it appears to be under-utilised – it has an easy-to-learn interface and requires minimal programming knowledge.

Conclusion

Whilst this post is largely non-instructional, hopefully it has shed some light on the potential tools you can use to start running research online (without employing an expensive web or app developer), or at least piqued your interest a tiny bit.

If you would like to dive in to the murky (but exciting) world of web development, you can also check out a few tips for improving the quality of your online data here.

 

 

This exciting post was written by Alexander Irvine, one of the newest members of our lab. Alex previously worked on developing web-based studies at Oxford before joining the lab, and is experienced in an array of programming languages and tools. Check out his personal website if you want to read more in-depth about online data collection.

The weather and the brain – using new methods to understand developmental disorders

 

The latest article was written by our brilliant lab member Danyal Akarca. It describes some of his MPhil research, which explores transient brain networks in individuals with a particular type of genetic mutation. Dan finished his degree in Pre-Clinical Medicine before joining our lab, and is fascinated by the intersection of genetic disorders and the dynamics of brain networks.

The brain is a complex dynamic system. It can be very difficult to understand how specific differences within that system can be associated with the cognitive and behavioural difficulties that some children experience. This is because even if we group children together on the basis that they all have a particular developmental disorder, that group of children will likely have a heterogeneous aetiology. That is, even though they all fall into the same category, there may be a wide variety of different underlying brain causes. This makes these disorders notoriously difficult to study.

Developmental disorders that have a known genetic cause can be very useful for understanding these brain-cognition relationships, because by definition they all have the same causal mechanism (i.e. the same gene is responsible for the difficulties that each child experiences). We have been studying a language disorder caused by a mutation to a gene called ZDHHC9. Children with this mutation have broad cognitive difficulties and more specific difficulties with speech production, alongside a form of childhood epilepsy called rolandic epilepsy.

In our lab, we have explored how brain structure is organised differently in individuals with this mutation, relative to typically developing controls. Since then our attention has turned to applying new analysis methods to explore differences in dynamic brain function. We have done this by directly recording magnetic fields generated by the activity of neurons, through a device known as a magnetoencephalography (MEG) scanner. The scanner uses magnetic fields generated by the brain to infer electrical activity.

The typical way that MEG data are interpreted is by comparing how electrical activity within the brain changes in response to a stimulus. These changes can take many forms, including how well synchronised different brain areas are, or how the size of the magnetic response differs across individuals. However, in our current work, we are trying to explore how the brain configures itself into different networks, in a dynamic fashion. This is especially interesting to us because we think that the ZDHHC9 gene has an impact on the excitability of neurons in particular parts of the brain, specifically in those areas that are associated with language. These changes in network dynamics might be linked to the kinds of cognitive difficulties that these individuals have.

We used an analysis method called “Group Level Exploratory Analysis of Networks” – or GLEAN for short – which was recently developed at the Oxford Centre for Human Brain Activity. The concept behind GLEAN is that the brain changes between different patterns of activation in a fashion that is probabilistic. This is much like the weather – just as the weather can change from day to day in some probabilistic way, so too may the brain change in its activation.

This analysis method not only allows us to observe which regions of the brain are active when the participants are in the MEG scanner. It also allows us to see the probabilistic way in which activation patterns change between each other. For example, just as it is more likely to transition from rain one day to cloudiness the next day than from rain to blistering sun, we find that brain activation patterns can be described in a very similar way over sub-second timescales. We can characterise those dynamic transitions in lots of different ways, such as how long you stay in a specific brain state, or how long it takes to return to a state once you’ve transitioned away. (A more theoretical account of this can be found in another recent blog post in our Methods section – “The resting brain… that never rests”.) We have found that a number of networks differ between individuals with the mutation and our control subjects.
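
To make the weather analogy concrete, here is a small illustrative sketch – not GLEAN itself – of how one could estimate transition probabilities and mean dwell times from a sequence of inferred states. The toy state sequence is invented.

```javascript
// Illustrative only, not GLEAN. Given a sequence of inferred states over
// time, estimate P(next state | current state) and the mean dwell time
// (consecutive samples spent in a state per visit).
function transitionMatrix(states, nStates) {
  const counts = Array.from({ length: nStates }, () => new Array(nStates).fill(0));
  for (let t = 1; t < states.length; t++) counts[states[t - 1]][states[t]]++;
  // Normalise each row into a probability distribution over next states.
  return counts.map((row) => {
    const total = row.reduce((a, b) => a + b, 0) || 1;
    return row.map((c) => c / total);
  });
}

function meanDwellTimes(states, nStates) {
  const visits = new Array(nStates).fill(0);
  const samples = new Array(nStates).fill(0);
  for (let t = 0; t < states.length; t++) {
    samples[states[t]]++;
    if (t === 0 || states[t] !== states[t - 1]) visits[states[t]]++; // a new visit begins
  }
  return samples.map((s, k) => (visits[k] ? s / visits[k] : 0));
}

// Toy sequence: 0 = "rain", 1 = "cloud", 2 = "sun".
const sequence = [0, 0, 1, 1, 1, 2, 1, 0, 0, 1];
console.log(transitionMatrix(sequence, 3)); // e.g. rain → cloud with p = 0.5
console.log(meanDwellTimes(sequence, 3));   // in samples; multiply by the sampling period
```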

(These are two brain networks that show the most differences in activation – largely in the parietal and frontotemporal regions of the brain.)

Interestingly, these networks strongly overlap with areas of the brain that are known to express the gene (we found this out using data from the Allen Atlas). This is the first time that we know of that researchers have been able to link a particular gene, to differences in dynamic electrical brain networks, to a particular pattern of cognitive difficulties. And we are really excited!

 

Reviewer 2 is not your nemesis – how to revise and resubmit

This blog piece is written with Sue Fletcher-Watson, a colleague of supreme wisdom and tact, ideally qualified for this particular post. It is a follow-up to our previous joint post about peer review. We now turn our attention to the response to reviewers.

As with the role of reviewer, junior scientists submitting their work as authors are given little (if any) guidance on how to interact with their reviewers. Interactions with reviewers are an incredibly valuable opportunity to improve your manuscript and find the best way of presenting your science. However, all too often responding to reviewers is seen as an onerous chore, which partly reflects the attitude we take into the process. These exchanges largely happen in private and even though they play a critical role in academia, we rarely talk about them in public. We think this needs to change – here are some pointers for how to interact with your reviewers.

  • Engage with the spirit of the review

Your reviewers will be representative of a portion of your intended readership. Sometimes when reading reviewers’ comments we can find ourselves asking “have they even read the paper?!”. But if the reviewer has misunderstood some critical aspect of the paper then it is entirely possible that a proportion of the broader readership will misunderstand it too. An apparently misguided review, whilst admittedly frustrating, should be taken as a warning sign. Give yourself a day or two to settle your temper, and then recognise that this is your opportunity to make your argument clearer and more convincing.

Similarly, resist the temptation to undertake the minimal possible revisions in order to get your paper past the reviewers. If a reviewer makes a good point and you can think of ways of using your data to address it, then go for it, even if this goes beyond what they specified. Remember – this is your last chance to make this manuscript as good as it can be.

  • Be grateful and respectful. But don’t be afraid to disagree with your reviewers.

Writing a good review takes time. Thank the reviewers for their efforts. Be polite and respectful, even if you think a review is not particularly constructive. But don’t be afraid to disagree with reviewers. Sometimes reviewers ask you to do things that you don’t think are valid or wise, and it’s important to defend your work. No one wants a dog’s dinner of a paper… a sort of patchwork of awkwardly combined paragraphs designed to appease various reviewer comments. As the author you need to retain ownership of the work. This will mean that sometimes you need to explain why a recommendation has not been actioned. You can acknowledge the importance of a reviewer’s point, without including it in your manuscript.

We have both experienced reviewers who have requested changes we don’t feel are legitimate. Examples include the reviewer who requested a correlational analysis on a sub-group with a sample size of n=17. Or the reviewer who asked Sue to describe how her results, from a study with participants aged 18 and over, might relate to early signs of autism in infancy (answer: they have no bearing whatsoever and I’m not prepared to speculate in print). Or the reviewer who asked for the inclusion of variables in a regression analysis which did not correlate with the outcome (despite that being a clearly-stated criterion for inclusion in the analysis), on the basis of a personal hunch. In these cases, politely but firmly refusing to make a change may be the right thing to do, though you can nearly always provide some form of concession. For example, in the last case, you might include an extra justification, with a supporting citation, for your chosen regression method.

  • Give your response a clear and transparent structure

With any luck, your revised manuscript will go out to the same people who reviewed it the first time.  If you do a particularly good job of addressing their comments – and if the original comments themselves were largely minor – your editor may even decide your manuscript doesn’t need peer review a second time. In any case, to maximise the chances of a good result it is essential that you present your response clearly, concisely and fluently.

Start by copying and pasting the reviewer comments into a document.  Organise them into numbered lists, one for each reviewer.  This might mean breaking down multi-part comments into separate items, and you may also wish to paraphrase to make your response a bit more succinct.  However, beware of changing the reviewer’s intended meaning!

Then provide your responses under each numbered point, addressed to the editor (“The reviewer makes an excellent point and…”). In each case, try to: acknowledge the validity of what the reviewer is saying; briefly mention how you have addressed the point; give a page reference. This ‘response to reviewers’ document should be accompanied by an updated manuscript in which any significant areas of new or heavily edited text are highlighted in some way (e.g. in coloured or bold text). Don’t submit a revised manuscript with tracked changes – these are too detailed and messy for a reviewer to have to navigate – and don’t feel the need to highlight every changed word.

If it’s an especially complicated or lengthy response, then it is sometimes a good idea to include a (very) pithy summary up top for the Editor, before you get to the reviewer-specific response. A handful of bullet points can help orient the Editor to the major changes that they can expect to find in the new version of your manuscript.

  • The response letter can be a great place to include additional analyses that didn’t make it into the paper

Often when exploring the impact of various design choices, or testing how assumptions affect your analysis, additional comparisons can be very useful. We both often include additional analyses in our ‘response to reviewers’ letters. This aids transparency and can also be a useful way of showing reviewers that your findings are solid. Sometimes these will be analyses that have been explicitly asked for, but on other occasions you may well want to do this on your own initiative. As reviewers we are both greatly impressed when authors use their own data to address a point, even if we didn’t explicitly ask them to do this.

One word of warning here, however. Remember that you don’t want to put an important piece of information or line of reasoning only in your response letter, if it ought also to be in the final manuscript. If you’ve completed an extra analysis as part of your consideration of a reviewer point, consider whether this might also have relevance to your readership when the paper is published.  It might be important to leave it out – you don’t want to include ‘red herring’ analyses or look like you are scraping the statistical barrel by testing ‘til the cows come home. But on the other hand, if the analysis directly answers a question which is likely to be in your reader’s mind, consider including it.  This could be as a supplement, linked online data set, or a simple statement: e.g. “we repeated all analyses excluding n=2 participants with epilepsy and results were the same”.

  • Sometimes you may need the nuclear option

We have both had experiences where we have been forced to make direct contact with the Action Editor. A caveat to all the points above is that there are occasions where reviewers attempt to block the publication of a manuscript unreasonably. Duncan had an experience of a reviewer who simply cut and pasted their original review, reusing it across multiple subsequent rounds of revision. Duncan thought that his team had done a good job of addressing the reviewer’s concerns, where possible, but without any specific guidance from the reviewer they were at a loss to identify what they should do next. Having already satisfied two other reviewers, he decided to contact the Action Editor and explain the situation. They accepted the paper. Sue has blogged before about a paper reporting on a small RCT which was rejected for the simple reason that it reported a null result. She approached the Editor with her concern and it was agreed that the paper should be re-submitted as a new manuscript and sent out again for a fresh set of reviews. This shouldn’t be necessary, but sadly sometimes it is.

Editors will not be happy with authors trying to circumvent the proper review process, but in our experience they are sympathetic to authors when papers are blocked by unreasonable reviewers. After all, we have all been there. If this is the situation you find yourself in, be as diplomatic as possible and outline your concerns to the Editor.

In conclusion, much of what we want to say can probably be summed up with the following: This is not a tick-box exercise, but the last opportunity to improve your paper before it reaches your audience. Engage with your reviewers, be open-minded, and don’t be afraid to rethink.

Really, when it comes to responding to reviewers, the clue is in the name.  It’s a response, not a reaction – so be thoughtful, be engaged and be a good scientist.

Think you’re your own harshest critic? Try peer review…

Our latest blog post is written by me with the wonderful Sue Fletcher-Watson, a colleague whose intellectual excellence is only exceeded by her wit and charm.

Peer review is a linchpin of the scientific process and bookends every scientific project. But despite the crucial importance of the peer review process in determining what research gets funded and published, in our experience PhD students and early career researchers are rarely if ever offered training on how to conduct a good review. Academics frequently find themselves complaining about the unreasonable content and tone of the anonymous reviews they receive, which we attribute partly to a lack of explicit guidance on the review process. In this post we offer some pointers on writing a good review of a journal article. We hope to help fledgling academics hone their craft, and also provide some insight into the mechanics of peer review for non-academic readers.

What’s my review for?

Before we launch into our list of things to avoid in the review process, let’s just agree what a review of an academic journal article is meant to do. You have one simple decision to make: does this paper add something to the sum of scientific knowledge? Remember of course that reporting a significant effect ≠ adding new knowledge, and similarly, a non-significant result can be highly informative. Don’t get sucked into too much detail – you are not a copy editor, proof-reader, or English-language critic. Beyond that, you will also want to consider whether the manuscript, in its current form, does the best job of presenting that new piece of knowledge. There are a few specific ways (not) to go about this, so it looks like it might be time for a list…

  1. Remember, this is not YOUR paper

Reviewer 2 walks into a bar and declares that this isn’t the joke they would have written.

First rule of writing a good peer review: remember that this is not your paper. Neither is this a hypothetical study that you wished the authors had conducted. Realising this will have a massive impact on your view of another’s manuscript. The job is not to sculpt it into a paper you could have written. Your job as a reviewer is two-fold: i) make a decision as to the value of this piece of work for your field; and ii) help the authors to present the clearest possible account of their science.

Misunderstanding the role of the reviewer is perhaps at the heart of many peer review horror stories. Duncan does a lot of studies on cognitive training. Primarily he’s interested in the neural mechanisms of change, and tries to be very clear about that. But reviewers almost always ask “where are your far transfer measures?” because they want to assess the potential therapeutic benefit of the training. This is incredibly infuriating. The studies are not designed or powered for looking at this, but instead at something else of equal but different value.

Remember – you can’t ask them to report an imaginary study you wished they had conducted.

  2. Changing the framing, but not the predictions

In this current climate of concern over p-hacking and other nefarious, non-scientific procedures, a question we have to ask ourselves as reviewers is: are there some things I can’t ask them to change? We think the answer is yes – but it may be less than you think. For starters, you can ask authors to re-consider the framing of the study to make it more accurate. Let’s imagine they set out to investigate classroom practice, but used interviews not observations, and so ended up recording teacher attitudes instead. Their framing can end up a bit out of kilter with the methods and findings. As a reviewer, with a bit of distance from the work, you can be very helpful in highlighting this.

If you think there are findings which could be elucidated – for example by including a new covariate, or by running a test again with a specific sub-group excluded – you should feel free to ask.  At the same time, you need to respect that the authors might respond by saying that they think these analyses are not justified.  We all should avoid data-mining for significant results and reviewers should be aware of this risk.

What almost certainly shouldn’t be changed are any predictions being made at the outset. If these represent the authors’ honest, well-founded expectations then they need to be left alone.

However, there may be an exception to this rule… Imagine a paper (and we have both seen these) where the literature reviewed is relatively balanced, or sparse, such that it is impossible to make a concrete prediction about the expected pattern in the new data. And yet these authors have magically extracted hypotheses about the size and direction of effects which match up with their results. In this case, it may be legitimate to ask authors to re-inspect their lit review so that it provides a robust case to support their predictions. Another option is to say that, given the equivocal nature of the field, the study would be better set-up with exploratory research questions. This is a delicate business, and if in doubt, it might be a good place to write something specific to the editor explaining your quandary (more on this in number 5).

  3. Ensuring all the information is there for future readers

In the end the quality of a paper is not determined by the editor or the reviewers… but by those who read and cite it. As a reviewer imagine that you are a naïve reader and ask whether you have all the information you need to make an informed judgement. If you don’t, then request changes. This information could take many forms. In the Method Section, ask yourself whether someone could reasonably replicate the study on the basis of the information provided. In the Results ask whether there are potential confounds or complicating factors that readers are not told about. These kinds of changes are vital.

We also think it is totally legitimate to request that authors include possible alternative interpretations. The whole framing of a paper can sometimes reflect just one of multiple possible interpretations, which could somewhat mislead readers. As a reviewer be wise to this and cut through the spin. The bottom-line: readers should be presented with all information necessary for making up their own minds.

  4. Digging and showing off

There is nothing wrong with a short review. Sometimes papers are good. As an editor, Duncan sometimes feels like reviewers are really clutching at straws, desperate to identify things to comment on. Remember that as a reviewer you are not trying to impress either the authors or the editor. Don’t dig for dirt in order to pad the review or show how brainy you are.

Another pet hate is when reviewers introduce new criticisms in subsequent rounds of review. Certainly if the authors have introduced new analyses or data since the original submission, then yes, this deserves a fresh critique. But please please please don’t wait until they have addressed your initial concerns… and then introduce a fresh set on the same material. When reviewers start doing this it smacks of a desperate attempt to block a paper, thinly veiled by apparently legitimate concerns. Editors shouldn’t stand for that kind of nonsense, so don’t make them pull you up on it.

  5. Honesty about your expertise

You don’t know it all, and there is no point pretending that you do. You have been asked to review a paper because you have relevant expertise, but it isn’t necessarily the case that you are an expert in all aspects of the paper. Make that clear to the authors or the editor (the confidential editor comments box is quite useful for this).

It is increasingly the case that our science is interdisciplinary – we have found this is especially the case where we are developing new neuroimaging methods and applying them to novel populations (e.g. typically and atypically developing children). The papers are usually reviewed by either methods specialists or developmental psychologists, and the reviews can be radically different. This likely reflects the different expertise of the reviewers, and it helps both authors and editor where this is made explicit.

Is it ok to ask authors to cite your work? Controversial. Duncan never has, but Sue (shameless self-publicist) has done. We both agree that it is important to point out areas of the literature that are relevant but have not been covered by the paper – and this might include your own work. After all, there’s a reason why you’ve been selected as a relevant reviewer for this paper.

Now we know what not to do, what should you put in a review?

Start your review with one or two sentences summarising the main purpose of the paper: “This manuscript reports on a study with [population X] using [method Y] to address whether [this thing] affects [this other thing].” It is also good to highlight one or two key strengths of the paper – interesting topic, clear writing style, novel method, robust analysis, etc. The text of your review will be sent, in full and unedited, to the authors. Always remember that someone has slaved over the work being reported, and over writing the article itself – recognise these efforts.

Then follow with your verdict, in a nutshell. You don’t need to say anything specific about whether the paper should or should not be published (some journals actively don’t want you to be explicit about this), but you should try to draw out the main themes of your comments to help orient the authors to the specific items which follow.

The next section of your review should be split into two lists – major and minor comments. Major comments are often cross-cutting, e.g. if you don’t think the conclusions are legitimate based on the results presented. Also include in the major comments anything requiring substantial work on the part of the authors, like a return to the original data. You might also want to highlight pervasive issues with the writing here – such as poor grammar – but don’t get sucked into noting each individual example.

Minor comments should require less effort on the part of the authors, such as some re-phrasing of key sentences, or addition of extra detail (e.g. “please report confidence intervals as well as p-values”). In each case it is helpful to attach your comments to a specific page and paragraph, and sometimes a numbered line reference too.
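
Pulling these pieces together, a skeleton review might look something like this (the headings are only a suggestion, not a journal requirement):

Summary: one or two sentences on what the paper does, plus a key strength or two.
Verdict in a nutshell: the main themes of your comments.
Major comments: numbered; cross-cutting issues and anything requiring substantial work.
Minor comments: numbered; small fixes such as re-phrasing or extra detail, each tied to a page and paragraph.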

At the bottom of the review, you might like to add your signature. Increasing numbers of reviewers are doing this as part of a movement towards more open science practices. But don’t feel obliged – especially if you are relatively junior in your field, it may be difficult to write an honest review without the safety of anonymity.

Ready to review?

So, hopefully any early career researchers reading this will feel a bit more confident about reviewing now. Our key advice is to ensure that your comments are constructive, and framed sensitively. Remember that you and the original authors are on the same side – trying to get important science out into the public domain where it can have a positive influence on research and practice. Think about what the field needs, and what readers can learn from this paper.

Be kind. Be reasonable. Be a good scientist.

Brain Training: Placebo effects, publication bias, small sample sizes… and what do we do next?

Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.

The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence, and the findings in this area have been notoriously inconsistent.

(Commercial brain training programmes are available to both academics and the general public)

I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though those two tasks are so similar – the training can be very content-specific (Harrison et al. (2013); see also Ericsson et al. (1980)). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but even at very different tasks. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed. Here are two of the most important:

  1. Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so participants can easily guess that they are the controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
  2. Sample size: The smaller the study (i.e. the fewer the participants) the more likely it is to show wider transfer effects. If a study includes lots of participants then it is far more likely to accurately estimate the true size of the transfer effect, which is very small.

When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, Cohen’s d expresses a group difference in standard deviation units, and 0.25 is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from publication bias – it is difficult to publish null effects, and easier to publish positive results. This means there are probably quite a few studies showing no training effect sitting unpublished in researchers’ drawers. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
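
To see how small samples and publication bias conspire to inflate effect sizes, here is a minimal simulation sketch in Python. The numbers are illustrative assumptions (a true effect of d = 0.1 and 20 participants per group), not figures taken from any of the studies above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d = 0.1       # assumed true (tiny) transfer effect
n_per_group = 20   # illustrative small training study
n_studies = 5000   # many labs run the same kind of study

published_ds = []
for _ in range(n_studies):
    trained = rng.normal(true_d, 1.0, n_per_group)  # gains in trained group
    control = rng.normal(0.0, 1.0, n_per_group)     # gains in control group
    t_stat, p_value = stats.ttest_ind(trained, control)
    # Cohen's d from the sample: mean difference / pooled SD
    pooled_sd = np.sqrt((trained.var(ddof=1) + control.var(ddof=1)) / 2)
    d = (trained.mean() - control.mean()) / pooled_sd
    if p_value < 0.05 and d > 0:  # only 'significant' positive results get published
        published_ds.append(d)

print(f"True effect: d = {true_d}")
print(f"Mean published effect: d = {np.mean(published_ds):.2f}")
```

Because the groups are so small, only the studies that happen to draw a flattering sample cross the significance threshold, so the average published effect comes out several times larger than the true one.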

So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly incredible. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double-blind. That is, the person rating the symptoms knows whether or not the individual (usually a child) received the training. This will produce a substantial placebo effect, which likely explains these supposedly enhanced benefits.

Where do we go from here? As a field we need to ensure that future studies have active control groups and double-blinding, and that they include enough participants to detect the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.

Author information:

Dr. Duncan Astle, MRC Cognition and Brain Sciences Unit, Cambridge.

https://www.mrc-cbu.cam.ac.uk/people/duncan.astle/

References:

Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.

Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.

Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.

Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.

Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological Science 24:2409-2419.

Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.

Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.

Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child Neuropsychology 20:752-765.

Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American Journal of Psychiatry 170:275-289.

Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain injury 21:21-29.

The connectome goes to school

Children learn an incredible amount whilst at school. Many fundamental skills that typical adults perform effortlessly, like reading and maths, have to be acquired during childhood. Childhood and adolescence are also a period of important brain development. In particular, the structural connections of the brain show a prolonged maturation that extends throughout childhood and adolescence into the third decade of life. We are beginning to explore how changes in brain structure over this time support the acquisition of these skills, but also how brain changes may give rise to difficulty developing these skills for some children.

Most research to date has focussed on comparisons of individuals with specific deficits, like very low reading performance despite typical performance in other areas. The logic behind this approach is that anatomical structures specifically associated with this skill can be isolated. However, learning disorders are rarely that specific. Most children struggling in one aspect of learning also have difficulties in other areas. Furthermore, recent advances in neuroscience suggest that the brain is not a collection of modules each performing a particular task; instead, it functions as an integrated network.

In our recent study, we wanted to investigate how the white matter brain network may be related to maths and reading performance. The study revealed that children’s reading and maths scores were closely associated with the efficiency of their white matter network – roughly, how easily information can travel between any two brain regions. The results further suggested that highly connected regions of the brain were particularly important. These findings indicate that the overall organisation of the network may be more important for reading and maths than differences in very specific areas. This potentially provides a clue to understanding why problems in maths and reading often co-occur.

You can read a pre-print of the article here: https://osf.io/preprints/psyarxiv/jk6yb

The code for the analysis is also available: https://github.com/joebathelt/Learning_Connectome
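
For readers unfamiliar with graph measures, here is a minimal sketch of how global efficiency and highly connected ‘hub’ nodes can be computed with the networkx library. The random graph is a stand-in for illustration, not our connectome data, and degree is just one simple way of defining a hub – the analysis in the repository above is more refined:

```python
import networkx as nx

# Stand-in network: in a connectome analysis each node would be a brain
# region and each edge a white matter connection (this random graph is
# purely illustrative).
G = nx.erdos_renyi_graph(n=90, p=0.1, seed=1)

# Global efficiency: the average inverse shortest-path length across all
# node pairs - higher values mean information can travel between any two
# regions in fewer steps.
efficiency = nx.global_efficiency(G)

# 'Hubs': here, simply the nodes whose degree falls in (roughly) the top
# 10% of the network.
degrees = dict(G.degree())
cutoff = sorted(degrees.values())[int(0.9 * len(degrees))]
hubs = [node for node, degree in degrees.items() if degree >= cutoff]

print(f"Global efficiency: {efficiency:.3f}")
print(f"Hub nodes: {sorted(hubs)}")
```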