The Dos and Don’ts of PhD supervision

Sue Fletcher-Watson (you know, the fabulously clever one…) and I have been putting our minds to a series of blog posts, attempting to help the fledgling academic get to grips with some of their new professional duties. This week it is a real classic – how to supervise a PhD student.


Being a good supervisor is not easy, and tricky relationships between students and supervisors are all too common (you may have direct experience yourself). Understanding and mutually agreeing upon the role of the student and the supervisor is a crucial starting point. Establishing these expectations is an important early step and will make navigating the PhD easier for all concerned. With a shared idea of where you are both starting from, and where you want to get to, together you can chart the path forward.  Hopefully these DOs and DON’Ts will help you get off on the right foot as a PhD supervisor.

  1. Managing your intellectual contribution

DO challenge their thinking…

A good PhD supervisor should question their student’s decision making – some part of your supervision meetings should be viva-like. Why are you doing this? How does this method answer this question? What do these data mean? Make sure your student understands what you’re doing and why – be explicit that you expect them to defend their interpretation of the literature / research plans NOT adhere to yours. It is important that they don’t feel that this is a test, with just one right answer.

When questioning your student, strike a balance between exploring alternatives, and making forward progress. Probing assumptions is important but don’t become a pedant; you need to recognise when is the time to question the fundamentals, and when is the time to move the debate on to the next decision.

DON’T make decisions for them…

Help students determine the next decision to be made (“now that we have selected our measures we need to consider what analysis framework we will use”) and also the parameters that constrain this decision… but remember that it is not your place to make the decisions for them. Flagging the consequences of the various choices is an excellent way to bring your experience to bear, without eclipsing the student. You may wish to highlight budget constraints, discuss the likely recruitment rate, or consider the consequences of a chosen data type in terms of analysis time and required skills. Help them see how each decision feeds back. Sue recently worked with a student to move from research question, to methods, to power calculation, to ideal sample size, and then – finding that the project budget was inadequate to support the sample target – back to RQ again. It’s tough for a student to have to re-visit earlier decisions but important to consider all the relevant factors before committing to data collection.
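Sue's budget example turns on exactly this kind of calculation. As a toy illustration (not from the original post – the effect size, alpha and power values below are hypothetical, and this is the textbook normal approximation rather than anything project-specific), a minimal sample-size sketch needs only the Python standard library:

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-sample comparison of means,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A medium effect (Cohen's d = 0.5) at 80% power needs roughly 63 per group,
# so ~126 participants in total - often more than a student budget allows.
print(n_per_group(0.5))  # → 63
```

Running the numbers like this early makes the feedback loop Sue describes – from research question to sample size and back again – concrete before any data collection is committed to.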

  2. Who’s in charge?

DO give them project management responsibility

Both Sue and Duncan run fortnightly lab group meetings with all their students, and this is highly recommended as a way to check in and ensure no student gets left hanging. But don’t make the mistake of becoming your student’s project manager. Whether you prefer frequent informal check-ins or more formal supervisions that are spaced apart, your student should be in charge of monitoring actual progress against plans, and recording decisions. For formal meetings, they can provide an agenda, attachments and minutes, assign action points to you and other supervisors, and chase these if needed. They should monitor the budget, and develop excellent version control and data management skills.

This achieves a number of different goals simultaneously. First, it gives your student a chance to learn the generic project management skills which will be essential if they continue in an academic career, and useful if they leave academia too. Second, it helps to reinforce the sense that this is their project, since power and responsibility go hand in hand. Finally, it means that your role in the project is as an intellectual contributor, shaping and challenging the research, rather than wasting your skills and time on bureaucratic tasks.

DON’T make them a lackey to serve your professional goals

Graduate students are not cheap research assistants. They are highly talented early career researchers with independent funding (in the majority of cases) that has been awarded to them on merit. They have chosen to place their graduate research project in your lab. They are not there to conduct your research or write your papers. In any case, attempting this approach is self-defeating. Students soon realise that they are being taken advantage of, especially when they chat with friends in other labs. When students become unhappy and the trust in their supervisor breaks down, the whole process can become ineffective for everyone concerned. As the supervisor you are there to help and guide their project… not the other way around.

This can be really challenging when graduate projects are embedded within a larger funded project. How do you balance the commitment you’ve made to the funder alongside the need for students to have sufficient ownership? Firstly, think carefully about whether your project really lends itself to a graduate student. Highly specified and rigid projects need great research assistants rather than graduate students. Secondly, build in sufficient scope for individual projects and analyses, for example by collecting flexible data types (e.g. parent-child play samples) which invite novel analyses, and make sure that students are aware of any constraints before they start.

 

  3. What are they here to learn?

 

DO provide opportunities to learn skills which extend beyond the project goals

A successful graduate project is not just measured in terms of the thesis or papers, but in the professional skills the student acquires and whether these help them launch a career in whichever direction they choose. This will mean allowing your students to learn skills necessary for their research, but also giving them broader opportunities: formal training courses, giving and hearing talks, visiting other labs or attending conferences. This is all to be encouraged, though be careful that it happens alongside research progress, rather than as a displacement activity. Towards the end of the PhD, as the student prepares to fly the nest, these activities can be an important way of building the connections that are necessary to be part of a scientific community and make the next step in their career.

 

DON’T expect them to achieve technical marvels without support

All too often supervisors see exciting new analyses in papers or in talks and want to bring those skills to their lab. But remember: if you cannot teach your students to do something, then who will? Duncan still tries to get his hands dirty with the code (and is humoured in this effort by his lab members), but often underestimates how difficult some technical steps can be. Meanwhile, Sue is coming to terms with the fact that she will never fully understand R.

If you recommend new technical analyses, or development of innovative stimuli, then make sure you fully understand what you are asking of your student – allow sufficient time for this and provide appropriate support. When thinking of co-supervisors, don’t always go for the bigwig you are trying to cosy up to… often a more junior member of staff whose technical skills are up-to-date would be far more useful to your student. Also try and create a lab culture with good peer support. Just as ‘it takes a village to raise a child’, so “it takes a lab to supervise a PhD student”. Proper peer support mechanisms, shared problem solving and an open lab culture are important ingredients for giving students the proper support they need.

  4. Being reasonable

DO set clear expectations

Step one of any PhD should be to create a project timeline. Sue normally recommends a detailed 6-month plan alongside a sketched 3-year plan, both of which should be updated roughly quarterly. For a study which is broken down into a series of short experiments, a different model might be better. Whatever planning system you adopt, work with your student to set realistic deadlines for specific tasks, assuming a full-time 9-to-5 working day and no more, and adhere to these. Model best practice by providing timely feedback at an appropriate level of detail – remember, you can give too much as well as too little feedback.

Think carefully about what sort of outputs to ask for – make them reasonable and appropriate to the task. You might want to propose a lengthy piece of writing in the first six months, to demonstrate that your student has grasped the literature and can pull it together coherently to make a case for their chosen research. But, depending on the project, it might also be a good idea to get them to contrast some differing theoretical models in a mini-presentation to your lab, create a table summarising key methodological details and findings from previous work, or publish a web page to prioritise community engagement with the project topic.

DON’T forget their mental health and work-life balance

PhD students might have recently moved from another country or city – they may be tackling cultural differences at work and socially. Major life changes often coincide with a PhD – meeting a life partner, marriage, children. Your students might be living in rented and shared accommodation, with all the stresses that can bring.

Make sure your students go home at a reasonable hour. If you must work outside office hours, be clear you don’t expect the same from them. Remind them to take a holiday – especially after a period of working hard (e.g. meeting a deadline, collecting data at weekends). Ensure that information about the University’s support services is available prominently in the lab.

Remember to take into account their personal lives when you are discussing project progress. Being honest about your own personal situation (without over-sharing) creates an environment where it is OK to talk about stuff happening outside the office. Looking after your students’ mental health doesn’t mean being soft, or turning a blind eye to missed deadlines or poor quality work – it means being proactive and taking action when you can see things aren’t going well.

What about the research??

We are planning another blog in the future about MSc and undergrad supervision which might address some other questions more closely tied to the research itself – how to design an encapsulated, realistic but also worthwhile piece of research, and how to structure this over time. But the success (or otherwise) of a PhD hangs on a lot more than just having a great idea and delivering it on time. We hope this post will help readers to reflect on their personal management style and think about how that impacts their students.

So, be humble. Be supportive. Be a good supervisor.


Data-driven subtyping of behavioural problems associated with ADHD in children

Over 20% of children will experience some problems with learning during their schooling that can have a negative impact on their academic attainment, wellbeing, and longer-term prospects. Traditionally, difficulties are grouped into diagnostic categories like attention deficit hyperactivity disorder (ADHD), dyslexia, or conduct disorder. Each diagnostic category is associated with certain behaviours that need to be present for the label to be applied. However, children’s difficulties often span multiple domains. For instance, a child could have difficulties with paying attention in the classroom (ADHD) and could also have a deficit in reading (dyslexia). This makes it difficult to decide which diagnostic label is the most appropriate and, consequently, what support to provide. Further, there can be a mixture of difficulties within a diagnostic category. For instance, children with ADHD can have deficits mostly with inattention, mostly with hyperactivity, or with both. This heterogeneity makes research on the mechanisms behind these disorders very difficult, as each subtype may be associated with specific mechanisms. However, there is currently no agreement about the presence, number, or nature of subtypes in most developmental disorders.
In our latest study, we applied a new approach to group behavioural difficulties in a large sample of children who attended the Centre for Attention, Learning, and Memory (CALM). This sample is a mixed group of children who were referred because of any difficulty relating to attention, learning, and/or memory. The parents of the children filled in a questionnaire that is often part of the assessment that informs the advice of education specialists on the most appropriate support for a child. The questionnaire contained items like “Forgets to turn in completed work”, “Does not understand what he/she reads”, or “Does not get invited to play or go out with others”. In our analysis, we grouped children by the similarity of ratings on this questionnaire using a data-driven algorithm. The algorithm had two aims: a) form groups that are maximally different from one another, and b) ensure that each group contains cases that are as similar as possible. The results suggested that there were three groups of children with: 1) primary difficulties with attention, hyperactivity, and self-management; 2) primary difficulties with school learning, e.g. reading and maths deficits; 3) primary difficulties with making and maintaining friendships.
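The general idea of data-driven grouping can be sketched with a toy example. To be clear, this is not the algorithm used in the study (the full analysis code is linked below) – the k-means routine and the made-up questionnaire ratings here just illustrate the two aims of keeping groups internally similar and mutually distinct:

```python
def kmeans(points, k, iters=100):
    """Toy k-means with a deterministic farthest-first initialisation:
    repeatedly assign each child to the nearest group centre, then move
    each centre to the mean rating profile of its group."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centres = [points[0]]
    while len(centres) < k:  # seed new centres far from existing ones
        centres.append(max(points, key=lambda p: min(dist2(p, c) for c in centres)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centres[i]))].append(p)
        centres = [tuple(sum(d) / len(cl) for d in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Hypothetical (inattention, reading-difficulty) ratings for nine children.
ratings = [(9, 2), (8, 3), (9, 3),   # mainly attention difficulties
           (2, 9), (3, 8), (2, 8),   # mainly learning difficulties
           (5, 5), (6, 5), (5, 6)]   # mixed profile
groups = kmeans(ratings, k=3)
print([len(g) for g in groups])  # → [3, 3, 3]
```

On real questionnaire data the number and nature of groups is exactly what is under investigation, which is why a data-driven method is used rather than the traditional diagnostic labels.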
Next, we investigated if the behavioural profiles identified through the algorithm also show differences in brain anatomy. We found that white matter connections of the prefrontal and anterior cingulate cortex most strongly distinguished the groups. These areas are implicated in cognitive control, decision making, and behavioural regulation. This indicates that the data-driven grouping aligns well with biological differences that we can investigate in more detail in future studies.
The preprint of the study can be found here: http://www.biorxiv.org/content/early/2017/07/05/158949
The code used for the analysis can be found here: https://github.com/joebathelt/Conners_analysis

The weather and the brain – using new methods to understand developmental disorders

 

The latest article was written by our brilliant lab member Danyal Akarca. It describes some of his MPhil research which aims to explore transient brain networks in individuals with a particular type of genetic mutation. Dan finished his degree in Pre-Clinical Medicine before joining our lab and has since been fascinated by the intersection of genetic disorders and the dynamics of brain networks.

The brain is a complex dynamic system. It can be very difficult to understand how specific differences within that system can be associated with the cognitive and behavioural difficulties that some children experience. This is because even if we group children together on the basis that they all have a particular developmental disorder, that group of children will likely have a heterogeneous aetiology. That is, even though they all fall into the same category, there may be a wide variety of different underlying brain causes. This makes these disorders notoriously difficult to study.

Developmental disorders that have a known genetic cause can be very useful for understanding these brain-cognition relationships, because by definition they all have the same causal mechanism (i.e. the same gene is responsible for the difficulties that each child experiences). We have been studying a language disorder caused by a mutation to a gene called ZDHHC9. These children have broader cognitive difficulties, and more specific difficulties with speech production, alongside a form of childhood epilepsy called rolandic epilepsy.

In our lab, we have explored how brain structure is organised differently in individuals with this mutation, relative to typically developing controls. Since then our attention has turned to applying new analysis methods to explore differences in dynamic brain function. We have done this by directly recording magnetic fields generated by the activity of neurons, through a device known as a magnetoencephalography (MEG) scanner. The scanner uses magnetic fields generated by the brain to infer electrical activity.

The typical way that MEG data are interpreted is by comparing how electrical activity within the brain changes in response to a stimulus. These changes can take many forms, including how well synchronised different brain areas are, or how the size of the magnetic response differs across individuals. However, in our current work, we are trying to explore how the brain configures itself within different networks, in a dynamic fashion. This is especially interesting to us, because we think that the ZDHHC9 gene has an impact on the excitability of neurons in particular parts of the brain, specifically in those areas that are associated with language. These changes in network dynamics might be linked to the kinds of cognitive difficulties that these individuals have.

We used an analysis method called “Group Level Exploratory Analysis of Networks” – or GLEAN for short – which was recently developed at the Oxford Centre for Human Brain Activity. The concept behind GLEAN is that the brain moves between different patterns of activation in a probabilistic fashion. This is much like the weather – just as the weather can change from day to day in some probabilistic way, so too may the brain change in its activation.


This analysis method not only allows us to observe which regions of the brain are active while participants are in the MEG scanner. It also allows us to see the probabilistic way in which these activation patterns change between each other. For example, just as it is more likely to transition from rain one day to cloudiness the next, relative to, say, rain to blistering sun, we find that brain activation patterns can be described in a very similar way over sub-second timescales. We can characterise those dynamic transitions in lots of different ways, such as how long the brain stays in a specific state, or how long it takes to return to a state once it has transitioned away. (A more theoretical account of this can be found in another recent blog post in our Methods section – “The resting brain… that never rests”.) We have found that a number of networks differ between individuals with the mutation and our control subjects.
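The dwell-time and transition ideas can be made concrete with a short sketch. GLEAN itself works on real MEG recordings; the made-up weather-style state sequence and the two helper functions below are purely illustrative of what these summary statistics measure:

```python
from collections import Counter
from itertools import groupby

def mean_dwell_times(states):
    """Average run length of each state: how long the system stays in a
    state, on average, once it has entered it."""
    totals, counts = Counter(), Counter()
    for state, run in groupby(states):
        totals[state] += sum(1 for _ in run)
        counts[state] += 1
    return {s: totals[s] / counts[s] for s in totals}

def transition_probs(states):
    """Empirical probability of moving from one state to another,
    like 'rain today -> cloud tomorrow'."""
    pairs = Counter(zip(states, states[1:]))
    out_totals = Counter()
    for (src, _), n in pairs.items():
        out_totals[src] += n
    return {(src, dst): n / out_totals[src] for (src, dst), n in pairs.items()}

# Hypothetical sequence of inferred states: 'R'ain, 'C'loud, 'S'un.
seq = list("RRRCCSRRCC")
print(mean_dwell_times(seq))  # → {'R': 2.5, 'C': 2.0, 'S': 1.0}
print(transition_probs(seq))
```

In the real analysis the “states” are whole-brain activation patterns inferred from MEG data at sub-second resolution, and it is these occupancy and transition statistics that we compare between groups.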


(These are two brain networks that show the most differences in activation – largely in the parietal and frontotemporal regions of the brain.)

Interestingly, these networks strongly overlap with areas of the brain that are known to express the gene (we found this out using data from the Allen Atlas). This is the first time that we know of that researchers have been able to link a particular gene, to differences in dynamic electrical brain networks, to a particular pattern of cognitive difficulties. And we are really excited!

 

Reviewer 2 is not your nemesis – how to revise and resubmit

This blog piece is written with Sue Fletcher-Watson, a colleague of supreme wisdom and tact, ideally qualified for this particular post. It is a follow-up to our previous joint post about peer review. We now turn our attention to the response to reviewers.


As with the role of reviewer, junior scientists submitting their work as authors are given little (if any) guidance on how to interact with their reviewers. Interactions with reviewers are an incredibly valuable opportunity to improve your manuscript and find the best way of presenting your science. However, all too often responding to reviewers is seen as an onerous chore, which partly reflects the attitude we take into the process. These exchanges largely happen in private and even though they play a critical role in academia, we rarely talk about them in public. We think this needs to change – here are some pointers for how to interact with your reviewers.

  • Engage with the spirit of the review

Your reviewers will be representative of a portion of your intended readership. Sometimes when reading reviewers’ comments we can find ourselves asking “have they even read the paper?!”. But if the reviewer has misunderstood some critical aspect of the paper then it is entirely possible that a proportion of the broader readership will also. An apparently misguided review, whilst admittedly frustrating, should be taken as a warning sign. Give yourself a day or two to settle your temper, and then recognise that this is your opportunity to make your argument clearer and more convincing.

Similarly, resist the temptation to undertake the minimal possible revisions in order to get your paper past the reviewers. If a reviewer makes a good point and you can think of ways of using your data to address it, then go for it, even if this goes beyond what they specified. Remember – this is your last chance to make this manuscript as good as it can be.

  • Be grateful and respectful. But don’t be afraid to disagree with your reviewers.

Writing a good review takes time. Thank the reviewers for their efforts. Be polite and respectful, even if you think a review is not particularly constructive. But don’t be afraid to disagree with reviewers. Sometimes reviewers ask you to do things that you don’t think are valid or wise, and it’s important to defend your work. No one wants a dog’s dinner of a paper… a sort of patchwork of awkwardly combined paragraphs designed to appease various reviewer comments. As the author you need to retain ownership of the work. This will mean that sometimes you need to explain why a recommendation has not been actioned. You can acknowledge the importance of a reviewer’s point, without including it in your manuscript.

We have both experienced reviewers who have requested changes we don’t feel are legitimate. Examples include the reviewer who requested a correlational analysis on a sub-group with a sample size of n=17. Or the reviewer who asked Sue to describe how her results, from a study with participants aged 18 and over, might relate to early signs of autism in infancy (answer: they have no bearing whatsoever and I’m not prepared to speculate in print). Or the reviewer who asked for inclusion of variables in a regression analysis which did not correlate with the outcome (despite that being a clearly-stated criterion for inclusion in the analysis), on the basis of their personal hunch. In these cases, politely but firmly refusing to make a change may be the right thing to do, though you can nearly always provide some form of concession. For example, in the last case, you might include an extra justification, with a supporting citation, for your chosen regression method.

  • Give your response a clear and transparent structure

With any luck, your revised manuscript will go out to the same people who reviewed it the first time.  If you do a particularly good job of addressing their comments – and if the original comments themselves were largely minor – your editor may even decide your manuscript doesn’t need peer review a second time. In any case, to maximise the chances of a good result it is essential that you present your response clearly, concisely and fluently.

Start by copying and pasting the reviewer comments into a document.  Organise them into numbered lists, one for each reviewer.  This might mean breaking down multi-part comments into separate items, and you may also wish to paraphrase to make your response a bit more succinct.  However, beware of changing the reviewer’s intended meaning!

Then provide your responses under each numbered point, addressed to the editor (“The reviewer makes an excellent point and…”). In each case, try to: acknowledge the validity of what the reviewer is saying; briefly mention how you have addressed the point; give a page reference. This ‘response to reviewers’ document should be accompanied by an updated manuscript in which any significant areas of new or heavily edited text are highlighted (e.g. in bold or a different colour). Don’t submit a revised manuscript with tracked changes – these are too detailed and messy for a reviewer to have to navigate – and don’t feel the need to highlight every changed word.

If it’s an especially complicated or lengthy response, then it is sometimes a good idea to include a (very) pithy summary up top for the Editor, before you get to the reviewer-specific response. A handful of bullet points can help orient the Editor to the major changes that they can expect to find in the new version of your manuscript.

  • The response letter can be a great place to include additional analyses that didn’t make it into the paper

Often when exploring the consequences of various design choices, or testing the impact of assumptions on your analysis, additional comparisons can be very useful. We both often include additional analyses in our ‘response to reviewers’ letters. This aids transparency and can also be a useful way of showing reviewers that your findings are solid. Sometimes these will be analyses that have been explicitly asked for, but on other occasions you may well want to do this on your own initiative. As reviewers we are both greatly impressed when authors use their own data to address a point, even if we didn’t explicitly ask them to do this.

One word of warning here, however. Remember that you don’t want to put an important piece of information or line of reasoning only in your response letter, if it ought also to be in the final manuscript. If you’ve completed an extra analysis as part of your consideration of a reviewer point, consider whether this might also have relevance to your readership when the paper is published.  It might be important to leave it out – you don’t want to include ‘red herring’ analyses or look like you are scraping the statistical barrel by testing ‘til the cows come home. But on the other hand, if the analysis directly answers a question which is likely to be in your reader’s mind, consider including it.  This could be as a supplement, linked online data set, or a simple statement: e.g. “we repeated all analyses excluding n=2 participants with epilepsy and results were the same”.

  • Sometimes you may need the nuclear option

We have both had experiences where we have been forced to make direct contact with the Action Editor. A caveat to all the points above is that there are occasions where reviewers attempt to block the publication of a manuscript unreasonably. Duncan had an experience of a reviewer who simply cut and pasted their original review, reusing it across multiple subsequent rounds of revision. Duncan thought that his team had done a good job of addressing the reviewer’s concerns, where possible, but without any specific guidance from the reviewer they were at a loss to identify what they should do next. Having already satisfied two other reviewers, he decided to contact the Action Editor and explain the situation. They accepted the paper. Sue has blogged before about a paper reporting on a small RCT which was rejected for the simple reason that it reported a null result. She approached the Editor with her concern and it was agreed that the paper should be re-submitted as a new manuscript and sent out again for a fresh set of reviews. This shouldn’t be necessary, but sadly sometimes it is.

Editors will not be happy with authors trying to circumvent the proper review process, but in our experience they are sympathetic to authors when papers are blocked by unreasonable reviewers. After all, we have all been there. If this is the situation you find yourself in, be as diplomatic as possible and outline your concerns to the Editor.

In conclusion, much of what we want to say can probably be summed up with the following: This is not a tick-box exercise, but the last opportunity to improve your paper before it reaches your audience. Engage with your reviewers, be open-minded, and don’t be afraid to rethink.

Really, when it comes to responding to reviewers, the clue is in the name.  It’s a response, not a reaction – so be thoughtful, be engaged and be a good scientist.

Think you’re your own harshest critic? Try peer review…

Our latest blog post is written by me with the wonderful Sue Fletcher-Watson, a colleague whose intellectual excellence is exceeded only by her wit and charm.

Peer review is a lynch-pin of the scientific process and bookends every scientific project. But despite the crucial importance of the peer review process in determining what research gets funded and published, in our experience PhD students and early career researchers are rarely if ever offered training on how to conduct a good review. Academics frequently find themselves complaining about the unreasonable content and tone of the anonymous reviews they receive, which we attribute partly to a lack of explicit guidance on the review process. In this post we offer some pointers on writing a good review of a journal article.  We hope to help fledgling academics hone their craft, and also provide some insight into the mechanics of peer review for non-academic readers.

What’s my review for?

Before we launch into our list of things to avoid in the review process, let’s just agree what a review of an academic journal article is meant to do. You have one simple decision to make: does this paper add something to the sum of scientific knowledge? Remember of course that reporting a significant effect ≠ adding new knowledge, and similarly, a non-significant result can be highly informative. Don’t get sucked into too much detail – you are not a copy editor, proof-reader, or English-language critic. Beyond that, you will also want to consider whether the manuscript, in its current form, does the best job of presenting that new piece of knowledge. There are a few specific ways (not) to go about this, so it looks like it might be time for a list…

  1. Remember, this is not YOUR paper

Reviewer 2 walks into a bar and declares that this isn’t the joke they would have written.

First rule of writing a good peer review: remember that this is not your paper. Neither is this a hypothetical study that you wished the authors had conducted. Realising this will have a massive impact on your view of another’s manuscript. The job is not to sculpt it into a paper you could have written. Your job as a reviewer is two-fold: i) make a decision as to the value of this piece of work for your field; and ii) help the authors to present the clearest possible account of their science.

Misunderstanding the role of the reviewer is perhaps at the heart of many peer review horror stories. Duncan does a lot of studies on cognitive training. Primarily he’s interested in the neural mechanisms of change, and tries to be very clear about that. But reviewers almost always ask “where are your far transfer measures?” because they want to assess the potential therapeutic benefit of the training. This is incredibly infuriating. The studies are not designed or powered for looking at this, but instead at something else of equal but different value.

Remember – you can’t ask them to report an imaginary study you wished they had conducted.

  2. Changing the framing, but not the predictions

In this current climate of concern over p-hacking and other nefarious, non-scientific procedures, a question we have to ask ourselves as reviewers is: are there some things I can’t ask them to change? We think the answer is yes – but it may be less than you think. For starters, you can ask authors to re-consider the framing of the study to make it more accurate. Let’s imagine they set out to investigate classroom practice, but used interviews not observations, and so ended up recording teacher attitudes instead. Their framing can end up a bit out of kilter with the methods and findings. As a reviewer, with a bit of distance from the work, you can be very helpful in highlighting this.

If you think there are findings which could be elucidated – for example by including a new covariate, or by running a test again with a specific sub-group excluded – you should feel free to ask.  At the same time, you need to respect that the authors might respond by saying that they think these analyses are not justified.  We all should avoid data-mining for significant results and reviewers should be aware of this risk.

What almost certainly shouldn’t be changed are any predictions being made at the outset. If these represent the authors’ honest, well-founded expectations then they need to be left alone.

However, there may be an exception to this rule… Imagine a paper (and we have both seen these) where the literature reviewed is relatively balanced, or sparse, such that it is impossible to make a concrete prediction about the expected pattern in the new data. And yet these authors have magically extracted hypotheses about the size and direction of effects which match up with their results. In this case, it may be legitimate to ask authors to re-inspect their literature review so that it provides a robust case to support their predictions. Another option is to say that, given the equivocal nature of the field, the study would be better set up with exploratory research questions. This is a delicate business, and if in doubt, it might be a good place to write something specific to the editor explaining your quandary (more on this in number 5).

  3. Ensuring all the information is there for future readers

In the end the quality of a paper is not determined by the editor or the reviewers… but by those who read and cite it. As a reviewer, imagine that you are a naïve reader and ask whether you have all the information you need to make an informed judgement. If you don’t, then request changes. This information could take many forms. In the Methods, ask yourself whether someone could reasonably replicate the study on the basis of the information provided. In the Results, ask whether there are potential confounds or complicating factors that readers are not told about. These kinds of changes are vital.

We also think it is totally legitimate to request that authors include possible alternative interpretations. The whole framing of a paper can sometimes reflect just one of multiple possible interpretations, which could somewhat mislead readers. As a reviewer be wise to this and cut through the spin. The bottom-line: readers should be presented with all information necessary for making up their own minds.

  4. Digging and showing off

There is nothing wrong with a short review. Sometimes papers are good. As an editor, Duncan sometimes feels like reviewers are really clutching at straws, desperate to identify things to comment on. Remember that as a reviewer you are not trying to impress either the authors or the editor. Don’t dig for dirt in order to pad the review or show how brainy you are.

Another pet hate is when reviewers introduce new criticisms in subsequent rounds of review. Certainly if the authors have introduced new analyses or data since the original submission, then yes, this deserves a fresh critique. But please please please don’t wait until they have addressed your initial concerns… and then introduce a fresh set on the same material. When reviewers start doing this it smacks of a desperate attempt to block a paper, thinly veiled by apparently legitimate concerns. Editors shouldn’t stand for that kind of nonsense, so don’t make them pull you up on it.

  5. Honesty about your expertise

You don’t know it all, and there is no point pretending that you do. You have been asked to review a paper because you have relevant expertise, but it isn’t necessarily the case that you are an expert in all aspects of the paper. Make that clear to the authors or the editor (the confidential editor comments box is quite useful for this).

It is increasingly the case that our science is interdisciplinary – we have found this is especially the case where we are developing new neuroimaging methods and applying them to novel populations (e.g. typically and atypically developing children). The papers are usually reviewed by either methods specialists or developmental psychologists, and the reviews can be radically different. This likely reflects the different expertise of the reviewers, and it helps both authors and editor where this is made explicit.

Is it ok to ask authors to cite your work? Controversial. Duncan never has, but Sue (shameless self-publicist) has done. We both agree that it is important to point out areas of the literature that are relevant but have not been covered by the paper – and this might include your own work. After all, there’s a reason why you’ve been selected as a relevant reviewer for this paper.

Now we know what not to do, what should you put in a review?

Start your review with one or two sentences summarising the main purpose of the paper: “This manuscript reports on a study with [population X] using [method Y] to address whether [this thing] affects [this other thing].” It is also good to highlight one or two key strengths of the paper – interesting topic, clear writing style, novel method, robust analysis etc. The text of your review will be sent, in full and unedited, to the authors. Always remember that someone has slaved over the work being reported, and over writing the article itself, and recognise these efforts.

Then follow with your verdict, in a nutshell.  You don’t need to say anything specific about whether the paper should / should not be published (and some journals actively don’t want you to be explicit about this) but you should try to draw out the main themes of your comments to help orient the authors to the specific items which follow.

The next section of your review should be split into two lists – major and minor comments. Major comments are often cross-cutting, e.g. if you don’t think the conclusions are legitimate based on the results presented. Also in the major comments include anything requiring substantial work on the part of the authors, like a return to the original data. You might also want to highlight pervasive issues with the writing here – such as poor grammar – but don’t get sucked into noting each individual example.

Minor comments should require less effort on the part of the authors, such as some re-phrasing of key sentences, or addition of extra detail (e.g. “please report confidence intervals as well as p-values”). In each case it is helpful to attach your comments to a specific page and paragraph, and sometimes a numbered line reference too.

At the bottom of the review, you might like to add your signature. Increasing numbers of reviewers are doing this as part of a movement towards more open science practices. But don’t feel obliged – especially if you are relatively junior in your field, it may be difficult to write an honest review without the safety of anonymity.

Ready to review?

So, hopefully any early career researchers reading this might feel a bit more confident about reviewing now. Our key advice is to ensure that your comments are constructive, and framed sensitively. Remember that you and the original authors are both on the same side – trying to get important science out into a public domain where it can have a positive influence on research and practice. Think about what the field needs, and what readers can learn from this paper.

Be kind. Be reasonable. Be a good scientist.

Brain Training: Placebo effects, publication bias, small sample sizes… and what we do next?

Over the past decade the young field of cognitive training – sometimes referred to as ‘brain training’ – has expanded rapidly. In our lab we have been extremely interested in brain training (Astle et al. 2015; Barnes et al. 2016). It has the potential to tell us a lot about the brain and how it can dynamically respond to changes in our experience.

The basic approach is to give someone lots of practice on a set of cognitive exercises (e.g. memory games), see whether they get better at other things too, and in some cases see whether there are significant brain changes following the training. The appeal is obvious: the potential to slow age-related cognitive decline (e.g. Anguera et al. 2013), remediate cognitive deficits following brain injury (e.g. Westerberg et al. 2007), boost learning (e.g. Nevo and Breznitz 2014) and reduce symptoms associated with neurodevelopmental disorders (e.g. Klingberg et al. 2005). But these strong claims require compelling evidence and the findings in this area have been notoriously inconsistent.


(Commercial brain training programmes are available to both academics and the general public)

I have been working on a review paper for a special issue, and having trawled through the various papers, I think that some consensus is emerging. Higher-order cognitive processes like attention and memory can be trained. These gains will transfer to similarly structured but untrained tasks, and are mirrored by enhanced activity and connectivity within the brain systems responsible for these cognitive functions. However, the scope of these gains is currently very narrow. To give an extreme example, learning to remember very long lists of letters does not necessarily transfer to learning long lists of words, even though those two tasks are so similar – the training can be very content specific (Harrison et al. (2013); see also Ericsson et al. (1980)). But other studies seem to buck that trend, and show substantial wide transfer effects – i.e. people get better not just at what they trained on, but even at very different tasks. Why this inconsistency? Well, I think there are a few important differences in how the studies are designed; here are two of the most important:

  1. Control groups: Some studies don’t have control groups at all, and many that do don’t have active control groups (i.e. the controls don’t actually do anything, so it is pretty obvious to participants that they are controls). This means that these studies can’t properly control for the placebo effect (https://en.wikipedia.org/wiki/Placebo). If a study doesn’t have an active control group then it is more likely to show a wide transfer effect.
  2. Sample size: The smaller the study (i.e. the fewer the participants) the more likely it is to show wider transfer effects. If a study includes lots of participants then it is far more likely to accurately estimate the true size of the transfer effect, which is very small.

When you consider these two factors and only look at the best designed studies, the effect size for wider transfer effects is about d=0.25 – if you are not familiar with this statistic, this is small (Melby-Lervag et al., in press). Furthermore, when considering the effect sizes in this field it is important to remember that this literature almost certainly suffers from a publication bias – it is difficult to publish null effects, and easier to publish positive results. This means that there are probably quite a few studies showing no training effects sitting, unpublished, in researchers’ drawers. As a result, even this small effect size is likely an overestimate of the genuine underlying effect. The true effect is probably even closer to zero.
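The interaction of small samples and selective publication is easy to demonstrate. The sketch below is illustrative only – the sample size, true effect, and “publication” threshold are made-up parameters, not taken from any of the studies discussed. It simulates many small training studies with a true effect of d=0.25, then “publishes” only the clearly positive ones:

```python
import random
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

def simulate_study(n_per_group, true_d, rng):
    """One hypothetical training study: the trained group's scores are
    shifted by true_d standard deviations relative to the controls."""
    trained = [rng.gauss(true_d, 1.0) for _ in range(n_per_group)]
    control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
    return cohens_d(trained, control)

rng = random.Random(42)
all_studies = [simulate_study(20, 0.25, rng) for _ in range(2000)]

# Crude model of publication bias: only clearly positive results appear.
published = [d for d in all_studies if d > 0.3]

print(statistics.mean(all_studies))  # close to the true d = 0.25
print(statistics.mean(published))    # noticeably inflated
```

Across all simulated studies the average effect lands near the true d=0.25, but the “published” subset alone suggests a much larger effect – which is why a literature that cannot see its unpublished nulls will overestimate.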

So claims that training on some cognitive games can produce improvements that spread to symptoms associated with particular disorders – like ADHD – are particularly incredible. Just looking at the best designed studies, the effect size is small, again about d=0.25 (Sonuga-Barke et al., 2013). The publication bias caveat applies here too – even this small effect size is likely an overestimate of the true effect. Some studies do show substantially larger effects, but these are usually not double blind. That is, the person rating those symptoms knows whether or not the individual (usually a child) received the training. This will result in a substantial placebo effect, and this likely explains these supposed enhanced benefits.

Where do we go from here? As a field we need to ensure that future studies have active control groups, double blinding and that we include enough participants to show the effects we are looking for. I think we also need theory. A typical approach is to deliver a training programme, alongside a long list of assessments, and then explore which assessments show transfer. There is little work that explicitly generates and then tests a theory, but I think this is necessary for future progress. Where research is theoretically grounded it is far easier for a field to make meaningful progress, because it gives a collective focus, creates a shared set of critical questions, and provides a framework that can be tested, falsified and revised.

Author information:

Dr. Duncan Astle, Medical Research Council Cognition and Brain Sciences Unit, Cambridge.

https://www.mrc-cbu.cam.ac.uk/people/duncan.astle/

Reference:

Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, Kong E, Larraburo Y, Rolle C, Johnston E, Gazzaley A (2013) Video game training enhances cognitive control in older adults. Nature 501:97-101.

Astle DE, Barnes JJ, Baker K, Colclough GL, Woolrich MW (2015) Cognitive training enhances intrinsic brain connectivity in childhood. J Neurosci 35:6277-6283.

Barnes JJ, Nobre AC, Woolrich MW, Baker K, Astle DE (2016) Training Working Memory in Childhood Enhances Coupling between Frontoparietal Control Network and Task-Related Regions. J Neurosci 36:9001-9011.

Ericsson KA, Chase WG, Faloon S (1980) Acquisition of a memory skill. Science 208:1181-1182.

Harrison TL, Shipstead Z, Hicks KL, Hambrick DZ, Redick TS, Engle RW (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychological science 24:2409-2419.

Klingberg T, Fernell E, Olesen PJ, Johnson M, Gustafsson P, Dahlstrom K, Gillberg CG, Forssberg H, Westerberg H (2005) Computerized training of working memory in children with ADHD–a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry 44:177-186.

Melby-Lervag M, Redick TS, Hulme C (in press) Working memory training does not improve performance on measures of intelligence or other measures of “Far Transfer”: Evidence from a meta-analytic review. Perspectives on Psychological Science.

Nevo E, Breznitz Z (2014) Effects of working memory and reading acceleration training on improving working memory abilities and reading skills among third graders. Child neuropsychology : a journal on normal and abnormal development in childhood and adolescence 20:752-765.

Sonuga-Barke EJ, Brandeis D, Cortese S, Daley D, Ferrin M, Holtmann M, Stevenson J, Danckaerts M, van der Oord S, Dopfner M, Dittmann RW, Simonoff E, Zuddas A, Banaschewski T, Buitelaar J, Coghill D, Hollis C, Konofal E, Lecendreux M, Wong IC, Sergeant J (2013) Nonpharmacological interventions for ADHD: systematic review and meta-analyses of randomized controlled trials of dietary and psychological treatments. The American journal of psychiatry 170:275-289.

Westerberg H, Jacobaeus H, Hirvikoski T, Clevberger P, Ostensson ML, Bartfai A, Klingberg T (2007) Computerized working memory training after stroke–a pilot study. Brain injury 21:21-29.

The connectome goes to school

Children learn an incredible amount whilst at school. Many fundamental skills that typical adults perform effortlessly, like reading and maths, have to be acquired during childhood. Childhood and adolescence are also a period of important brain development. In particular, the brain’s structural connections show a prolonged maturation that extends throughout childhood and adolescence into the third decade of life. We are beginning to explore how changes in brain structure over this time support the acquisition of these skills, but also how brain changes may give rise to difficulty developing these skills for some children.

Most research to date has focussed on comparisons of individuals with specific deficits, like very low reading performance despite typical performance in other areas. The logic behind this approach is that anatomical structures specifically associated with this skill can be isolated. However, learning disorders are rarely that specific. Most children struggling in one aspect of learning also have difficulties in other areas. Furthermore, recent advances in neuroscience suggest that the brain is not a collection of modules that each perform a particular task; instead, it functions as an integrated network.

In our recent study, we wanted to investigate how the white matter brain network may be related to maths and reading performance. The study revealed that children’s reading and maths scores were closely associated with the efficiency of their white matter network. The results further suggested that highly connected regions of the brain were particularly important. These findings indicate that the overall organisation may be more important for reading and maths than differences in very specific areas. This potentially provides a clue to understanding why problems in maths and reading often co-occur.
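To give a flavour of what “efficiency of the white matter network” means: in graph terms, global efficiency is the average inverse shortest path length between all pairs of regions, so networks with well-connected hubs score higher. The sketch below uses a hypothetical five-node toy example, not the study’s actual analysis (that code is linked above):

```python
from collections import deque

def shortest_path_lengths(adjacency, source):
    """Breadth-first search: unweighted shortest path lengths from source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def global_efficiency(adjacency):
    """Mean inverse shortest path length over all ordered node pairs."""
    nodes = list(adjacency)
    total = 0.0
    for source in nodes:
        dist = shortest_path_lengths(adjacency, source)
        for target in nodes:
            if target != source and target in dist:
                total += 1.0 / dist[target]
    n = len(nodes)
    return total / (n * (n - 1))

# Two toy "brain networks" with five nodes and four edges each:
# a hub-and-spoke graph (one highly connected region) versus a chain.
hub = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

print(global_efficiency(hub))    # the hub network is more efficient
print(global_efficiency(chain))
```

With the same number of connections, the network with a highly connected hub has the higher global efficiency – a small illustration of why hub regions matter for overall network organisation.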

You can read a pre-print of the article here: https://osf.io/preprints/psyarxiv/jk6yb

The code for the analysis is also available: https://github.com/joebathelt/Learning_Connectome

Science Podcasts

Downloadable radio programmes, called podcasts, have been around since the first brick-like iPods in the early 2000s. Thanks to global sensations like ‘Serial’, podcasts are more popular than ever. But they can provide more than the next true-crime fix: podcasts are also a great way to stay up to date with the latest developments in science and learn about new topics, while blocking out noisy commuters, going for a run in the park, or doing the dishes. Here is a selection of fantastic science podcasts, alongside a few episodes relevant to developmental cognitive neuroscience. Happy listening!

 

ABC All in the Mind

Excellent show about brain science and psychology. Each episode is centred around a particular topic and includes interviews with researchers as well as affected people.

http://www.abc.net.au/radionational/programs/allinthemind/

Interesting episodes:
http://www.abc.net.au/radionational/programs/allinthemind/the-neuroscience-of-learning/7781442
http://www.abc.net.au/radionational/programs/allinthemind/apps-for-autism/7701834
http://www.abc.net.au/radionational/programs/allinthemind/eating-disorders-families-and-technology/7438440

 

BBC All in the Mind

This podcast presents various current items from psychology and neuroscience. There is also a focus on mental health with the All in the Mind Awards.

http://www.bbc.co.uk/programmes/b006qxx9/episodes/player

Interesting episodes:

http://www.bbc.co.uk/programmes/b07bzdjy

 

Brain Matters

Interviews with speakers, mostly covering molecular and systems neuroscience.

http://brainpodcast.com

Interesting episodes:
Enhancing cognition with video games: https://tmblr.co/Zxdhyr28KX4Jh

 

Invisibilia


A spin-off from the makers of Radiolab. This show focuses on ‘the invisible forces that shape our lives’ with stories around sociology, anthropology, psychology, and neuroscience.

http://www.npr.org/podcasts/510307/invisibilia

 

Nature Podcast


The Nature podcast provides a great overview of the latest developments in science. In addition to brief summaries of the main articles in the current issue of Nature, the podcast contains interviews and comments from the main authors of these studies. The News & Views segment presents quick summaries of what’s happening all across science.

http://www.nature.com/nature/podcast/

 

Radiolab

This podcast features a variety of stories that cater to various interests – think of long-form articles in the New Yorker, but for listening. Radiolab often features episodes on science around current and special interest topics.

http://www.radiolab.org

Interesting episode:
http://www.radiolab.org/story/235337-how-grow-your-brain/

 

The Brain Science Podcast

This podcast contains interviews with eminent researchers in neuroscience and neurology.

http://brainsciencepodcast.com

Interesting episodes:
http://brainsciencepodcast.com/bsp/review-of-the-great-brain-debate-bsp-4.html
http://brainsciencepodcast.com/bsp/brain-science-podcasts-first-six-months-bsp-14.html
http://brainsciencepodcast.com/bsp/review-proust-and-the-squid-the-story-and-science-of-the-rea.html

 

Picture credits: Podcast pictures were taken from the websites of each podcast as referenced. The feature image was taken from https://www.scienceworld.ca/sites/default/files/styles/featured/public/images/brain_headphones_image.jpg?itok=g-1kCvgV

The developmental cognitive neuroscience of the Great British Bake-off – Part II

The days are getting shorter, leaves are starting to fall, and a new season of the Great British Bake Off is upon us. We watched as this year’s contestants battled with batter, broke down over bread, crumbled before biscuits, and were torn by torte. One of the most difficult parts of the programme is the technical challenge. In order to succeed, the contestants have to create a perfect bake given the ingredients and basic instructions. The instructions can be extremely sparse. For example, the instructions for the batter week challenge just read ‘make laced pancakes’. This illustrates one of the fundamental challenges that face us in many situations in everyday life. We often have an abstract higher goal, a metaphorical laced pancake, and have to break it down into the necessary steps that get us to that goal, e.g. weigh flour, sift flour, crack eggs and mix with flour, etc.

The ability to plan is also important outside the Bake-off tent. Anyone who has tried getting a four-year-old to bake will know that this is an ability we are not born with but develop over time. Unfortunately, planning is not usually tested using baking challenges in developmental psych labs, due to health & safety concerns among other reasons. Instead, clever games like the Tower of London task [1] are used (Shallice, 1982). In this test, the participant is presented with three pegs and a number of disks of varying sizes. The participant has to create a tower of disks according to a template by arranging the disks in the fewest moves possible, while keeping all disks on the pegs and not placing a larger disk on top of a smaller one.
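For readers who like their planning problems executable: the Tower of London is adapted from the classic Tower of Hanoi puzzle, whose optimal solution illustrates exactly this kind of goal decomposition – the abstract goal “move the whole tower” is broken recursively into smaller concrete sub-goals. A minimal sketch (an illustration of the puzzle, not a neuropsychological test implementation):

```python
def hanoi_moves(n_disks, source="A", target="C", spare="B"):
    """Return the minimal move sequence for the classic Tower of Hanoi.

    To move n disks from source to target: first move the n-1 smaller
    disks out of the way onto the spare peg, then move the largest disk,
    then move the n-1 disks from the spare peg onto the target.
    """
    if n_disks == 0:
        return []
    return (hanoi_moves(n_disks - 1, source, spare, target)
            + [(source, target)]
            + hanoi_moves(n_disks - 1, spare, target, source))

moves = hanoi_moves(3)
print(len(moves))  # the optimum for n disks is 2**n - 1, so 7 here
```

Each recursive call turns an abstract goal into smaller concrete ones, much like turning ‘make laced pancakes’ into weighing, sifting, and mixing.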

Figure: Illustration of the Tower of London task. Please contact us if you are interested in implementing this task using macarons and sponge fingers.

Studies in typical development found that planning ability measured by this task develops continuously throughout childhood and adolescence, until stable performance levels are reached in early adulthood (Huizinga, Dolan, & van der Molen, 2006; Luciana, Conklin, Hooper, & Yarger, 2005) – a possible reason for the absence of pre-schoolers in the GBBO hall of fame. There is also an important lesson for teenage bakers: While general cognitive development contributes to performance improvements between childhood and adolescence, increased scores between late adolescence and adulthood are mostly due to better impulse control (Albert & Steinberg, 2011). So, in baking as in life, think how you will combat moisture before mixing the dough.

You may ask yourself if there are other factors beyond growing up and controlling impulses to get the edge in planning ability. Enthusiastic bakers with little concern about personal safety may find transcranial magnetic stimulation an appealing option. A 2012 study in the journal Experimental Brain Research found that magnetic stimulation of the right dorsolateral prefrontal cortex significantly increased performance in the Tower of London task in patients with Parkinson’s disease (Srovnalova, Marecek, Kubikova, & Rektorova, 2012). However, the application to the field of fine baking remains to be investigated, and the use of TMS in baking tents is not recommended.

 

Footnotes:

[1] This is a version of the classic Tower of Hanoi puzzle that has been adapted for neuropsychological testing.

 

References:

Albert, D., & Steinberg, L. (2011). Age Differences in Strategic Planning as Indexed by the Tower of London. Child Development, 82(5), 1501–1517. http://doi.org/10.1111/j.1467-8624.2011.01613.x

Huizinga, M., Dolan, C. V., & van der Molen, M. W. (2006). Age-related change in executive function: Developmental trends and a latent variable analysis. Neuropsychologia, 44(11), 2017–2036. http://doi.org/10.1016/j.neuropsychologia.2006.01.010

Luciana, M., Conklin, H. M., Hooper, C. J., & Yarger, R. S. (2005). The Development of Nonverbal Working Memory and Executive Control Processes in Adolescents. Child Development, 76(3), 697–712. http://doi.org/10.1111/j.1467-8624.2005.00872.x

Shallice, T. (1982). Specific Impairments of Planning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 298(1089), 199–209. http://doi.org/10.1098/rstb.1982.0082

Srovnalova, H., Marecek, R., Kubikova, R., & Rektorova, I. (2012). The role of the right dorsolateral prefrontal cortex in the Tower of London task performance: repetitive transcranial magnetic stimulation study in patients with Parkinson’s disease. Experimental Brain Research, 223(2), 251–257. http://doi.org/10.1007/s00221-012-3255-9

Darling, the kids are out of RAM!

Working memory, the ability to hold things in mind and manipulate them, is very important for children and is closely linked to their success in school. For instance, limited working memory leads to difficulties with following instructions and paying attention in class (also see our previous post https://forgingconnectionsblog.wordpress.com/2015/02/05/adhd-and-low-working-memory-a-comparison/). A major research aim is to understand why some children’s working memory capacity is limited. All children start out with a low working memory capacity that increases as they grow up.
We also know that working memory, like all mental functions, is supported by the brain. The brain undergoes considerable growth and reorganisation as children grow up. Most studies so far have looked at the brain structures that support working memory across development. However, some structures may be more important in younger children and others in older children.
Our new study investigates for the first time how the contribution of brain structures to working memory may change with age. For that, we tested a large number of children between 6 and 16 years on different working memory tasks. We looked at aspects of working memory concerned with storing information (locations, words) and with manipulating it. The children also completed MRI scans to image their brain structure.

We found that white matter connecting the two hemispheres, and white matter connecting occipital and temporal areas, is more important for manipulating information held in mind in younger children, but less important in older ones. In contrast, the thickness of an area in the left posterior temporal lobe was more important in older kids. We think these findings reflect increasing specialisation of the working memory system as it develops: from a distributed system in younger children, which requires good wiring between different brain areas, to a more localised system supported by high-quality local machinery.

By analogy, imagine you were completing a work project. If you were collaborating with others, quality and speed would largely be determined by how well the team communicates – this would be very difficult if you were trying to coordinate via mobile phones in an area with low reception. If, on the other hand, one person completed the project alone, then the outcome would depend on the ability of that worker. The insights from this study will help us to better understand how working memory is constrained at different ages, which may allow us to design better interventions in the future to help children who struggle with working memory.

A preprint of this paper is available on bioRxiv: http://biorxiv.org/content/early/2016/08/15/069617

The analysis code is available on GitHub: https://github.com/joebathelt/WorkingMemory_and_BrainStructure_Code