How do you solve a problem like reproducibility?
by Guest Author on 27 Oct 2016
Today the MRC and a group of partner organisations issued an update on what we have been doing to address the reproducibility and reliability of research since the publication of the report of our symposium on the issue last year. Dr Frances Rawle, our Head of Corporate Governance and Policy, talks about what we’ve done so far.
Reproducibility is everyone’s problem. If we can’t ensure that our results are reliable, then our research can’t improve human health.
Everyone involved in biomedical research – funders, individual researchers, research institutes, universities, publishers and academies – must play a part in improving research practices.
We’ve worked across the sector to discover the main causes of irreproducible results and what we can do to improve the situation.
Better applications, better reviewing
As a funder, we get to see research plans at an early stage – when funding applications are submitted. So right at the start of the research process we can use peer review to ensure that applicants’ experimental design, statistical analysis and methodology are properly thought through.
However, we found that applicants often didn’t give enough detail about their methodology. In trying to cover everything, they squashed their methodology into a section too brief to be useful. That meant reviewers struggled to assess the robustness of their experimental design.
So we opted for the simple solution of adding extra space to our grant application form.
We’ve updated our guidance for reviewers and applicants to make our requirements as clear as possible. Plus, if you sit on one of our boards or panels, we run a workshop on the importance of statistical methodology and experimental design.
Research using animals
We’ve been asking for extra detail on applications for animal research for a couple of years. We published detailed guidance and worked examples on this. Try some of the great resources available – like the online Experimental Design Assistant developed by the NC3Rs.
It’s too soon to know what impact this has had on reproducibility. However, we’re pleased to see that applicants and reviewers have been paying more attention to experimental design.
Start at the very beginning
In an informal poll, MRC-funded PhD students recently told us they don’t feel they get enough training in statistics and experimental design.
As a researcher it’s important to develop these skills early on and then keep them up to date throughout your career. So we’ve joined up with several other funders to change the requirements for postgraduate training schemes.
All PhD students we fund now need to have training in experimental design and statistics. They will also need to understand the importance of ensuring robust and reproducible results.
You often hear about open science and open data as answers to the reproducibility problem. Sharing data can allow others to assess the robustness of results more easily, as well as allowing more use of existing data and stimulating new discoveries.
We helped develop the Concordat on Open Research Data, published this summer to ensure data are made openly available wherever possible.
If you conduct clinical or public health intervention studies, we have made our expectations for openness (PDF) clearer there too. As part of this guidance we explain how you should register trials, pre-publish protocols and make data available. These steps should make it easier to assess the robustness of the evidence base we use to inform clinical treatments.
Bank on it
The materials we use in biomedical experiments, such as antibodies and cell lines, can be a significant source of variability. Luckily, there are lots of ways to improve how we work. Are your laboratory quality control practices rigorous? Are you sure your cell lines are what you think they are? Could you make use of resources such as cell and tissue banks with exemplary quality control or other specialist facilities? We’ve supported a number of resources and facilities and highly recommend you make use of them.
No reward for reliability
Perhaps the trickiest hurdle is the culture of research. Across disciplines and around the world, we over-emphasise publication in high impact journals. Researchers feel under pressure to publish quickly. Careful and reliable results are less likely to be rewarded than novelty. A certain amount of competitiveness is important, but getting the community (and government and the public) to value the everyday rigour of research could be the key to solving this crisis. As members of the biomedical research community we can all play our part. Ideas on a postcard please!