Lack of experiment reproducibility is one of the main causes of frustration in today’s research. Vast amounts of time and resources are spent on a daily basis trying to match the results of published studies. The scale of the problem became an astonishing reality when Amgen, a major biotech company, reported in 2012 that it had been unable to reproduce nearly 90% of 53 high-profile oncology publications (1).

This large number of flawed publications is often not the result of deliberate fraud, but of a common wishful thinking that introduces a bias into the publication process, favouring positive results.

Great efforts are currently being devoted to improving the quality and efficiency of medical research in several ways. A couple of examples are summarised here:

  • A special channel on the open science platform F1000Research has recently been created by Amgen and Bruce Alberts, former Science editor-in-chief and National Academy of Sciences president. This online journal will allow both companies and academic scientists to share their replications (methods, data and results), reducing the time wasted by other scientists (2).
  • The Reproducibility Project, headed by psychologist Brian Nosek of the University of Virginia in Charlottesville, is a collaboration between the nonprofit Center for Open Science and Science Exchange. Together they are working with contract labs to replicate experiments from up to 50 high-impact papers in cancer biology.

The EGA, aiming to contribute to mitigating this reproducibility problem, also fosters clarity and openness in experimentation. Not only are the data files required for all genome and phenome submissions, but the accompanying metadata is also mandatory in order to complete and share the datasets. The more information a researcher provides about the experimental process, the more likely the corresponding publication is to be reproduced successfully.
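To illustrate the idea only, here is a minimal Python sketch of what a metadata completeness check could look like; the field names (study_title, sample_ids, protocol, instrument) are hypothetical placeholders and do not reflect the EGA's actual submission schema or tooling:

    # Hypothetical sketch: required fields are illustrative placeholders,
    # not the EGA's real metadata schema.
    REQUIRED_FIELDS = ["study_title", "sample_ids", "protocol", "instrument"]

    def missing_metadata(submission: dict) -> list[str]:
        """Return required metadata fields that are absent or empty."""
        return [field for field in REQUIRED_FIELDS if not submission.get(field)]

    submission = {
        "study_title": "Example oncology cohort",
        "sample_ids": ["S001", "S002"],
        "protocol": "",  # empty: would block completing the dataset
    }

    gaps = missing_metadata(submission)
    if gaps:
        print("Submission incomplete, missing: " + ", ".join(gaps))
    else:
        print("All required metadata present; dataset can be shared.")

The point of the sketch is simply that enforcing mandatory metadata up front, rather than accepting bare data files, is what gives later readers enough context to attempt a reproduction.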

(1) C. Glenn Begley & Lee M. Ellis. Drug development: Raise standards for preclinical cancer research. Nature. 28 March 2012; 483(7391): 531–533. doi:10.1038/483531a

(2) Jocelyn Kaiser. Calling all failed replication experiments. Science (In Depth). 5 February 2016; 351(6273): 548. doi:10.1126/science.351.6273.548