
Why I took part in the “Preregistration Challenge”

By Sarah Peters

The preregistration of study protocols has a long history in clinical trials, but is a more recent innovation in many other areas. The hope is that it will help counter the “reproducibility crisis” in psychological science – the failure of many published findings to replicate reliably. Here I discuss my experience with the Open Science Framework “Preregistration Challenge”, and argue for the wider adoption of preregistration.

There is an ongoing methodological crisis in psychological science – the reproducibility crisis refers to the failure of many scientific findings to be replicated. The Reproducibility Project, a recent initiative led by Professor Brian Nosek at the University of Virginia, aimed to estimate the scale of this crisis. A large collaboration of 270 project members reran 100 published psychological experiments, and found that just 36% of the initial findings were replicated. Similarly, some classic textbook experiments have proven difficult to replicate, and publication bias – whereby positive findings are more likely to be published and negative findings to be dismissed – plagues the field.

Given this, scientists are exploring how to improve the way we conduct research and thereby improve the quality of what we produce. One suggestion is to preregister our research question, methods and analysis plan in advance of data collection. It is hoped that public preregistration will limit analytical flexibility and post hoc hypothesising, thereby improving the transparency and robustness of research findings.

Curious about the benefits of preregistration, and to see how it differed from the way I’d previously conducted my research, my colleagues and I published a preregistration for a recent study on the Open Science Framework (OSF). We were interested in whether Cognitive Bias Modification, a psychological intervention designed to shift the emotional interpretation of faces, would affect clinically relevant outcomes. We also entered the study into OSF’s (ongoing!) Preregistration Challenge, which offers $1,000 prizes to 1,000 researchers who take a study from preregistration to publication.

Preregistering our study did require a greater time commitment prior to running it, but thinking about our predictions, design, and analyses meant that we could spot any potential issues and improve our experimental design before we collected data (i.e., before it was too late!). As a preregistration is public and cannot be changed after it’s published, it forced us to think more carefully about our decisions. For example, thinking more carefully about whether our data would truly answer our question made us wonder whether the emotional biases we wanted to study might be more prominent when an individual is under stress, so we decided to include another task to measure this. Also, by knowing which statistical analyses we would conduct before recruiting participants we could ensure that our study was adequately powered and would meet the assumptions of the planned analyses.

Initially I was concerned that this approach could be limiting. What if we found something interesting that we hadn’t expected and wanted to run additional analyses to probe it? But a preregistered report doesn’t prevent that – it simply means that you would (honestly and transparently!) report those analyses as exploratory. This protection against HARKing (hypothesising after the results are known) is important; separating analyses as planned versus exploratory can prevent overconfidence in weaker findings and the publication of attractive, but uncertain, positive findings.

Following data collection, we went back to our preregistration. It was here that our earlier time investment paid off; once our data were cleaned we could immediately run our planned analyses, and much of the manuscript writing (introduction and methods) was already done. We also ran a number of exploratory analyses, such as whether our results were moderated by participants’ anxiety scores. We subsequently published our findings in the academic journal Royal Society Open Science, and were thrilled to receive one of the latest $1,000 Preregistration Challenge prizes for bringing our study from preregistration to publication!

While interpreting findings and making discoveries is an important aim of scientific research, it is just as important to continuously scrutinise the scientific method. As a scientist, there is no question that seeing data can influence my decisions and interpretations. However, adopting preregistration can guard against this, make the process easier in the long term, and improve research quality overall.

Professor Nosek and other members of the Reproducibility Project argue that, “Progress in science is marked by reducing uncertainty about nature”. But if scientific findings have not been, or cannot be, replicated, we can’t be certain that the effects they describe exist. Preregistration is a simple change to the way we do research that can help to halt the reproducibility crisis and produce effective and credible science.

Read more about how to take part in the Preregistration Challenge here.

See Peters et al.’s preregistration here, and the published study here.

Sarah Peters can be contacted via email at: