
Open Science


A cornerstone of science is the ability to accumulate knowledge progressively over time. However, several threats limit this steady accumulation of knowledge.

In 1974, Richard Feynman poignantly warned against the perils of pseudoscience and the relative ease with which scientists can be fooled into believing things that do not stand up to scientific scrutiny or reflect reality (.pdf). His now-famous quote is as relevant as ever:

"The first principle is that you must not fool yourself - and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that."

The open science movement is concerned with providing a scientific platform that prevents scientists from fooling themselves, as well as each other. For a short primer on open science, see a recent paper from the lab (Ramsey, 2020).

Turning to recent developments in psychology and neuroscience, threats that limit the ability to build cumulative knowledge have been brought into sharp focus, which ignited the open science movement and led to suggestions for reform (Simmons et al., 2011; Munafò et al., 2017). Indeed, a cycle of studies with low power and a publication bias skewed towards positive results has produced weak evidence for many published claims (Button et al., 2013; Open Science Collaboration, 2015). Low reproducibility hinders the accumulation of knowledge and is an inefficient use of public funds. As such, improving the efficiency and robustness of science has important societal impact. There is no easy fix for low reproducibility, however, because it results from a complex, system-wide problem that requires a diverse set of solutions (Nelson et al., 2018; Munafò et al., 2017).

 

We focus here on the level of the research group, rather than the role of the wider scientific community (e.g., editors, journal policy, peer review, incentives and hiring committees). Below are six methodological approaches that the SoBA Lab has embraced in recent years to improve research efficiency and the robustness of our findings:

 

1. Pre-registration. All studies in our lab are pre-registered to reduce the opportunity for p-hacking* and to provide pre-specified limits on researcher degrees of freedom (Simmons et al., 2011; 2018). Free, online resources are available to enable pre-registration (Open Science Framework: www.osf.io; www.AsPredicted.org) and a growing number of journals offer a registered report format (https://cos.io/rr/).

 

2. Statistical power and sample size. Power analyses are becoming routine for all experiments. Because small and medium effects are common in psychological research, the consequence is that much larger sample sizes are often required. Of course, larger sample sizes are not always practical for some types of research, such as training studies, multi-session fMRI studies and studies of atypical populations. As such, we try to clearly justify our statistical choices (Lakens et al., 2018), as well as consider alternative ways to increase power and determine target effect sizes (Albers & Lakens, 2018; Open Science Collaboration, 2017; Lakens, Scheel & Isager, 2018). A minimal power-calculation sketch appears after this list.

 

3. Replication. Multi-experiment replications are used to minimise the likelihood that we publish and pursue false positives (Zwaan et al., 2018).**

 

4. Meta-analysis. Meta-analyses and meta-analytical thinking are being incorporated wherever possible (Cumming, 2012). A minimal meta-analysis sketch appears after this list.

 

5. Open data. Data should be made freely available to facilitate meta-analysis, synthesis and further analyses. User-friendly data repositories are available for data in general (e.g., Open Science Framework), as well as for specialised data formats such as neuroimaging data (e.g., neurovault.org, OpenfMRI.org). We are also trying to make our analysis pipelines open and available to others via the use of R and R Markdown (https://rmarkdown.rstudio.com/); see the sketch after this list.

 

6. Pre-prints. Wherever possible we try to post pre-print versions of articles online to widen the “pre-review” process and to speed up the communication of research findings. Preprint servers are readily available and easy to use (e.g., bioRxiv, PsyArXiv).
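To make point 2 above concrete, here is a minimal a-priori power calculation in base R; the effect size and design are hypothetical planning values rather than figures from a specific study.

```r
# Sample size for a two-sample t-test, assuming a medium effect (d = 0.5),
# alpha = .05 and 80% power (hypothetical planning values).
power.t.test(delta = 0.5, sd = 1,        # standardised difference, d = delta/sd
             sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# Roughly 64 participants per group; a small effect (d = 0.2) would need
# roughly 394 per group, which is why power planning matters.
```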

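Relating to point 4, the sketch below shows what a minimal random-effects meta-analysis could look like in R using the metafor package; the summary statistics are invented purely for illustration.

```r
library(metafor)

# Invented summary statistics from four hypothetical experiments
dat <- data.frame(
  study = paste("Study", 1:4),
  m1i = c(10.2, 9.8, 11.1, 10.5), sd1i = c(2.1, 2.4, 1.9, 2.2), n1i = c(30, 45, 28, 60),
  m2i = c(9.1, 9.5, 10.0, 9.7),   sd2i = c(2.0, 2.3, 2.1, 2.4), n2i = c(30, 45, 28, 60)
)

# Standardised mean differences (Hedges' g) and their sampling variances
dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

# Random-effects model and a forest plot of the pooled effect
res <- rma(yi, vi, data = dat, slab = study)
summary(res)
forest(res)
```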
 
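For point 5, a minimal sketch of how an analysis pipeline can be shared and re-run with R Markdown is shown below; the file name is a hypothetical placeholder, not our actual repository layout.

```r
# Re-render the full analysis report from the raw data in a shared repository.
# "analysis.Rmd" is a placeholder name for illustration.
install.packages("rmarkdown")                 # once, if not already installed
rmarkdown::render("analysis.Rmd",             # knits code, results and text
                  output_format = "html_document")
```

Because the rendered report is produced directly from code and data, readers can verify every figure and statistic by re-knitting the document.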

We welcome feedback on our attempts to embrace open science initiatives, so please get in touch if you have any relevant suggestions.

------------------------------------------

*p-hacking or data dredging refers to the process of analysing data until a statistically significant pattern emerges, without first devising a specific hypothesis or a pre-determined analysis pipeline. The short simulation below illustrates why this inflates false-positive rates.
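As a minimal illustration (with invented numbers), the simulation below tests several independent null outcomes in each "experiment" and reports only the best p-value, which inflates the false-positive rate well beyond the nominal 5%.

```r
set.seed(1)
n_sims <- 5000; n_per_group <- 30; n_outcomes <- 5

false_positive <- replicate(n_sims, {
  # Five independent outcome measures with no true group difference
  p_values <- replicate(n_outcomes,
                        t.test(rnorm(n_per_group), rnorm(n_per_group))$p.value)
  min(p_values) < 0.05          # keep only the most "significant" result
})

mean(false_positive)            # approximately 0.23, not the nominal 0.05
```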

**A complementary approach is to use methods developed in machine learning that aim to predict out-of-sample effects using cross-validation (Yarkoni & Westfall, 2017). Such approaches can be data-efficient because they avoid the need to collect an additional dataset and allow left-out runs to serve as independent data. A minimal sketch follows.
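The k-fold cross-validation sketch below uses base R, with simulated data and a simple linear model as stand-ins for a real dataset and analysis; the key point is that predictions for each fold come from a model that never saw that fold.

```r
set.seed(1)
n <- 100; k <- 5
dat <- data.frame(x = rnorm(n))
dat$y <- 0.5 * dat$x + rnorm(n)               # simulated outcome

fold <- sample(rep(1:k, length.out = n))      # random fold assignment
predicted <- numeric(n)

for (i in 1:k) {
  fit <- lm(y ~ x, data = dat[fold != i, ])   # train on the other k-1 folds
  predicted[fold == i] <- predict(fit, newdata = dat[fold == i, ])
}

cor(predicted, dat$y)^2                       # out-of-sample R-squared
```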


References

 

Albers, C. & Lakens, D. (2018). Biased sample size estimates in a-priori power analysis due to the choice of the effect size index and follow-up bias. Journal of Experimental Social Psychology, 187-195.

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365-376. doi: 10.1038/nrn3475

Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge.

Lakens, D. et al. (2018). Justify your alpha. Nature Human Behaviour, 2, 168-171. https://doi.org/10.1038/s41562-018-0311-x

Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence Testing for Psychological Research: A Tutorial. Advances in Methods and Practices in Psychological Science.

Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., . . . Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. doi: 10.1038/s41562-016-0021

Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology's Renaissance. Annual Review of Psychology, 69, 511-534. doi: 10.1146/annurev-psych-122216-011836

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251). doi: 10.1126/science.aac4716

Open Science Collaboration (2017). Maximizing the reproducibility of your research. In S. O. Lilienfeld & I. D. Waldman (Eds.), Psychological Science Under Scrutiny: Recent Challenges and Proposed Solutions. New York, NY: Wiley.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359-1366. doi: 10.1177/0956797611417632

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2018). False-positive citations. Perspectives on Psychological Science, 13(2), 255-259.

Yarkoni, T., & Westfall, J. (2017). Choosing Prediction Over Explanation in Psychology: Lessons From Machine Learning. Perspectives on Psychological Science, 12(6), 1100-1122. doi: 10.1177/1745691617693393

Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making Replication Mainstream. Behavioral and Brain Sciences, 1-50. doi: 10.1017/S0140525X17001972
