Experiences with open science practices

Participating in open science practices does not mean that you need to master everything beforehand, nor is it prohibited to make mistakes. Opinions of a PhD student and author.


Two plus two equals four. Vaccines are safe and effective. Humans cause global warming. None of these statements is called into question unless rhetorical techniques are used to trick people into believing otherwise. When unknowingly confronted with fake experts, false logic, impossible expectations, selectivity, and conspiracy theories, we may even believe that two plus two equals five. The scope of this rhetorical deconstruction of truth became especially noticeable when the elected president of the U.S.A. questioned the safety of vaccination, the Oxford Dictionaries declared post-truth the word of the year 2016, and advocates all over the world looked for ways to support the evidence-based voice for science in public. Thus, our now published article ‘Effective strategies for rebutting science denialism in public discussions’ started as a response to the so-called post-truth era.

During the rise of the post-truth era, new initiatives evolved to increase and regain trust in science. Unlike our article, these initiatives did not focus on how to communicate facts to the public but rather on how to conduct high-quality research that leads to communicable facts. The answer shared by all of these initiatives is open science. Our data collection for the manuscript began a year after the ground-breaking article of the Open Science Collaboration, ‘Estimating the reproducibility of psychological science’. With this in mind, we aimed to replicate our own findings multiple times before drawing any conclusions. However, replication was not the only new standard that was finally gaining long overdue attention in scientific communities. Open science initiatives left no doubt that mechanisms like preregistration and open materials will improve the transparency and quality of science. Thus, with the experiments included in our article, I started my endeavour into the new world of open science: I used the aspredicted.org website to preregister the experiments and the Open Science Framework (osf.io) as a repository to store data and materials. During my first steps, I had to learn the following lessons, especially about what can go wrong on the path to open science.

Lesson 1: Getting used to the new infrastructure

On the OSF website, repositories can be made public at any time. Keeping the repository private felt more comfortable until the manuscript was accepted, so I provided the reviewers with a link to the OSF webspace – and lost a reviewer. How did this happen? When the review process started, said reviewer accidentally requested access to the materials and data at the Open Science Framework (OSF). The automatic email notification revealed his name to me. After realizing this, the reviewer sent an apology to me because he – like me – was new to OSF and had no prior experience with this platform. Next time I will prevent this situation by using the view-only links that OSF offers for peer review.

Lesson 2: Deviations from preregistration protocols happen

I also learned how far I still am from mastering the new discipline of preregistration. For example, I failed to specify some aspects of our analyses in some of the preregistration protocols while I explicated them in others. For reasons of transparency, I reported the deviations from the protocols – but only after repeatedly cursing my past self for being so careless.

Lesson 3: Non-significant results are no reason for a rejection at Nature

Following the reviewers’ advice, we collected more data during the review process and updated our belief about the effectiveness of combining rebuttal strategies. The result is known: either strategy works on its own, and we found no evidence that a combination was more effective than the single strategies (which had been the hypothesis). When submitting the revised version, I was still concerned because of previous experiences. What if non-significant findings were still a reason to reject a manuscript? That is a foolish but widely applied rationale, one that has significantly contributed to publication bias. After resubmitting the manuscript with a feeling of uneasiness, I received an email from the editor applauding the diligence of updating our initial results.

My major takeaway is this: participating in open science practices does not mean that you need to master everything beforehand, nor is it prohibited to make mistakes. Participating in open science means being transparent about the mistakes we make and updating prior beliefs as data come in.

This is very much in line with the idea of publicly uncovering rhetorical techniques in discussions about science. Once the techniques used for an argument are visible to all, we can focus on the evidence at hand.

Thus, open science practices, just like the uncovering of rhetorical techniques, are mechanisms to ensure transparency and detect errors – errors that would otherwise trick people into believing that two plus two equals five.

Philipp Schmid

Research fellow, University of Erfurt