I have to admit that I was nervous about attending a conference about replication and transparency, particularly one that had the word "crisis" in the name. I was expecting speakers to be advocating for their preferred "solution", or that the message of the day would be "if you don't do [insert research transparency innovation here] you are a bad editor/scientist".
I am happy to say that I could not have been more wrong about the tone of the conference, and I cannot commend the organizers enough for bringing together such a thoughtful group of speakers and panelists.
Andrew Gelman did a fantastic job setting the stage for the day with his keynote. I am familiar with Andrew's blog, and so was not surprised that the message he conveyed is that statistics will not save us. Of course, he did discuss the need for better study design and analysis, and in particular he advocated for better measurement - which requires thought - over the current trend of pushing for larger N - which requires money. But he put all of this in the context of the goal of social science research, which he says is to make progress, not to produce definitive results. With that in mind, he thinks we need to stop arguing about p-value thresholds, because "people don't have to believe you". Instead, we should accept that there is going to be uncertainty, and talk about what this means in an honest way.
From there, every single speaker, panelist, and audience member brought nuanced reflection to replication and transparency issues and how they relate to the diverse range of social science methods and data types. For instance, although people naturally had a lot of good things to say about the value of pre-registration, there was hesitation about presenting it as a blanket solution. Susan Goldin-Meadow noted that different fields come up with different answers to similar questions because they take different data seriously, and some types of data are more amenable to pre-registration than others; if we move toward pre-registration as a standard, she worries that this will devalue research that doesn't fit this process.
I had the pleasure of participating in an editors' panel, where I joined academic editors from top journals in psychology (Dolores Albarracin, Psychological Bulletin), sociology (Elisabeth Clemens, American Journal of Sociology), and political science (Paul Huth, Journal of Conflict Resolution; Jan Leighley, American Journal of Political Science). It was really interesting to hear how different journals with different aims and scopes are addressing replication and transparency, and the challenges that result. For instance, Paul Huth described the problems he encounters with his journal's policy of mandating replication files for accepted papers, ranging from the cost of actually running these files to verify them to how to handle proprietary data that authors are unable to provide.
In addition to representing a range of social science disciplines, the speakers also reflected both quantitative and qualitative approaches. I come from a quantitative background, but it has become a pet project of mine to learn more about qualitative data and methods so I can better understand how our data policies at Nature Research apply, and how we can support the move toward greater transparency in this area. To that end, I was delighted to have the opportunity to meet Colin Elman, who is not only an expert in qualitative methods but also the co-director of the Qualitative Data Repository. However, even with resources like QDR, qualitative data present unique challenges. For instance, Jenny Trinitapoli told me she wants to make her data available, but her main variable of interest is HIV status, and her interview data contain information that could easily identify her participants, who come from a small village, even if their names are omitted. It's not obvious how she can share these data in a meaningful way when the parts of the dataset that form the basis of her analysis are by their very nature sensitive. I didn't come away from the conference with clear solutions, but as I continue to think about this I will definitely keep in mind what historian James Morrison said about increasing the rigor of qualitative methods reporting: "The objective is not to help qualitative match quantitative in the science race. The objective is to share wisdom with subsequent scholars".
The question the conference organizers put to the panelists, speakers, and audience was whether we are in a crisis or at a crossroads. The overwhelming consensus was in favor of crossroads, and this framed the conversation in an incredibly productive and inspiring way - rather than talking about how to "fix" science because it is broken, we simply talked about how to make science better.