Science Needs Both Humility and Audacity
Comment on Hoekstra & Vazire (2021), "Aspiring to greater intellectual humility in science," Nature Human Behaviour.
Alles van waarde is weerloos [All of value is vulnerable]
Lucebert (Dutch painter and poet)
Stigler's (1980) law of eponymy, itself first articulated by Robert Merton, states that no scientific discovery is named after its original discoverer. The classic example, of course, is Hubble's law on the expansion of the universe, first formulated by Georges Lemaître. This Belgian priest could serve as a paragon of the kind of humility in science that Hoekstra and Vazire (2021) advocated in Nature Human Behaviour. Almost as a rule, creative scientists remain uncertain and can also see the downside of their ideas. Another example from the same field: Einstein called the introduction of the cosmological constant into the theory of relativity his "biggest blunder." Nowadays, the constant is highly successful in explaining the expansion of the universe through the "dark energy" model. As Einstein himself remarked, "A person who never made a mistake never tried anything new" (Lea, 2021). Good science is always an uphill struggle along a narrow and winding path. Even the greatest scientists face success and failure, conviction and doubt, joy and sorrow, pleasure and disappointment, and bravado and modesty (cf. Firestein, 2015). The "slow science" (Frith, 2020) that these creative scientists engage in will undoubtedly lead to fewer publications to their name, lower scores on quantitative metrics of research quality (cf. Aitkenhead, 2013), and less initial recognition of their discoveries (cf. Phaf, 2020a, 2020b). One might even wonder how many outstanding scientists with brilliant ideas have gone unnoticed. If we may rely on these historical examples, humility indeed seems to be the hallmark of creative and credible science (cf. Hoekstra & Vazire, 2021). Without audacious theorizing, however, these scientists would not have made their discoveries.
Humbly Retreating to Mere Data
Whether you can observe a thing or not depends on the theory which you use. It is theory which decides what can be observed.
Albert Einstein (as recalled by Werner Heisenberg)
As much as I applaud Hoekstra and Vazire's (2021) call for humility in (psychological) science, it can also overshoot and lead to a retreat from theory. Humble researchers may shy away from taking clear theoretical positions, further exacerbating the theoretical impasse in which experimental psychology currently finds itself (cf. Phaf, 2020a). Such humility would then further add to the publication deluge of theoretically vacuous research papers that "…remain agnostic about which interpretation is most likely to be valid" (Hoekstra & Vazire, 2021). Humility should not entail an absence of ideas. Too many researchers already seem to assume that the data "speak for themselves" and refrain from making the theoretical implications of their findings explicit (e.g., in "mechanical" replication attempts, where no decision is made about what is observed; Phaf, 2020a). In my opinion, a study being well done and satisfying all open-science requirements should not by itself be sufficient for publication. Preregistered research is fine, but publication should only be guaranteed if the work makes theoretical progress. A published study should contribute to our understanding of the topic under investigation, which is the primary goal of all scientific endeavors. Theory is always underdetermined by the available observational and experimental (i.e., empirical) data, and thus requires some imaginative insight from the scientist. Hoekstra and Vazire do not adequately distinguish between data and theory, and their advice seems to be primarily concerned with data collection. Given our ignorance of the best stimuli, task settings, and context conditions (cf. Fiedler, Kutzner, & Krueger, 2012), there is every reason to remain humble about data collection and also about statistical analysis (for a possible majority of false positives, see, e.g., Ioannidis, 2005; for a possible majority of false negatives, see, e.g., Hartgerink, Wicherts, & van Assen, 2017).
Humility in theoretical analysis is also good, but it should focus on the conclusions, whereas audacity is required in the positing of new hypotheses, as long as they contribute to further theoretical integration and do not neglect previous work (cf. Phaf, 2020a).
Humility Must Not Equal Disinterestedness
In general, there is a degree of doubt, and caution, and modesty, which, in all kinds of scrutiny and decision, ought for ever to accompany a just reasoner.
David Hume (An Enquiry Concerning Human Understanding)
Instead of being noncommittal and appearing disinterested, researchers should strive for theoretical audacity while recognizing that their ideas represent only a small step in the evolution of our understanding of the phenomena under investigation. "Just" humility requires that you audaciously profess your own theoretical position, and what you are trying to convince the reader of, while at the same time being able to see its limitations and always remaining open to contradictory evidence. Disinterestedness on the part of scientists represents a seemingly humble ideal that, however, cannot and should not be pursued. Conducting the research already means that the researcher has invested heavily in it. As the wonderfully humble Sir Peter Medawar (1964) remarked, "There is no such thing as unprejudiced observation." If the researcher then pretends to be disinterested, this would be less than transparent and could even be seen as hypocritical and lacking in intellectual honesty. Hoekstra and Vazire (2021) seem to share with the general public a glorified but rather naive view of the pursuit of scientific research. Yet the idealized perception of science as a rule-based, methodical system for establishing "facts" and approaching the "truth" has been falsified by almost all major discoveries in the history of science. More often than not, such discoveries follow from an idiosyncratic, sometimes erratic, quest for understanding in uncharted territories, full of wrong turns, failures, and the rare success (cf. Firestein, 2015; Lehrer, 2009). Science should not be seen as working towards more factual knowledge (i.e., increasing certainty) through disinterested research "robots", but rather as advancing our understanding (i.e., reducing uncertainty) of the world and ourselves through passionate human beings who are strongly engaged in the research process.
The irregular historical development of science shows striking parallels to biological evolution (cf. Marcum, 2017; see also Phaf, 2020a), which probably represents nature's most efficient optimization procedure (cf. Dawkins, 1986). A theory that has proved viable in the past is not overturned by simple Popperian falsification, but by an accumulation of anomalies, and only when it can be replaced by a more adaptive theory (cf. Lakatos, 1970). Selection (i.e., entailing the gradual falsification of losing theories), based on both experimental results and further theoretical work, always takes place in the presence of competing theories and makes no sense for isolated theories. In an evolutionary philosophy of science, nature does not set science the goal of providing a true picture of the world, and science is not progressing or advancing closer to the truth (i.e., the "facts") but away from an inadequate worldview (cf. Marcum, 2017). An evolutionary philosophy thus replaces the goal of truth in a teleological philosophy of science with a more or less undirected scanning of environmental constraints by quasi-random variation, combined with the successive selection, through competition, of the fittest hypotheses. Imaginative and audacious, rather than disinterested, scientists are needed to set up this competition. Without competition between ideas, science cannot evolve.
Humility in Normative Prescriptions
All models are wrong but some are useful
George E. P. Box
Hoekstra and Vazire's (2021) prescriptions for humility emphasize the potential "wrongness" of all models but neglect the usefulness of some of these defective models. Hypotheses may be wrong, but they are not "six of one, half a dozen of the other". Some models fit the theoretical and empirical landscape better than others. Perhaps Hoekstra and Vazire should also exercise humility in their own methodological prescriptions for conducting research. While starting from the best intentions to improve research and publication practices, their "science engineering" model may be unwarranted and may even have harmful side effects. The falsification of all "wrong", but useful, models and hypotheses could easily lead to a "race to the bottom", leaving us in total darkness. I always wonder why normative methodologists do not more often study the history of successful science, which is full of "wrong" but extremely useful models (e.g., Newtonian classical mechanics). Even in psychology, many, or even most, successful innovations (e.g., the discovery of mirror neurons) do not seem to follow the normative prescriptions of Hoekstra and Vazire. Following their advice may even further increase the level of misrepresentation in scientific publications, which almost never reflect the actual thought processes that gave rise to the work (cf. Medawar, 1964). The evolution of science proceeds in a haphazard manner and depends on many contingencies that could potentially be blocked by such prescriptions (cf. Noble, 2010). Firestein (2015) recounts the amusing example of detergents contaminating experimental glassware, which activated the G proteins crucial for transmitting signals from a variety of stimuli outside a cell to its interior. After many years of failed replications, the role of aluminum traces in the detergents was eventually recognized and led to a Nobel Prize.
The accidental discovery of penicillin by Alexander Fleming when, after returning from vacation, he sorted through some discarded Petri dishes containing Staphylococcus colonies can serve as another humbling example of how impossible it is to predict and "normalize" scientific development. The evolution of science capitalizes on inadvertent errors, deviating trains of thought, and breaks with procedure, which cannot easily be captured by normative prescriptions such as those of Hoekstra and Vazire (see also, e.g., Munafò et al., 2017).
Humility in Scientific Education
The value of a college education is not the learning of many facts but the training of the mind to think.
Albert Einstein
What can be done to steer scientists and the general public away from the naive notion that science involves collecting "facts" in a methodical fashion and that hypotheses are automatically generated by the data? Probably, the highly structured and polished format of the research paper (see Medawar, 1964) has contributed to this totally wrong impression of how science actually works and has perpetuated the myth of scientists doggedly adhering to this linear method. Of course, for reasons of clarity, it is better not to describe the messy research process and instead to present an idealized chain of events, from hypotheses, through the evidence, to the conclusions. Undergraduate students in particular, but also the general public, may confuse the presentation of a logical argument with how the research was actually conducted. In today's academic education, we paint a picture only of successful science, not of its failures. As Firestein (2015) noted, "...removing failure from science education removes explanation..." Humility, and an awareness of the limitations of science, should therefore also permeate our scientific training. All students should realize that every idea has an erratic development behind it, and in front of it. Courses could be offered (e.g., "Failures in Psychology") discussing ideas that have lost the competition in psychological science and how they succumbed. The few successful innovations emerging from this struggle, and the thought processes leading up to them, will automatically come to light when dealing with this history of origins. Such humility training could eventually trickle down to policymakers and funding agencies as well, allowing them to better align their policies with the intrinsic unpredictability of developments in science. An analogue of Stigler's law applying to funding agencies would state that no scientific innovation is subsidized by these agencies in the run-up to the original discovery.
Follow-up research, sometimes by scientists less humble than the original discoverer, is more readily funded by these agencies and is therefore more likely to receive the recognition that the original discoverer deserves. As Noble (2010) astutely points out, the few successful revolutionaries are indistinguishable from the numerous failed heretics, except in retrospect. The much larger number of heretics than revolutionaries biases the funding agencies toward conservative policies. Consequently, funding the pink diamonds in research is more likely with a lottery system after minimal triage than with the current review procedures of these agencies (cf. Firestein, 2015; Phaf, 2020b). Such a system would, moreover, lead to less interference with scientific work from the writing and reviewing of grant proposals, and to significant savings of valuable time and money. More humility among methodologists and policymakers than among practicing scientists seems called for, and the latter in particular should adopt an audacious stance toward theoretical innovation.
References
Aitkenhead, D. (2013, December 6). Peter Higgs: I wouldn't be productive enough for today's academic system. Retrieved from https://www.theguardian.com/science/2013/dec/06/peter-higgs-boson-academic-system
Dawkins, R. (1986). The blind watchmaker: Why the evidence of evolution reveals a universe without design. New York: W.W. Norton.
Fiedler, K., Kutzner, F., & Krueger, J.I. (2012). The long way from α-error control to validity proper: Problems with a short-sighted false-positive debate. Perspectives on Psychological Science, 7, 661-669.
Firestein, S. (2015). Failure: Why science is so successful. Oxford UK: Oxford University Press.
Frith, U. (2020). Fast lane to slow science. Trends in Cognitive Sciences, 24, 1-2.
Hartgerink, C.H.J., Wicherts, J.M., & van Assen, M.A.L.M. (2017). Too good to be false: Nonsignificant results revisited. Collabra: Psychology, 3(1), 9. http://doi.org/10.1525/collabra.71
Hoekstra, R. & Vazire, S. (2021). Aspiring to greater intellectual humility in science. Nature Human Behaviour. https://doi.org/10.1038/s41562-021-01203-8
Ioannidis, J.P.A. (2005). Why most published research findings are false. PLoS Medicine, 2, 0696-0701. http://doi.org/10.1371/journal.pmed.0020124
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91-196). Cambridge UK: Cambridge University Press.
Lea, R. (2021). A new generation takes on the cosmological constant. Physics World, 34, 42-47.
Lehrer, J. (2009, December 21). Accept Defeat: The Neuroscience of Screwing Up. Retrieved from: http://www.wired.com/2009/12/fail_accept_defeat/2/
Marcum, J.A. (2017). Evolutionary philosophy of science: A new image of science and stance towards general philosophy of science. Philosophies, 2, 25.
Medawar, P.B. (1964). Is the scientific paper fraudulent? Saturday Review, August 1, 1964, 42-43.
Munafò, M.R., Nosek, B.A., Bishop, D.V., Button, K.S., Chambers, C.D., Du Sert, N.P., ... & Ioannidis, J.P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021
Noble, D. (2010). Funding the pink diamonds: A historical perspective. Notes and Records of the Royal Society, 64, 97-102.
Phaf, R.H. (2020a). Publish less, read more. Theory & Psychology, 30, 263-285.
Phaf, R.H. (2020b). Publish less, read more: Replies to Clegg, Wiggins, and Ostenson; and to Trafimow. Theory & Psychology, 30, 299-304.
Stigler, S.M. (1980). Stigler's law of eponymy. Transactions of the New York Academy of Sciences, 39, 147-157.