On the alleged unsuitability of behavioural science for fighting COVID-19

A recent paper questioned the suitability of the whole discipline to provide evidence for policy. The claim is unjustified and unworldly.


Ijzerman et al. question whether behavioural research on COVID-19 is “suitable for making policy decisions” and conclude that policymakers should use it with “extreme care”. Since this conclusion is stated by established behavioural scientists, it is likely to be influential and to provide grounds for policymakers to ignore or downgrade behavioural research.

From my perspective, as head of a team that undertakes behavioural science for governments, including on COVID-19, the commentary is undermining and unhelpful. From an objective perspective, however, the issue is whether its conclusion derives from sound analysis.

The analysis draws an analogy to NASA’s nine “technology readiness” levels and proposes an equivalent for behavioural science. As a framework for thinking about how evidence informs policy, this is badly flawed. NASA does not have to launch a spacecraft until it is extremely confident that the technology will work. Policymakers enjoy no such luxury, especially in a crisis. Moreover, success for policymakers does not depend only on the reliability of the causal relationships underpinning the project; policy decisions are more complex. They embed priorities, values, and preferences regarding risk, uncertainty, and time. Public acceptance matters too.

These factors have been understood for decades by public administration scholars researching the relationship between evidence and policy. This large volume of scholarship is ignored by Ijzerman et al., as is specific work on behavioural evidence. This is unfortunate, because the literature overwhelmingly concludes that idealised systems for assessing evidence and converting it into replicable technologies for policy use are inapplicable, impractical, even naïve, given the dynamic, complex context of real policymaking.

After 15 years of doing research for policy, I concur. Policy discussions rarely touch technical issues of methodological robustness, but typically focus on whether evidence can help at all: Is there time to gather it? Do we have funding for it? Will stakeholders engage with it? Not even the best policymakers want research positioned on a nine-point evidence-readiness scale. In their environment, such niceties are not relevant. In fact, behavioural evidence supports this: weighted integration of factors is not the best strategy for tackling complex multidimensional decisions.

Consequently, it is unworldly and analytically unjustified to imply that the work of behavioural scientists who do not follow such a scheme is unsound.

This is not to downplay meta-scientific issues about statistical inference, replication and effect overestimation. There is a need for humility about what evidence does and does not tell us. These issues apply similarly to medical science. Yet it would be unthinkable to question whether medical science is, generally, unsuitable for policy decisions. The benefits are demonstrable.

Applied behavioural science is providing demonstrable benefits too, with ongoing critical appraisal. It has increased experimental pre-testing of policies, often via high-quality trials. It has helped policymakers to recognise when orthodox economic solutions, which dominate policy design, are unlikely to work. In response to COVID-19, governments’ use of behavioural evidence on handwashing and on support for collective action exemplifies these advances and is based on solid science.

We should think more critically before setting out to undermine such work.

Pete Lunn

Head, Behavioural Research Unit, Economic and Social Research Institute


René van Bavel
4 months ago

Indeed. Policy decisions are difficult. It would be nice to live in a world where all policy decisions had a 100% perfect solution, but that's not the way it works. Not for the important decisions anyway. There is no certainty that one solution is better than another, and no certainty that any solution will work at all. This is all the more true in a changing environment with plenty of unknowns about the nature of the problem itself. Still, evidence beats no evidence.

Lucia Reisch
4 months ago

So true, and thanks, Pete Lunn. Some researchers seem to have little experience of how policymaking really works, and ignoring the practicalities, restrictions, expectations and boundaries of political settings is not helpful.

Juan Giraldo-Huertas
4 months ago

Ijzerman et al., in questioning whether social and behavioural science (SBS) is suitable “for making policy decisions” in the current COVID-19 crisis, show less of the epistemic humility they call for than SBS itself does. Adapting NASA’s readiness-level framework, built on examples from social-personality psychology, to encompass all of psychology, experimental and field economics, sociology, anthropology and even the educational sciences is not merely ambitious: it is very close to absurd. Not everything in Ijzerman et al.’s argument is inaccurate. The need for more exhaustive population studies, for improved reporting precision, for stimulus generalisability and validation, and for independently replicated findings is fundamental to sustaining replicable effects across cultural, historical, political and structural scenarios. The problem is what they call the dynamics of all the above: adapting NASA’s scale to every discipline, method, participant pool and outcome sounds not rigorous but limiting.

Do energy particles or rocket fuel have an internal process that might reduce their participation or responsiveness in lab testing between TRL4 and TRL8? Might cognitive scarcity be a concern for a material-resistance technology transitioning between TRL3 and TRL6? It is hard to take such questions even half seriously.

We SBS scientists do need rigorous frameworks, but they will surely not come from a “mature” science. Our “immature” discipline already has powerful frameworks, such as nurturing care for child development in vulnerable communities (Banerjee et al., 2019; Black, 2020). The COVID-19 pandemic will probably intensify adverse effects and widen the gap between vulnerable and not-at-risk families (Gupta & Jawanda, 2020). We will need educational efforts against the pandemic, including designs for behavioural health strategies in caring communities (Kaslow et al., 2020) as well as general models (Van Bavel et al., 2020). We will also need approaches not oriented only towards bottom-up or political interventions, which explore, with all parsimony and scientific rigour, integrative interventions through inclusive community-based programmes (Guralnick, 2020) and family-centred, at-home frameworks (Giraldo-Huertas et al., in review).

Dynamics and humility, at their best, do not pretend to unify psychology or any other social and behavioural science under a linear, continuous, escalating process. We hope to learn that from crises and from scientific contributions to human wellbeing, just as rocket science does.


Banerjee, A., Britto, P., Daelmans, B., Goh, E. & Peterson, S. (2019). Reaching the dream of optimal development for every child, everywhere: what do we know about ‘how to’? Archives of Disease in Childhood, 104:S1-S2. http://dx.doi.org/10.1136/archdischild-2019-317087

Black, M. (2020). Nurturing Care Framework and Implementation Science: Promoting Nutrition, Health and Development among Infants and Toddlers Globally. In: Black, M., Delichatsios, H., & Story, M. (Eds). Nutrition Education: Strategies for Improving Nutrition and Healthy Eating in Individuals and Communities. Nestlé Nutrition Institute Workshop Series, (Vol. 92, pp. 53-63). Karger Publishers. DOI: 10.1159/isbn.978-3-318-06528-2

Giraldo-Huertas, J., Rueda-Posada, M., Quiroz-Padilla, M., Jauregui, M., & Shafer, G. (in review). Integration of developmental screening and executive function measurement for social intervention in 2-to-6-year-old children in a LMIC.

Gupta, S. & Jawanda, M.K. (2020). The impacts of COVID‐19 on children. Acta Paediatrica, 109, 2181-2183. https://doi.org/10.1111/apa.15484

Guralnick, M. (2020). Applying The Developmental Systems Approach To Inclusive Community-Based Early Intervention Programs. Infants & Young Children, 33(3), 173-183. DOI: 10.1097/IYC.0000000000000167

Kaslow, N., Friis-Healy, E., Cattie, J., Cook, S., Crowell, A., Cullum, K., Del Rio, C., Marshall-Lee, E., LoPilato, A., VanderBroek-Stice, L., et al. (2020). Flattening the emotional distress curve: A behavioral health pandemic response strategy for COVID-19. American Psychologist, 75(7), 875–886.

Van Bavel, J., Baicker, K., Boggio, P. S., Capraro, V., Cichocka, A., Cikara, M., ..., & Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4, 460–471. http://dx.doi.org/10.1038/s41562-020-0884-z