Test and Adapt: speeding up climate change adaptation is a tall order

For climate change adaptation, we have little scientific evidence about the value of scientific evidence.

The paper, authored by me (Paul) and my co-authors Ben, Francisco, and Maria, is available in Nature Climate Change: https://www.nature.com/articles/s41558-019-0536-3

At the end of every good applied science project is the "dissemination of results" phase. In that phase, we publish, tweet, op-ed, and present. We blog. In rarer cases, we teach and train people to build their capacity to act. When asked what evidence we have that any of these modes of dissemination moves the needle in important applied contexts, we typically shrug. Given that scientists put evidence on a pedestal, that shrug seems odd.

Odd, but understandable. Applying a scientific lens to measure the impact of our science is not your typical scientific study. That's the conclusion we came to after completing a study that examined whether an effort to develop and disseminate locally-relevant climate science in a drought-prone area changed the way in which communities managed their scarce drinking water. 

The first hurdles we had to overcome were put up - and eventually taken down - by funders. Canada's International Development Research Centre (IDRC) planned to fund a large climate change adaptation project in Central America. The project funded advances in locally-relevant climate science and social science work on high-performing drinking water systems. Like most adaptation projects, it ended with a "dissemination" phase: reports, presentations, and capacity building workshops targeted at the managers of community water systems.

When Fran and I pitched the idea of testing the impacts of the workshops in a randomized controlled trial, IDRC had two objections. First, they objected to randomization - specifically, to the idea of a control group that received no workshops. Most people working on adaptation think support for adaptation - in any form - is a good thing. Their question was, "Is it ethical to deny poor, rural communities access to a good thing?" The answer is clearly "Yes!", but only once we admit that we don't know whether capacity building is actually a good thing and, importantly, that we know it has costs: costs for the funders, costs for the trainers, and costs for the participants (including the risk of acting on bad or misunderstood advice). We were lucky: IDRC staff were willing to admit this point and agreed to randomization.
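
To make the design concrete, here is a minimal sketch of what community-level random assignment can look like in practice. It is purely illustrative: the community identifiers, sample size, and even split below are assumptions made up for this example, not details of the actual trial.

```python
import random

# Hypothetical sketch of community-level random assignment; the IDs, sample
# size, and 50/50 split are placeholders, not the study's actual design.
random.seed(42)  # fixed seed so the assignment is reproducible and auditable

communities = [f"water_system_{i:03d}" for i in range(1, 101)]  # placeholder IDs
random.shuffle(communities)

half = len(communities) // 2
treatment = sorted(communities[:half])  # invited to capacity building workshops
control = sorted(communities[half:])    # no workshops during the trial period

print(len(treatment), "treatment communities;", len(control), "control communities")
```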

Yet there was a second objection. After dissemination, the project ends - and so does its funding. How will we know whether the behaviors of the drinking water systems changed as a result of the capacity building? We simply replied, "Trust us. We'll find money and collect the endline after two years." In fact, we had no idea from whom we would get the money, but we assumed it would be easy. Who wouldn't want to spend a paltry $15K to take a peek at the results? The answer, it turns out, is "a lot of funders." I would end up spending my own discretionary research funds to collect the endline data.

[Photo: Access to clean drinking water is a global concern. Photo by Daniela Linares (© DanielaLinares/EfD Initiative)]

Once we had the funders on board for the trial, we had to make it happen. Fran was the overall supervisor of the project, and thus the supervisor of both the team that developed the scientific inputs that went into the capacity building training and the team that designed the workshop and its materials. We thought it best to have someone else supervise the data collection - someone with no stake in the science that went into the workshops or in the design of the workshops themselves. Maria, who already had a wealth of field experience in Costa Rica, came on board to ensure that the survey instruments and data collection were high quality and unaffected by any pre-existing biases in the field team. For the data analysis and the first draft, we likewise wanted someone who was not involved in the intervention or the data collection. Ben came on board with fresh, unbiased eyes and framed the Big Picture for us.

Walling off the design, implementation, and analysis from each other can be useful (as can pre-registered analysis plans). Why? It helps ensure we don't find something where there is nothing. Although you wouldn't know it from reading scientific journals, most interventions barely move the needle. In fact, Peter Rossi, an American sociologist who participated in dozens of evaluations of social programs, summed up this expectation with his two metallic laws of evaluation:

  • The Iron Law: The expected value of any net impact assessment of any large scale social program is zero.
  • The Stainless Steel Law: The better designed the impact assessment of a social program, the more likely is the resulting estimate of net impact to be zero.

Null results, however, disappoint everyone. They disappoint the field team that designed and implemented the intervention. They disappoint the myriad practitioners and policymakers who are doing, or thinking of doing, the same sort of interventions in other parts of the world. They disappoint the authors, who know it is now going to be harder to publish the results in a top journal (hats off to Nature Climate Change for bucking the forces of publication bias!). They disappoint the media, who wonder how they can write an interesting article about something that did not work. No one wants a null result, and so care has to be taken not to convince yourself you see something in the nothing.

Yet null results are just as valuable as - perhaps more valuable than - results that generate a reaction of, "Wow, I never thought I would see that!" Null results rarely grab headlines, but without them, it's hard to advance the evidence base for policy. To keep us from finding spurious "Wow's" in the data, pre-analysis plans and a division of labor among the team members can help.
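
As a rough illustration of what a pre-specified primary comparison can look like, here is a minimal sketch using synthetic placeholder numbers rather than the study's data; the outcome measure, sample sizes, and adoption rates are assumptions invented for this example.

```python
import numpy as np
from scipy import stats

# Illustrative only: synthetic placeholder outcomes, NOT the study's data.
# A pre-analysis plan commits to the outcome, the estimator, and the
# comparison (here, a simple treatment-control difference in means)
# before the endline data are collected.
rng = np.random.default_rng(0)
treated = rng.binomial(1, 0.32, size=80)  # hypothetical indicator: adopted a recommended practice
control = rng.binomial(1, 0.30, size=80)

diff = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"difference in means = {diff:.3f}, p = {p_value:.2f}")
# A large p-value means the data cannot distinguish the effect from zero - a null result.
```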

And a null result is exactly what we found: we could not detect any meaningful changes in how the water systems were managed two years after the coordinated effort to disseminate locally-relevant climate science and build local capacity in best adaptation responses. If I'm being honest, I was disappointed.

It's obvious that knowledge can change behavior - scientists are a testament to that fact. Based on new knowledge, we change our behavior every day, from what we study to how we study it. The more interesting claim is that creating and disseminating knowledge can reliably and systematically change behavior.  If it cannot, then it's hard to imagine how that knowledge could be used to accelerate adaptation.

Our team believes that capacity building is important and can change behavior, but that the evidence on how best to do it is weak. Given the myriad types of knowledge and ways to deliver them, a single study cannot provide a complete picture. Instead, we hope our study spurs a "Culture of Experimentation" in sustainability programs broadly, and in adaptation programs in particular. As we write in our article, with such a culture we can begin to build a more credible evidence base for encouraging greater uptake of adaptation behaviors globally.
