From shock machines in a basement to a new theory of moral inference

By Jenifer Z. Siegel (University of Oxford) & Molly J. Crockett (Yale University)

Nov 20, 2018
The paper in Nature Human Behaviour is here: https://go.nature.com/2A7us3b

How do we tell if someone might exploit us? How do we distinguish them from someone we can trust? This isn't always easy, because information about someone's moral character is not tattooed on their forehead. Usually, we have to infer people's character from their behavior.

One way to learn about someone’s character is to observe how they make choices that pit their own interests against the interests of others. Six years ago, we started a program of research to study just how people make such tradeoffs. Building on a long tradition in social psychology, we invited research participants to deliver electric shocks to either themselves or strangers in exchange for money. When we began this work (as an MSc student and a postdoc in Prof Ray Dolan’s lab, respectively), we had no idea how much money people would require to shock a stranger. At first, we imagined that many people would flat out refuse. We spent countless hours in the basement of the UCL Wellcome Trust Centre for Neuroimaging shocking ourselves to fine-tune our research protocol. We ended up using EEG electrodes, because our alternative option – aptly called “the wasp” – delivered such a punch we suspected no one would be willing to pull the trigger.

Dr. Crockett testing out the electrodes in the basement of the Wellcome Trust Centre for Neuroimaging.

It turned out our suspicions were unfounded. After we ran our first experiments, we used a simple mathematical model to quantify how much each person valued their own profits relative to the pain of a stranger. Against our initial expectations, many people were readily willing to shock a stranger in exchange for quite small amounts of money – on average, about $0.50 per shock. Of course, we observed a wide range of individual differences. Some of our participants refused to deliver even a single shock to a stranger in exchange for $30. At the other end of the spectrum, we observed people who chose to deliver 20 shocks in exchange for just $0.10.
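The intuition behind this kind of tradeoff model can be sketched in a few lines. This is an illustration, not our published code: here we assume the subjective value of earning money `m` at the cost of `s` shocks to a stranger is `(1 - kappa) * m - kappa * s`, where a "harm aversion" weight `kappa` captures how much someone cares about the stranger's pain, and a softmax rule (with an inverse temperature `beta` we invented for this sketch) turns value into choice probability.

```python
import math

def choice_prob_harmful(money, shocks, kappa, beta=5.0):
    """Probability of choosing the harmful option over doing nothing.

    Subjective value trades profit against the stranger's pain:
        V = (1 - kappa) * money - kappa * shocks
    kappa = 0 means the decider cares only about money;
    kappa = 1 means they care only about avoiding harm.
    A softmax over V versus the value of refusing (V = 0)
    converts value into a choice probability.
    """
    value = (1 - kappa) * money - kappa * shocks
    return 1.0 / (1.0 + math.exp(-beta * value))

# A selfish decider (low kappa) shocks readily for small sums,
# while a harm-averse decider (high kappa) almost never does.
selfish = choice_prob_harmful(money=0.50, shocks=1, kappa=0.1)
averse = choice_prob_harmful(money=0.50, shocks=1, kappa=0.9)
```

Fitting `kappa` to each participant's observed choices is what lets a model like this place people on a spectrum from the $0.10-per-20-shocks end to the refuse-at-$30 end.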

As we analyzed this data, we couldn’t help but imagine how people in our own lives would make these kinds of decisions. Would our best friend flat out refuse to shock a stranger for money (we hoped)? Would our ex-boyfriend shock a stranger for mere pennies (good riddance)? We noticed that we were automatically making inferences about people’s moral character from observing or imagining how they made these simple moral decisions. By reflecting on our own reactions to our research, we realized we’d stumbled on a potential method for probing the cognitive mechanisms of moral inference.

At the same time, we’d been having conversations with two other postdocs at the Wellcome Trust Centre for Neuroimaging whose work fit perfectly with our plans to study moral inference. Robb Rutledge had just published a paper showing how to quantify subjective feelings like happiness, which suggested a way we could measure people’s subjective impressions about the moral character of others. And Christoph Mathys had recently developed a Bayesian model of belief updating that we thought might be able to capture how people update their beliefs about others’ morality. An important feature of Christoph’s model was that it allowed us to measure not just what people believed about the morality of others, but also how certain or uncertain people were about these beliefs. By measuring belief uncertainty, we were able to make new discoveries about how people form impressions about people with different moral characters. 
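The core idea in belief updating with uncertainty – track not just a belief but how confident you are in it, and let that confidence set the learning rate – can be sketched simply. This is a simplified Kalman-filter-style illustration of the principle, not Christoph's actual hierarchical model, and the variable names are ours.

```python
def update_belief(mean, variance, observation, obs_noise=1.0):
    """One Bayesian update of a Gaussian belief about someone's
    moral character, given a noisy observation of their behavior.

    The gain acts as a learning rate: the more uncertain the
    current belief (larger variance), the further a surprising
    observation shifts it, and each update shrinks uncertainty.
    """
    gain = variance / (variance + obs_noise)
    new_mean = mean + gain * (observation - mean)
    new_variance = (1 - gain) * variance
    return new_mean, new_variance

# An uncertain impression moves further toward the same new
# evidence than a confident impression does.
m_uncertain, _ = update_belief(mean=0.5, variance=4.0, observation=1.0)
m_confident, _ = update_belief(mean=0.5, variance=0.1, observation=1.0)
```

This is why measuring belief uncertainty matters: two observers with the same impression but different confidence will revise that impression by very different amounts when they see the same surprising behavior.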

In our moral inference experiments, we had people predict and observe the decisions made by two participants from one of our earlier studies: one who was characteristically “bad” and required very little money for each additional shock to the stranger, and one who was characteristically “good” and required much more money to administer each shock. We used Christoph’s model to characterize people’s evolving beliefs about our past participants’ morality. This allowed us to gain insights into how impressions are formed, and how impressions change when new information suggests that prior impressions may have been wrong.

Five years, eight experiments and more than 1500 participants later, we arrived at a clear conclusion: when people infer a bad moral character, relative to a good moral character, they are more uncertain about their impression. Consequently, their impressions of putatively bad people are more malleable, making it easier for new information to influence their impression and change it. In other words, people cling to good impressions, whereas bad impressions are more subject to change.

The findings have important implications for theories about how cooperation may have evolved. They suggest that when we observe others perform mildly harmful acts, we remain cautious, but do not write them off entirely. Within this optimistic framework of impression formation, we can forgive people for making mistakes and allow them to change over time. Given the importance of forgiveness in maintaining social relationships, the mechanism we identified may be a cognitive precondition for the evolution of cooperation. Moving forward, we hope this work inspires further research on the development and maintenance of healthy relationships. With precise tools for measuring the formation of moral impressions, we may be able to better understand psychiatric disorders characterized by interpersonal dysfunction.

Molly J. Crockett

Assistant Professor, Yale University
