Marketing justice: what consumer research taught us about legal biases
Deciding whether someone accused of a crime is guilty may be a more serious decision than choosing a new apartment or a new car, but the decisions share some essential features. It took a mid-career shift, a fascination with computational methods, and a coincidental choice of lunch spot for us to see the connection. By Pate Skene, John Pearson, and McKell Carter
So how did a statistician, a couple of neurobiologists, and a psychologist wind up working with trial lawyers and consultants to ask people on Amazon their opinions about made-up legal cases?
The story starts several years ago. Pate, a neurobiologist, had decided to follow a lifelong interest in law by enrolling in law school during a sabbatical. In his other life, he collaborated with experts in the neuroscience of decision making, who just happened to be John and McKell’s postdoctoral advisors.
During this time, John and Pate kept running into each other at lunch. John spent a lot of time building computational models of decision-making, and for fun, the two of them tried to think about ways to study legal decisions in this framework. It was obvious that legal decisions were much more complicated than the typical paradigms studied in the lab—they had a lot more features and a lot higher stakes—but it had to be possible to pull out some kind of pattern with enough data.
The key idea turned out to come from marketing. Marketers face some of the same problems as lawyers: their products are often multi-faceted, their presentation draws on both intellectual and emotional dimensions, and those on the receiving end of their messages boil all the information down into a simple decision of whether to buy or not. And they can’t test every possible combination of features, so they need some way to determine which are most important to consumers in reaching a decision.
The marketers’ answer to this dilemma is conjoint analysis. In a nutshell, conjoint analysis is about assigning value to the individual features of a product when all you have are ratings of them in aggregate. It occurred to John that something like this should be possible for legal cases, at least if one had enough data. At the time, using workers on Amazon’s Mechanical Turk platform was still fairly novel, and he and Pate agreed that constructing some simple cases in which evidence was systematically varied might be a good way to study the value individuals placed on each type.
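The core idea can be sketched in a few lines of code: simulate respondents who judge hypothetical case vignettes built from combinations of evidence features, then recover each feature's implicit weight from the aggregate judgments. The feature names, weights, and the simple logistic-regression fit below are purely illustrative assumptions, not the model or data from our study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evidence features a vignette may or may not contain
# (illustrative names, not the categories used in the actual study).
features = ["dna_match", "confession", "eyewitness", "prior_record"]
true_weights = np.array([2.0, 1.5, 1.0, 0.5])  # assumed ground truth
intercept = -2.0

# Each simulated respondent sees one vignette (a random subset of
# features) and returns a binary guilty / not-guilty judgment.
n = 5000
X = rng.integers(0, 2, size=(n, len(features))).astype(float)
p_guilty = 1.0 / (1.0 + np.exp(-(X @ true_weights + intercept)))
y = (rng.random(n) < p_guilty).astype(float)

# Recover the per-feature weights with logistic regression,
# fit by plain gradient descent so no extra libraries are needed.
w = np.zeros(len(features))
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

for name, est in zip(features, w):
    print(f"{name}: {est:+.2f}")
```

The recovered weights approximate the values each feature contributed to the aggregate judgments, which is the essence of conjoint analysis: you never ask anyone to rate "DNA evidence" directly, yet its influence falls out of the pattern of whole-case decisions.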
To Pate’s surprise, his professors embraced the idea. To Don Beskind, an experienced trial lawyer who taught one of Pate’s first-semester classes, and Neil Vidmar, a social psychologist who had become a law professor and leading expert on jury decision-making, it made perfect sense. As Don and Neil joined our conversation, we quickly focused our project on jury decisions in civil or criminal trials. Pate’s work as a law student in Duke’s Wrongful Convictions Clinic highlighted the importance of understanding decision-making in the criminal justice system, including decisions by prosecutors, who make pivotal decisions in the large majority of criminal cases that are resolved without ever going to trial.
The resulting studies, reported in our paper in Nature Human Behaviour, unpack the effects of various types of evidence and the type of crime on decisions about the guilt of someone accused of a crime. We concentrated on categories of evidence that earlier statistical analyses of real criminal cases had highlighted as leading risk factors for wrongful conviction. We were heartened to find that the effect sizes for these types of evidence in our experiments are broadly consistent with their effects in real criminal cases. For example, our research subjects tended to give great weight to conclusions based on forensic science, even when those conclusions were offered without any explanation of how the tests were conducted or of their potential for error. We interpret this to mean that people come to these cases with a strong prior belief that conclusions based on forensic evidence are accurate. In real-world criminal cases, this suggests that prosecutors, judges, and defense lawyers may need to provide jurors with clear expert testimony, or instructions from the judge, about the scientific validity and limitations of various forensic methods, so that jurors have the information they need to decide how much weight forensic evidence deserves.
At the same time, our experimental approach allowed us to look at some important effects that have not been easy to analyze in real criminal cases. Most documented cases of wrongful conviction, for example, involve very serious crimes, but it has been difficult to determine whether the seriousness of those crimes has any effect on the outcome of those cases. By testing the same types of evidence across a wide range of crimes, we found that the type of crime itself increased confidence in guilt, independent of the evidence. That is, any given combination of evidence in our study led to greater confidence in guilt for the most serious crimes than for lesser crimes. We think this is a particularly important finding, because it raises the possibility that defendants charged with very serious crimes could be convicted on the basis of evidence that jurors would not find convincing in ordinary cases. So one of our priorities as we continue this research is to identify factors that may exacerbate or mitigate this crime-type bias, and then to develop best practices for prosecutors, judges, and defense attorneys that can minimize its effects on the outcome of criminal cases involving very serious crimes.