The article in Nature is here: https://go.nature.com/2yYhbcR
A companion website for visually exploring the data by country is here: http://moralmachineresults.scalablecoop.org
On June 23rd, 2016, we deployed Moral Machine. The website was intended to be a mere companion survey to a paper being published that day. Thirty minutes later, it crashed.
For the next two days, we had to deal with a problem that we were both grateful and stressed to have, scrambling to make this website serve a demand far greater than expected. As the coverage of the paper — authored by Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan, and published in Science — picked up in the media, traffic poured in beyond the capacity of the website. As luck would have it, we had to take a flight to the International Conference on Computational Social Science on the day of publication, and were only able to intervene once we had landed in Chicago. Panicked at the prospect of losing the initial wave of respondents, we hacked in overdrive — in the taxi from the airport (yay, mobile hotspot), in the hotel lobby, all evening, and through most of the next day.
Six months earlier, the three authors had been sitting in a bakery located minutes away from the MIT Main Building, discussing a follow-up study to their paper on the social dilemma of autonomous vehicles. On that day, Flour Bakery on Mass Ave witnessed the inception of the Moral Machine.
Sketch of the Moral Machine concept in Iyad Rahwan’s notebook
After Iyad presented the idea to us, we spent the next month testing and refining several design concepts before finally committing to develop a survey focused on the user experience, engineered to encourage sharing, and modeled on personality quizzes.
Early concept sketches for the scenarios (left) and characters (right)
Fast-forward six months to that frenetic weekend: we eventually brought the website to a semblance of stability, and we continued to work on systematically increasing its capacity over the months that followed. A sizeable amount of traffic was coming in regularly, and things looked good for accumulating a dataset that could fuel a good study.
But that was not to be the end of it. To our delight, our efforts to further optimize the site over the next couple of months were rewarded as waves of traffic spikes slammed into it thereafter. The biggest one was in early October of that year, as the Moral Machine racked up enough upvotes to win an extended stay on the front page of Reddit — twice. It subsequently went viral on multiple occasions — sometimes boosted by coverage in international news outlets, sometimes driven by the vast fandoms of mega-popular livestream celebrities in the playthrough/reaction video genre, and sometimes through exposure in national publications with niche readership bases around the world.
Moral Machine was very popular on Japanese Twitter and on Russian-language VK (yes, we generated language-specific sharing widgets too)
The accumulated coverage and subsequent spikes have resulted in a sustained flow of traffic to the Moral Machine since, allowing us to collect 40 million human decisions on the ethical dilemmas of autonomous vehicles within the span of 18 months. Much of this traffic came from users interacting with the website in one of the nine additional languages into which we had translated it by year-end. We also added a demographic survey shortly after, gathering yet deeper information about our users.
“This is stupid! The car should never change its track”, said one Redditor. As we browsed the comments, we saw many Redditors succumbing to the temptation to formalize their approaches with rules. Many offered simple rules like “the car should minimize harm”, or “passengers should take on the risk”. But then, there were some who went on to propose a set of prioritized rules, probably inspired by Asimov’s Laws. Many were confused: “can anyone tell me why the car can’t just stop and avoid all this mess?” YouTubers doing playthroughs grew increasingly exasperated as well, as each new scenario confounded the set of rules they had formulated over the preceding ones.
Reaction playthrough video of the Moral Machine by YouTuber @Jacksepticeye
A more pessimistic stream of feedback also flowed in. For example, a notable machine learning researcher pointed out that “the trolley problem wasn't critical even for trolleys”, while a prominent roboticist described it as “pure mental masturbation dressed up as moral philosophy”. As media coverage grew, we did our best to explain that we were neither prescribing the use of Moral Machine data to train ethical decision-making in real-world autonomous vehicles, nor seeking to supplant moral philosophy in this domain. Instead, we merely wished to provide ground truth that would seed a conversation about moral expectations for autonomous vehicles — a conversation we hoped would include the public, industry, legislators, and yes, moral philosophers.
Looking back, a number of cultural, technological, and social developments coincided with the Moral Machine’s early deployment: general advances in autonomous vehicle technology, coverage, and proliferation; major autonomous vehicle development ventures and real-world testing; new autonomous vehicle startups; the first major autonomous vehicle crash; the US Department of Transportation publishing its autonomous vehicle policy statement; and renewed interest in the Trolley Problem meme after nearly two years of relative quiet. We aren’t certain to what extent the Moral Machine amplified or was amplified by each of these, but we do know that it appeared frequently in the growing autonomous vehicle ethics conversation that we had set out to start.
In any case, the size and complexity of the dataset kept growing, and so did our ambitions and expectations. We began exploring possibilities of applying new perspectives from ethics, anthropology, and psychology to it. We noticed that we had substantial representation from many countries and territories — eventually, over 233. Indeed, our reach was so great that when visualized as a global scatterplot, the result strongly resembles one of those “world at night” images, peppering pretty much every place on earth that had entered the information age. This was an unprecedented opportunity to do a global cross-cultural analysis of people’s ethical prescriptions for autonomous machines.
Moral Machine respondents (left), and the “lit” world (right)
Our team expanded with the addition of Joe Henrich and Jonathan Schulz of Harvard University — world leaders in the study of cross-cultural moral precepts and decision-making. Our fellow lab member, Richard Kim, came to the rescue when we needed to speed up the process of data analysis at the country level.
This cross-cultural analysis would become a pillar of the paper we would eventually develop together and see published today.