The test center where I took my one and only driving exam was located in front of a prison. The sign warned that attempts to break in — or out, presumably — would result in a custodial sentence. Thus, one might argue that wrong turns could be punished with jail time.
The computerized exam that I took, however, was far less daunting. You were permitted to get a number of questions wrong, which was another way of saying that, like a batter in baseball, you could wait for something in your wheelhouse. In the safety section, for instance, you could get by with only answering questions about drinking and driving. Among the choices, the correct answer was always the lowest amount of alcohol. I earned my learner's permit on my first try, and eight years later I have yet to get behind the wheel of a car. The experience, however, stuck with me for its simplicity rather than for any moral dilemmas.
Autonomous cars, which are presumably gifted with more consistent judgment than the average twenty-something male, will face much sterner tests. Most car crashes barely make it into local news coverage, but the first notable failure of Tesla's Autopilot system was national news. Such failings are, in a sense, mechanical. Maybe the system mistook a truck for something else. (That theory, though it is the vehicular version of "the dog ate my homework," is actually being floated.) But in a broader sense, all software failings are human failings. We, after all, made these systems.
MIT Media Lab’s Moral Machine is an attempt to reintroduce the human factor in autonomous driving. It presents a series of dilemmas an autonomous car might face—situations in which there is often no perfect answer, only trade-offs—and asks humans to weigh in on the optimal solution. These are all variations of the classic trolley dilemma: do you keep going in a straight line and kill many people or swerve and kill just one? Is it better to kill more people through inaction, or fewer by acting to select your one victim? Worryingly, the possibilities in the Moral Machine feel endless, just as the answers all feel imperfect.
You might not want your car to choose between hitting a cat and an elderly woman, but would you really prefer to make that choice yourself? Autonomous cars may be colder and more calculating than human drivers, and that creates a certain level of discomfort. To know that a vehicle is doing cost-benefit analyses, as opposed to madly swerving like your cousin in a snowstorm, is not an entirely reassuring thought; when conscious calculation is involved, we are far less forgiving. That, in a nutshell, is why a Tesla Autopilot accident will be more newsworthy than any other for the foreseeable future. It is also why Moral Machine aims to redistribute some of the power away from systems and back to regular people.
It is entirely possible that over one's decades as a driver, a variant on the trolley dilemma will crop up. There is no perfect solution to this problem, and most people understand as much. But is that necessarily a good thing? Even if there's no perfect answer, it's not obvious that a driver should be able to get behind the wheel of a car knowing only that, when in doubt, less alcohol is better and speed limits ought to be obeyed. Those are, however, the standards that currently exist. In attempting to make autonomous cars a little more humane, then, we are also attempting to make human drivers a little more calculating. You may choose to find that comforting if you are so inclined.
Try out Moral Machine for yourself on its website.