Martin George evaluates the moral issues surrounding the programming of self-driving cars, and the hopes for their future.
Imagine you are a self-driving car. You hit a bump in the road that your many sensors couldn’t predict, causing you to veer off the road. Your two options are to swerve into a tree, killing your passenger, or stay on course and kill two pedestrians who happen to be passing by. How can you program something to make a decision like this? Is it even ethical to leave the fate of human beings in the hands of cold, hard logic?
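Part of what makes this so uncomfortable is how easy the mechanical part is. Below is a deliberately naive sketch of how such a choice might be encoded as harm minimisation; every name and number in it is invented for illustration, and no real vehicle is claimed to work this way.

```python
# A deliberately naive sketch of harm-minimising choice. All names and
# numbers are invented for illustration; real autonomous vehicles do not
# (and arguably should not) work like this.

def expected_harm(outcome: dict) -> float:
    """Score an outcome by its expected fatalities."""
    return outcome["fatalities"] * outcome["probability"]

def choose_action(outcomes: dict) -> str:
    """Pick the action whose outcome minimises expected harm."""
    return min(outcomes, key=lambda action: expected_harm(outcomes[action]))

# The opening paragraph's dilemma, reduced to numbers:
dilemma = {
    "swerve":      {"fatalities": 1, "probability": 0.9},  # hit the tree
    "stay_course": {"fatalities": 2, "probability": 0.9},  # hit the pedestrians
}

print(choose_action(dilemma))  # -> swerve
```

The ease of writing such code is exactly the problem: the weights do all the moral work, and nothing in the code says who should choose them.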
Researchers are attempting to get to the bottom of this.
The MIT Media Lab has created something called the “Moral Machine”: essentially a database of scenarios in which there is no real positive outcome, for which it has collected 40 million anonymous responses.
Although this experiment seems bizarre, researchers at MIT claim it won’t be long before we have to take the results seriously:
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now.”
Most of the trends were predictable: people opted to save humans rather than animals, to save as many people as possible, and to save the young over the elderly. Some of the less well-defined trends included “saving females over males, saving wealthy and successful people over poor people, and saving pedestrians rather than passengers.”
Even if trends can be spotted, the issue was never going to be clear cut. Differences in voting habits became apparent from country to country: France, Greece, Canada and the UK, for example, put the most emphasis on sparing the young, whereas Taiwan and China put the least. This raises further questions, such as whether the rules should vary from country to country.
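If regulators ever did let the rules vary, the result might look something like per-country “moral configuration”. The sketch below is purely hypothetical: the weights are invented to loosely echo the reported trend, not the study’s actual figures.

```python
# A purely hypothetical sketch of per-country "moral configuration".
# The weights below are invented to loosely echo the reported trend
# (some countries weighting the young more heavily); they are not the
# Moral Machine study's actual figures.

SPARE_YOUNG_WEIGHT = {
    "FR": 0.90,  # France: strong emphasis on sparing the young
    "UK": 0.85,
    "TW": 0.40,  # Taiwan: far less emphasis
    "CN": 0.35,
}

def priority_to_spare(age: int, country: str) -> float:
    """Higher score = higher priority to spare, under local weights."""
    youth = max(0.0, 1.0 - age / 100)  # crude scale: 1 young, 0 old
    return SPARE_YOUNG_WEIGHT.get(country, 0.60) * youth

# The same eight-year-old, valued differently on each side of a border:
print(round(priority_to_spare(8, "FR"), 2))  # 0.83
print(round(priority_to_spare(8, "CN"), 2))  # 0.32
```

The unsettling consequence is that the same pedestrian could be valued differently on either side of a border.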
The team behind Moral Machine remarked: “Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
This “global conversation” will prove very important. Can you imagine if no regulations were in place, and car manufacturers started using the fact that their car prioritised your life over others as a selling point? Mercedes has already decided that its self-driving cars should protect the passenger, even if that means hitting pedestrians, should such a situation arise. It is a matter of perspective whether they are prioritising the passenger’s life because it is a guaranteed way to save at least one person, or because that one person has helped line the company’s pockets.
This conversation actually started a lot earlier than you might think. In 1942, author Isaac Asimov wrote a short story called “Runaround”, in which he defined hypothetical rules that robots must follow. These became known as Asimov’s Laws of Robotics, and they have transitioned from fiction into real scientific discussion.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a fourth, “zeroth” law, which precedes the others:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
This zeroth law is particularly important in the self-driving car discussion. How can a robot not allow humanity to come to harm if the decision is not about whether to save lives, but about whose to save?
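It is worth spelling out why the laws break down here. The toy encoding below is entirely hypothetical; it treats the First Law as a filter on candidate actions, and shows that in the crash dilemma the filter leaves nothing behind.

```python
# A toy, entirely hypothetical encoding of the First Law as a filter on
# candidate actions: "a robot may not injure a human being".

def lawful_actions(actions: list[dict]) -> list[dict]:
    """Keep only actions in which no human comes to harm."""
    return [a for a in actions if a["humans_harmed"] == 0]

crash_dilemma = [
    {"name": "swerve",      "humans_harmed": 1},  # the passenger
    {"name": "stay_course", "humans_harmed": 2},  # the pedestrians
]

print(lawful_actions(crash_dilemma))  # -> [] : no lawful action exists
```

The laws assume some lawful action always exists; the crash dilemma is a dilemma precisely because it doesn’t.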
Clearly, deciding who to save is never going to command a unanimous answer. However, The Guardian analysed data from one of the many studies on the subject published in the journal Science, and found the following:
“In one survey, 76% of people agreed that a driverless car should sacrifice its passenger rather than plow into and kill 10 pedestrians. They agreed, too, that it was moral for autonomous vehicles to be programmed in this way: it minimised deaths the cars caused. And the view held even when people were asked to imagine themselves or a family member travelling in the car.”
On the face of it, this data suggests we might be getting somewhere in terms of creating a global standard that people would be content with. However, when you specify who is in the driver’s seat, the results change.
“When people were asked whether they would buy a car controlled by such a moral algorithm, their enthusiasm cooled. Those surveyed said they would much rather purchase a car programmed to protect themselves instead of pedestrians. In other words, driverless cars that occasionally sacrificed their drivers for the greater good were a fine idea, but only for other people.”
This probably highlights the selfish side of mankind, but it is completely understandable. If your son or daughter were in the passenger seat, it’s obvious who you would want the car to save. Conversely, if your child were the one crossing the road, you would struggle to come to terms with the fact that a machine had been programmed to view them as expendable.
Despite all this, I am convinced the future is bright for self-driving cars. The conversation certainly needs to continue, though I doubt that more surveys alone will get to the bottom of the issue. Just make sure that when you do eventually find yourself in the market for a self-driving car, you pay close attention to the sales pitch.