The riddle of the road


Illustrations: Christian Tate

It was the least dramatic car crash ever to make international headlines. On 14th February, a Lexus RX450h SUV drove down El Camino Real – the busy three-lane road that cuts through Mountain View, California – and signalled its intention to make a right turn onto Castro Street. It moved to the right-hand side of the lane and stopped behind some sandbags that had been laid out around a storm drain. The traffic lights turned green and the Lexus moved towards the centre of the lane to pass the sandbags.

A bus approached from behind the Lexus. It advanced, at 15mph. So did the Lexus – at just under 2mph. Clang. The bus hit the side of the Lexus, causing minor damage to its chassis and left front tyre. There were no injuries.

Except, perhaps, to Google’s pride. Undramatic though the low-speed Valentine’s Day fender bender might have seemed to a casual observer, it marked the first time that the company could not blame an accident involving one of its 58 self-driving cars on human error. Google issued a mea culpa in which it accepted “some responsibility” and explained that the software governing the Lexus had since been refined: “Our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”

Google aims to start selling its fully autonomous cars by 2020, and Elon Musk’s Tesla is already producing models with an autopilot function that allows drivers to take their hands off the wheel while the car changes lanes and parks itself. One of the major selling points of autonomous vehicles (AVs) is that they will be safer than human-driven cars, but incidents such as the El Camino Real crash show that accidents remain possible.

When the potential for a collision arises, the AVs respond not with the gut instinct of a panicked human but in a logical fashion predetermined by the software engineers who programmed them. The decisions taken may result in mildly scratched paintwork – but could also, in some cases, end in fatalities.

And this raises a wholly new question for humanity: who should be preordained to die in a car crash by design?

Death by algorithm

On the morning of 14th September 2015, 31 men and women filed into a conference room on the campus of Stanford University in Palo Alto, California. The participants represented the whole gamut of industries involved in driverless car technology. Lawyers, engineers, manufacturers and regulators took their seats around a U-shaped table, swapping pleasantries over coffee and biscuits. But this was not your average tech get-together. The discussions centred around ethics, and a sizeable delegation of philosophers joined in to add a dash of John Stuart Mill and Immanuel Kant to the debate on how driverless cars should be programmed to behave.

At the heart of the meeting lay a moral dilemma posed by British philosopher Philippa Foot in 1967. It’s called the Trolley Problem and it goes like this.

You’re standing in a desert that stretches out as far as the eye can see, broken only by a train track that winds its way through the sands. As you approach it, you realise something’s wrong. At a certain point in the near distance the track splits into two branches, on one of which five people have been tied down against their will. On the other branch, a single person is wriggling helplessly, trying in vain to undo the ropes which hold him in place. A train approaches and is going to run over the group of five. You spot a lever and realise your choice: either you do nothing and let the train career over the group of five, or you pull the lever, change the points and save five lives – but bear responsibility for the death of one. What do you do?

As AVs spread across the planet they will be exposed to similar moral quandaries. If five pedestrians step into the road unexpectedly, should the AV plough into them or swerve into the crash barrier and potentially kill its own passenger? Does the calculus change when the pedestrians are convicted murderers who have just escaped from prison? When one of the passengers is a six-year-old child?

“In normal cars, people just react. They swerve, they slam on the brakes and they don’t really have time to deliberate about what they do,” says Helen Frowe, professor of practical philosophy at Stockholm University. “The thing about driverless cars is that you can programme them in advance. You’ve got to make a choice about how you’ll let them respond.” Software engineers making such decisions in advance may run into novel legal issues as well as theoretical brainteasers. “If programmers are designing a self-driving car ahead of time to [kill] the one person, that could be a kind of pre-meditated murder,” says Ryan Jenkins, an assistant professor of philosophy at California Polytechnic State University who co-organised the Stanford event.


Illustration: Christian Tate

One decision-making algorithm assessed during the Stanford meetup was a simple preference ranking: “First of all, avoid hitting anything if you can, but if you have to hit something, hit another car,” Jenkins explains. “If you can’t avoid that then hit someone on a bike and your absolute last choice should be hitting an unshielded person.” The theory underpinning this ranking appears to be that an AV should strive to minimise damage. But imagine a situation in which a car will inevitably hit one of two cyclists; one wearing a helmet, one without. Do you let the car hit the cyclist with the helmet because they’re less likely to get seriously injured, even though they’ve made a greater effort to ensure their own safety?
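To make the logic concrete, here is a minimal, purely illustrative sketch of how a preference ranking like the one Jenkins describes might look in code – the category names, manoeuvre labels and function are hypothetical, not drawn from any real AV system:

```python
# Hypothetical sketch of the preference-ranking approach described above.
# The categories and their ordering are illustrative, not a real specification.
PREFERENCE_ORDER = [
    "nothing",      # best case: avoid hitting anything at all
    "vehicle",      # if a collision is unavoidable, prefer another car
    "cyclist",      # then someone on a bike
    "pedestrian",   # absolute last choice: an unshielded person
]

def choose_manoeuvre(predicted_outcomes):
    """Pick the manoeuvre whose predicted outcome ranks least bad.

    `predicted_outcomes` maps a manoeuvre label (e.g. "brake") to the
    category of thing that manoeuvre is predicted to hit.
    """
    return min(predicted_outcomes,
               key=lambda m: PREFERENCE_ORDER.index(predicted_outcomes[m]))

# Example: braking still hits a cyclist, swerving left hits a parked car.
print(choose_manoeuvre({"brake": "cyclist", "swerve_left": "vehicle"}))
# -> swerve_left
```

A fixed ordering like this encodes the relative severity of harm; the helmet question above shows how quickly such a ranking runs out of answers.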

One approach is to crowdsource the moral decision-making and hand the results to programmers to code into the cars. Dr Jean-François Bonnefon, a psychologist at the Toulouse School of Economics, recently authored an academic paper for which he surveyed 913 people about whether self-driving vehicles should be programmed to kill their passengers if it would save the lives of bystanders. Perhaps unsurprisingly, Dr Bonnefon and his colleagues found that while their respondents agreed in principle with a car that would sacrifice its passengers, they were less keen on actually buying one themselves. That raises the question of whether manufacturers will want to build such cars in the first place – “Designed to kill you in certain circumstances” is one of the weaker automotive straplines.

Regulation might be the logical answer to hesitations over whether to code a car to kill its passengers: a piece of legislation which says that manufacturers will all have to plug in the same accident reflexes. It can’t be the case that a self-driving Rolls Royce will be programmed to save your life, while an autonomous Honda will plunge you off a cliff to save a cyclist. Surveys such as the ones carried out by Dr Bonnefon could be useful tools in building support for such a measure. “If people have an intuition that the thing to do is to ride in a self-sacrificing car, regulation is one way of making that work,” Dr Bonnefon says. “The idea that a government would say ‘You have to buy a car that’s programmed to kill you’… is not going to be easy to implement. Still, as a psychologist, intuition to me is a very strong argument. You’ll have a better time working with people’s intuition than against it.”

The moral code

Self-driving cars are just one of many emerging technologies in which human decision-making is being outsourced to algorithms. They are already used to diagnose diseases and to execute financial transactions. The FBI compiles its no-fly list with the help of algorithms. These pieces of software might not be as potentially lethal as a faulty self-driving car, but they can certainly affect whether humans are treated in a moral way.

In January, controversy erupted over Beware, a piece of software used by police in Fresno, California, to profile suspects before approaching them. By scouring the web for data including arrest reports, property records and social media posts, the programme would generate ‘threat level’ scores for individuals – red, yellow or green – which could help determine the officers’ behaviour towards them. But the algorithms were flawed because of the data they drew on: in one instance, a woman’s threat level was reportedly raised because she had tweeted about a card game called Rage.

And this is just one of many ways in which algorithms can fall short. “Algorithms are built by humans, who are themselves biased. They’re trained by humans, who are biased. They learn from data, which we think is objective but is generated by humans who have biased practices too,” says Jenkins. “Bias seems to infect the creation and evolution of these algorithms at every single stage.”

Algorithms can also be unpredictable. In October 2015, a programme called AlphaGo beat the European champion of the Chinese strategy game Go by five games to nil. The creators of the software had no idea how the algorithm did it – but it looked a lot like intuition. In March AlphaGo won again, this time beating the world champion four games to one. Algorithms, says Jenkins, have become complex to the point where we can no longer fully anticipate their behaviour. And to make matters even more confusing, they can interact with each other. “We’re multiplying exponentially the ways that these things can interact, the kinds of choices they can make and the number of people that they can affect significantly,” says Jenkins. “It ultimately becomes impossible to fully test them and to anticipate all the ways that can go wrong. I think that’s a reason to be worried.”

For the time being, however, the main concern over self-driving cars is how they interact with humans, not other algorithms. If AVs end up including a self-destruct clause, the riskiest time for their owners will be the crossover period when traffic will still be a mix of human and robot drivers. “There will have to be a tipping point,” says Professor Frowe. “At the moment, with most people in ordinary cars, the odds are really stacked against me if there’s an accident [and I’m in an AV]: They’re going to try to save themselves. My car’s going to try and kill me.” One way of tackling this problem could be issuing a blanket ban on human driving. That might sound absurd today, but it’s the direction in which Elon Musk apparently thinks we’re headed. “[Human driving] is too dangerous,” he said at a tech conference in March 2015. “You can’t have a person driving a two-tonne death machine.”

It’s unlikely that self-driving algorithms will incorporate answers to all the questions discussed at Stanford last September before they go mainstream. Chris Urmson, director of Google’s AV programme, weighed in on the debate himself and downplayed the philosophers’ point of view. “It’s a fun problem for [them] to think about, but in real time, humans don’t do that [when they are driving],” he said in December. “There’s some kind of reaction that happens. It may be one that they look back on and say I was proud of, or it may just be what happened in the moment.” If humans can start driving as soon as they can pass their test, is it fair to expect self-driving cars to be more skilled and sophisticated than humans before they hit the road?

Rationally speaking, perhaps not. But public opinion is not always rational. Dr Bonnefon is currently investigating perceptions of robot surgeons, asking what level of risk we are willing to accept from robot and human surgeons. “It seems that we are willing to overreact to failures by robots, which tells me that every incident [with driverless cars] is going to make a lot of noise,” he says.

A report issued by McKinsey in June 2015 estimated that by 2050, autonomous vehicles could reduce traffic accidents by up to 90 percent. With roughly 33,000 people killed on American roads each year, that means about 30,000 lives would be saved annually in the US alone. But if February’s crash attracted such global attention, we can only imagine what will happen on the day a toddler gets run over by a self-driving car. “It really is going to be a reckoning,” believes Jenkins. “It will go to court and a jury and a judge are going to have to set a precedent for how we deal with cases like this. Until that happens – knock on wood that it never happens – we’re not really going to know how the public and the law are going to treat it.”

That doesn’t mean, says Jenkins, that engineers should retreat and keep their algorithms to themselves until they’ve figured out the ethical implications of every single scenario imaginable. There’s a moral case to be made for pushing driverless cars out into the world as soon as possible, even if it means there’s the occasional fatal accident for which the algorithm is to blame. “We can be pretty confident that if we could switch over to full autonomy tomorrow, we’d save lots and lots of lives,” says Jenkins.

On Wednesday 4th May there was another utterly unremarkable collision involving one of Google’s driverless fleet. A prototype vehicle making its way along Latham Street, also in Mountain View, was approaching the intersection with Chiquita Avenue when, at 9mph, it bumped into a small traffic island. “There were no other vehicles involved and no traffic in the vicinity,” the accident report read.

There were no injuries. Except, perhaps, to the driver’s pride – the Google car had been operating in manual mode when the collision took place.
