Does robotisation spell the end for humanity?
“Society is facing the new unknown,” said Klaus Schwab, World Economic Forum chairman, in a speech in October 2016 that darkly predicted a global workforce obliterated by automation. But that’s only the half of it. Robotisation also threatens to displace humans in the arts and military decision-making. As smart tech accelerates, Rob Orchard asks, is there any way we can avoid hitting auto-destruct?
Illustrations: Christian Tate
11th October 2016 (Taken from: #25)
In 2016 the sporadic chatter about automation became a persistent, unavoidable roar. In January, a report presented at Davos claimed that more than five million jobs will be lost to robots by 2020, in a “revolution more comprehensive and all-encompassing than anything we have ever seen”. In July, a major report from management consultancy giants McKinsey gave an industry-by-industry breakdown of the carnage to come. The biggest losers will be “welders, cutters, solderers and brazers”, nine in ten of whom can expect to have their job stolen by a robot in the next few years.
On 2nd October, World Bank president Jim Yong Kim announced that his organisation believes automation threatens 69 percent of jobs in India and 77 percent in China. That roughly translates to 934 million Indians and Chinese joining the unemployment line in the future. A week later, on 11th October, the World Economic Forum founder and chairman Klaus Schwab addressed what he sees as the current global malaise. “Globalisation and capitalism are seen as the main reason for people’s anger, but the most profound anxiety comes from disruptive new technologies such as robotisation… society is facing the ‘new unknown’, adding to the general morosity.”
General morosity is an understandable reaction to the new era of automation, says Luciano Floridi, professor of philosophy and ethics of information at the Oxford Internet Institute. “Being overwhelmed is a good sign. It means you are actually grasping the seriousness of the situation. This is the fastest ever revolution of this magnitude that we know to have happened to humanity. The agricultural revolution took millennia, the industrial revolution took centuries, the digital revolution is taking years. If anyone is not overwhelmed, they are not living on this planet at the moment.”
The pace of innovation is swift. In January 2016, four former Google engineers launched a startup called Otto, with the aim of retrofitting trucks to make them capable of self-driving on highways. Just seven months later the company was bought for $680 million by Uber. Otto’s technology threatens the jobs of 3.5 million truckers in the US alone. But Uber is not content with just cornering the market in driver-free HGVs: in September it sent a test fleet of driverless Ford Fusion taxis to ply their trade in Pittsburgh.
And it’s not just the solderers, the truckers and the cab drivers who felt the chill wind of automation in 2016. In June the robots came for the shelf-stackers as a stock-taking droid was set loose in a DIY store in San Francisco. In July they came for the posties: Amazon began testing a drone-powered ‘Prime Air’ service which aims to bring users their products within 30 minutes of an order being placed. And in September they came for the seamstresses: in a world first in Seattle, Sewbo the sewing robot sewed a whole garment from scratch with no human input.
Many other job-killing technologies are bubbling under the surface. One major translation company requires its staff to do all translation work within the company’s proprietary software. As they teach the program the infinite complexities of human idioms – long seen as the stumbling block of truly automated translation – they are training the system that will eventually replace them, as the industry shifts from real, live linguists to algorithms. Having been obliged to vote for Christmas, the turkeys flip through their dictionaries nervously, awaiting the sound of sleigh bells and the clank of the roasting dish.
The creation of artificial-intelligence-driven automation systems is proceeding apace in other white-collar professions too, including medicine, architecture, teaching, insurance and the financial industry. Perhaps, though, as the robots take over every human process that can be automated, they’ll leave humans to excel at jobs which require creativity?
Don’t count on it. Consider a new version of Wer nur den lieben Gott läßt walten (He who allows dear God to rule him), originally written in 1641 by Georg Neumark. It’s one of many pieces of music convincingly reharmonised into the style of Johann Sebastian Bach by an algorithm, DeepBach, created by programmers Gaëtan Hadjeres and François Pachet. In tests last year, a group of 365 professional musicians and composition students were fooled into attributing DeepBach’s auto-generated ditties to the flesh-and-blood Baroque composer in almost half of the instances they heard them played.
Meanwhile, the poem For the Bristlecone Snag (sample line: They attacked it with mechanical horses / because they love you, love, in fire and wind) was one of several created by a poetry engine made by programmer Zackary Scholl. It was submitted under a human name to a series of US literary journals, several of which published it. And then there’s Daddy’s Car, the first pop song written by artificial intelligence, which made its YouTube debut in September 2016. The program, designed by scientists at the Sony CSL Research Lab, synthesised 45 snippets of music by The Beatles to create a brand new tune, sung by computer-generated voices. Record company executives will be delighted: they’ll get all the hits without the awkward mess of rehab, broken-hearted groupies and TVs thrown through hotel windows.
Algorithm and blues
Despite the frenetic pace of change, Floridi believes that automation will not come all at once. “It will be patchy,” he says. “And how do you cover the gaps that remain? With human beings. All of a sudden, human beings become the things that connect machine A and machine B, the interface between them. What is left behind after automation are a million boring interface tasks, which require understanding but no intelligence. We will be the gears that connect one wheel to another.”
Given this unappetising potential future, Floridi believes that we need to engage with what new technologies could do to our societies. “We should sit down and decide what kind of world we want to live in ten, 20, 30 years down the road. We should do something like what the Founding Fathers did with the United States: we need a constitution for the kind of world we want to see emerging out of this mess.”
This will mean working out our attitudes towards things like a Universal Basic Income – a stipend which could potentially be paid to the tens of millions left jobless by automation, to keep their heads above water. We’ll also need to have a serious conversation about how we’ll fill our days if the majority of us lose our jobs. “We’ll entertain ourselves, mostly,” says Floridi. “The fear is that we will entertain ourselves into a stupor.”
If many of us are no longer productive members of society, and are claiming an income without paying into the system, we may find it harder to demand the political representation of our views. “There is the end of democracy as we know it,” suggests Floridi.
It is easy, says Floridi, for humans to lose a sense of morality in our rush to embrace automation. “Take the example of online markets,” he says. “It’s not immediately evident to everyone working in this area that they are affecting millions of people by buying, selling and rebuying at the speed of light through algorithms that are not entirely under control. Because morality is one or two steps away, we have seen a race to the bottom… The best, the fastest, the most merciless algorithm will always win.”
Simply hoping for the best is not a good option. “I don’t think that we should allow ourselves the leisure of saying either that there are no boundaries or that the boundaries will be found by the market,” says Floridi. “In both cases we would be deluding ourselves into closing our eyes towards the impact automation is going to have.”
Rage against the machines
Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, has dedicated the past five years of his life to fighting back against the onward march of automation. As a senior spokesperson of the Campaign to Stop Killer Robots, his quest has a particular focus – he wants an international ban on the development and deployment of automated weapons systems that can seek out and destroy targets without human input.
Such weapons already exist. “The Israelis have an autonomous weapon already, which they’ve been using for quite a long time: the IAI Harpy,” he says. “Before they commit to an aerial attack, they send in the Harpys, lots of them, to look for radar signals. When they detect the radar signal, they look up its signature to see if it’s one of theirs. If it isn’t, the Harpy turns into a fully autonomous dive bomber and blows it to pieces… Any kind of terror organisation could take one of these radars and put it on a hospital or school roof. The Harpy can’t tell, it just detects a radar: bang, and the thing’s gone.”
There’s an awful logic behind the automation of killing machines. “It kind of started because of drones,” says Sharkey. “They’ve been a weapon of success for the military. The problem with them is that they’re being used against low-tech nations. As the US has said, if we’re fighting a high-tech nation the first thing they’re going to do is jam all our signals so they’re useless. What they want is mission completeness: if you send the drone off on a mission, it will complete the mission without any need for signals. That means you have no communication or any kind of judgement. You don’t know what it’s doing. It’s gone.”
And by the time your drone is ready to fulfil its mission, the mission may have changed. “It can take two hours to get to a target. So what was tanks on a bridge may now be cars on a bridge.”
It’s not just the fear of technology being jammed that’s pushing the development of automated drones. If two drones face off against one another and one is operating autonomously and the other has to wait – even for a split second – for its orders from a human controller many thousands of miles away, it’s clear which will win. “One of the big excuses for automation is that war is becoming too fast for humans to make quick decisions,” says Sharkey. “That’s why we feel this need to automate. The more you automate, the more other people automate, the quicker it all has to become.”
The issue goes way beyond drones. “The Russians have this Armata super-tank, the T-14. It’s not autonomous but they’re rushing to make it so. It’s ten or 20 years ahead of any other tank in the world. You can imagine thousands of those swarming on the borders of Europe. I don’t like the look of it,” says Sharkey.
“The Americans have been developing a hypersonic, unmanned aircraft that can reach anywhere on the planet in a window of one hour. They’re testing it at 21,000 kilometres an hour.”
And then there’s ‘Ivan the Terminator’, a robot soldier revealed with great fanfare by the Russians in May 2016. Designed to replace human troops “in the battle or in emergency areas where there is a risk of explosion, fire [or] high background radiation”, it is currently remote controlled and unarmed. Video footage shows it padding along a treadmill and playing a driving simulation game with a stern grimace on its shiny metal face. It’s not particularly scary – but when it’s armed with an AK-47 and a program that autonomously identifies enemy combatants, it will command significantly more respect.
Automatic or the people?
Automated killing machines take away the military chain of human accountability in war. They remove the immediate risk of soldiers coming home in body bags, and so make politicians more likely to order military action. They are also likely to lead to job losses among armies, navies and air forces, as GI Joes are replaced by AI Ivans.
Most worryingly, though, they make an accidental apocalypse exponentially more likely. “Once there are autonomous weapons, you need autonomous counter-weapons,” says Sharkey. “A war could take place and be over in, say, 20 minutes, and there’s nothing but devastation, and you ask how did that start? It was an accident! One of their autonomous drone swarms accidentally veered into our border land. Or one of their weapons was hacked.”
Sharkey and his colleagues are working hard to gain agreement on autonomous killing machines. They have persuaded 20 countries to call for an immediate ban. But they’re only too aware of how unwilling the most powerful nations are to place limits on military developments – and of the weaselly ways they get round the wording of agreements. “There were lots of treaties and attempted treaties in the 20th century,” says Sharkey. “There were treaties about not dropping bombs from Zeppelins – and then Austria used a plane to drop bombs, and said the treaty didn’t count because it wasn’t a Zeppelin!” He still believes, though, that any treaty is better than no treaty. “I think we will achieve an outcome, and I think we probably won’t be happy with it,” he says.
As the outlines of a world in which humans have largely automated themselves into irrelevance swim into focus, we should bear in mind that we are the ones who wanted this. “It’s what we have been working towards all our life as humanity on this planet,” says Floridi. “We have been planning since Day One to stop working.” Now we just have to deal with the consequences.