We cannot afford to sit back and watch: robot wars are something to fear

From Lethal Autonomous Weapons to tiny nano-drones, the warfare technology in development is thoroughly alarming

20 Sep 2017

One of the most bewildering things about this point in the 21st century is how utterly bored our leaders are by the greatest existential threat humans have ever faced. ‘Artificial Intelligence?’ they say with a chortle. ‘Don’t worry — it’ll make more jobs than it takes’, by which I think they mean: ‘No algorithm could replace me.’ They take their line from the Economist, which has declared in a lofty way that fears about AI are exaggerated.

Then they return to the pressing matter of which twentysomething might secure a minor ministerial position in the next reshuffle. If you’ve ever heard of the Dunning-Kruger effect, well, there it is in action.

For a politician or a serious-minded magazine to be so very relaxed about AI seems to me bizarre. Superintelligent machines, much cleverer than us, are heading our way. For better or for worse, their impact on jobs, war and the future of our species will be cataclysmic.

Nothing can (or arguably should) stop the science — if the West puts the brakes on machine learning, we’ll have handed the future of humanity to China. But to decide you don’t have to think about it? It’s as if we knew that hyper-smart aliens were on their way here, but our leaders had concluded that because they might be benign, there was no point in preparing for a scenario in which they were not.

To a certain extent, I blame sci-fi. Our politicians and TV talking heads are often children of the 1980s. To them, an interest in robots means you’re a loner nerd, no good at sports. They steer clear for fear of ridicule.

Perhaps that’s also the reason it’s become fashionable to sneer at the tech billionaire Elon Musk, who has repeatedly tried to warn the world about AI. Musk is the Fonzie of the Silicon Valley geek billionaire gang. He dates Hollywood blondes and swanks about in his Tesla convertible — but that doesn’t stop him being right. Musk co-owns OpenAI, a company whose mission is to build ethical robots, and it’s after seeing the frightening advances that his own team have made that he’s decided to speak out. At some point soon, machines will be as smart as humans (that’s AGI, Artificial General Intelligence), he says — then soon after, smarter. We’ve got to consider the implications now. AI is a bigger risk to the world than a nuclear war with North Korea, said Musk in desperation last month. AI is ‘most likely’ to be the cause of a third world war. Do you remember the great fuss made in the media about that? No.

What’s almost stranger than our indifference to the thought of super-intelligent machines of the future is our indifference to the smart autonomous weapons that are already being developed and deployed. In July, Elon Musk and 115 other AI specialists wrote an open letter calling on the UN to ban autonomous weapons — that is, machines that can make their own decisions to kill without any say-so from a human. The Musk gang wrote: ‘Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’

Facebook fills with middle-class indignation about drone strikes and sarin attacks after each awful event. Why did so few share the letter? Why not deploy the outrage before an atrocity, when it can do some good?

There’s no limit to the hunger of governments for sci-fi weapons and almost no limit to the funds available for military R&D, so billions have already been poured into intelligent weapons. That Pandora’s box is already creaking open, and some of the hinged claws of what lies inside are already poking through the gap.

A soldier holds up a ‘Black Hornet’ drone

Cyborgs have been a comic-book staple since 1980, but in the past few years Russia’s Fund for Advanced Research (FPI) has been developing a real-life version. The FPI ‘Defenders of the Future’ are soldiers enhanced by ‘intelligent’ armour. America already has PackBots for defusing and disposing of bombs, and the Pentagon has plans to replace soldiers with autonomous robots over the next few decades. No casualties, no ‘Our boys’ in body-bags, no training of troops, no feeding them, no defecting or whistle-blowing. No PTSD. What politician could resist?

Over the heads of the robot armies, unless we summon the collective will to resist them, will fly an automated air force of Lethal Autonomous Weapons (LAWs). BAE Systems’ new baby is Taranis, an armed, unmanned combat aerial vehicle which can fly unseen by radar and fire on targets on land or in the air. Taranis will enter military service after 2030. The Sea Hunter, developed by America’s DARPA, can patrol the water without human guidance and fire on enemy subs at a fraction of the cost of operating a destroyer.

The point about Taranis and Sea Hunter, and the swarms of potentially lethal nano-drones, is not simply that they’re unmanned but that they’re unguided. There’s no officer with a joystick. And unless the UN bans LAWs soon, there’ll be no human making that final critical decision to fire on the enemy. The algorithm will have power over life and death.

Taranis is pretty alarming: a giant unblinking manta ray floating overhead. But under her wing will shelter her more terrifying cousins, almost invisible to the human eye. It was reported in the spring that the US, China and Russia are all investing billions in nano-weapons, especially nano-drones, smaller than the width of a human hair. A nano-drone might just crouch, fly-like, and spy on an enemy, but it could also carry a payload of toxins or mini-nukes. Most frighteningly, it could be programmed to replicate — each new drone making others that make others, until a biblical plague of lethal insects envelops some godforsaken country. Nano-drones are a given — they’re already here. What’s vital is that we don’t permit them to kill.

Stuart Russell, professor of computer science at Berkeley, recently gave evidence to a UN meeting on LAWs in Geneva. He wrote a piece for Nature soon afterwards, in which he said: ‘In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.’

Fyodor, a Russian military robot that responds to voice commands

Germany wants a ban, Japan wants a ban. Britain, Israel and America — all leaders in the field — hold out. Why? The answer probably lies in the fathomless billions they stand to make, but there is an intellectual case to be made for LAWs. ‘Kill-decision’ robots will save lives, some say. If a little robotic wasp can fly off on its own, infiltrate the house of some terrorist on a kill list, identify him using sophisticated facial recognition software and plunge a lethal nano-shard through his skull, what’s not to like? No collateral damage. Wouldn’t that be so much better than the usual carnage?

Five years ago, the futuristic thriller writer Daniel Suarez gave an interview to The Spectator about the inevitable coming drone wars. Suarez made the point that if even regular drone strikes create more enemies than they eliminate, ‘kill-decision’ drones, for all their accuracy, will provoke an even greater backlash. The more powerless and preyed upon that people feel, the more furiously they react. And what reading of human history shows it to be a good idea to give world leaders the power to exterminate enemies in large numbers, secretly and precisely? A robot can have no mercy, no sudden change of heart. There will be no Christmas football between the trenches in a robot war.

It’s not even clear who’s accountable if a machine gets it wrong. Doesn’t civilisation depend on accountability?

Stuart Russell ends his piece in Nature with a plea: ‘The AI and robotics science communities, represented by their professional societies, are obliged to take a position, just as physicists have done on the use of nuclear weapons, chemists on the use of chemical agents and biologists on the use of disease agents in warfare. Debates should be organised at scientific meetings; arguments studied by ethics committees; position papers written for society publications; and votes taken by society members. Doing nothing is a vote in favour of continued development and deployment.’ This applies to all of us, right now. Doing nothing, reading nothing about it, taking an easy, lazy line, makes you culpable. Alarmingly, future generations depend on us.

