[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Things go wrong, with technology and with people. Case in point: this year I arrived in Geneva on time after a three-leg flight, but last year’s trip was a surreal adventure. United’s hopelessly overworked agents failed to tell me, as I waited for the flight, that my first destination airport was closed, then lied about the unavailability of alternative flights, all while wrestling with a dysfunctional computer system. That was followed by a plane change due to mechanical problems, and then another missed connection.

So yes, things go wrong, with technology and with people, and even more so with vast systems of people enmeshed with machines and operating close to some margin determined by the unstable equilibria of markets, military budgets, and deterrence. Sometimes, one man loses his mind and scores lose their lives; other times, one keeps his sanity and the world is saved from a peril it hardly knew.

On September 26, 1983, the Soviet infrared satellite surveillance system indicated an American missile launch; the computers had gone through all their “28 or 29 security levels,” and it fell to Soviet air-defense lieutenant colonel Stanislav Petrov to decide that it had to be a false alarm, given the small number of missiles the system was indicating. This incident occurred just three weeks after another military-technical-human screw-up had led to the destruction of Korean Air Lines flight 007 by a Soviet fighter, during one of the most tense years of the Cold War, and at a time when the Kremlin seriously feared an American first strike.

Petrov was the subject of a 2014 documentary, The Man Who Saved the World, but if you google that title you may find another film about another man, Vasili Arkhipov, who in 1962, during the Cuban Missile Crisis, was the only one of three officers on board a Soviet submarine to veto a decision to launch a nuclear torpedo at American ships. The Americans had detected the sub and were dropping depth charges as a signal for it to surface.

But of course, there have been many “men who saved the world.” During the 1962 crisis alone one could also cite the role of Llewellyn Thompson, who had been President Eisenhower’s ambassador to the Soviets, as well as the role of Kennedy for trusting Thompson’s calm advice, and of Khrushchev, whose memoirs described how deeply he’d been shaken by nuclear test films he’d seen on joining the top leadership. And Golda Meir is reputed to have put the kibosh on Moshe Dayan’s impulse to brandish nuclear spears during the disastrous early days of the Yom Kippur War, so it isn’t only men who have been in that position.

An entire study could surely be done on this topic, but for now we might just observe that when a single human being understands that the fate of all humanity may rest on his or her own decision, that person tends to exhibit a level of caution, if not wisdom, that may be lacking even in the people around them. Petrov wasn’t certain the missile launch indication was false, and he was supposed to go by the “computer readouts” and report an attack warning up the chain of command. Kennedy’s senior military leadership unanimously advised him to launch an immediate attack on the Soviet missiles in Cuba and invade the island, which we now know would have been defended with tactical nuclear weapons. Arkhipov had to stare down two other high-ranking officers as the three of them sat for hours in the sweltering heat of a crippled sub, with depth charges going off all around them. That’s how and why deterrence works.

But this all shows something else: if we build an automated military system that works perfectly, and carries out doctrines and protocols to the letter, we may find that we have removed the pin that up to now has kept the wheels of war from spinning out of control — the simple fact that, if you are a human being, it’s never a good day for the entire world to die, no matter what the rulebook says. You always look for a glimmer of hope to avoid an apocalypse.

This, to my mind, is the most compelling reason why we must draw a clear red line against the further automation of conflict and confrontation. Human control of technology, and especially of weapons, must be asserted as an absolute principle, and we have to be clear about what decisions we are not going to delegate to machines. The direct control of violent force must be reserved to human beings. This is not a “human right to be killed by other humans”; rather, it is a human right to live in a world where human beings have not been reduced to targets for machines to dispatch, or mere collateral damage in wars between artificially intelligent robots, in which military necessity has driven humankind out of the loop.

These were some of my thoughts as I sat having a beer with fellow Stop Killer Robots campaigners at a bar on the Rhône (actually a dive on a rather dank canal in one of Geneva’s seedier districts). One of my colleagues had mentioned Petrov, who reportedly is now a frail, poor, lonely old man. Saving the world can be a thankless task. Arkhipov died of illness linked to radiation exposure, Kennedy was shot, Khrushchev was deposed, and Thompson retired into obscurity. Most of my Stop Killer Robots colleagues struggle to make ends meet, and I had to beg my way to Geneva. Whether we’ll be of any use in world-saving remains to be seen.

2 Comments

  1. I don't think LAWS are as scary as nukes. And depending on how you define them, they are clearly with us already. "Off the loop" LAWS with no human finger on the button go back to the American Civil War, but they were pretty dumb landmines. Even so, they were machines that "autonomously" killed people. A dumb AWS, but lethal all the same.

    I think the focus on "future tech" is strategically unwise. If the Campaign is to get its motions passed in a diplomatic assembly, it needs to draw some normative lines in the sand that have retrospective implications. Or it needs to draw a normative line in the sand and have an exemption for the extant AWS.

  2. Thanks for your thoughtful reply, robot. See my article in Bulletin of the Atomic Scientists, which addresses some of what you are saying.

    I personally believe AWS are as scary as nukes, for the simple reason that they are a likely route to war between the nuclear powers. Such a war is perhaps not certain to be a nuclear war, but I would regard any significant probability that it would "go nuclear" as pretty scary.

    It's true that almost everything new has some continuity with what has existed in the past. This does not mean that it isn't significantly different, especially after some time has elapsed and you find that the world you are in is "qualitatively" different from the world you remember.
