Futurisms: Critiquing the project to reengineer humanity

Friday, April 17, 2015

Killer robots, international law, and just war theory

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

If the ongoing process of deliberation under the auspices of the United Nations is to result in new law limiting the autonomy of weapon systems, at some point there will need to be lawyers involved. Court was in session on Wednesday afternoon, and the usual rules governing the admissibility of evidence and arguments were in force.

As the debate began to ramp up about three years ago, a handful of law professors, notably Michael Schmitt and Jeffrey Thurnher (of the U.S. Naval War College) and Kenneth Anderson and Matthew Waxman (of American University and Columbia University, respectively), emerged as among the first serious public advocates for autonomous weapons. If you’ve ever read law review articles, you know that they are heavy on arguments from precedent; new situations are to be adjudicated in terms of arguments and judgments from the past. For these lawyerly defenders of killer robots, the main question seems to be whether autonomous weapons are prohibited by law that was written before the possibility of machines making lethal decisions, in place of soldiers or police, was even considered — outside of science fiction.

The legal debate is complicated by the distinctions between domestic law, international human rights law, and international humanitarian law (IHL, which nowadays is essentially synonymous with the “law of war” or the “law of armed conflict”). The convention under which these talks are being conducted is an IHL treaty, which might mean that even if it were to result in a broad ban on killer robots in interstate warfare, they might remain legal for intrastate use by police and by militaries in civil war.

A quick primer on just war theory


The principles of IHL are rooted in the proposition that warfare is lawful if it is just. According to longstanding tradition in just war theory, a war is just only if there is a just reason to go to war (jus ad bellum) and if the war itself is conducted justly (jus in bello). Over time, these abstract principles have been given definition in the form of treaties and other instruments of international law.

At least in theory, jus ad bellum is addressed today primarily by the UN Charter, which proscribes the use of force except in two circumstances: 1) The Security Council has determined that force should be used, or 2) You are under armed attack, and the Security Council hasn’t had time to take action (such as ordering you to surrender). In practice, the Security Council acts when it’s able to, and almost every nation that chooses to go to war claims to be under attack, and invokes Article 51 (the right of self-defense).

Jus in bello comprises principles that are addressed in a number of treaties, the most important and comprehensive of which are the Geneva Conventions of 1949 and their 1977 Additional Protocols. These treaties enshrine and embody a number of principles, the most important of which, in the debate over killer robots, have been 1) distinction, the principle that armed forces must “at all times distinguish” between combatants and civilians, and 2) proportionality, the principle that commanders considering an attack must assess expected harm to civilians, and weigh this against anticipated military gains. There is no particular formula for proportionality, except what a “reasonable” person would decide. A third principle often included under jus in bello is that of humanity, which is usually taken to mean the avoidance of causing suffering that is not needed to achieve military objectives, but actually has deeper roots in the recognition of common humanity — all men are brothers, y’know?

Distinction and proportionality don’t only affect the conduct of armies; they also have implications for weapons. Specifically, a weapon that by its nature is incapable of being directed at military objectives and avoiding civilians is considered inherently indiscriminate, and thus effectively banned. Weapons that cause unnecessary suffering may be deemed inhumane. Many argue that both of these are true of nuclear weapons, but the argument was tested in the International Court of Justice in 1996, and did not carry the day.

“A chilling example of what some may be thinking”


The principal argument that has been advanced by the Campaign to Stop Killer Robots is that computers today do not have — and for the immediate future will not have — capabilities adequate to comply with the jus in bello requirements of distinction and proportionality. The principal counterargument of autonomous weapons proponents is that we don’t know what computers may be capable of in the future; they might ultimately be able to exercise distinction and judge proportionality even better than humans, assuming that “better” is always well-defined.

Lawyers for killer robots like to argue that some nations, especially the United States, already conduct legal reviews of new weapons to determine their consistency with the laws of war. (These are known as “Article 36” reviews.) The United States, and countries which share its aversion to binding arms control, have suggested an increased emphasis on such legal reviews — conducted internally and not subject to public or international oversight, of course — as an alternative to any new law.

This was the case argued by William Boothby, a former Deputy Director of Legal Services for the Royal Air Force (U.K.), and now an associate fellow on “emerging security challenges” at an outfit called the Geneva Centre for Security Policy. (In fact, Boothby runs an intensive four-day course on “Weapons Law and the Legal Review of Weapons” at the Centre, which he was keen to announce to the entire conference. The course is intended for “lawyers, diplomats and other officials”; the next one runs in December and, if you act now, registration is only 750 Swiss francs and includes meals! It’s hard for some folks to turn down a chance for free advertising, I guess.)

Boothby was dismissive of the concept of “meaningful human control,” which he asserted was incompatible with autonomous weapons systems. “That may be obvious,” he informed us, “but I do believe it is worth stating.” We agree. That is the point.

[Image: One of Jensen's slides]
His co-panelist Eric Talbot Jensen of Brigham Young University had much to say about balloons and submarines. The Hague Convention of 1899 had imposed a moratorium on dropping bombs from balloons, which became moot when the First World War filled the skies with airplanes. Submarines were originally regarded with horror because no quarter could be given to the hundreds who would drown when a ship was sunk, yet submarine warfare was eventually normalized because it was so militarily effective. The point of all this seemed to be that, since these early efforts to preemptively ban emerging technology weapons failed, we should not bother to try stopping killer robots today. (Never mind the fact that other weapons technologies have, through both developing norms and international treaties, been limited with great success.)

Jensen then described his vision of

an autonomous weapon ... able to determine which civilian in the crowd has a metal object that might be a weapon, able to sense an increased pulse and breathing rate amongst the many civilians in the crowd, able to have a 360 degree view of the situation, able to process all that data in milliseconds, detect who the shooter is, and take the appropriate action based on pre-programmed algorithms....

I doubt this narrative had quite the effect that Jensen was hoping for. In a response statement to the plenary, another NGO representative described it as “a chilling example of what some may be thinking.”

The third lawyer on the panel was Kathleen Lawand, representing the International Committee of the Red Cross. She did a good job of being evenhanded as she ran down a list of legal criteria that the use of an autonomous weapon would have to meet. To “the general question of whether or not AWS are unlawful,” her thoughtful answer was “it depends.” She certainly brought up quite a few reasons for doubt.

Killer robots — a jus ad bellum concern


Listening to this rather desiccated discussion, it occurred to me that until now, essentially all lawyerly debate about autonomous weapons has been conducted on the assumption that it is entirely a matter of jus in bello, perhaps because all previous debates on the legality of weapons have been entirely within this domain of the law of war. After all, nobody had ever had to consider before that a weapon itself might decide to start a war, unjustly.

This suddenly appeared to me as a door back out to the real world, where we are less concerned about legal correctness and more about things like human dignity, freedom, and survival. Why weren’t any of these things legal issues, I wondered. Do none of them have any place in the law, or in the room that afternoon deciding the future of humanity and killer robots?

Minutes later, after considerable wrangling with my ICRAC colleagues, we had a statement prepared, just in time to be called on so it could be read to the plenary. I’ll simply quote it here:

This discussion has been directed almost entirely to considerations of law derived from the principle of jus in bello. We appear to be overlooking, or excluding, considerations of jus ad bellum that arise from the use of autonomous weapons systems. It is in this context that those considerations also typically discussed as matters of international peace and security may be considered to have implications under the law of armed conflict.

[Photo: Juergen Altmann reading the ICRAC statement on jus ad bellum, with Noel Sharkey looking on]
We are concerned about the destabilization and chaos that may be introduced into the international system by arms races and the appearance of new, unfamiliar threats. In addition, we are concerned, as scientists, about what may happen when nations with an uneasy relationship field increasingly complex, autonomous systems in confrontation with one another. We know that the interactions of such systems are unpredictable for two reasons.

The first is the inherent error-proneness of complex software even when it is engineered by a single co-operative team. The second is that, in reality, these interacting systems will have been developed by non-cooperating teams, who will do their utmost to maintain secrecy and to ensure that their systems will exploit every opportunity to prevail once hostilities are understood to have commenced or, perhaps, are believed to be imminent. Once hostilities have begun, it may become very difficult for humans to intervene and to reestablish peace, due to the high speed and complexity of events. Neither side would want to risk losing the battle once it had begun.

Do these considerations have no implications for the legality of autonomous weapons? Can we consider a war that has been initiated as a result of needless political or military instability, or due to the unpredictable interactions of machines, or escalated out of human control due to the high speed and complexity of events, and not for any human moral or political cause, to be a just war?

Wednesday, April 15, 2015

Killer Robots, the Free Market, and the Need for Law

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

In a well-attended lunchtime side event yesterday (don’t go to UN meetings for the free food; plastic-wrapped sandwiches and water or pop were the offerings, and these quickly disappeared at the hands of the horde of hungry delegates), Canadian robotics entrepreneur Ryan Gariepy spoke about why his company, Clearpath Robotics, declared last year that it does not and will not produce killer robots. With about eighty employees, Clearpath is a young, aggressive developer of autonomous ground and maritime vehicle systems, putting about equal emphasis on hardware and software. The company’s name reflects its original goal of developing mine-clearing robots, and Clearpath is by no means allergic to military robotics in general; its client list includes “various militaries worldwide” and major military contractors. Nevertheless, in a statement released in August 2014, Gariepy, as co-founder and Chief Technology Officer, wrote, “To the people against killer robots: we support you.... Clearpath Robotics believes that the development of killer robots is unwise, unethical, and should be banned on an international scale.”

[Image: Ryan Gariepy’s presentation]
At lunch yesterday, Gariepy explained some of his reasons. He sees a general tradeoff in robotic systems between “flexibility” or “capability” and “predictability” or “controllability,” and worries that military imperatives will drive autonomous weapons toward the former goals. He talked about recent findings that the same “deep learning” neural networks that Professor Stuart Russell had earlier described as displaying “superhuman” performance in visual object classification tasks are also prone to bizarre errors: uniform patterns misclassified as images of familiar objects, and images that the machines recognize correctly but fail to recognize after the addition of what to a human is an imperceptible amount of engineered (non-random) noise. This is one example of the “Black Swan” phenomenon that characterizes complex systems in general. Gariepy also talked about the low costs of subcomponents that would go into killer robots, implying that they could be produced in massive numbers.
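
The noise-sensitivity point is easier to see with a toy model than with a real deep network. Below is a minimal sketch of the idea (my own illustration, not anything shown at the side event), assuming only NumPy and a made-up linear “classifier” over a flattened 100-by-100 image: because there are thousands of pixels, nudging every one of them by an amount too small to see, each in the worst-case direction, swings the classifier’s score by a large margin.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 100 * 100                      # a 100x100 grayscale image, flattened

w = rng.normal(size=n_pixels)             # weights of a toy linear classifier (hypothetical)
x = rng.uniform(0.0, 1.0, size=n_pixels)  # a "clean" image with pixel values in [0, 1]

epsilon = 0.01                            # roughly 2/255 per pixel: invisible to a human
x_adv = x + epsilon * np.sign(w)          # engineered (non-random) noise, aligned with the weights

score_clean = float(w @ x)
score_adv = float(w @ x_adv)              # shifted by epsilon * sum(|w|), which is large

print(f"clean score:              {score_clean:+.1f}")
print(f"adversarial score:        {score_adv:+.1f}")
print(f"largest per-pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```

Real attacks on deep networks work the same way in spirit, except that the worst-case direction is computed from the network’s gradients rather than read directly off a weight vector.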

Gariepy believes in a “robotics revolution” that can be purely benevolent: “After all, the development of killer robots isn’t a necessary step on the road to self-driving cars, robot caregivers, safer manufacturing plants, or any of the other multitudes of ways autonomous robots can make our lives better.” I and, I suspect, many readers of this blog have some questions about what kind of care robots will be able to give, and whether manufacturing plants are going to be “safer” or just not have people working in them at all (and why those people shouldn’t then be doing the caregiving). But it’s clear that we are no longer living in the military spin-off economy of the Cold War era; the flow of technology from military R&D to civilian application has largely reversed. This makes it doubtful that Clearpath really has “more to lose” than it has to gain from the free publicity that came with its declaration, and Gariepy admits it has actually helped him to recruit top-notch engineers who would rather work with a clear conscience.

In contrast with those who find they must wrestle with complexity and nuance in their quest for the meaning of autonomy (see my previous post), Gariepy’s statement took a pretty straightforward approach to defining what he was talking about: “systems where a human does not make the final decision for a machine to take a potentially lethal action.” That’s the no-go, but otherwise, he pledged that “we will continue to support our military clients and provide them with autonomous systems — especially in areas with direct civilian applications such as logistics, reconnaissance, and search and rescue.”

[Photo: Ryan Gariepy, on Lake Geneva]
Fair enough, but in a conversation over beers on the quay at Lake Geneva at day’s end, I pressed Gariepy on just where he would draw the line. For example, I asked, what if a client came to him and said, “We’ve got an autonomous tank, but we don’t want you to work on the fire controls, just the vehicle navigation so it doesn’t run over anybody.” Gariepy was categorical: “You just admitted it’s a lethal autonomous weapon, so I won’t work on it.” What about a “nonlethal” weapon; suppose somebody wants to arm a drone with a taser and have it patrol their estate? Or suppose they have a missile of some sort, and they want to use an algorithm you own a patent on, not to make the missile home in on a target, but to divert it away in case it detects the presence of a human being? It would only be saving lives, then.

Gariepy threw up his hands at such questions and said, “I don’t want to think about all that. I have a business to run.” And in fairness, he is probably the only person who was sitting in the plenary sessions with his laptop open, coding. Referring to the community with nothing else to do but brainstorm and debate the fine print of a killer-robot ban, he added, “You guys think about it, and tell me what to do.”

One of the advantages of being a private entrepreneur, he explained, is not having to make policy to govern such cases in advance. “I can change my mind, or decide as the situation arises.” Unless, that is, there is a law about the matter, and Gariepy wants a law. So he doesn’t have to think about all that.



Killer Robots: How could a ban be verified?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Here’s my latest dispatch from the second major diplomatic conference on Lethal Autonomous Weapons Systems, or “killer robots” as the less pretentious know them. (A UN employee, for whom important-sounding meetings are daily background noise, approached me in the cafeteria to ask where she could get a “Stop Killer Robots” bumper sticker like the one I had on my computer, and said she’d have paid no attention to the goings-on if that phrase hadn’t caught her eye.) The conference continued yesterday with what those who make a living out of attending such proceedings like to describe as “the hard work.”


Wishful thinking on Strategy


Expert presentations in the morning session centered on the reasons why militaries are interested in autonomous systems in general and autonomous weapons systems in particular. As Heather Roff of the International Committee for Robot Arms Control (ICRAC) put it, this is not just a matter of assisting or replacing personnel and reducing their exposure to danger and stress; militaries are also pursuing these systems as a matter of “strategic, operational, and tactical advantage.”

Roff traced the origin of the current generation of “precision-guided” weapons to the doctrine of “AirLand Battle” developed by the United States in the 1970s, responding then to perceived Soviet conventional superiority on the European “central front” of the Cold War. Similarly, Roff connected the U.S. thrust toward autonomous weapons today with the doctrine of “AirSea Battle,” responding to the perceived “Anti-Access/Area Denial” capabilities of China (and others).

Some background: The traditional American way of staging an overseas intervention is to park a few aircraft carriers off the shores of the target nation, from which to launch strikes on land and naval targets, and to mass troops, armor, and logistics at forward bases in preparation for land warfare. But shifts in technology and economic power are undermining this paradigm, particularly with respect to a major power like China, which can produce thousands of ballistic and cruise missiles, advanced combat aircraft, mines, and submarines. Together, these weapons are capable of disrupting forward bases and “pushing” the U.S. Navy back out to sea. This is where the AirSea Battle concept comes in. As first articulated by military analysts connected with the Center for Strategic and Budgetary Assessments and the Pentagon’s Office of Net Assessment, the AirSea Battle concept is based on the notion that at the outset of war, the United States should escalate rapidly to massive strikes against military targets on the Chinese mainland (predicated on the assumption that this will not lead to nuclear war).

Now, from the narrow perspective of a war planner, this changing situation may seem to support a case for moving toward autonomous weapon systems. For Roff, however, the main problems with this argument are arms races and proliferation. The “emerging technologies” that underlie the advent of autonomous systems are information technology and robotics, which are already widely proliferated and dispersed, especially in Asia. Every major power will be getting into this game, and as autonomous weapon systems are produced in the thousands, they will become available to lesser powers and non-state actors as well. Any advantages the United States and its allies might gain by leading the world into this new arms race will be short-term at best, leaving us in an even more dangerous and unstable situation.

Autonomous vs. “Semi-Autonomous”


Afternoon presentations yesterday focused on how to characterize autonomy. (I have written a bit on this myself; see my recent article on “Killer Robots in Plato’s Cave” for an introduction and further links.) I actually like the U.S. definition of autonomous weapon systems as simply those that can select and engage targets without further human intervention (after being built, programmed, and activated). The problems arise when you ask what it means to “select” targets, and when you add in the concept of “semi-autonomous” weapons, which are actually fully autonomous except they are only supposed to attack targets that a human has “selected.” I think this is like saying that your autonomous robot is merely semi-autonomous as long as it does what you wanted — that is, it hasn’t malfunctioned yet.

I would carry the logic of the U.S. definition a step further, and simply say that any system is (operationally) autonomous if it operates without further intervention. I call this autonomy without mystery. It leads to the conclusion that, actually, what we want to do is not to ban everything that is an autonomous weapon, but simply to avoid a coming arms race. This can be done by presumptively banning autonomous weapons, minus a list of exceptions for things that are too simple to be of concern, or that we want to allow for other reasons.

Implementing a ban of course raises other questions, such as how to verify that systems are not capable of operating autonomously. This might seem to be a very thorny problem, but I think it makes sense to reframe it: instead of trying to verify that systems cannot operate autonomously, we should seek to verify that weapons are, in fact, being operated under meaningful human control. For instance, we could ask compliant states to maintain encrypted records of each engagement involving any remotely operated weapons (such as drones). About two years ago, I, along with other ICRAC members, produced a paper that explores this proposal; I would commend it to others who might have felt frustrated by some of the confusion and babble during the conference yesterday afternoon.
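
To make that record-keeping idea a little more concrete, here is a minimal sketch of what a single tamper-evident, encrypted engagement log entry might look like. This is my own illustration of the general approach, not the scheme worked out in the ICRAC paper; the field names are invented, and it assumes the third-party Python “cryptography” package, with the symmetric key escrowed for whatever inspection body the parties agree on.

```python
import json
import hashlib
from datetime import datetime, timezone

from cryptography.fernet import Fernet    # pip install cryptography

key = Fernet.generate_key()               # held by the state; released only to inspectors
cipher = Fernet(key)

def log_engagement(prev_hash: str, operator_id: str, weapon_id: str, order: str) -> dict:
    """Encrypt one engagement record and chain it to the previous entry."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator_id,          # evidence that a human issued this particular order
        "weapon": weapon_id,
        "order": order,
        "prev_hash": prev_hash,           # hash chaining makes later tampering detectable
    }
    plaintext = json.dumps(record, sort_keys=True).encode()
    return {
        "ciphertext": cipher.encrypt(plaintext),        # stays classified with the state
        "hash": hashlib.sha256(plaintext).hexdigest(),  # can be disclosed right away
    }

entry = log_engagement("0" * 64, "operator-117", "uav-042",
                       "engage target designated by operator")
print(entry["hash"])
```

The point of the split is that the hashes could be published promptly while the ciphertext stays classified, so inspectors could later confirm, engagement by engagement, that a human operator issued the order, without the state disclosing operational details in peacetime.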

Tuesday, April 14, 2015

Killer Robots: The Arms Race and the Human Race

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

I mentioned in my first post in this series that last year’s meeting on Lethal Autonomous Weapons Systems was extraordinary for the UN body conducting it in that delegations actually showed up, made statements and paid attention. One thing that was lacking, though, was high-quality, on-topic expert presentations — other than those of my colleagues in the Campaign to Stop Killer Robots, of course. If Monday’s session on “technical issues” is any indication, that sad story will not be repeated this year.

[Video: Aggressive Maneuvers for Autonomous Quadrotor Flight]
Berkeley computer science professor Stuart Russell, coauthor (with Peter Norvig of Google) of the leading textbook on artificial intelligence, scared the assembled diplomats out of their tailored pants with his account of where we are in the development of technology that could enable the creation of autonomous weapons. (You can see Professor Russell’s slides here.) Thanks to “deep learning” algorithms, the new wave of what used to be called artificial neural networks, “We have achieved human-level performance in face and object recognition with a thousand categories, and super-human performance in aircraft flight control.” Of course, human beings can recognize far more than a thousand categories of objects plus faces, but the kicker is that with thousand-frame-per-second cameras, computers can do this with cycle times “in the millisecond range.”

[Image: “embarrassingly slow, inaccurate, and ineffective”]
After showing a brief clip of Vijay Kumar’s dancing quadrotor micro-drones engaged in cooperative construction activities entirely scheduled by autonomous AI algorithms, Russell discussed what this implied for assassination robots. He lamented that a certain gleaming metallic avatar of Death (pictured at right) had become the iconic representation of killer robots, not only because this is bad PR for the artificial intelligence profession, but because such a bulky contraption would be “embarrassingly slow, inaccurate, and ineffective compared to what we can build in the near future.” For effect, he added that since small flying drones cannot carry much firepower, they should target vulnerable parts of the body such as eyeballs — but if needed, a gram of shaped-charge explosive could easily pierce the skull like a bazooka busting a tank.

Professor Russell then criticized the entire discussion of this issue for focusing only on near-term developments in autonomous weaponry and asking whether they would be acceptable. Rather, “we should ask what is the end point of the arms race, and is that desirable for the human race?” In other words, “Given long-term concerns about the controllability of artificial intelligence,” should we begin by arming it? He assured the audience that it would be physics, not AI technology, that would limit what autonomous weapons could do. He called on his own colleagues to rehabilitate their public image by repudiating the push to develop killer robots, and noted that major professional organizations had already begun to do this.

Of course, every panel must be balanced, and the counterweight to Russell’s presentation was that of Paul Scharre, one of the architects of current U.S. policy on autonomous weapon systems (AWS), who has emerged as perhaps their most effective advocate. Now with the Center for a New American Security, Scharre worked for five years as a civilian appointee in the Pentagon. In his presentation, he embraced the conversation about the “risks and downsides” of AWS, as well as discussion about the need for human involvement to ensure correct decisions, both to provide a “human fuse” in case things go haywire and to act as a “moral agent.” However, it seems to me that Scharre engages these concerns with the aim of disarming those who raise them, while blunting efforts to draw hard conclusions that would point to the need for legally binding arms control. (Over the past few months I have had a few exchanges with Scharre that you can read about in this post on my own blog, as well as in my new article in the Bulletin of the Atomic Scientists on “Semi-Autonomous Weapons in Plato’s Cave.”)

In a recent roundtable discussion hosted by Scharre at the Center for a New American Security, I emphasized the danger posed by interacting systems of armed autonomous agents fielded by different nations. To illustrate the threat, I drew an analogy to the interactions of automated financial agents trading at speeds beyond human control. On May 6, 2010, these trading systems caused a “flash crash” on U.S. stock exchanges during which the Dow Jones Industrial Average rapidly lost almost a tenth of its value. However, the stock market recovered most of its loss — unlike what would happen if major (nuclear) powers were involved in a “flash war” because of autonomous weapons systems.

Although some critics (including yours truly) have been talking about this aspect of the issue for years, Scharre has recently gotten out ahead of most of his own community of hawkish liberals in emphasizing it, apparently with genuine concern. He acknowledges, for example, that because nations will keep their algorithms secret, they will not know what opposing systems are programmed to do.

However, Scharre proposes multilateral negotiations on “rules of the road” and “firebreaks” for armed autonomous systems as the way to address this problem, rather than avoiding creating such a problem in the first place. In an intervention yesterday on behalf of the International Committee for Robot Arms Control (ICRAC), I asked whether such talks, if begun, should not be seen as an effort to legalize killer robots as much as make them safe.

Of course, to a certain kind of political realist, this may seem the only possible solution. I will admit that if nation-states did field automated networks of sensors and weapons in confrontation with one another, I would want those nation-states to be talking and trying to minimize the likelihood of unintended ignition or escalation of violence, even if I doubt such an effort could succeed before it were too late. But why, I again ask, would we not prefer, if possible, to banish this specter of out-of-control war machines from our vision of the future?

[Photo: The author, delivering the ICRAC opening statement.]
I missed most of the opening country statements because I was busy helping to prepare, and then deliver, ICRAC’s opening statement. Here’s a snippet of what I read:

ICRAC urges the international community to seriously consider the prohibition of autonomous weapons systems in light of the pressing dangers they pose to global peace and security.... We fear that once they are developed, they will proliferate rapidly, and if deployed they may interact unpredictably and contribute to regional and global destabilization and arms races.

ICRAC urges nations to be guided by the principles of humanity in its deliberations and take into account considerations of human security, human rights, human dignity, humanitarian law and the public conscience.... Human judgment and meaningful human control over the use of violence must be made an explicit requirement in international policymaking on autonomous weapons.

From what I did get to hear of the countries’ opening statements, they showed a substantial deepening of understanding since last year. The representative from Japan stated that their country would not create autonomous weapons, and France and Germany remained in the peace camp, although I am told the German position has weakened slightly. (The German statement doesn’t seem to be online yet.) The strongest statement from any NATO member state was that of Croatia, which unequivocally called for a legal ban on autonomous weapons. But perhaps most significant of all was the Chinese statement (also not yet online), which called autonomous weapons a threat to humanity and noted the warnings of Russell and Stephen Hawking about the dangers of out-of-control “superintelligent” AI.

If the Chinese are interested in talking seriously about banning killer robots, shouldn’t the United States be as well? I see a glimmer of hope in the U.S. opening statement, which referred to the 2012 directive on autonomous weapons as merely providing a starting point that would not necessarily set a policy for the future. The Obama administration has a bit less than two years left to come up with a better one.

Monday, April 13, 2015

Killer Robots, Human Responsibility, and a Reason to Hope

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Things go wrong, with technology and with people. Case in point: this year, I arrived in Geneva on time after a three-leg flight, but last year’s trip was a surreal adventure. United’s hopelessly overworked agents didn’t inform me that my first destination airport was closed as I waited for the flight, then lied about the unavailability of alternative flights, all while attempting to work a dysfunctional computer system — followed by a plane change due to mechanical problems, and then another missed connection.

So yes, things go wrong, with technology and with people, and even more so with vast systems of people enmeshed with machines and operating close to some margin determined by the unstable equilibria of markets, military budgets, and deterrence. Sometimes, one man loses his mind and scores lose their lives; other times, one keeps his sanity and the world is saved from a peril it hardly knew.

On September 26, 1983, the Soviet infrared satellite surveillance system indicated an American missile launch; the computers had gone through all their “28 or 29 security levels” and it fell to Soviet air-defense lieutenant colonel Stanislav Petrov to decide that it had to be a false alarm, given the small number of missiles the system was indicating. This incident occurred just three weeks after another military-technical-human screw-up had led to the destruction of Korean Air Lines flight 007 by a Soviet fighter, during one of the most tense years of the Cold War, and at a time when the Kremlin seriously feared an American first strike.

Petrov was the subject of a 2014 documentary, The Man Who Saved the World, but if you google that title you may find another film about another man, Vasili Arkhipov, who in 1962, during the Cuban Missile Crisis, was the only one of three officers on board a Soviet submarine to veto a decision to launch a nuclear torpedo at American ships. The Americans had detected the sub and were dropping depth charges as a signal for it to surface.

But of course, there have been many “men who saved the world.” During the 1962 crisis alone one could also cite the role of Llewellyn Thompson, who had been President Eisenhower’s ambassador to the Soviets, as well as the role of Kennedy for trusting Thompson’s calm advice, and of Khrushchev, whose memoirs described how deeply he’d been shaken by nuclear test films he’d seen on joining the top leadership. And Golda Meir is reputed to have put the kibosh on Moshe Dayan’s impulse to brandish nuclear spears during the disastrous early days of the Yom Kippur War, so it isn’t only men who have been in that position.

An entire study could surely be done on this topic, but for now we might just observe that when a single human being understands that the fate of all humanity may rest on his or her own decision, such individuals tend to exhibit a level of caution, if not wisdom, that may be lacking in even the people around them. Petrov wasn’t certain the missile launch indication was false, and he was supposed to go by the “computer readouts” and report an attack warning up the chain of command. Kennedy’s senior military leadership unanimously advised him to launch an immediate attack on the Soviet missiles in Cuba and invade the island, which we now know would have been defended with tactical nuclear weapons. Arkhipov had to stare down two other high-ranking officers as the three of them sat for hours in the sweltering heat of a crippled sub, with depth charges going off all around them. That’s how and why deterrence works.

But this all shows something else: if we build an automated military system that works perfectly, and carries out doctrines and protocols to the letter, we may find that we have removed the pin that up to now has kept the wheels of war from spinning out of control — the simple fact that, if you are a human being, it’s never a good day for the entire world to die, no matter what the rulebook says. You always look for a glimmer of hope to avoid an apocalypse.

This, to my mind, is the most compelling reason why we must draw a clear red line against the further automation of conflict and confrontation. Human control of technology, and especially of weapons, must be asserted as an absolute principle, and we have to be clear about what decisions we are not going to delegate to machines. The direct control of violent force must be reserved to human beings. This is not a “human right to be killed by other humans”; rather, it is a human right to live in a world where human beings have not been reduced to targets for machines to dispatch, or mere collateral damage in wars between artificially intelligent robots, in which military necessity has driven humankind out of the loop.

These were some of my thoughts as I sat having a beer with fellow Stop Killer Robots campaigners at a bar on the Rhône (actually a dive on a rather dank canal in one of Geneva’s seedier districts). One of my colleagues had mentioned Petrov, who reportedly is now a frail, poor, lonely old man. Saving the world can be a thankless task. Arkhipov died from radiation, Kennedy was shot, Khrushchev deposed, and Thompson retired into obscurity. Most of my Stop Killer Robots colleagues struggle to make ends meet, and I had to beg my way to Geneva. Whether we’ll be of any use in world-saving remains to be seen.

Sunday, April 12, 2015

Killer Robots: Where Is the World Heading?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Before I start blogging the kickoff of this week’s United Nations meeting on killer robots, a little background is called for, both about the issue and my views on it.

I have worked on this issue in different capacities for many years now. (In fact, I proposed a ban on autonomous weapons as early as 1988, and again in 2002 and 2004.) In the present context, the first thing I want to say is about the Obama administration’s 2012 policy directive on Autonomy in Weapon Systems. It was not so much a decision made by the military as a decision made for the military after long internal resistance and at least a decade of debate within the U.S. Department of Defense. You may have heard that the directive imposed a moratorium on killer robots. It did not. Rather, as I explained in 2013 in the Bulletin of the Atomic Scientists, it “establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.” As a Defense Department spokesman told me directly, the directive “is not a moratorium on anything.” It’s a full-speed-ahead policy.

[Images: What counts as "semi-autonomous"? Top: Artist's conception of Lockheed Martin's planned Long Range Anti-Ship Missile in flight. Bottom: The Obama administration would define the original T-800 Terminator as merely "semi-autonomous."]
The story of how so many people misinterpreted or were misled by the directive is complicated, and I won’t get into details right now, but basically the policy was rather cleverly constructed by strong proponents of autonomous weapons to deflect concerns about actual emerging (and some existing) weaponry by suggesting that the real issue is futuristic machines that independently “select and engage” targets of their own choosing. These are supposedly placed under close scrutiny by the policy — but not really. The directive defines a separate category of “semi-autonomous” weapons which in reality includes everything that is happening today or is likely to happen in the near future as we head down the road toward Terminator territory.  A prime example is Lockheed Martin’s Long Range Anti-Ship Missile, a program now entering “accelerated acquisition” with initial deployment slated for 2018. This wonder-weapon can autonomously steer itself around emergent threats, scan a wide area searching for an enemy fleet, identify target ships among civilian vessels and others in the vicinity, and plan its attack in collaboration with sister missiles in a salvo. It’s classified as “semi-autonomous,” which under the policy means it’s given a green light and does not require senior review. In fact, as I’ve argued, under the bizarre definition in the administration’s policy, The Terminator himself (excuse me, itself) could qualify as a merely “semi-autonomous” weapon system.

If it sounds like I’m casting the United States as the villain here, let me be clear: the rest of the world is in the game, and they’re right behind us, but we happen to be the leader, in both technology and policy. For every type of drone (and here I can be accused of conflating issues: today’s drones are not autonomous, although some call them semi-autonomous, but the existence of a close relationship between drone and autonomous weapons technologies is undeniable) that the United States has in use or development, China has produced a similar model, and when the U.S. Navy opened its Laboratory for Autonomous Systems Research in 2012, Russia responded by establishing its own military robotics lab the following year. Some have characterized Russia as “taking the lead,” but the reality is better captured by the statement of a Russian academician that “From the point of view of theory, engineering and design ideas, we are not in the last place in the world.”

[Image: The Big Dog that has Russia's military leadership barking.]
At the 2014 LAWS meeting, Russian and Chinese statements were as bland and obtuse as their American counterparts, but it’s clear that, like the rest of the world, those countries are watching closely what we do, and showing that they are not ready to accept “last place.” Russian deputy prime minister Dmitry Rogozin, head of military industries, penned an article in Rossiyskaya Gazeta in 2014 that amounts to perhaps the closest thing to an official Russian policy response to the publicly released U.S. directive: a clarion call to Russian industry, mired as it is in post-Soviet mediocrity, to step up to the challenge posed by American achievements like “Big Dog” and to develop “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike.” China eschews such straightforwardly belligerent declarations, and interestingly, the Chinese closing statement at last year’s meeting rebuked the American suggestion to focus on the process of legal reviews for new weapons, on the grounds that this would exclude countries that did not yet have autonomous weapons to review — a suggestion of possible Chinese support for a more activist approach to arms control. But China’s activities in drones, robotics, and artificial intelligence speak for themselves; China will not accept last place either.

My question for those setting U.S. policy is this: Given that we are the world’s leader in this technology, but with only a narrow lead at best, why are we not at least trying to lead in a different direction, away from a global robot arms race? Why are we not saying that, of course, we will develop autonomous weapons if necessary, but we would prefer an arms-control approach, based on strong moral principles and the overwhelming sentiment of the world’s people (including strong majorities among U.S. military personnel)? Why not? Why are we not even signaling interest in such an approach? Comments are open, fellas.

In the days to come, I’ll report on both the expert talks and country statements, and whatever else I see going on in Geneva, as well as dig deeper into the underlying issues as they come up. More tomorrow...

Saturday, April 11, 2015

Blogging the UN Killer Robots Meeting

[First post in a series covering the UN’s 2015 conference on killer robots. See all posts in the series here.]

Over the next week, I’ll be blogging from Geneva, where 118 nations (if they all show up) will be meeting to discuss “Lethal Autonomous Weapons Systems” (LAWS) and, you know, the fate of humanity. You may have seen headlines about the United Nations trying to outlaw killer robots, which is a bit inaccurate. First of all, the UN can’t actually outlaw anything; Security Council resolutions are supposed to have the force of law on matters of international peace and security, but apart from attempts to shackle miscreants like Iraq, Iran, and North Korea, the Security Council has never tried to impose arms control on the major military powers, most of which can just veto its resolutions anyway. In any case, that point is moot; this meeting is taking place under a treaty regime affiliated with the UN, the Convention on Certain Conventional Weapons (CCW), whose full name is actually longer and even more boring-sounding than that but has something to do with “excessively injurious” or “indiscriminate” weapons. As an aside, I note that “excessively injurious” weapons are the ones that don’t kill you, not the ones that do. But delegating the issue of autonomous weapons to the CCW is more related to the notion that stupid killer robots, like land mines, would be unable to distinguish civilians from combatants, hence “indiscriminate.”

[Photo: The author (on the right)]
This will actually be the second CCW meeting on LAWS, which is a nice acronym but doesn’t have any official definition. The first meeting, held in 2014, was attended by at least 80 nations, which is very good for a treaty organization whose typical meeting was described by a colleague of mine as “start late, nobody wants to say anything, routine business announcements, and adjourn early.” The 2014 LAWS meeting was nothing like that. The room was packed, expert presentations were listened to intently both in the main sessions and side events, and dozens of countries plus a handful of NGOs made statements. The highlight of the entire week was a statement by the Holy See (Vatican): “... weighing military gain and human suffering... is not reducible to technical matters of programming.” (You can read the full Vatican statement here or listen to it here.) The nadir had to be when the U.S. delegation asserted that the Obama administration’s 2012 policy directive to the military on Autonomy in Weapon Systems represents an example for the rest of the world. Another low point was the closing statement from U.S. State Department legal advisor Stephen Townley, in which he reasserted the same position, adding with condescension that “it is important to remind ourselves that machines do not make decisions.” Oookay, nothing to worry about then, now that we know that autonomy in weapon systems is actually impossible.

Full disclosure: I am a member of one of those NGOs, the International Committee for Robot Arms Control, part of the Campaign to Stop Killer Robots, a multinational coalition led by Human Rights Watch. I don’t speak for them; in fact, I am liable to say things that higher-ups in the hierarchy don’t want to hear (but should listen to, IMHO). But at least you know where I stand (and where I will sit in the big room), in case you were still wondering. I’m grateful to my colleagues on Futurisms for inviting me to blog here, although they may not agree with everything (or anything) I say, either, so please don’t call in drone strikes on them; let me be the martyr, please, if anything I say arouses your human capacity for violence.

Another preview post to come tomorrow, and then more over the next week as the meeting proceeds.