Before I start blogging the kickoff of this week’s United Nations meeting on killer robots, a little background is called for, both about the issue and my views on it.
I have worked on this issue in different capacities for many years now. (In fact, I proposed a ban on autonomous weapons as early as 1988, and again in 2002 and 2004.) In the present context, the first thing I want to address is the Obama administration’s 2012 policy directive on Autonomy in Weapon Systems. It was not so much a decision made by the military as a decision made for the military, after long internal resistance and at least a decade of debate within the U.S. Department of Defense. You may have heard that the directive imposed a moratorium on killer robots. It did not. Rather, as I explained in 2013 in the Bulletin of the Atomic Scientists, it “establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.” As a Defense Department spokesman told me directly, the directive “is not a moratorium on anything.” It’s a full-speed-ahead policy.
[Image: What counts as "semi-autonomous"? Top: Artist's conception of Lockheed Martin's planned Long Range Anti-Ship Missile in flight. Bottom: The Obama administration would define the original T-800 Terminator as merely "semi-autonomous."]
If it sounds like I’m casting the United States as the villain here, let me be clear: the rest of the world is in the game, and they’re right behind us; we just happen to be the leader, in both technology and policy. For every type of drone the United States has in use or development, China has produced a similar model. (Here I can be accused of conflating issues: today’s drones are not autonomous, although some call them semi-autonomous, but the close relationship between drone and autonomous weapons technologies is undeniable.) And when the U.S. Navy opened its Laboratory for Autonomous Systems Research in 2012, Russia responded by establishing its own military robotics lab the following year. Some have characterized Russia as “taking the lead,” but the reality is better captured by the statement of a Russian academician that “From the point of view of theory, engineering and design ideas, we are not in the last place in the world.”
[Image: The Big Dog that has Russia's military leadership barking.]
My question for those setting U.S. policy is this: Given that we are the world’s leader in this technology, but with only a narrow lead at best, why are we not at least trying to lead in a different direction, away from a global robot arms race? Why are we not saying that, of course, we will develop autonomous weapons if necessary, but we would prefer an arms-control approach, based on strong moral principles and the overwhelming sentiment of the world’s people (including strong majorities among U.S. military personnel)? Why not? Why are we not even signaling interest in such an approach? Comments are open, fellas.
In the days to come, I’ll report on both the expert talks and country statements, and whatever else I see going on in Geneva, as well as dig deeper into the underlying issues as they come up. More tomorrow...