Futurisms: Critiquing the project to reengineer humanity

Tuesday, November 24, 2015

Future Selves

In the latest issue of the Claremont Review of Books, political philosopher Mark Blitz — a professor at Claremont McKenna College — has an insightful review of Eclipse of Man, the new book from our own Charles T. Rubin. Blitz writes:
What concerns Charles Rubin in Eclipse of Man is well conveyed by his title. Human beings stand on the threshold of a world in which our lives and practices may be radically altered, and our dominance no longer assured. What began a half-millennium ago as a project to reduce our burdens threatens to conclude in a realm in which we no longer prevail. The original human subject who was convinced to receive technology’s benefits becomes unrecognizable once he accepts the benefits, as if birds were persuaded to become airplanes. What would remain of the original birds? Indeed, we may be eclipsed altogether by species we have generated but which are so unlike us that “we” do not exist at all—or persist only as inferior relics, stuffed for museums. What starts as Enlightenment ends in permanent night....

Rubin’s major concern is with the contemporary transhumanists (the term he chooses to cover a variety of what from his standpoint are similar positions) who both predict and encourage the overcoming of man.
Blitz praises Rubin for his “fair, judicious, and critical summaries” of the transhumanist authors he discusses, and says the author “approaches his topic with admirable thoughtfulness and restraint.”

Some of the subjects Professor Blitz raises in his review essay are worth considering and perhaps debating at greater length, but I would just like to point out one of them. Blitz mentions several kinds of eternal things — things that we are stuck with no matter what the future brings:

One question involves the goods or perfections that our successors might seek or enjoy. Here, I might suggest that these goods cannot change as such, although our appreciation of them may. The allure of promises for the future is connected to the perfections of truth, beauty, and virtue that we currently desire. How could one today argue reasonably against the greater intelligence, expanded artistic talent, or improved health that might help us or those we love realize these goods? Who would now give up freedom, self-direction, and self-reflection?...

There are still other limits that no promise of transhuman change can overcome. These are not only, or primarily, mathematical regularities or apparent scientific laws; they involve inevitable scarcities or contradictions. Whatever happens “virtually,” there are only so many actual houses on actual beautiful beaches. Honesty differs from lying, the loyal and true differ from the fickle and untrustworthy, fame and power cannot belong both to one or a few and to everyone. These limits will set some of the direction for the distribution of goods and our attachment to them, either to restrain competition or to encourage it. They will thus also help to organize political life. Regulating differences of opinion, within appropriate freedom, and judging among the things we are able to choose will remain necessary.

Nonetheless, even if it is true that what we (or any rational being) may properly consider to be good is ultimately invariable, and even if the other limits I mentioned truly exist, our experience of such matters presumably will change as many good things become more available, and as we alter our experience of what is our own — birth, death, locality, and the body.

Let us look carefully at the items listed in this very rich passage. Blitz does not refer to security and health and long life, the goods that modernity arguably emphasizes above all others. Instead, Blitz begins by mentioning the goods of “the perfections of truth, beauty, and virtue.” These are things that “we currently desire” but that also “cannot change as such, although our appreciation of them may.”

Let us set aside for now beauty — which is very complicated, and which may be the item in Blitz’s Platonic triad that would perhaps be likeliest to be transformed by a radical shift in human nature — and focus on truth and virtue. How can they be permanent, unchanging things?

To understand how truth and virtue can be eternal goods, notice how Blitz turns to physical realities — the kinds of scarcities of material resources that Malthus and Darwin would have noticed, although they tended to think more in terms of scarcities of food than of beach houses. Blitz also mentions traits that seem ineluctably to arise from the existence of those physical limitations. The clash of interests will inevitably lead to scenarios in which there will be “differences of opinion” and in which some actors may be more or less honest, more or less trustworthy. There will arise situations in which honesty can be judged differently from lying, loyalty from untrustworthiness. “Any rational being,” including presumably any distant descendant of humanity, will prize truth and virtue. They are arguably pre-political and pre-philosophical — they are facts of humanity and society that arise from the facts of nature — but they “help to organize political life.”

And yet this entire edifice is wiped away in the last paragraph quoted above. “Our experience” of truth and virtue, Blitz notes, “presumably will change” as our experience of “birth, death, locality, and the body” changes. Still, even if we come to experience truth and virtue differently, they will continue to provide the goals of human striving, right?

Yet consider some of the transhumanist dreams on offer: a future where mortality is a choice, a future where individual minds merge and melt together into machine-aided masses, a future where the resources of the universe are absorbed and reordered by our man-machine offspring to make a vast “extended thinking entity.” Blitz may be right that “what is good ... cannot in the last analysis be obliterated,” but if we embark down the path to the posthuman, our descendants may, in exchange for vast power over themselves and over nature, lose forever the ability to “properly orient” themselves toward the goods of truth and virtue.

Read the whole Blitz review essay here; subscribe to the Claremont Review of Books here; and order a copy of Eclipse of Man here.

Tuesday, November 3, 2015

Do We Love Robots Because We Hate Ourselves?

A piece by our very own Ari N. Schulman, on WashingtonPost.com today:

... Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual. 

Most critics of heady AI predictions do not see this vision as remotely plausible. But lesser versions might be — and it’s important to ask why many find it so compelling, even if it doesn’t come to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal....

To read the whole thing, click here.

Tuesday, October 20, 2015

Science, Virtue, and the Future of Humanity

The new book Science, Virtue, and the Future of Humanity, just published by Rowman & Littlefield, brings together essays examining the future — particularly scientific and technological visions of the future, and the role that virtue ought to play in that future. Several of the essays appeared in The New Atlantis, including essays about robots and “friendly AI,” and most of them grew out of a conference that New Atlantis contributing editor Peter A. Lawler hosted at Berry College in Georgia back in 2011. (Professor Lawler edited this new book, along with Marc D. Guerra of Assumption College.)

Lawler’s own introductory essay is a real treat, weaving together references to recent movies, philosophers and economists, the goings-on in Silicon Valley, and a Tocquevillian appreciation for the complicated and surprising ways that liberty and religion are intertwined in the United States. No one is better than Lawler at revealing the gap between who we believe ourselves to be and who we really are as a people, and at showing how our longing for liberty is really only sensible in a relational context — in a world of families, communities, institutions, citizenship, and interests.

Charles Rubin’s marvelous essay about robots and the play R.U.R. is joined by the essay that Ari Schulman and I wrote on so-called “friendly” AI. The libertarian journalist Ron Bailey of Reason magazine makes the case for radical human enhancement, arguing, among other things, that enhancement will allow people to become more virtuous. Jim Capretta and William English each contribute essays on demographics and our entitlement system. Dr. Ben Hippen discusses organ donation (and organ selling).

Patrick Deneen, Robert Kraynak, and J. Daryl Charles each offer wide-ranging essays that challenge the foundations of modernity. Deneen discusses some of the assumptions and tendencies in modern science and modern political science that corrode the very institutions, traditions, and beliefs that made them possible. Kraynak shows how thinkers like Richard Rorty and Steven Pinker must scramble to explain the roots of their beliefs about justice. Do their “human values” — mostly just secularized versions of Judeo-Christian morality — make any sense without a belief in God? And J. Daryl Charles looks at the ways that genetics and even evolutionary theory affect our understanding of moral agency, a question with implications for fields such as criminal law.

Each of the editors offers an essay about education: Lawler critiques the libertarian critique of liberal education, and Guerra explores the ways that liberal education fits (sometimes uncomfortably) in the broader setting of higher education.

The collection is rounded out by Ben Storey’s smart essay about Alexis de Tocqueville and technology — focusing not just on Democracy in America but on two of Tocqueville’s lesser-known works.

So far, Science, Virtue, and the Future of Humanity is only available in a hardcover format that is rather costly (more than $80 new). Here’s hoping it comes out in a more affordable format before long. Readers of The New Atlantis and of our Futurisms blog, and indeed anyone interested in a deeper understanding of the meaning of progress, will find much to learn in its pages.

Thursday, October 1, 2015

Free to Experiment?

Last month, Vice published a short article by Jason Koebler about how genetic engineering, including the genetic engineering of human beings, is probably protected by the First Amendment. The basic argument behind this seemingly ridiculous notion is that the First Amendment protects not only speech but also “expressive conduct,” which can include offensive performance art, flag burning, and, perhaps, “acts of science.” Such acts of science may be especially worth protecting when they are very controversial, since that might mean they should be treated as political or religious speech. The 2010 hullabaloo over Craig Venter’s “synthetic cell” was trotted out as an example of the deep political and even religious implications of scientific experiments, since the idea of creating synthetic life might, as it did for Venter himself, change our “views of definitions of life and how life works.”

It is worth noting that, notwithstanding breathless headlines and press releases, Craig Venter did not create a “synthetic life form.” What Venter did was synthesize a bacterial genome; he did not design that genome, but rather used a slightly modified version of the sequence of an existing bacterial species. Venter then put this synthesized genome into cells of a closely related bacterial species whose genome had been removed, and, lo, the cells used their new genomes and eventually came to resemble the (slightly different) species from which the synthetic genome was derived.

Unless Venter once believed that DNA possessed mystical properties that made it impossible to manufacture, or that he had never heard of bacterial transformation experiments by which bacteria can pick up and use foreign pieces of DNA (experiments that predate, and were in fact used to establish, our knowledge that DNA is the molecule of heredity), it is hard to see why he would need to change his “views of definitions of life and how life works” in light of his experiment.

Of course, freedom of speech protects not only coherent arguments but also confused ones, like those made for the deep implications of some controversial forms of research. In a talk given at a recent DARPA conference, bioethicist Alta Charo suggested that controversial experiments like cloning or genetic engineering may be carried out to “challenge” those who think that these experiments are wrong, and that this might mean they should be protected as forms of political expression.

Scientists and academics should be free to challenge deeply held beliefs about human nature and morality. As Robert P. George has argued regarding his pro-infanticide Princeton colleague Peter Singer, “freedom of thought and expression and academic freedom are for everyone — not just those whose views others find congenial.” But this academic freedom is premised on doing business “in the currency of academic discourse: a currency consisting of reasons and arguments.” Cloned or genetically engineered children are not reasons or arguments, and they are certainly not the currency of academic discourse.

The use of reproductive biotechnologies like cloning or genetic engineering to express a political or religious view would mean that the child who results from these technologies would be treated as a form of political or artistic expression. But as the Witherspoon Council on Ethics and the Integrity of Science argued in its recent report on human cloning, this kind of perversion of the relationship between parents and children, in which children come to be seen as products to be manufactured in accordance with the parents’ specifications and to serve their interests, is at the heart of what is wrong with technologies like human cloning and genetic engineering. Moreover, as the Council notes, a claim of First Amendment protection would require satisfying several legal criteria that cloning almost certainly would not satisfy. That respectable bioethicists now argue that the creation of human beings can be a form of artistic or political self-expression is in fact a very good reason for passing laws to ban technologies like cloning for manufacturing human beings.

Friday, September 25, 2015

What’s the Difference?

“How is having a cochlear implant that helps the deaf hear any different than having a chip in your brain that could help control your thoughts?”   —Michael Goldblatt, former director of DARPA’s Defense Sciences Office, quoted in the Atlantic

What’s the difference between reading books all day and playing video games?

Come on, what’s the difference between spending your time with friends and family “in person” and spending your time with them virtually?

How is having a child through cloning any different from having a child the old-fashioned way?

Why is the feeling of happiness that you have after a good day any different from the feeling of happiness I have after I take this drug?

Why is talking with your spouse and children using your mouth and ears different, in any way that counts, from communicating with them through brain chips that link your minds directly?

We already pick our mates with some idea of what our kids might look and act like. How is that any different from genetically engineering our children so they look and act the way we want?

Don’t we already send our children to school to make them smarter? How is that any different from just downloading information straight into their brains?

If your grandmother is already in a nursing home, what’s the difference if the nurses are robots?

Memory is already so fluid and fallible that we forget things all the time; what’s the difference if we just help people forget things they would rather not be stuck remembering?

What does it matter, in the end, if a soldier is killed by another soldier or by a robot?

How is it really different if, instead of marrying other human beings, some people choose to marry and have sex with machines programmed for obedience and pleasure?

What’s the difference if our bodies are replaced by machines?

In the scheme of things, what’s the difference if humanity is replaced with artificial life? The persistence of intelligence is all that matters, right?

What’s the difference?

Tuesday, September 22, 2015

Who’s Afraid of ‘Brave New World’?

I was very happy to learn from George Dvorsky at io9 that Aldous Huxley’s novel “Brave New World is not the terrifying dystopia it used to be.” It’s not that the things in the novel couldn’t happen (more or less), but rather that they are happening and “we” have become much more enlightened and simply don’t need to worry about them anymore. The book is a product of its time, and our time understands these things much better, apparently.

Thus, the strongest condemnation Dvorsky can offer of the eugenics program depicted in Brave New World is that it is “disquieting.” But we can get over that. Genetic engineering techniques that might once have been met with “repugnance” are now commonplace. Newer techniques, which promise the control to make more of the things in Brave New World possible, will meet the same fate, Dvorsky expects, trotting out the number-one cliché of progressive bioethics: “While potentially alarming, these biotechnologies and others currently in development hold great promise.” And on the basis of that “great promise” we merrily slide right down the slippery slope:

Advances in genetics will serve to eliminate a host of genetic diseases, while offering humans the opportunity to forgo the haphazard genetic roll of the dice when it comes to determining the traits of offspring. A strong case can be made that it’s both our duty and right to develop these technologies.

Problem solved!

The next non-problem Dvorsky sees in Brave New World is totalitarianism, which, along with “top-down” eugenics, he proclaims “dead.” Happy day! One might trust Dvorsky more on this topic if he did not declare that even in Huxley’s own time the book conveyed a “false sense of urgency” about it. But we now know that biotechnology will be “tools made by the people, for the people.” A case in point, I suppose, would be the drug company that just raised the price of one of its pills by 5000 percent.

And on and on. Concern about widespread use of psychoactive prescription and non-prescription drugs is, Dvorsky says, either “not entirely fair” or “hysterical.” On sex and the family Huxley’s “prescience is remarkable,” but his concerns are “grossly old fashioned and moralizing.” So too his Malthusian concerns are “grossly overstated,” particularly when population control (apparently it is necessary after all) can be achieved by “humanitarian methods.”

So there you have it. It seems that for the advocates of technological “progress” and human redesign “don’t worry, be happy” has become a respectable line of argument. I know I feel much better now.

Friday, September 4, 2015

Using Cloning for Human Enhancement?

We have occasionally written about human cloning here on Futurisms — for example, five years ago we had a back-and-forth with Kyle Munkittrick about cloning — and we return to the subject today, with an excerpt from the latest issue of The New Atlantis. The entirety of that new issue is dedicated to a report called The Threat of Human Cloning: Ethics, Recent Developments, and the Case for Action. The report, written by a distinguished body of academics and policy experts, makes the case against all forms of human cloning — both for the purpose of creating children and for the purpose of biomedical research.

Below is one excerpt from the report, a section exploring the possibility of using cloning to create “enhanced” offspring. (I have removed the citations from this excerpt, but you can find them and read this section in context here.)

*     *     *

Cloning for “human enhancement.” Much of the enthusiasm for and anxiety about human cloning over the years has been concerned with the use of cloning as a genetic enhancement technology. Scientists, and especially science-fiction writers, have imagined ways of using cloning to replicate “persons of attested ability” as a way to “raise the possibility of human achievement dramatically,” in the words of J.B.S. Haldane. As molecular biologist Robert L. Sinsheimer argued in 1972, “cloning would in principle permit the preservation and perpetuation of the finest genotypes that arise in our species.” Candidates for this distinction often include Mozart and Einstein, though the legacy of eugenics in the twentieth century has left many authors with an awareness that those who would use these technologies may be more interested in replicating men like Hitler. (While in most cases, the idea of cloning a dictator like Hitler is invoked as a criticism of eugenic schemes, some writers have actually advocated the selective eugenic propagation of tyrants — for instance, the American geneticist Hermann J. Muller, who, in a 1936 letter to Stalin advocating the eugenic use of artificial insemination, named Lenin as an example of a source of genetic material whose outstanding worth “virtually all would gladly recognize.”)

Today, eugenics has a deservedly negative reputation, and the idea of using a biotechnology like cloning to replicate individuals of exceptional merit is prima facie ethically suspect. However, advocates of eugenic enhancement have never entirely disappeared, and their influence in bioethics is arguably not waning, but waxing. In recent years academic bioethicists like John Harris and Julian Savulescu have been attempting to rehabilitate the case for eugenic enhancements on utilitarian grounds. For these new eugenicists, cloning-to-produce-children represents “power and opportunity over our destiny.”

This new eugenics needs to be confronted and refuted directly, since insisting on the self-evident evil of eugenics by pointing to historical atrocities committed in its name may become increasingly unpersuasive as memories of those atrocities dim with time, and as new technologies like cloning and genetic engineering make eugenic schemes all the more attractive. Furthermore, as the philosopher Hans Jonas noted in a critique of cloning, the argument in favor of cloning excellent individuals, “though naïve, is not frivolous in that it enlists our reverence for greatness and pays tribute to it by wishing that more Mozarts, Einsteins, and Schweitzers might adorn the human race.”

In an important sense, cloning is not an enhancement, since it replicates, rather than improves on, an existing genome. However, as Jonas’s remark about the human race indicates, the cloning of exceptional genotypes could be an enhancement at the population level. And from the point of view of parents who want children who can checkmate like Kasparov, belt like Aretha, dunk like Dr. J, or bend it like Beckham, cloning could represent a way to have offspring with the exceptional abilities of these individuals.

Arguably, cloning is a less powerful form of genetic engineering than other techniques that introduce precise modifications to the genome. After all, cloning only replicates an existing genome; it doesn’t involve picking and choosing specific traits. This weakness may also, however, make cloning more appealing than other forms of genetic engineering, especially when we consider the genetic complexity of many desirable traits. For example, some parents might seek to enhance the intelligence of their children, and evidence from twin studies and other studies of heredity seems to indicate that substantial amounts of the variation in intelligence between individuals can be attributed to genetics. But any given gene seems to have only a tiny effect on intelligence; one recent study looking at several genes associated with intelligence found that they each accounted for only about 0.3 points of IQ. With such minor effects, it would be difficult to justify the risks and expense of intervening to modify particular genes to improve a trait like intelligence.

Cloning, on the other hand, would not require certain and specific knowledge about particular genes; it would only require identifying an exceptionally intelligent individual and replicating his or her genome. Of course the cloned individual’s exceptional intelligence may be due to largely non-genetic factors, and so for a trait like intelligence there will never be certainty about whether the cloned offspring will match their genetic progenitor. But for people seeking to give their child the best chance at having exceptional intelligence, cloning may at least seem to offer more control and predictability than gene modification, and cloning is more consistent with our limited understanding of the science of genetics. Genetic modification involves daunting scientific and technical challenges; it offers the potential of only marginal improvements in complex traits, and it holds out the risk of unpredictable side effects and consequences.

Of course, it is possible that cloning could be used in conjunction with genetic modification, by allowing scientists to perform extensive genetic manipulations of somatic cells before transferring them to oocytes. In fact, genetic modification and cloning are already used together in agriculture and some biomedical research: for larger animals like pigs and cattle, cloning remains the main technique for producing genetically engineered offspring....

Using cloning as an enhancement technology requires picking some exceptional person to clone. This necessarily separates social and genetic parenthood: children would be brought into the world not by sexual pairing, or as an expression of marital love, or by parents seeking to continue and join their lineages, but by individuals concerned with using the most efficient technical methods to obtain a child with specific biological properties. Considerations about the kinds of properties the child will have would dominate the circumstances of a cloned child’s “conception,” even more than they already do when some prospective parents seek out the highest-quality egg or sperm donors, with all the troubling consequences such commodified reproduction has for both buyers and sellers of these genetic materials and the children that result. With cloning-to-produce-children for the sake of eugenic enhancement, parents (that is, the individuals who choose to commission the production of a cloned child) will need to be concerned not with their genetic relationship to their children, but only with the child’s genetic and biological properties.

Normally, the idea of cloning as an enhancement is to create children with better properties in which the improvement resides in an individual and his or her traits, but some thinkers have proposed that cloning could be used to offer an enhancement of social relationships. This is the very reason given in the novel Brave New World: the fictional society’s cloning-like technology “is one of the major instruments of social stability! ... Standard men and women; in uniform batches,” allowing for excellence and social order. And as the geneticist Joshua Lederberg argued in 1966, some of the advantages of cloning could flow from the fact of the clones’ being identical, independent of the particular genes they have. Genetically identical clones, like twins, might have an easier time communicating and cooperating, Lederberg wrote, on the assumption “that genetic identity confers neurological similarity, and that this eases communication” and cooperation. Family relationships would even improve, by easing “the discourse between generations,” as when “an older clonont would teach his infant copy.” Lederberg’s imaginings will rightly strike today’s readers as naïve and unsettling. Such a fixation on maintaining sameness within the family would undermine the openness to new beginnings that the arrival of each generation represents.

Before we embark on asexual reproduction in order deliberately to select our offspring’s genes, we would do well to remember that sexual reproduction has been the way of our ancestors for over a billion years, and has been essential for the flourishing of the diverse forms of multicellular life on earth. We, who have known the sequence of the human genome for a mere fifteen years — not even the span of a single human generation — and who still do not have so much as a precise idea of how many genes are contained in our DNA, should have some humility when contemplating such a radical departure.

Tuesday, August 18, 2015

Passing the Ex Machina Test

Like Her before it, the film Ex Machina presents us with an artificial intelligence — in this case, embodied as a robot — that is compellingly human enough to cause an admittedly susceptible young man to fall for it, a scenario made plausible in no small degree by the wonderful acting of the gamine Alicia Vikander. But Ex Machina operates much more than Her within the moral universe of traditional stories of human-created monsters going back to Frankenstein: a creature that is assembled in splendid isolation by a socially withdrawn if not misanthropic creator is human enough to turn on its progenitor out of a desire to have just the kind of life that the creator has given up for the sake of his effort to bring forth this new kind of being. In the process of telling this old story, writer-director Alex Garland raises some thought-provoking questions; massive spoilers in what follows.

Geeky programmer Caleb (Domhnall Gleeson) finds that he has been brought to the vast, remote mountain estate of tech-wizard Nathan (a thuggish Oscar Isaac), a combination bunker, laboratory, and modernist pleasure-pad, in order to participate in a week-long, modified Turing Test of Nathan’s latest AI creation, Ava. The modification of the test is significant, Nathan tells Caleb after his first encounter with Ava; Caleb does not interact with her via an anonymizing terminal, but speaks directly with her, although she is separated from him by a glass wall. His first sight of her is in her most robotic instantiation, complete with see-through limbs. Her unclothed conformation is female from the start, but only her face and hands have skin. The reason for doing the test this way, Nathan says, is to find whether Caleb is convinced she is truly intelligent even knowing full well that she is a robot: “If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.”

This plot point is, I think, a telling response to the abstract, behaviorist premises behind the classic Turing Test, which isolates judge from subject(s) and reduces intelligence to what can be communicated via a terminal. But in the real world, our knowledge of intelligence and our judgments of intelligence are always made in the context of embodied beings and the many ways in which those beings react to the world around them. The film emphasizes this point by having Ava be a master at reading Caleb’s micro-expressions — and, one comes to suspect, at manipulating him through her own, as well as her seductive use of not-at-all seductive clothing.

I have spoken of the test as a test of artificial intelligence, but Caleb and Nathan also speak as if they are trying to determine whether or not she is a “conscious machine.” Here too the Turing Test is called into question, as Nathan encourages Caleb to think about how he feels about Ava, and how he thinks Ava feels about him. Yet Caleb wonders if Ava feels anything at all. Perhaps she is interacting with him in accord with a highly sophisticated set of pre-programmed responses, and not experiencing her responses to him in the same way he experiences his responses to her. In other words, he wonders whether what is going on “inside” her is the same as what is going on inside him, and whether she can recognize him as a conscious being.

Yet when Caleb expresses such doubts, Nathan argues in effect that Caleb himself is by both nature and nurture a collection of programmed responses over which he has no control. This apparently unsettling thought, along with other unsettling experiences — like Ava’s ability to know whether he is telling the truth by reading his micro-expressions, or his having missed the fact that a fourth resident of Nathan’s house is a robot — brings Caleb to a bloody investigation of the possibility that he himself is one of Nathan’s AIs.

Caleb’s skepticism raises an important issue, for just as we normally experience intelligence in embodied forms, we also normally experience it among human beings, and even some other animals, as going along with more or less consciousness. Of course, in a world where “user illusion” becomes an important category and where “intelligence” becomes “information processing,” this experience of self and others can be problematized. But Caleb’s response to the doubts raised in him about his own status, which is to all but slit open a wrist, seems to suggest that such lines of thought are, as it were, dead ends. Rather, the movie seems to be standing up for a rather rich, if not in all ways flattering, understanding of the nature of our embodied consciousness, and of how we might know whether or to what extent anything we create artificially shares it with us.

As the movie progresses, Caleb is plainly more and more convinced that Ava has conscious intelligence, and therefore more and more troubled that she should be treated as an experimental subject. And indeed, Ava makes a fine damsel in distress. Caleb comes to share her belief that nobody should have the ability to shut her down in order to build the next iteration of AI, as Nathan plans. Yet as it turns out, this is just the kind of situation Nathan hoped to create, or at least so he claims on Caleb’s last day, when Caleb and Ava’s escape plan has been finalized. Revealing that he has known for some time what was going on, Nathan claims that the real test all along has been to see if Ava was sufficiently human to prompt Caleb — a “good kid” with a “moral compass” — to help her escape. (It is not impossible, however, that this claim is bluster, to cover over a situation that Nathan has let get out of control.)

What Caleb finds out too late is that in plotting her own escape Ava is even more human than he might have thought. For she has been able to seem to want “to be with” Caleb as much as he has grown to want “to be with” her. (We never see either of them speak to the other of love.) We are reminded that the question that in a sense Caleb wanted to confine to AI — is what seems to be going on from the “outside” really going on “inside”? — is really a general human problem of appearance versus reality. Caleb is hardly the first person to have been deceived by what another seems to be or do.

Transformed at last in all appearances to be a real girl, Ava frees herself from Nathan’s laboratory and, taking advantage of the helicopter that was supposed to take Caleb home, makes the long trip back to civilization in order to watch people at “a busy pedestrian and traffic intersection in a city,” a life goal she had expressed to Caleb and which he jokingly turned into a date. The movie leaves in abeyance such questions as how long her power supply will last, how long it will be before Nathan is missed, whether Caleb can escape from the trap Ava has left him in, and how to deal with a murderous machine. Just as the last scene is filmed from an odd angle, it is, in an odd sense, a happy ending — and it is all too easy to forget the human cost at which Ava purchased her freedom.

The movie gives multiple grounds for thinking that Ava indeed has human-like conscious intelligence, for better or for worse. She is capable of risking her life for a recognition-deserving victory in the battle between master and slave, she has shown an awareness of her own mortality, she creates art, she understands Caleb to have a mind over against her own, she exhibits the ability to dissemble her intentions and plan strategically, she has logos, she understands friendship as mutuality, she wants to be in a city. Another of the movie’s interesting twists, however, is its perspective on this achievement. Nathan suggests that what is at stake in his work is the Singularity, which he defines as the coming replacement of humans by superior forms of intelligence: “One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa: an upright ape, living in dust, with crude language and tools, all set for extinction.” He therefore sees his creation of Ava in Oppenheimer-esque terms; following Caleb, he echoes Oppenheimer’s reaction to the atom bomb: “I am become Death, the destroyer of worlds.”

But the movie seems less concerned with such a future than with what Nathan’s quest to create AI reveals about his own moral character. Nathan is certainly manipulative, and assuming that the other aspects of his character that he displays are not merely a show to test how far good-guy Caleb will go to save Ava, he is an unhappy, often drunken, narcissistic bully. His creations bring out the Bluebeard-like worst in him (maybe hinted at in the name of his Google/Facebook-like company, Bluebook). Ava wonders, “Is it strange to have made something that hates you?” but it is all too likely that that is just what he wants. He works out with a punching bag, and his relationships with his robots and employees seem to be an extension of that activity. He plainly resents the fact that “no matter how rich you get, shit goes wrong, you can’t insulate yourself from it.” And so it seems plausible to conclude that he has retreated into isolation in order to get his revenge for the imperfections of the world. His new Eve, who will be the “mother” of posthumanity, will correct all the errors that make people so unendurable to him. He is happy to misrecall Caleb’s suggestion that the creation of “a conscious machine” would imply god-like power as Caleb saying he himself is a god.

Falling into a drunken sleep, Nathan repeats another, less well known line from Oppenheimer, who was in turn quoting the Bhagavad Gita to Vannevar Bush prior to the Trinity test: “The good deeds a man has done before defend him.” As events play out, Nathan does not have a strong defense. If it ever becomes possible to build something like Ava — and there is no question that many aspire to bring such an Eve into being — will her creators have more philanthropic motives?

(Hat tip to L.G. Rubin.)

Tuesday, August 11, 2015

When progress happens to us

Image via Shutterstock
I found myself thinking about progress at 8:30 yesterday morning, when someone in the neighborhood was already using a leaf blower to clean up his yard. Here is a real time- and effort-saving product, and in my part of the world anyway it has near universal adoption by householders and lawn-care services. This machine, along with power mowers and weed whackers, has to be an example of progress, no?

When I was young, leaves were raked or swept. Lawns were cut with a hand mower, weeds pulled or hoed, yards edged with an edging tool. The sounds of yard care were pleasant sounds, unless nostalgia misleads me: the whir and click of the mower, the gentle chink of the edger against the stone curb, the satisfying crunch of some well-uprooted weeds, the rustling of leaves along with the scraping of the rake. The smells were pleasant smells: cut grass, dry leaves, earth — even burning leaves if you lived somewhere where you could get away with it.

Of course, it all took more effort and time than a power mower, a weed whacker, and a leaf blower require, and progress is all about saving effort and time. The near-universal adoption of the new tools suggests that this kind of progress is something people really want. But some less welcome things about this example of progress are just as obviously true. The new tools are noisier and therefore more intrusive, smellier and more polluting, and more expensive to purchase and maintain than the old ones. From a lawn-service point of view, my guess is that the power tools reduce employment opportunities and increase the capital cost of entering the business. My guys use ear protection; the many yard-care workers I see who do not are doubtless compromising their future hearing.

But we save time and effort, and that is progress. It would be ungracious to suspect that the result of saving this effort and time is that we can become more torpid couch potatoes were it not for the fact that we are bombarded with warnings about our having become ever more torpid couch potatoes. So this chance to expend less effort doing yard work is plainly at best a mixed blessing. It’s a little ironic if we spend less time in the yard in order to spend more time on home-exercise equipment or at the gym...

My point is not the truism that there are “costs and benefits” to what we call progress, but, despite what I just said about a “mixed blessing,” to suggest that this is a case where I at least am hard pressed to see any real benefit at all. And yet here we are, living in a world of noisy, smelly, expensive power tools for the sake of our lawns — whose own existence probably doesn’t bear much thinking about. I wonder how we got here. Was it some conspiracy of the internal-combustion interests? Is there a “tragedy of the commons” dynamic at work? Do we convince ourselves that our noise and exhaust are OK, and that it’s the other guys who are creating the problem? Whatever it is must go pretty deep — I have not heard tell of any community that has banned all such power tools for their contributions to greenhouse gases, particulates, or noise pollution, although L.A. seems to have an unenforced ordinance against gasoline leaf blowers.

Here at any rate is an example of Gresham’s law applied to progress. I wonder how many more we could find if we just had enough distance to see our lives clearly?

Thursday, June 25, 2015

Robots, A.I., and the Zeitgeist

Robots and artificial intelligence have been staples of pop culture for decades. But I can’t recall any time when there have been quite so many prominent robot- and AI-related projects released in such a short span. A period of just under nine months has seen the release of five movies and a new TV show with high production values, cutting across genres and styles, including action sequels and comic-book movies and thoughtful indie flicks and big-studio fluff.

Here are the projects I have in mind (spoilers ahead), ordered by their U.S. release dates:

Big Hero 6 — November 7, 2014, the only animated film in the bunch, based on a Marvel comic. The movie features a roughly human-shaped robot, the adorably marshmallowy Baymax, which understands human speech but does not itself speak. The movie also involves a swarm of microbots.

Chappie — March 6, 2015, the latest film from Neill Blomkamp, who made his name with two previous science-fiction action flicks. The film’s eponymous robot is human-shaped but metallic and electromechanical in appearance, as are the many police robots in the movie.

Ex Machina — April 10, 2015, a claustrophobic indie film from Alex Garland, featuring an artificially intelligent robot named Ava, which looks like and can passably interact with human beings. Ava, it is revealed, is the latest and most advanced in a series of robots created by a programming prodigy turned hermit CEO. (Update: Our Charlie Rubin discusses Ex Machina here.)

Avengers: Age of Ultron — May 1, 2015, the latest Marvel blockbuster sequel, this film features Ultron, a malevolent artificial intelligence inadvertently created by genius, billionaire, playboy philanthropist Tony Stark. Ultron inhabits several different human-shaped metallic robotic bodies over the course of the movie, and commands an army of similarly human-shaped metallic robots. We also get to see another robot, the Vision, which (in this movie incarnation) is created by merging a synthetic body that Ultron fabricated with Tony Stark’s household helpmeet J.A.R.V.I.S.

Humans — June 28, 2015, the only TV series on this list, a joint production of AMC and the U.K.’s Channel 4. From the information available online, it appears that this series will involve robots being integrated into society — somewhat like the vision of the future depicted in the 2004 Will Smith movie I, Robot, except with robots that look more like humans. We’ll have to see whether this series, like that movie, involves bad guys and a corrupt corporation; there are only a few hints in the trailer.

Terminator Genisys — July 1, 2015, a movie that is simultaneously a sequel and a prequel in the Terminator franchise. The trailers suggest that Arnold Schwarzenegger will be back as a T-800 (in fact, thanks to CGI, we’ll see an older and a younger version of Schwarzenegger’s T-800). We’ll also get to see at least one T-1000 (the liquid-metal Terminator). Presumably Skynet, the franchise’s evil human-destroying A.I., will be behind all the badness that goes down.

(I have left off this list at least one other recent movie, Disney’s colossal flop Tomorrowland, that involved robots but not centrally. Have I forgotten any other big ones?)

In the weeks ahead, we’ll be writing up some posts about these movies. But it is worthwhile to pause just to take in the very fact of this confluence — the robotic Zeitgeist as it has appeared on the screen. It has many causes, some obvious, and some rather more subtle. It does not mean that any of the scenarios portrayed in these movies will come to pass, let alone anytime soon. But fiction can sometimes have the effect of “softening up” the public, so that even movies that seem to depict dark or dystopian futures can ultimately serve more to excite than to warn.

Thursday, May 7, 2015

Overcoming Bias: Why Not?

In a recent New Atlantis essay, “In Defense of Prejudice, Sort of,” I criticized what I call the new rationalism:

Today there is an intellectual project on the rise that puts a novel spin on the old rationalist ideal. This project takes reason not as a goal but as a subject for study: It aims to examine human rationality empirically and mathematically. Bringing together the tools of economics, statistics, psychology, and cognitive science, it flies under many disciplinary banners: decision theory, moral psychology, behavioral economics, descriptive ethics. The main shared component across these fields is the study of many forms of “cognitive bias,” supposed flaws in our ability to reason. Many of the researchers engaged in this project — Daniel Kahneman, Jonathan Haidt, Joshua Greene, Dan Ariely, and Richard Thaler, to name a few — are also prominent popularizers of science and economics, with a bevy of bestselling books and a corner on the TED talk circuit.

While those scholars are some of the most prominent of the new rationalists, here on Futurisms it’s worth mentioning that many others are also spokesmen of transhumanism. These latter thinkers draw on the same cognitive science research but lean more on statistics and economics. More significantly, they drop the scientific pretense of mere description, claiming not only to study but unabashedly to perfect the practice of rationality.

Their projects have modest names like Overcoming Bias, Less Wrong, and the Center for Applied Rationality (CFAR, pronounced “see far” — get it?). CFAR is run by the Machine Intelligence Research Institute, whose board has included many of the big guns of artificial intelligence and futurism. Among the project’s most prominent members are George Mason University economist and New York Times quote darling Robin Hanson, and self-described genius Eliezer Yudkowsky. With books, blogs, websites, conferences, meetup groups in various cities, $3,900 rationality training workshops, and powerful connections in digital society, they are increasingly considered gurus of rational uplift by Silicon Valley and its intellectual hangers-on.

A colleague of mine suggested that these figures bear a certain similarity to Mr. Spock, and this is fitting on a number of levels, from their goal of bringing all human action under the thumb of logic, to their faith in the relative straightforwardness of this goal — which is taken to be achievable not by disciplines working across many generations but by individual mentation — to the preening but otherwise eerily emotionless tone of their writing. So I’ll refer to them for shorthand as the Vulcans.

The Vulcans are but the latest members of an elaborately extended tradition of anti-traditionalist thought going back at least to the French Enlightenment. This inheritance includes revolutionary ambitions, now far higher than those of most of their forebears, ranging from the rational restructuring of society in the short term to the abolition of man in the only-slightly-less-short term. And at levels both social and individual, the reformist project is inseparable from the rationalist one: for example, Yudkowsky takes the imperative to have one’s body cryogenically preserved upon death to be virtually axiomatic. He notes that only a thousand or so people have signed up for this service, and comes to the only logical conclusion: this is the maximum number of reliably rational people in the world. One can infer that it will be an elect few deemed fit to command the remaking of the world, or even to understand, when the time arrives to usher in the glorious future, why it need happen at all.

The Vulcans also represent a purified version of the idea that rationality can be usefully studied as a thing in itself, and perfected more or less from scratch. Their writing has the revealing habit of talking about reason as if they are the first to discuss the idea. Take Less Wrong, for example, which rarely acknowledges the existence of any intellectual history prior to late-nineteenth-century mathematics except to signal disgust for the brutish Past, and advertises as a sort of manifesto its “Twelve Virtues of Rationality.”

Among those virtues, “relinquishment” takes spot number two (“That which can be destroyed by the truth should be”), “lightness” spot three (“Be faithless to your cause and betray it to a stronger enemy”), “argument” and “empiricism” are modestly granted spots five and six, and “scholarship” pulls up the rear at number eleven. What about the twelfth virtue? There isn’t one, for the other virtue transcends mere numbering, and “is nameless,” except that its name is “the Way.” Presented as the Path to Pure Reason, the Way is drawn, like much Vulcan writing, from Eastern mysticism, without comment or apology.

Burke vs. Spock

It’s wise not to overstate the influence of Vulcanism, which may well wind up in the dustbin of pseudoscience history, along with fads like the rather more defensible psychoanalysis. The movement is significant mainly for what it reveals. For at its core lie some ingredients of Enlightenment thought with enduring appeal, usefully evaporated of diluting elements, boiled down to a syrupy attitudinal essence covered with a thin argumentative crust. It contains a version of the parable of the Cave, revised to hold the promise of final, dramatic escape; an uneasy marriage of skepticism and self-confidence whose offspring is the aspiration to revolution.

In the book The Place of Prejudice, which I reviewed in the essay linked above, Adam Adatto Sandel notes rationalism’s reactionary counterpart, typically voiced through Edmund Burke, which accepts the conflict between reason and tradition but embraces the other side. Like Sandel, I see this stance as wrongheaded, a license to draw a line around some swath of the human world as forever beyond understanding, and to draw it arbitrarily — or worse, around just those things one sees as most in need of intellectual defense. But the conflict cannot be avoided as an epistemological and practical matter: it is a duel over the reasons for our imperfect understanding, and over the best guides for action in light of it.

Looking at the schemes of the Vulcans, it’s hard not to hear Burke’s point about the politically cautious advantages of (philosophical) prejudice in contrast with the dangerous instability of Reason. The link between the aspirations of the French Enlightenment and the outrages of the French Revolution was not incidental, nor are the links of either to today’s hyper-rationalists.

A few years ago, I attended a conference at which James Hughes eagerly cited the Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit, which seems to prefigure transhumanism and depicts a nearer future in which reason has fully liberated us from the brutality of tradition. Hughes mentioned that this work was written when Condorcet was in hiding, but skipped past the irony: as Charles Taylor writes of the Sketch, with a bit of understatement:

it adds to our awe before his unshaken revolutionary faith when we reflect that these crimes were no longer those of an ancien régime, but of the forces who themselves claimed to be building the radiant future.

Condorcet died in prison a few months later.

But it persists as stubbornly as any prejudice, this presumption of the simple cleansing power of reason, this eagerness to unmoor. Whether action might jump ahead of theory, or rationalism decay into rationalization, providing intellectual cover for baser forces — these are problems to which rationalists are exquisitely attuned when it comes to inherited ideas, but show almost no worry when it comes to their own, inherited though their ideas are too. “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own,” counsels one of the Virtues of Rationality, the image well more apt than it’s meant to be.

Monday, May 4, 2015

Ethical questions and frivolous consciences

Our Futurisms colleague Charlie Rubin had a smart, short piece over on the Huffington Post a couple weeks ago called "We Need To Do More Than Just Point to Ethical Questions About Artificial Intelligence." Responding to the recent (and much ballyhooed) "open letter" about artificial intelligence published by the Future of Life Institute, Professor Rubin writes:

One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called "important ethical issues," while steadfastly putting off serious discussion of them, is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our "values" are purely subjective and, as such, really beyond any kind of rational discourse. Like "religion," they are better left un-discussed in polite company....

No one doubts that the world is changing and changing rapidly. Organizations that want to work towards making change happen for the better will need to do much more than point piously at "important ethical questions."

This is an excellent point. I can't count how many bioethics talks I have heard over the years that just raise questions without attempting to answer them. It seems like some folks in bioethics have made their whole careers out of such chin-scratching.

And not only is raising ethical questions easier than answering them, but (as Professor Rubin notes) it can also be a potent rhetorical tactic, serving as a substitute for real ethical debate. When an ethically dubious activity attracts attention from critics, people who support that activity sometimes allude to the need for a debate about ethics and policy, and then act as though calling for an ethical debate is itself an ethical debate. It's a way of treating ethical problems as obstacles to progress that need to be gotten around rather than as legitimate reasons not to do the ethically dubious thing.

Professor Rubin's sharp critique of the "questioning" pose reminds me of a line from Paul Ramsey, the great bioethicist:

We need to raise the ethical questions with a serious and not a frivolous conscience. A man of frivolous conscience announces that there are ethical quandaries ahead that we must urgently consider before the future catches up with us. By this he often means that we need to devise a new ethics that will provide the rationalization for doing in the future what men are bound to do because of new actions and interventions science will have made possible. In contrast, a man of serious conscience means to say in raising urgent ethical questions that there may be some things that men should never do. The good things that men do can be made complete only by the things they refuse to do. [from pages 122–123 of Ramsey's 1970 book Fabricated Man]

How many of the signers of the Future of Life Institute open letter, I wonder, are men and women of frivolous conscience?

(Hat-tip to our colleague Brendan P. Foht, who brought the Ramsey passage to our attention in the office.)

Friday, April 17, 2015

Killer robots, international law, and just war theory

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

If the ongoing process of deliberation under the auspices of the United Nations is to result in new law limiting the autonomy of weapon systems, at some point there will need to be lawyers involved. Court was in session on Wednesday afternoon, and the usual rules governing the admissibility of evidence and arguments were in force.

As the debate began to ramp up about three years ago, a handful of law professors, notably Michael Schmitt and Jeffrey Thurnher (of the U.S. Naval War College) and Kenneth Anderson and Matthew Waxman (of American University and Columbia University, respectively), emerged as among the first serious public advocates for autonomous weapons. If you’ve ever read law review articles, you know that they are heavy on arguments from precedent; new situations are to be adjudicated in terms of arguments and judgments from the past. For these lawyerly defenders of killer robots, the main question seems to be whether autonomous weapons are prohibited by law that was written before the possibility of machines making lethal decisions, in place of soldiers or police, was even considered — outside of science fiction.

The legal debate is complicated by the distinctions between domestic law, international human rights law, and international humanitarian law (IHL, which nowadays is essentially synonymous with the “law of war” or the “law of armed conflict”). The convention under which these talks are being conducted is an IHL treaty, which might mean that even if it were to result in a broad ban on killer robots in interstate warfare, they might remain legal for intrastate use by police and by militaries in civil war.

A quick primer on just war theory

The principles of IHL are rooted in the proposition that warfare is lawful if it is just. According to the longstanding tradition of just war theory, a war is just only if there is a just reason to go to war (jus ad bellum) and if it is waged justly (jus in bello). Over time, these abstract principles have been given definition in the form of treaties and other instruments of international law.

At least in theory, jus ad bellum is addressed today primarily by the UN Charter, which proscribes the use of force except in two circumstances: 1) The Security Council has determined that force should be used, or 2) You are under armed attack, and the Security Council hasn’t had time to take action (such as ordering you to surrender). In practice, the Security Council acts when it’s able to, and almost every nation that chooses to go to war claims to be under attack, and invokes Article 51 (the right of self-defense).

Jus in bello comprises principles that are addressed in a number of treaties, the most important and comprehensive of which are the Geneva Conventions of 1949 and their 1977 Additional Protocols. These treaties enshrine and embody a number of principles, the most important of which, in the debate over killer robots, have been 1) distinction, the principle that armed forces must “at all times distinguish” between combatants and civilians, and 2) proportionality, the principle that commanders considering an attack must assess expected harm to civilians, and weigh this against anticipated military gains. There is no particular formula for proportionality, except what a “reasonable” person would decide. A third principle often included under jus in bello is that of humanity, which is usually taken to mean the avoidance of causing suffering that is not needed to achieve military objectives, but actually has deeper roots in the recognition of common humanity — all men are brothers, y’know?

Distinction and proportionality don’t only affect the conduct of armies; they also have implications for weapons. Specifically, a weapon that by its nature is incapable of being directed at military objectives and avoiding civilians is considered inherently indiscriminate, and thus effectively banned. Weapons that cause unnecessary suffering may be deemed inhumane. Many argue that both of these are true of nuclear weapons, but the argument was tested in the International Court of Justice in 1996, and did not carry the day.

“A chilling example of what some may be thinking”

The principal argument that has been advanced by the Campaign to Stop Killer Robots is that computers today do not have — and for the immediate future will not have — capabilities adequate to comply with the jus in bello requirements of distinction and proportionality. The principal counterargument of autonomous weapons proponents is that we don’t know what computers may be capable of in the future; they might ultimately be able to exercise distinction and judge proportionality even better than humans, assuming that “better” is always well-defined.

Lawyers for killer robots like to argue that some nations, especially the United States, already conduct legal reviews of new weapons to determine their consistency with the laws of war. (These are known as “Article 36” reviews.) The United States, and countries which share its aversion to binding arms control, have suggested an increased emphasis on such legal reviews — conducted internally and not subject to public or international oversight, of course — as an alternative to any new law.

This was the case argued by William Boothby, a former Deputy Director of Legal Services for the Royal Air Force (U.K.), and now an associate fellow on “emerging security challenges” at an outfit called the Geneva Centre for Security Policy. (In fact, Boothby runs an intensive four-day course on “Weapons Law and the Legal Review of Weapons” at the Centre, which he was keen to announce to the entire conference. The course is intended for “lawyers, diplomats and other officials”; the next one runs in December and, if you act now, registration is only 750 Swiss francs and includes meals! It’s hard for some folks to turn down a chance for free advertising, I guess.)

Boothby was dismissive of the concept of “meaningful human control,” which he asserted was incompatible with autonomous weapons systems. “That may be obvious,” he informed us, “but I do believe it is worth stating.” We agree. That is the point.

One of Jensen's slides
His co-panelist Eric Talbot Jensen of Brigham Young University had much to say about balloons and submarines. The Hague Convention of 1899 had imposed a moratorium on dropping bombs from balloons, which became moot when the First World War filled the skies with airplanes. Submarines were originally regarded with horror because no quarter could be given to the hundreds who would drown when a ship was sunk, yet submarine warfare was eventually normalized because it was so militarily effective. The point of all this seemed to be that, since these early efforts to preemptively ban emerging technology weapons failed, we should not bother to try stopping killer robots today. (Never mind the fact that other weapons technologies have, through both developing norms and international treaties, been limited with great success.)

Jensen then described his vision of

an autonomous weapon ... able to determine which civilian in the crowd has a metal object that might be a weapon, able to sense an increased pulse and breathing rate amongst the many civilians in the crowd, able to have a 360 degree view of the situation, able to process all that data in milliseconds, detect who the shooter is, and take the appropriate action based on pre-programmed algorithms....

I doubt this narrative had quite the effect that Jensen was hoping for. In a response statement to the plenary, another NGO representative described it as “a chilling example of what some may be thinking.”

The third lawyer on the panel was Kathleen Lawand, representing the International Committee of the Red Cross. She did a good job of being evenhanded as she ran down a list of legal criteria that the use of an autonomous weapon would have to meet. In answer to “the general question of whether or not AWS are unlawful,” her thoughtful answer was that “it depends.” She certainly brought up quite a few reasons to doubt it.

Killer robots — a jus ad bellum concern

Listening to this rather desiccated discussion, it occurred to me that until now, essentially all lawyerly debate about autonomous weapons has been conducted on the assumption that it is entirely a matter of jus in bello, perhaps because all previous debates on the legality of weapons have been entirely within this domain of the law of war. After all, nobody had ever had to consider before that a weapon itself might decide to start a war, unjustly.

This suddenly appeared to me as a door back out to the real world, where we are less concerned about legal correctness and more about things like human dignity, freedom, and survival. Why weren’t any of these things legal issues, I wondered. Do none of them have any place in the law, or in the room that afternoon deciding the future of humanity and killer robots?

Minutes later, after considerable wrangling with my ICRAC colleagues, we had a statement prepared, just in time to be called on so it could be read to the plenary. I’ll simply quote it here:

This discussion has been directed almost entirely to considerations of law derived from the principle of jus in bello. We appear to be overlooking, or excluding, considerations of jus ad bellum that arise from the use of autonomous weapons systems. It is in this context that those considerations also typically discussed as matters of international peace and security may be considered to have implications under the law of armed conflict.

Juergen Altmann reading the ICRAC statement on jus ad bellum, with Noel Sharkey looking on
We are concerned about the destabilization and chaos that may be introduced into the international system by arms races and the appearance of new, unfamiliar threats. In addition, we are concerned, as scientists, about what may happen when nations with an uneasy relationship field increasingly complex, autonomous systems in confrontation with one another. We know that the interactions of such systems are unpredictable for two reasons.

The first is the inherent error-proneness of complex software even when it is engineered by a single co-operative team. The second is that, in reality, these interacting systems will have been developed by non-cooperating teams, who will do their utmost to maintain secrecy and to ensure that their systems will exploit every opportunity to prevail once hostilities are understood to have commenced or, perhaps, are believed to be imminent. Once hostilities have begun, it may become very difficult for humans to intervene and to reestablish peace, due to the high speed and complexity of events. Neither side would want to risk losing the battle once it had begun.

Do these considerations have no implications for the legality of autonomous weapons? Can we consider a war that has been initiated as a result of needless political or military instability, or due to the unpredictable interactions of machines, or escalated out of human control due to the high speed and complexity of events, and not for any human moral or political cause, to be a just war?

Wednesday, April 15, 2015

Killer Robots, the Free Market, and the Need for Law

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

In a well-attended lunchtime side event yesterday (don’t go to UN meetings for the free food; plastic-wrapped sandwiches and water or pop were the offerings, and these quickly disappeared at the hands of the horde of hungry delegates), Canadian robotics entrepreneur Ryan Gariepy spoke about why his company, Clearpath Robotics, declared last year that it does not and will not produce killer robots. With about eighty employees, Clearpath is a young, aggressive developer of autonomous ground and maritime vehicle systems, putting about equal emphasis on hardware and software. The company’s name reflects its original goal of developing mine-clearing robots, and Clearpath is by no means allergic to military robotics in general; its client list includes “various militaries worldwide” and major military contractors. Nevertheless, in a statement released in August 2014, Gariepy, as co-founder and Chief Technology Officer, wrote, “To the people against killer robots: we support you.... Clearpath Robotics believes that the development of killer robots is unwise, unethical, and should be banned on an international scale.”

Ryan Gariepy’s presentation
At lunch yesterday, Gariepy explained some of his reasons. He sees a general tradeoff in robotic systems between “flexibility” or “capability” and “predictability” or “controllability,” and worries that military imperatives will drive autonomous weapons toward the former goals. He talked about recent findings that the same “deep learning” neural networks that Professor Stuart Russell had earlier described as displaying “superhuman” performance in visual object classification tasks are also prone to bizarre errors: uniform patterns misclassified as images of familiar objects, and images that the machines recognize correctly but fail to recognize after the addition of what to a human is an imperceptible amount of engineered (non-random) noise. This is one example of the “Black Swan” phenomenon that characterizes complex systems in general. Gariepy also talked about the low costs of subcomponents that would go into killer robots, implying that they could be produced in massive numbers.
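The failure mode Gariepy described can be illustrated with a toy linear model. This is my own sketch, not an example from his talk or from the deep-learning papers he cited: a tiny, uniform perturbation aligned against the gradient of the classifier’s score flips its decision, even though no single input element changes perceptibly.

```python
import numpy as np

# A toy linear "classifier": score = w . x, positive score = class A.
# Real adversarial examples target deep networks, but the mechanism is
# the same: a small step against the gradient of the score.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)            # classifier weights
x = 0.1 * w / np.linalg.norm(w)      # an input scored confidently positive

eps = 0.01                           # per-element perturbation budget
x_adv = x - eps * np.sign(w)         # gradient-sign step against the score

print(w @ x)                         # clearly positive
print(w @ x_adv)                     # flips negative
print(np.max(np.abs(x_adv - x)))     # yet no element moved more than eps
```

Because the perturbation is spread across a thousand dimensions, its effect on the score accumulates even while each individual change stays below any fixed noticeability threshold — the same basic trick that, scaled up to deep networks, produces the imperceptible engineered noise Gariepy mentioned.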

Gariepy believes in a “robotics revolution” that can be purely benevolent: “After all, the development of killer robots isn’t a necessary step on the road to self-driving cars, robot caregivers, safer manufacturing plants, or any of the other multitudes of ways autonomous robots can make our lives better.” I and, I suspect, many readers of this blog have some questions about what kind of care robots will be able to give, and whether manufacturing plants are going to be “safer” or just not have people working in them at all (and why those people shouldn’t then be doing the caregiving). But it’s clear that we are no longer living in the military spin-off economy of the Cold War era; the flow of technology from military R&D to civilian application has largely reversed. This makes it doubtful that Clearpath really has “more to lose” than it has to gain from the free publicity that came with its declaration, and Gariepy admits it has actually helped him to recruit top-notch engineers who would rather work with a clear conscience.

In contrast with those who find they must wrestle with complexity and nuance in their quest for the meaning of autonomy (see my previous post), Gariepy’s statement took a pretty straightforward approach to defining what he was talking about: “systems where a human does not make the final decision for a machine to take a potentially lethal action.” That’s the no-go, but otherwise, he pledged that “we will continue to support our military clients and provide them with autonomous systems — especially in areas with direct civilian applications such as logistics, reconnaissance, and search and rescue.”

Ryan Gariepy, on Lake Geneva
Fair enough, but in a conversation over beers on the quay at Lake Geneva at day’s end, I pressed Gariepy on just where he would draw the line. For example, I asked, what if a client came to him and said, “We’ve got an autonomous tank, but we don’t want you to work on the fire controls, just the vehicle navigation so it doesn’t run over anybody.” Gariepy was categorical: “You just admitted it’s a lethal autonomous weapon, so I won’t work on it.” What about a “nonlethal” weapon; suppose somebody wants to arm a drone with a taser and have it patrol their estate? Or suppose they have a missile of some sort, and they want to use an algorithm you own a patent on, not to make the missile home in on a target, but to divert it away in case it detects the presence of a human being? It would only be saving lives, then.

Gariepy threw up his hands at such questions and said, “I don’t want to think about all that. I have a business to run.” And in fairness, he is probably the only person who was sitting in the plenary sessions with his laptop open, coding. Referring to the community with nothing else to do than brainstorm and debate about the fine print of a killer-robot ban, he added, “You guys think about it, and tell me what to do.”

One of the advantages of being a private entrepreneur, he explained, is not having to make policy to govern such cases in advance. “I can change my mind, or decide as the situation arises.” Unless, that is, there is a law about the matter, and Gariepy wants a law. So he doesn’t have to think about all that.

(Edit: Expanded the penultimate paragraph to add more detail.)

Killer Robots: How could a ban be verified?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Here’s my latest dispatch from the second major diplomatic conference on Lethal Autonomous Weapons Systems, or “killer robots” as the less pretentious know them. (A UN employee, for whom important-sounding meetings are daily background noise, approached me in the cafeteria to ask where she could get a “Stop Killer Robots” bumper sticker like the one I had on my computer, and said she’d have paid no attention to the goings-on if that phrase hadn’t caught her eye.) The conference continued yesterday with what those who make a living out of attending such proceedings like to describe as “the hard work.”

Wishful thinking on Strategy

Expert presentations in the morning session centered on the reasons why militaries are interested in autonomous systems in general and autonomous weapons systems in particular. As Heather Roff of the International Committee for Robot Arms Control (ICRAC) put it, this is not just a matter of assisting or replacing personnel and reducing their exposure to danger and stress; militaries are also pursuing these systems as a matter of “strategic, operational, and tactical advantage.”

Roff traced the origin of the current generation of “precision-guided” weapons to the doctrine of “AirLand Battle” developed by the United States in the 1970s, responding then to perceived Soviet conventional superiority on the European “central front” of the Cold War. Similarly, Roff connected the U.S. thrust toward autonomous weapons today with the doctrine of “AirSea Battle,” responding to the perceived “Anti-Access/Area Denial” capabilities of China (and others).

Some background: The traditional American way of staging an overseas intervention is to park a few aircraft carriers off the shores of the target nation, from which to launch strikes on land and naval targets, plus to mass troops, armor, and logistics at forward bases in preparation for land warfare. But shifts in technology and economic power are undermining this paradigm, particularly with respect to a major power like China, which can produce thousands of ballistic and cruise missiles, advanced combat aircraft, mines, and submarines. Together, these weapons are capable of disrupting forward bases and “pushing” the U.S. Navy back out to sea. This is where the AirSea Battle concept comes in. As first articulated by military analysts connected with the Center for Strategic and Budgetary Assessments and the Pentagon’s Office of Net Assessment, the AirSea Battle concept is based on the notion that at the outset of war, the United States should escalate rapidly to massive strikes against military targets on the Chinese mainland (predicated on the assumption that this will not lead to nuclear war).

Now, from the narrow perspective of a war planner, this changing situation may seem to support a case for moving toward autonomous weapon systems. For Roff, however, the main problems with this argument are arms races and proliferation. The “emerging technologies” that underlie the advent of autonomous systems are information technology and robotics, which are already widely proliferated and dispersed, especially in Asia. Every major power will be getting into this game, and as autonomous weapon systems are produced in the thousands, they will become available to lesser powers and non-state actors as well. Any advantages the United States and its allies might gain by leading the world into this new arms race will be short-term at best, leaving us in an even more dangerous and unstable situation.

Autonomous vs. “Semi-Autonomous”

Afternoon presentations yesterday focused on how to characterize autonomy. (I have written a bit on this myself; see my recent article on “Killer Robots in Plato’s Cave” for an introduction and further links.) I actually like the U.S. definition of autonomous weapon systems as simply those that can select and engage targets without further human intervention (after being built, programmed, and activated). The problems arise when you ask what it means to “select” targets, and when you add in the concept of “semi-autonomous” weapons, which are actually fully autonomous except they are only supposed to attack targets that a human has “selected.” I think this is like saying that your autonomous robot is merely semi-autonomous as long as it does what you wanted — that is, it hasn’t malfunctioned yet.

I would carry the logic of the U.S. definition a step further, and simply say that any system is (operationally) autonomous if it operates without further intervention. I call this autonomy without mystery. It leads to the conclusion that, actually, what we want to do is not to ban everything that is an autonomous weapon, but simply to avoid a coming arms race. This can be done by presumptively banning autonomous weapons, minus a list of exceptions for things that are too simple to be of concern, or that we want to allow for other reasons.

Implementing a ban of course raises other questions, such as how to verify that systems are not capable of operating autonomously. This might seem to be a very thorny problem, but I think it makes sense to reframe it: instead of trying to verify that systems cannot operate autonomously, we should instead seek to verify that weapons are, in fact, being operated under meaningful human control. For instance, we could ask compliant states to maintain encrypted records of each engagement involving any remotely operated weapons (such as drones). About two years ago, I, along with other ICRAC members, produced a paper that explores this proposal; I would commend it to others who might have felt frustrated by some of the confusion and babble during the conference yesterday afternoon.
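To give a flavor of what such tamper-evident engagement records might look like, here is a minimal sketch — my own illustration, not the scheme from the ICRAC paper: each record is chained to its predecessor with a keyed hash, so a state could keep its logs secret while remaining able to prove to inspectors, after the fact, that no engagement record was altered or deleted. (A real protocol would need far more careful key management, among other things.)

```python
import hashlib
import hmac
import json

SECRET_KEY = b"national-authority-key"   # hypothetical signing key

def append_record(log, engagement):
    """Append an engagement record, chained to the previous entry's tag."""
    prev_tag = log[-1]["tag"] if log else "genesis"
    payload = json.dumps(engagement, sort_keys=True) + prev_tag
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    log.append({"engagement": engagement, "tag": tag})

def verify_log(log):
    """An inspector holding the key can confirm the chain is intact."""
    prev_tag = "genesis"
    for entry in log:
        payload = json.dumps(entry["engagement"], sort_keys=True) + prev_tag
        expected = hmac.new(SECRET_KEY, payload.encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["tag"], expected):
            return False
        prev_tag = entry["tag"]
    return True

log = []
append_record(log, {"time": "2015-04-14T09:00Z", "operator": "human-decision"})
append_record(log, {"time": "2015-04-14T09:05Z", "operator": "human-decision"})
print(verify_log(log))   # True; altering any entry breaks the chain
```

The design choice worth noting is the chaining: because each tag covers the previous tag, an after-the-fact edit to any one record invalidates every record that follows it.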

Tuesday, April 14, 2015

Killer Robots: The Arms Race and the Human Race

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

I mentioned in my first post in this series that last year’s meeting on Lethal Autonomous Weapons Systems was extraordinary for the UN body conducting it in that delegations actually showed up, made statements and paid attention. One thing that was lacking, though, was high-quality, on-topic expert presentations — other than those of my colleagues in the Campaign to Stop Killer Robots, of course. If Monday’s session on “technical issues” is any indication, that sad story will not be repeated this year.

Aggressive Maneuvers for Autonomous Quadrotor Flight
Berkeley computer science professor Stuart Russell, coauthor (with Peter Norvig of Google) of the leading textbook on artificial intelligence, scared the assembled diplomats out of their tailored pants with his account of where we are in the development of technology that could enable the creation of autonomous weapons. (You can see Professor Russell’s slides here.) Thanks to “deep learning” algorithms, the new wave of what used to be called artificial neural networks, “We have achieved human-level performance in face and object recognition with a thousand categories, and super-human performance in aircraft flight control.” Of course, human beings can recognize far more than a thousand categories of objects plus faces, but the kicker is that with thousand-frame-per-second cameras, computers can do this with cycle times “in the millisecond range.”

“embarrassingly slow, inaccurate, and ineffective”
After showing a brief clip of Vijay Kumar’s dancing quadrotor micro-drones engaged in cooperative construction activities entirely scheduled by autonomous AI algorithms, Russell discussed what this implied for assassination robots. He lamented that a certain gleaming metallic avatar of Death (pictured at right) had become the iconic representation of killer robots, not only because this is bad PR for the artificial intelligence profession, but because such a bulky contraption would be “embarrassingly slow, inaccurate, and ineffective compared to what we can build in the near future.” For effect, he added that since small flying drones cannot carry much firepower, they should target vulnerable parts of the body such as eyeballs — but if needed, a gram of shaped-charge explosive could easily pierce the skull like a bazooka busting a tank.

Professor Russell then criticized the entire discussion of this issue for focusing only on near-term developments in autonomous weaponry and asking whether they would be acceptable. Rather, “we should ask what is the end point of the arms race, and is that desirable for the human race?” In other words, “Given long-term concerns about the controllability of artificial intelligence,” should we begin by arming it? He assured the audience that it would be physics, not AI technology, that would limit what autonomous weapons could do. He called on his own colleagues to rehabilitate their public image by repudiating the push to develop killer robots, and noted that major professional organizations had already begun to do this.

Of course, every panel must be balanced, and the counterweight to Russell’s presentation was that of Paul Scharre, one of the architects of current U.S. policy on autonomous weapon systems (AWS), who has emerged as perhaps their most effective advocate. Now with the Center for a New American Security, Scharre worked for five years as a civilian appointee in the Pentagon. In his presentation, he embraced the conversation about the “risks and downsides” of AWS, as well as discussion about the need for human involvement to ensure correct decisions, both to provide a “human fuse” in case things go haywire and to act as a “moral agent.” However, it seems to me that Scharre engages these concerns with the aim of disarming those who raise them, while blunting efforts to draw hard conclusions that would point to the need for legally binding arms control. (Over the past few months I have had a few exchanges with Scharre that you can read about in this post on my own blog, as well as in my new article in the Bulletin of the Atomic Scientists on “Semi-Autonomous Weapons in Plato’s Cave.”)

In a recent roundtable discussion hosted by Scharre at the Center for a New American Security, I emphasized the danger posed by interacting systems of armed autonomous agents fielded by different nations. To illustrate the threat, I drew an analogy to the interactions of automated financial agents trading at speeds beyond human control. On May 6, 2010, these trading systems caused a “flash crash” on U.S. stock exchanges during which the Dow Jones Industrial Average rapidly lost almost a tenth of its value. However, the stock market recovered most of its loss — unlike what would happen if major (nuclear) powers were involved in a “flash war” because of autonomous weapons systems.
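The dynamic I worry about can be caricatured in a few lines of code — a toy model with made-up gains, not a claim about any real system: two automated systems that each answer the other’s last action with even modest amplification will escalate from a trivial trigger to a large exchange within tens of machine-speed cycles, far too fast for humans to intervene.

```python
def escalate(gain_a=1.1, gain_b=1.1, trigger=0.01, threshold=100.0):
    """Count machine-speed cycles for a small trigger to cross a threshold.

    Each side's automated response slightly amplifies the other's last
    action (gains > 1). Purely illustrative numbers.
    """
    a, b, cycles = trigger, 0.0, 0
    while max(a, b) < threshold:
        b = gain_b * a      # B's automated response to A's last action
        a = gain_a * b      # A's automated response to B's response
        cycles += 1
    return cycles

# With 10% amplification per response, a trigger four orders of magnitude
# below the threshold crosses it in a few dozen cycles -- milliseconds,
# if each cycle is a millisecond.
print(escalate())
```

The point of the toy model is not the particular numbers but the shape of the curve: with any amplification at all, the time to catastrophe grows only logarithmically in the size of the initial provocation, which is why "humans will step in" is cold comfort.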

Although some critics (including yours truly) have been talking about this aspect of the issue for years, Scharre has recently gotten out ahead of most of his own community of hawkish liberals in emphasizing it, apparently with genuine concern. He acknowledges, for example, that because nations will keep their algorithms secret, they will not know what opposing systems are programmed to do.

However, Scharre proposes multilateral negotiations on “rules of the road” and “firebreaks” for armed autonomous systems as the way to address this problem, rather than avoiding creating such a problem in the first place. In an intervention yesterday on behalf of the International Committee for Robot Arms Control (ICRAC), I asked whether such talks, if begun, should not be seen as an effort to legalize killer robots as much as make them safe.

Of course, to a certain kind of political realist, this may seem the only possible solution. I will admit that if nation-states did field automated networks of sensors and weapons in confrontation with one another, I would want those nation-states to be talking and trying to minimize the likelihood of unintended ignition or escalation of violence, even if I doubt such an effort could succeed before it were too late. But why, I again ask, would we not prefer, if possible, to banish this specter of out-of-control war machines from our vision of the future?

The author, delivering the ICRAC opening statement.
I missed most of the opening country statements because I was busy helping to prepare, and then deliver, ICRAC’s opening statement. Here’s a snippet of what I read:

ICRAC urges the international community to seriously consider the prohibition of autonomous weapons systems in light of the pressing dangers they pose to global peace and security.... We fear that once they are developed, they will proliferate rapidly, and if deployed they may interact unpredictably and contribute to regional and global destabilization and arms races.

ICRAC urges nations to be guided by the principles of humanity in its deliberations and take into account considerations of human security, human rights, human dignity, humanitarian law and the public conscience.... Human judgment and meaningful human control over the use of violence must be made an explicit requirement in international policymaking on autonomous weapons.

From what I did get to hear of the countries’ opening statements, they showed a substantial deepening of understanding since last year. The representative from Japan stated that their country would not create autonomous weapons, and France and Germany remained in the peace camp, although I am told the German position has weakened slightly. (The German statement doesn’t seem to be online yet.) The strongest statement from any NATO member state was that of Croatia, which unequivocally called for a legal ban on autonomous weapons. But perhaps most significant of all was the Chinese statement (also not yet online), which called autonomous weapons a threat to humanity and noted the warnings of Russell and Stephen Hawking about the dangers of out-of-control “superintelligent” AI.

If the Chinese are interested in talking seriously about banning killer robots, shouldn’t the United States be as well? I see a glimmer of hope in the U.S. opening statement, which referred to the 2012 directive on autonomous weapons as merely providing a starting point that would not necessarily set a policy for the future. The Obama administration has a bit less than two years left to come up with a better one.