Futurisms: Critiquing the project to reengineer humanity

Thursday, December 22, 2011

Happy New Fear

I think the first news report I saw about the possibility of genetically engineering avian influenza to be more virulent was this one by the redoubtable Brandon Keim of Wired in 2010. Keim’s post did not really make clear why Yoshihiro Kawaoka, of the University of Wisconsin–Madison, had sought to develop a more virulent strain of the virus; it seemed to suggest the work was done to show it could be done, and hence that, if some final obstacles were overcome, some such pandemic would be “inevitable.” In any case, the research was published in Proceedings of the National Academy of Sciences, and apparently included guidance on the particular human protein that would allow the virus to occupy the upper respiratory tract.

The next story I saw was just this past November, when Kristen Philipkoski at Gizmodo posted that “Engineered Avian Flu Could Kill Half the World’s Humans.” Philipkoski presented the research of virologist Ron Fouchier, and noted the many red flags that might have gone up in the course of his efforts to increase the virulence of H5N1. But no:

He presented his work at the influenza conference in Malta this September. Now he wants to publish his study in a scientific journal, so those responsible for responding to bioterrorism can be prepared for the worst case scenario. Seems like a no-brainer, right? Not exactly. The research has set off alarms among colleagues who are urging Fouchier not to publish, for fear the recipe could wind up in the wrong hands. Some question whether the research should have been done in the first place. Fair point!

Fair point indeed.

I waited for follow-on stories that would suggest that the danger here had been exaggerated, or that there was some extremely compelling reason that Philipkoski had missed for Fouchier to have undertaken his work. To date, I’ve seen nothing along those lines; do let me know in the comments what I might have missed. But the danger of the situation seems to be more or less confirmed by news reports this week that the U.S. National Science Advisory Board for Biosecurity (NSABB) has, for the first time ever, requested that Nature and Science publish only redacted versions of Fouchier’s work, and of further work by Kawaoka. (Fouchier, to his credit, seems to have agreed to the request, if grudgingly and skeptically.)

This time, Gizmodo’s Jamie Condliffe is up in arms. The request is

not cool.... I admit that this is a tough situation, but censoring journals is a dangerous precedent to set.... In many respects, this goes against the nature of science. Science works because people announce their findings for others to question — allowing us to confirm or refute them. That’s how science progresses, and censoring it like this kills the process. It’s also a hugely dangerous precedent to set. I hope the journals win out.

Condliffe’s knee-jerk reaction is only slightly less sophisticated than the pontificating by the journal editors about what “responsible influenza researchers” need to know. For while Fouchier may be willing to see his work redacted, it is not so clear Science is willing to publish it that way. Says Editor-in-Chief Bruce Alberts, “Our response will be heavily dependent upon the further steps taken by the U.S. government to set forth a written, transparent plan to ensure that any information that is omitted from the publication will be provided to all those responsible scientists who request it, as part of their legitimate efforts to improve public health and safety.”

“How science works” is of course important to the transhumanist project as well, and the libertarian impulse that this particular incident reveals in the face of potentially grave danger is not especially promising. Condliffe’s laissez-faire attitude toward science, and Alberts’s attempts to gain the upper hand over the NSABB, are not the brave defenses of the scientific enterprise that they intend them to be — for they are defending irresponsibility.

Says Fouchier:

We have made a list of experts that we could share this with, and that list adds up to well over 100 organizations around the globe, and probably 1,000 experts. As soon as you share information with more than 10 people, the information will be on the street. And so we have serious doubts whether this advice can be followed, strictly speaking.

Is he being cynical, or is this his honest assessment of the ability of his professional colleagues to act in the public interest? When developing atomic weapons, genuinely responsible researchers worked in the full knowledge that their path-breaking efforts would not be published at all, and that any sharing of knowledge would be on a strictly need-to-know basis among an extremely carefully restricted group. Would it not be the very definition of “responsible researcher” for any genetic engineer working in this newer field of weapons of mass destruction to accept, indeed actively seek, similarly serious restrictions?

[Editor’s note: For an analysis of the likelihood of pathogens being bioengineered and weaponized by terrorists or rogue states, see the article “Could Terrorists Exploit Synthetic Biology?” from the Spring 2011 issue of The New Atlantis, by our late contributor Jonathan B. Tucker.]

Image: The Birds (1963), © Universal Pictures

Thursday, December 8, 2011

The Problem with “Friendly” Artificial Intelligence

Readers of this blog may be familiar with the concept of “Friendly AI” — the project of making sure that artificial intelligences will do what we say without harming us (or, at the least, that they will not rise up and kill us all). In a recent issue of The New Atlantis, the authors of this blog have explored this idea at some length.

First, Charles T. Rubin, in his essay “Machine Morality and Human Responsibility,” uses Karel Čapek’s 1921 play R.U.R. — which introduced the word “robot” — to explore the different things people mean when they describe “Friendly AI,” and the conflicting motivations people have for wanting to create it. And he shows why the play actually evinces a much deeper understanding of the meaning and stakes of engineering morality than can be found in the work of today’s Friendly AI researchers:

By design, the moral machine is a safe slave, doing what we want to have done and would rather not do for ourselves. Mastery over slaves is notoriously bad for the moral character of the masters, but all the worse, one might think, when their mastery becomes increasingly nominal.... The robot rebellion in the play just makes obvious what would have been true about the hierarchy between men and robots even if the design for robots had worked out exactly as their creators had hoped. The possibility that we are developing our “new robot overlords” is a joke with an edge to it precisely to the extent that there is unease about the question of what will be left for humans to do as we make it possible for ourselves to do less and less.

Professor Rubin’s essay also probes and challenges the work of contemporary machine-morality writers Wendell Wallach and Colin Allen, as well as Eliezer Yudkowsky.

In “The Problem with ‘Friendly’ Artificial Intelligence,” a response to Professor Rubin’s essay, Adam Keiper and I further explore the motivations behind creating Friendly AI. We also delve into Mr. Yudkowsky’s specific proposal for how we are supposed to create Friendly AI, and we argue that a being that is sentient and autonomous but guaranteed to act “friendly” is a technical impossibility:

To state the problem in terms that Friendly AI researchers might concede, a utilitarian calculus is all well and good, but only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes. Yet it is precisely the debate over just what those valuations should be that is the stuff of moral inquiry. And this is even more the case when all of the possible outcomes in a situation are bad, or when several are good but cannot all be had at once. Simply picking certain outcomes — like pain, death, bodily alteration, and violation of personal environment — and asserting them as absolute moral wrongs does nothing to resolve the difficulty of ethical dilemmas in which they are pitted against each other (as, fully understood, they usually are). Friendly AI theorists seem to believe that they have found a way to bypass all of the difficult questions of philosophy and ethics, but in fact they have just closed their eyes to them.

These are just short extracts from long essays with multi-pronged arguments — we might run longer excerpts here on Futurisms at some point, and as always, we welcome your feedback.

The Cases For and Against Enhancing People

A recent issue of The New Atlantis features several essays on transhumanism that may be of interest to readers of this blog. I’ll describe them briefly in this post and the next.

The first essay is “The Case for Enhancing People,” by Ronald Bailey, the science correspondent for Reason magazine. Ron is well known for supporting transhumanism and enhancement technologies; he makes the case for them in his book Liberation Biology: The Moral and Scientific Case for the Biotech Revolution. Here’s a snippet of the essay he wrote for us:

Contrary to oft-expressed concerns, we will find, first, that enhancements will better enable people to flourish; second, that enhancements will not dissolve whatever existential worries people have; third, that enhancements will enable people to become more virtuous; fourth, that people who don’t want enhancement for themselves should allow those of us who do to go forward without hindrance; fifth, that concerns over an “enhancement divide” are largely illusory; and sixth, that we already have at hand the social “technology,” in the form of protective social and political institutions, that will enable the enhanced and the unenhanced to dwell together in peace.

In response to Ron Bailey’s piece, we’ve published an essay by Benjamin Storey, an associate professor of political science at Furman University. The essay challenges Ron’s particularly libertarian strain of transhumanism, but also speaks to some of the fundamental questions raised by human enhancement. Here’s a taste:

“The Case for Enhancing People” is obviously the work of a sharp and curious mind, but Bailey’s libertarian commitment blinds him to the moral difficulties of our biotechnological moment, and condemns him to endlessly exploring what Chesterton called “the clean and well-lit prison of one idea.” When we step outside that prison, we find ourselves confronting a complex political, historical, and moral-existential landscape in which there are no easy answers. Politically, we face both the difficult task of attempting to responsibly shape mainstream moral life without going overboard in “childproofing our culture,” as Yuval Levin has put it, and the sobering reality that technology and individual liberty do not always exist in harmony. Historically, we stand before an uncertain future, in which there is no reason to believe that all technological change issues in genuine human progress.

This is a bracing and carefully wrought exchange; I believe readers will find it well worth their time.