Futurisms: Critiquing the project to reengineer humanity

Monday, February 1, 2016

Toward a Typology of Transhumanism



Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is very removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists in terms of the kinds of intellectual positions to which they adhere rather than to how they relate to different positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so, I propose a continuum of transhumanist thought, to help observers understand the intellectual differences between some of its proponents — based on three different levels of support for human enhancement technologies.

First, the mildest form of transhumanism: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists are defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label of “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not very philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Center for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of costs and benefits for society to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement in the name of individual freedom, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include the abandonment of the act-omission distinction — that is, the view that failing to act carries the same moral weight as acting. John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than to a kind of quixotic obsession with technology itself. Today, this obsession with technology manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early twentieth century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the mild stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Center for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

4 comments:

  1. On the other hand, sometimes reality sounds like transhumanism. Michael Shermer seems to have rethought his dismissal of the idea of using brain preservation as a form of emergency medicine in his current Scientific American column, titled “Afterlife for Atheists” in the print edition, and “Can Our Minds Live Forever?” in the online version:

    http://www.scientificamerican.com/article/can-our-minds-live-forever/

    ReplyDelete
  2. Your definition of extreme is a little off in that you consider libertarians to be extremists (most so-called "transhumanists" like myself are libertarian. We do not seek to impose our personal choices and goals onto others).

    Extremism in the defense of liberty is no vice. It is a virtue.

    ReplyDelete
  3. A few nits to pick.

    Since "enhancement" by definition means making something better, it's inherently uphill to argue against "human enhancement." I prefer to talk about "modification" and question the assumption that something we could create by use of technology would be better than human beings are, i.e. that "better" has some objective meaning not referenced to humanity.

    Bostrom's book does not think through "what a world completely shaped by information technology might be like." It attempts to tackle the challenge posed by the prospect of artificial superintelligence. There is a lot to criticize and a lot to praise in the book (which is a masterpiece but hardly original in terms of its content, most of which was to be found in online forums 10-20 years ago).

    Drexler is a visionary engineer-scientist whose work inspired much of the transhumanist movement but deserves to be assessed on its technical merit, not its philosophical assumptions (if any). If you compare Bostrom with Drexler, Bostrom is far more successful and far less substantial. A lot of Bostrom's original thinking is downright silly.

    Transhumanism in general has expanded in so many directions that it is no longer possible to tell a single coherent narrative of what it is, how it has developed or where it is going, let alone reduce it to a one-dimensional spectrum from "moderate" to "extreme." There are so many people who can be associated in one way or another with the movement and its ideas that it really does begin to make sense to break it down into a typology, but I think that you would have to embed this in a space of many dimensions to capture the significant variations between the different types.

    ReplyDelete
  4. Thanks, Mark, for this helpful comment. I won't speak for Brendan, but I'd just offer two quick replies of my own. First, yes, you're certainly right that transhumanist thought is complicated and that the typology offered here is very simple — too simple — and that much more could be said. True.

    Second, I finally just read Bostrom's book last week — I know, I'm late to the party — and I certainly agree that there is a lot to criticize in the book. I found less in it to praise than you did, and I wouldn't call it a masterpiece. But I consider it serious enough that it deserves a more intensive critique than it has so far received. Many of its questionable technical claims and its moral analysis have gone unchallenged by reviewers and reporters. Maybe we here on Futurisms can do something about that.

    ReplyDelete
