Futurisms: Critiquing the project to reengineer humanity

Monday, August 9, 2010

The Blending of Humans and Robots

David Gelernter has written a characteristically thought-provoking essay about what guidance might be gleaned from Judaism for how human beings ought to treat “sophisticated anthropoid robots” with artificial intelligence powerful enough to allow them to respond to the world in a manner that makes them seem exactly like us. Taking his cue from Biblical and rabbinic strictures concerning cruelty to animals, he argues that because these robots “will seem human,” we should avoid treating them badly lest we become “more oblivious of cruelty to human beings.”

This conclusion, which one might draw on Aristotelian as well as on Biblical grounds, is a powerful one — and in a world of demolition derbies and “Will It Blend?,” where even a video of a washing machine being destroyed can go viral, it is hard to deny that Gelernter has identified a potentially serious issue. It was raised with great force in the “Flesh Fair” scenes of the 2001 movie A.I., where we see robots being hunted down, herded together, and subjected to various kinds of creative destruction in front of howling fans. Meanwhile, the robots look on quietly with what I have always found to be heartbreaking incomprehension.

And yet, it also seems to me that the ringleader at the Flesh Fair, vicious though he is, is not entirely wrong when he harangues the crowd about the need to find a way to assert the difference between humans and robots in a world where it is becoming increasingly easy to confuse the two. And it is in this connection that I wonder whether Gelernter’s argument has sufficiently acknowledged the challenge to Jewish thought that is being posed by at least some of the advocates of the advanced artificial intelligence he is describing.

Gelernter knows full well the “sanctity and ineffable value” that Judaism puts on human life, which is to say he knows that in Jewish thinking human beings are unique within creation. In such a framework, it is understandable why the main concern with animal (or robot) cruelty should be the harm it might do to “our own moral standing” or “the moral stature and dignity of human beings.” But the moral dignity of human beings and our uniqueness in creation is precisely what is coming under attack from transhumanists, as well as from the less potent but more widespread forms of scientism and technophilia in our culture. Gelernter is certain that these robots will feel no pain; but what of those who would reply that they will “process” an electrical signal from some part of their bodies that will trigger certain kinds of functions — which is, after all, what pain “really” is? Gelernter is certain that these anthropoid robots will have no inner life, but what of those, such as Tor Nørretranders and Daniel Dennett, who are busy arguing that what we call consciousness is just a “user illusion”?

I don’t doubt that Gelernter could answer these questions. But I do doubt that his answers would put an end to all the efforts to convince us that we are, after all, simply “meat machines.” And if more and more we think of ourselves as “meat machines,” then the “pernicious incrementalism” of cruelty to robots that reasonably concerns Gelernter points in another direction as well: not that we start treating “thous” as “its,” but that in transforming “its” into “thous” we take all the moral meaning out of “human.”

It probably should not surprise us that there are dangers in kindness to robots as well as in cruelty to them, but that fact might prompt us to wonder why going down this road seems so compelling. Speaking Jewishly, Gelernter might recall the lesson of the pre-twentieth-century accounts of the golem, the legends, going back to the Talmud, of pious men creating an artificial anthropoid. Nearly from the start, two things are clear about the golem: only the wisest and most pious could ever hope to make one, but the greatest wisdom would be to know how, and yet not to do so.

Tuesday, August 3, 2010

Destroying Civilization in Order to Save It

Mark Walker recently wrote an interesting piece over at The Global Spiral suggesting that when it comes to preventing the extinction of civilization, transhumanism is the best of the bad options we have. He frames the problem in a familiar way: the democratization of existential risks. As things are going now, more and more people will become capable of doing greater and greater harm, particularly via biotechnology. But if business as usual is in effect the problem, relinquishment of the knowledge and tools to do such harm would require draconian measures that hardly seem plausible. Transhumanism, while risky, is less risky than either of these courses of action because “posthumans probably won’t have much more capacity for evil than we have, or are likely to have shortly.” That is to say, once you can already destroy civilization, how much worse can it get? Creating beings who are “smarter and more virtuous than we are” has a greater chance for an upside, as “the brightest and most virtuous” would be “the best candidates amongst us to lead civilization through such perilous times.”

At one level, Walker’s essay might appear as mere tautology. If the transhumanist project works out as advertised (smarter and more virtuous beings), then the transhumanist project will have worked out as advertised (smarter and more virtuous beings will do smarter and more virtuous things). But more interestingly, Walker nicely encapsulates a number of issues that transhumanists regularly seek to avoid thinking seriously about. For example:

1) What is the relationship between human and posthuman civilization? If proponents of “the Singularity” are correct, then the rise of posthumans would likely be just another way of destroying human civilization. Our civilization will not be “led through perilous times”; it will be replaced by something new and radically different. One could say that at least then human civilization would have led to something better, rather than simply lying in ruins. But then the next question arises.

2) What makes Walker think that posthuman wisdom and virtue will look like wisdom and virtue to humans? Leaving aside the fact that humans already don’t always agree about what virtue is, we call certain things virtues because we are the kinds of beings we are. By definition, posthumans will be different kinds of beings. At the very least, why should we expect that we will understand their beneficent intent as such any better than my cat understands that I am doing her a favor by not feeding her as much as she would like?

3) Walker suggests we have “almost hit the wall in our capacity for evil.” I hope he is right, but I fear he simply lacks imagination. The existing trajectory of neuroscience, not to speak of how it might be redirected by deliberate efforts to create posthumans, seems to me to open exciting new avenues for pain and degradation along with its helping hand. But be that as it may, I wonder if “destruction of human civilization” is really as bad as it gets. As is clear from discussions that have taken place on Futurisms, for some transhumanists that would hardly be enough: nature itself will have to come under the knife. That kind of deliberate ambition makes an accidental oil spill, or knocking down a few redwood groves, look like shoplifting from a dollar store.

So: human beings have made a hash of things, but since we can imagine godlike beings who might save us, we should go ahead and try to create them. We might make a hash of that project, but doing anything else would be as bad or worse. That’s what you call doubling down.