Futurisms: Critiquing the project to reengineer humanity

Wednesday, December 30, 2009

An Ideal Model for WBE (or, I Can Haz Whole-Brain Emulation)

In case you missed the hubbub, IBM researchers last month announced the creation of a powerful new brain simulation, which was variously reported as being "cat-scale," an "accurate brain simulation," a "simulated cat brain," capable of "matching a cat's brainpower," and even "significantly smarter than [a] cat." Many of the claims go beyond those made by the researchers themselves — although they did court some of the sensationalism by playing up the cat angle in their original paper, which they even titled "The Cat is Out of the Bag."

Each of these claims is either false or so ill-defined as to be unfalsifiable — and those critics who pointed out the exaggerations deserve kudos.

But this story is really notable not because it is unusual but rather because it is so representative: journalistic sensationalism and scientific spin are par for the course when it comes to artificial intelligence and brain emulation. I would like, then, to attempt to make explicit the premises that underlie the whole-brain emulation project, with the aim of making sense of such claims in a less ad hoc manner than is typical today. Perhaps we can even evaluate them using falsifiable standards, as should be done in a scientific discipline.

How Computers Work
All research in artificial intelligence (AI) and whole-brain emulation proceeds from the same basic premise: that the mind is a computer. (Note that in some projects, the whole mind is presumed to be a computer, while in others, only some subset of the mind is so presumed, e.g. natural language comprehension or visual processing.)

What exactly does this premise mean? Computer systems are governed by layers of abstraction. At its simplest, a physical computer can be understood in terms of four basic layers:
[Figure: the four basic layers of a computer system, comprising two software layers atop two physical layers.]

The layers break down into two software layers and two physical layers. The processor is the device that bridges the divide between software and the physical world: it presents a set of symbolic instructions to the software above it, but it is also a physical object designed so that its physical behavior corresponds to those symbols. An abacus, for example, can be understood as "just" a wooden frame with beads, but it has been designed to represent numbers, and so can perform arithmetic calculations.

Above the physical/software bridge provided by the processor is the program itself, which is written using instructions in the processor's programming language, also known as the Instruction Set Architecture (ISA). For example, an x86 processor can execute instructions like "add these two numbers," "store this number in that location," and "jump back four instructions," while a program written for the x86 will be a sequence of such instructions. Such programs could be as simple as an arithmetical calculator or as complex as a web browser.
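
To make the layering concrete, here is a minimal sketch in Python; the instruction set here is invented for illustration and is not the x86 or any real ISA. The point is only that the "program" is nothing but a sequence of symbolic instructions, while the little interpreter stands in for the processor that executes them:

def run(program):
    registers = {"a": 0, "b": 0}
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "load":      # load <register> <value>
            registers[args[0]] = args[1]
        elif op == "add":     # add <dest> <src>: dest = dest + src
            registers[args[0]] += registers[args[1]]
        elif op == "jump":    # jump <offset>: move the program counter by offset
            pc += args[0]
            continue
        pc += 1
    return registers

# "Add these two numbers": the same program is meaningful whether the machine
# running it is silicon, beads on an abacus, or this little interpreter.
print(run([("load", "a", 2), ("load", "b", 3), ("add", "a", "b")]))  # {'a': 5, 'b': 3}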

Below the level of the processor is the set of properties of the physical world that are irrelevant to the processor's operation. More specifically, it is the set of properties of the physical processor that do not appear in the scheme relating the ISA to its physical implementation in the processor. So, for example, a physical Turing Machine can be constructed using a length of tape on which symbols are represented magnetically. But one could also make the machine out of a length of paper tape painted different colors to represent different symbols. In each case, the machine has both magnetic and color properties, but which properties are relevant and which are irrelevant to its functioning as a processor depends on the scheme by which the physical/software divide is bridged.
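
As a toy illustration of that point (the names below are my own and stand in for no real machine), the same trivial "machine" runs unchanged over a tape whose cells are magnetic polarities and over a tape whose cells are paint colors; the properties that differ between the two substrates never enter the computation:

class Tape:
    """A tape whose physical cells are related to abstract symbols by an encoding scheme."""
    def __init__(self, symbols, encode, decode):
        self.cells = [encode(s) for s in symbols]   # the "physical" states
        self.encode, self.decode = encode, decode
    def read(self, i):
        return self.decode(self.cells[i])
    def write(self, i, symbol):
        self.cells[i] = self.encode(symbol)

def flip_all(tape, length):
    """A trivial machine: walk down the tape, flipping every 0 to 1 and every 1 to 0."""
    for i in range(length):
        tape.write(i, 1 - tape.read(i))
    return [tape.read(i) for i in range(length)]

magnetic = Tape([0, 1, 1], encode=lambda s: +1 if s else -1,
                           decode=lambda p: 1 if p > 0 else 0)
painted  = Tape([0, 1, 1], encode=lambda s: "black" if s else "white",
                           decode=lambda c: 1 if c == "black" else 0)

print(flip_all(magnetic, 3))  # [1, 0, 0]
print(flip_all(painted, 3))   # [1, 0, 0]: the same computation over different physics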

Note the nature of this layered scheme: each layer requires the layer below it, but could function with a different layer below it. Just like the Turing Machine, an ISA can be implemented on many different physical processors, each of which abstracts away different sets of physical properties as irrelevant to their functioning. And a program, in turn, can be written using many different ISAs.
An Ideal Model for Whole-Brain Emulation
In supposing that the mind is a computer, the whole-brain emulation project proceeds on the premise that the computational model thus outlined applies to the mind. That is, it posits a sort of Ideal Model that can, in theory, completely describe the functioning of the mind. The task of the whole-brain emulation project, then, is to "fill in the blanks" of this model by attempting, either explicitly or implicitly, to answer the following four questions:

1. What is the mind's program? That is, what is the set of instructions by which consciousness, qualia, and other mental phenomena arise in the brain?

2. In which instruction set is that program written? That is, what is the syntax of the basic functional unit of the mind?

3. What constitutes the hardware of the mind? That is, what is the basic functional unit of the mind? What structure in the brain implements the ISA of the mind?

4. Which physical properties of the brain are irrelevant to the operation of its basic functional unit? That is, which physical properties of the brain can be left out of a complete simulation of the mind?


We could restate the basic premise of AI as the claim that the mind is an instantiation of a Turing Machine, and then equivalently summarize these four questions by asking: (1) What is the Turing Machine of which the mind is an instantiation? And (2) What physical structure in the brain implements that Turing Machine? When and only when these questions can be answered will it be possible to program those answers into a computer, and only then will whole-brain emulation be achievable.
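
One way to see the "fill in the blanks" character of the project is to write the Ideal Model out as an empty scaffold. The following sketch is purely illustrative; every name in it is hypothetical, and each placeholder corresponds to one of the four questions above:

class IdealModelOfTheMind:
    # Purely illustrative; nothing here is a real research artifact.
    program = None                      # 1. the mind's program (unknown)
    instruction_set = None              # 2. the ISA in which that program is written (unknown)
    functional_unit = None              # 3. the brain structure implementing that ISA (unknown)
    irrelevant_physical_detail = None   # 4. what a simulation may safely leave out (unknown)

    def emulate(self):
        blanks = (self.program, self.instruction_set,
                  self.functional_unit, self.irrelevant_physical_detail)
        if any(b is None for b in blanks):
            raise NotImplementedError("The Ideal Model has not yet been filled in.")
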
Limitations of the Ideal Model
You might object that this analysis is far too literal in its treatment of the mind as a computer. After all, don't AI researchers now appreciate that the mind is squishy, indefinite, and difficult to break into layers (in a way that this smooth, ideal model and "Good Old-Fashioned AI" don't acknowledge)?

There are two possible responses to this objection. Either mental phenomena (including intelligence, but also consciousness, qualia, and so forth) and the mind as a whole are instantiations of Turing Machines and therefore susceptible to the model and to replication on a computer, or they are not.

If the mind is not an instantiation of a Turing Machine, then the objection is correct, but the highest aspirations of the AI project are impossible.

If the mind is an instantiation of a Turing Machine, then the objection misunderstands the layered nature of physical and computer systems alike. Specifically, the objection understands that AI often proceeds by examining the top layer of the model — the "program" of the mind — but then denies this layer's relationship to the layers below it. This objection essentially makes the same dualist error often attributed to AI critics like John Searle: it argues that if a computational system can be described at a high level of complexity bearing little resemblance to a Turing Machine, then it does not have some underlying Turing Machine implementation. (There is a deep irony in this objection — about which, more in a later post.)

There is a related question about this Ideal Model: Suppose we can ascertain the Turing Machine of which the mind is an instantiation. And suppose we then execute this program on a digital computer. Will the computer then be a mind? Will it be conscious? This is an open question, and a vexing and tremendously important one, but it is sufficient simply to note here that we do not know for certain whether such a scenario would result in a conscious computer. (If it would not, then certain premises of the Ideal Model would be false — but about this, more, also, in a later post.)

A third, and much more pressingly relevant, point about the model: just as we do not know whether simulating the brain at a low level will give rise to the high-level phenomena of the mind, so too, even if and when we create a completely accurate model of the brain, we will not necessarily understand the mind. This is, again, because of the layered nature of physical and computational systems. It is just as difficult to understand a low-level simulation of a complex system as it is to understand the original physical system. In either case, higher-level behavior must be additionally understood — just as looking at the instructions executing on a computer processor allows you to completely predict the program's behavior but does not necessarily allow you to understand its higher-level structure; and just as Newton would not necessarily have discerned his mechanical theories by making a perfectly accurate simulation of an apple falling from a tree. (I explained this layering in more depth in this recent New Atlantis essay.)
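
A toy example (mine, not one from the paper) may help: in Conway's Game of Life the low-level update rule is completely known and can be simulated exactly, yet nothing in the rule mentions "gliders," the higher-level objects that an observer must still discover by watching the system run:

from collections import Counter

def step(live):
    """One exact application of the low-level rule to a set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After four exact low-level steps the same five-cell shape reappears, shifted
# diagonally by one cell; the "glider" is a higher-level fact about the system
# that the update rule itself never mentions.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
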
Achieving and Approximating the Ideal Model
Again, the claim in this post is that the Ideal Model presented here is the implicit model on which the whole-brain emulation project proceeds. Which brings us back to the "cat-brain" controversy.

When we attempt to analyze how the paper's authors "fill in the blanks" of the Ideal Model, we see that they seem to define each of the levels (in some cases explicitly, in others implicitly) as follows: (1) the neuron is the basic functional unit of the mind; (2) everything below the level of the neuron is irrelevant; (3) the neuron's computational power can be accurately replicated by simulating only its electrical action potential; and (4) the program of the mind is encoded in the synaptic connections between neurons. The neuron-level simulation appears to be quite simple, omitting a great deal of detail without offering justification or explanation of whether the omitted details are relevant and what the effects of omitting them might be if they are.
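
For concreteness, here is a generic sketch of that style of model: a leaky integrate-and-fire point neuron, written in Python. To be clear, this is my illustration of the kind of simplification at issue, not the specific model used in the paper; the point is that each neuron is reduced to a single electrical variable plus synaptic weights, with everything else about the cell left out:

import random

class PointNeuron:
    """A generic leaky integrate-and-fire neuron: one voltage, nothing else."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                      # membrane "voltage," the only state kept
        self.threshold = threshold
        self.leak = leak
    def step(self, input_current):
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:      # fire a spike and reset
            self.v = 0.0
            return 1
        return 0

# "The program is in the synapses": the network is just neurons plus a matrix of
# connection weights, chosen at random here purely for illustration.
n = 5
neurons = [PointNeuron() for _ in range(n)]
weights = [[random.uniform(0, 0.5) for _ in range(n)] for _ in range(n)]
spikes = [0] * n
for _ in range(20):
    drive = [0.3 + sum(weights[i][j] * spikes[j] for j in range(n)) for i in range(n)]
    spikes = [neurons[i].step(drive[i]) for i in range(n)]
print(spikes)  # which of the five neurons fired on the final tick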

Aside from the underlying question of whether such an Ideal Model of the mind really exists — that is, of whether the mind is in fact a computer — the most immediate question is: How close have we come to filling in the details of the Ideal Model? As the "cat-brain" example should indicate, the answer is: not very close. As Sally Adee writes in IEEE Spectrum:
Jim Olds (a neuroscientist who directs George Mason University's Krasnow Institute for Advanced Study) explains that what neuroscience is sorely lacking is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?...

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says [Dartmouth Brain Engineering Laboratory Director Richard] Granger. "Or, worse, we have fifteen mutually incompatible definitions."
The sorts of approximation seen in the "cat-brain" case, then, are entirely understandable and unavoidable in current attempts at whole-brain emulation. The problem is not the state of the art, but the overconfidence in understanding that so often accompanies it. We really have no idea yet how close these projects come to replicating or even modeling the mind. Note carefully that the uncertainty exists particularly at the level of the mind rather than the brain. We have a rather good idea of how much we do and do not know about the brain, and, in turn, how close our models come to simulating our current knowledge of the brain. What we lack is a sense of how this uncertainty aggregates at the level of the mind.

Many defenders of the AI project argue that it is precisely because the brain has turned out to be so "squishy," indefinite, and unlike a computer, that approximations at the low level are acceptable. Their argument is that the brain is hugely redundant, designed to give rise to order at a high level out of disorder at a low level. This may or may not be the case, but again, if it is, we do not know how this happens or which details at the low level are part of the "disorder" and thus safely left out of a simulation. The aggregate low-level approximations may simply be filtered out as noise at a high level. Alternatively, if the basic premise that the mind is a computer is true, then even minuscule errors in approximation of its basic functional unit may aggregate into wild differences in behavior at the high level, as they easily can when a computer processor malfunctions at a small but regular rate.
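
A toy example (again mine, not the post's) of how the same tiny error can be either filtered out or amplified: two runs of the logistic map below start one part in a billion apart. In the orderly regime the difference washes out; in the chaotic regime it typically grows until the two trajectories bear no resemblance to each other:

def trajectory(r, x0, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

for r in (2.5, 3.9):                  # an orderly regime and a chaotic one
    a = trajectory(r, 0.2)
    b = trajectory(r, 0.2 + 1e-9)     # a "minuscule error in approximation"
    print(f"r = {r}: final difference = {abs(a - b):.6f}")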

Until we have better answers to these questions, most claims like those surrounding the "cat brain" should be regarded as grossly irresponsible. That the simulation in question is "smarter than a cat" or "matches a cat's brainpower" is almost certainly false (though to my knowledge no efforts have been made to evaluate such claims, even using some sort of feline Turing Test — which, come to think of it, would be great fun to dream up). The claim that the simulation is "cat-scale" could be construed as true only insofar as it is so vaguely defined. Such simulations could rather easily be altered to further simplify the neuron model, shifting computational resources to simulate more neurons, resulting in an "ape-scale" or "human-scale" simulation — and those labels would be just as meaningless.

When reading news reports like many of those about the "cat-brain" paper, the lay public may instinctively take the extravagant claims with a grain of salt, even without knowing the many gaps in our knowledge. But it is unfortunate that reporters and bloggers who should be well-versed in this field peddle baseless sensationalism. And it is unfortunate that some researchers should prey on popular ignorance and press credulity by making these claims. But absent an increase in professional sobriety among journalists and AI researchers, we can only expect, as Jonah Lehrer has noted, many more such grand announcements in the years to come.

Tuesday, December 29, 2009

Happy Birthday, Nanotechnology?

Fifty years ago today, on December 29, 1959, Richard P. Feynman gave an after-dinner talk in Pasadena at an annual post-Christmas meeting of the American Physical Society. Here is how Ed Regis describes the setting of the lecture in his rollicking book Nano:

In the banquet room [at the Huntington-Sheraton hotel in Pasadena], a giddy mood prevails. Feynman, although not yet the celebrity physicist he’d soon become, was already famous among his peers not only for having coinvented quantum electrodynamics, for which he’d later share the Nobel Prize, but also for his ribald wit, his clownishness, and his practical jokes. He was a regular good-time guy, and his announced topic for tonight was “There’s Plenty of Room at the Bottom” — whatever that meant.

“He had the world of young physicists absolutely terrorized because nobody knew what that title meant,” said physicist Donald Glaser. “Feynman didn’t tell anybody and refused to discuss it, but the young physicists looked at the title ‘There’s Plenty of Room at the Bottom’ and they thought it meant ‘There are plenty of lousy jobs in physics.’”

The actual subject of Feynman’s lecture was making things small and making small things.

What I want to talk about is the problem of manipulating and controlling things on a small scale.

As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord’s Prayer on the head of a pin. But that’s nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction....

Feynman went on to imagine fitting the entire Encyclopaedia Britannica on the head of a pin, and even storing all the information in all the world’s books “in a cube of material one two-hundredth of an inch wide — which is the barest piece of dust that can be made out by the human eye.” He then described the miniaturization of computers, of medical machines, and more. He deferred on the question of how these things would technically be accomplished:

I will not now discuss how we are going to do it, but only what is possible in principle — in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.

[Image: Richard Feynman, seen here on the cover of the February 1960 issue of 'Engineering and Science,' in which his 1959 talk 'There's Plenty of Room at the Bottom' was first published.]

And Feynman only barely touched on the question of why these things should be pursued — saying that it “surely would be fun” to do them. He closed by offering two prizes of a thousand dollars each. One would go to the first person to make a working electric motor that was no bigger than one sixty-fourth of an inch on any side; Feynman awarded that prize less than a year later. The other would go to the first person to shrink a page of text to 1/25,000 its size (the scale required for fitting Britannica on the head of a pin); Feynman awarded that prize in 1985.

Feynman’s lecture was published in Engineering and Science in 1960 —  see the cover image at right — and it’s available in full online here. The lecture is often described as a major milestone in the history of nanotechnology, and is sometimes even credited with originating the idea of nanotechnology — even though he never used that word, even though others had anticipated him in some of the particulars, and even though the historical record shows that his talk was largely forgotten for about two decades. A few historians have sought to clarify the record, and none has done so more definitively than Christopher Toumey, a University of South Carolina cultural anthropologist. (See, for instance, Toumey’s short piece here, which links to two of his longer essays, or his recent Nature Nanotechnology piece here [subscription required].) Relying on journal citations and interviews with researchers, Toumey shows just how little direct influence Feynman’s lecture had, and compares Feynman’s case to that of Gregor Mendel: “No one denies that Mendel discovered the principles of genetics before anyone else, or that he published his findings in a scientific journal ... but that ought not to be overinterpreted as directly inspiring or influencing the later geneticists” who rediscovered those principles on their own.

Toumey suggests that nanotechnology needed “an authoritative founding myth” and found it in Feynman. This is echoed by UC-Davis professor Colin Milburn in his 2008 book Nanovision. Milburn speaks of a “Feynman origin myth,” but then puts a slightly more cynical spin on it:

How better to ensure that your science is valid than to have one of the most famous physicists of all time pronouncing on the “possibility” of your field.... The argument is clearly not what Feynman said but that he said it.

Eric Drexler, whose ambitious vision of nanotechnology is certainly the one that has most captured the public imagination, has invoked the name of Feynman in nearly all of his major writings. This is not just a matter of acknowledging Feynman’s priority. As Drexler told Ed Regis, “It’s kind of useful to have a Richard Feynman to point to as someone who stated some of the core conclusions. You can say to skeptics, ‘Hey, argue with him!’”

How, then, should we remember Feynman’s talk? Fifty years later, it still remains too early to tell. The legacy of “Plenty of Room” will depend in large part on how nanotechnology — and specifically, Drexler’s vision of nanotechnology — pans out. If molecular manufacturing comes to fruition as Drexler describes it, Feynman will deserve credit for his imaginative prescience. If nothing ever comes of it — if Drexler’s vision isn’t pursued or is shown to be technically impossible — then Feynman’s lecture may well return to the quiet obscurity of its first two decades.

[UPDATE: Drexler himself offers some further thoughts on the anniversary of the Feynman lecture over on his blog Metamodern.]

Wednesday, December 23, 2009

Bad Humbug, Good Humbug, and Bah Humbug

Blogger Michael Anissimov does not believe in Santa Claus, but he does believe in the possibility, indeed the moral necessity, of overcoming animal predation. To put it another way, he does not believe in telling fantasy stories to children if they will take those stories to be true, but he has no compunctions about telling them to adults with hopes that they will be true.

An obvious difference Mr. Anissimov might wish to point out is that adults are more likely than children to be able to distinguish fantasy from reality. He can (and does) submit his thoughts to their critical appraisal. While that difference does not justify what Mr. Anissimov regards as taking advantage of children by telling them convincing fantasies, it does suggest something about the difference between small children and adults. Small children cannot readily distinguish between fantasy and reality. In fact, there is a great deal of pleasure to be had in the failure to make that distinction. It could even be true that not making it is an important prelude to the subsequent ability to make it. Perhaps those who are fed from an early age on a steady diet of the prosaic will have more trouble distinguishing between the world as it is and as they might wish it to be. But here I speculate.

In any case, surely if one fed small children on a steady diet of stories like the one Mr. Anissimov tells about overcoming predation, they might come to believe such stories as uncritically as other children believe in Santa Claus. I can easily imagine their disappointment upon learning the truth about the immediate prospects of lions lying down with lambs. We’d have to be sure to explain to them very carefully and honestly that such a thing will only happen in a future, more or less distant, that they may or may not live to see — even if small children are not all that good at understanding about long-term futures and mortality.

But in light of their sad little faces it would be a hard parent indeed who would not go on to assure them that a fellow named Aubrey de Grey is working very hard to make sure that they will live very long lives indeed so that maybe they will see an end to animal predation after all! But because “treating them as persons” (in Mr. Anissimov’s phrase) means never telling children stories about things that don’t exist without being very clear that these things don’t exist, it probably wouldn’t mean much to them if we pointed out that Mr. de Grey looks somewhat like an ectomorphic version of a certain jolly (and immortal) elf:

Friday, December 18, 2009

Arma virumque cano

Beneath Adam’s post “On Lizardman and Liberalism,” commenter Will throws down the gauntlet: “[F]ind one transhumanist who thinks we should be allowed to embed nuclear weapons in our bodies.” I for one am ready to concede that I know of no such case. But I’m moved to wonder, why not? Why should a libertarian transhumanist like Anders Sandberg — who believes that “No matter what the social circumstances are, it is never acceptable to overrule someone’s right to ... morphological freedom” — be unwilling to defend the right of an individual to embed a nuclear weapon? Assuming Sandberg would not be so willing, two alternatives occur to me. Either, like many people, he is more decent than his principles would lead one to believe, or he has not explored the real implications of his principles (or both).

To some, this case may seem absurd — why would anyone want to turn himself into a bomb? Why indeed? But turning oneself into a bomb is already a reality in our world. And the underlying moral relativism of Mr. Sandberg’s absolute prohibition is of a piece with the progressive moral “wisdom” that asserts “one man’s terrorist is another man’s freedom fighter.” So if indeed Mr. Sandberg would flinch at the implantation of a bomb of any sort, it might be because he is living off moral capital that his own principle is busy degrading. He may be more decent than his principles, but his decency may not survive his principles.

[Image: Painting by Charles Bittinger of an atomic test at Bikini Atoll; courtesy U.S. Navy]

The commenter Will steps into the breach with his own guiding idea: “Most transhumanists would probably advocate something along the lines of ‘complete morphological freedom as long as it doesn’t violate the rights of other conscious entities’” (emphasis added). But I don’t see how from this libertarian perspective the implantation of a bomb (properly shielded, if nuclear) violates the rights of any conscious entities any more than would carrying about a phial of poison. Will and I can agree that the use of that bomb in a public space would be a Bad Thing. But nothing in Will’s principle (other than a little fallout, perhaps) would prohibit some transhuman of the future from implanting the bomb, hopping into a boat, sailing to the mid-Atlantic outside of the shipping lanes, making sure there are no cetaceans nearby, calling in his coordinates to the by-then doubtless ubiquitous surveillance satellites, and going out in a blaze of glory on whatever will be the equivalents of Facebook or YouTube. Sounds potentially viral to me. Surely the right to blow oneself up under carefully controlled circumstances does not represent the aspirations of any large number of transhumanists, but surely their principles would require them to defend even this minority taste.

Tuesday, December 15, 2009

The View from the Dollhouse

[NOTE: From time to time we will invite guests to contribute to Futurisms. Our first guest post, below, comes from Brian J. Boyd, a graduate student at Oxford and a former New Atlantis intern.]

Fox has done the world two great injustices in canceling first Joss Whedon’s sublime series Firefly and now his intriguing show Dollhouse. Since the final few episodes of Dollhouse are now airing, this seems the right time to reconsider the show and what it suggests about human nature and the technologies of tomorrow.

This time around, Whedon takes us a handful of years into the future, to an America where things look familiar on the surface, but more and more of the people one meets are actually “dolls” — persons whose memories have been erased and identities overwritten by an organization that hires them out to very rich clients to be used as anything from sexual playthings to foster mothers. After each “engagement” the doll returns to be wiped clean and imprinted (that is, reprogrammed) for the next encounter. While the show does tacitly condemn this new form of slavery, Whedon is sensitive to the potential appeal of the imagined technology. In a piece about Dollhouse in the transhumanist magazine H+ a couple of months ago, Erik Davis noted that

The show’s ambivalence about such “posthuman” technologies is captured by the character who does all the wiping and remixing: a smug, immature, and charmingly nerdish wetware genius named Topher Brink [pictured at right above], whose simultaneously dopey and snarky incarnation by the actor Fran Kranz reflects the weird mix of arrogance and creative exuberance that inform so much manipulative neuroscience.

Despite the technology’s potential appeal, Dollhouse has also from the beginning emphasized the potential for abuse of the “doll” technology. In the third episode, Paul Ballard (our hero FBI agent), predicted disaster: “We split the atom; we make a bomb. We come up with anything new, the first thing we do is — destroy, manipulate, control. It’s human nature.” And the Season One finale gives us a glimpse of how things will turn out in Whedon’s fictional world: the episode shows a flashforward to 2019, when Los Angeles is in flames and a ragged band of survivors recoils in horror from any “tech” they find that might house a computer chip.

As the series progresses, the original “bad guys” of the L.A. dollhouse, whom we have been brought to see as complicated human beings with mostly good intentions, have been increasingly pitted against their superiors in the aptly named Rossum Corporation, whose intoxication with the power of their technology has become total. Whedon strongly implies a slippery slope here: At first, dolls were coerced but still nominally volunteers for a period of indentured servitude; in time, their masters grow reluctant to uphold their end of the bargain, unwilling to relinquish power.

Whereas Whedon’s Firefly, with its horses and handguns, turned to romanticism in its effort to reach an accord between modernity and tradition, Dollhouse explores the dangers of new technology without (so far) offering a way out. After all, “it’s human nature” that is the problem here; the technology is merely an expression and culmination of our natural desire to control. In his H+ piece, Davis says that “all of us are dolls sometimes, and dollhouse engineers other times” — in other words, manipulation, whether accomplished through political, theological, or emotional means, is part and parcel of the human experience. Davis has a point. But rather than suggesting that flawed human nature need be remade from scratch, Dollhouse compellingly depicts how the desire to remake our selves and our world can lead to a dismal deal with the devil: Topher sacrifices all the comforts of a normal life for the opportunity to pursue his research and refine his skills on live subjects. But when he comes to see his subjects not as toys but as people, his conscience leads him to join with those who are attempting to put the genie back into the bottle and contain the technology he helped create, vainly striving to undo the harm that has been done. “You’re human,” Topher says to one of his creations in an attempt to comfort her when she cannot cope with the discovery that she is a doll. Her rebuke also serves as a warning to those who think they can improve upon humanity: “Don’t flatter yourself.”
– Brian J. Boyd

Monday, December 7, 2009

On Lizardman and Liberalism

In a post called “Getting Used to Hideousness,” Mike Treder makes three points. Each is provocative — and flawed.

First, he says, until relatively recently, people “with gross disabilities” or deformities “were expected to stay out of sight of the general public,” a closeting that Mr. Treder attributes to “the Victorian preference for order and rectitude.” But nowadays, he says, we have become more tolerant of people who “have shocking appearances.” (By way of example, he includes several pictures.)

Second, he moves from those whose unusual appearance was not their choice to those who intentionally alter their looks. He describes a range of body modifications — from makeup to orthodontics to plastic surgery to this sort of thing — and says that nearly everybody modifies himself in some way. He then envisions far more radical body modifications and suggests that there is no moral difference between any of them — they all alter what nature has given us, the only difference is “a matter of degree.”

Third, Mr. Treder invokes, with hope, the transhumanist doctrine of “morphological freedom.” He envisions a day when we will understand that “individuals who don’t look at all” normal will nonetheless be understood to be not freaks but “human beings with normal human feelings.”

Let me briefly respond to each of Mr. Treder’s main points in turn.

First, it is far too simplistic to say that we are becoming more tolerant of the different, deformed, and disabled in our midst. Mr. Treder includes with his post this picture — the lovely face of a smiling young girl with Down syndrome. But faces like hers are becoming ever rarer. Some 90 percent of fetuses diagnosed with Down syndrome are being aborted. This is not the mark of a growing tolerance or compassion; it is a silent purge, enabled by modern technology, of a class of human beings deemed unworthy of life.

Second, Mr. Treder’s argument about body modification is just a simplistic equivalency. The reasoning seems to go like this: Makeup and orthodontics and breast implants and (someday) extra arms and implanted wings are all unnatural, and so if you approve of any body modification you have no standing to criticize any other body modification.

But of course we make moral distinctions between different kinds of body modifications all the time — not based on grounds of “naturalness,” but based on the modification itself (Is it temporary or permanent? Is it external or invasive? Is it therapeutic? What is its cost?), based on the person being modified (Man or woman? Young or old? Mentally healthy?), and based on social context (What is this modification meant to signal? Is it tied to a particular cultural or social setting?). There is no simple checklist for deciding whether a bod-mod is morally licit, but we all make such judgments now, we make them for complicated reasons that reach beyond reflexive repugnance, and we will continue to make them in future eras of modification.

What Mr. Treder is really after is greater tolerance, an acceptance of people who look different. And this brings us to his invocation of “morphological freedom,” a supposed right to modify one’s body however one wishes. Like its transhumanist twin sister “cognitive liberty,” the concept of morphological freedom is an attempt to push the tenets of modern liberalism to their furthest logical extreme. In a 2001 talk elucidating and advocating morphological freedom, Swedish transhumanist Anders Sandberg stressed the centrality of tolerance:

No matter what the social circumstances are, it is never acceptable to overrule someone’s right to ... morphological freedom. For morphological freedom — or any other form of freedom — to work as a right in society, we need a large dose of tolerance.... Although peer pressure, prejudices, and societal biases still remain strong forces, they are being actively battled by equally strong ideas of the right to “be oneself,” the desirability of diversity, and an interest in the unusual, unique, and exotic.

That little taste of Mr. Sandberg’s talk exposes the basic problem of “morphological freedom” (and more generally, the fundamental flaw of any extreme liberalism or libertarianism). The problem is that extreme liberalism destroys the foundations upon which it depends.

Consider: Mr. Sandberg scorns shared social and civic values. He derides them as “peer pressure, prejudices, and societal biases” and observes with satisfaction that they are being “actively battled” by an expansion of tolerance. But tolerance is itself a shared value, one that must be inculcated and taught and reinforced and practiced. A freedom so extreme that it rejects all norms, wipes away shared mores, and undoes social bonds is a freedom that erodes tolerance — and thus topples itself.

Tuesday, December 1, 2009

The Mainstreaming of Transhumanism

Congratulations to Nick Bostrom, Jamais Cascio, and Ray Kurzweil for being recognized as three of Foreign Policy magazine’s “Top 100 Global Thinkers.” Once upon a time this kind of notoriety might not have helped the reputation of an Oxford don in the Senior Common Room (do such places still exist?), but even were that still the case, it must be tremendously satisfying for Professor Bostrom qua movement builder to get such recognition. The mainstreaming of transhumanism, noted (albeit playfully) by Michael Anissimov, proceeds apace. Ray Kurzweil did not win The Economist’s Innovation Award for Computing and Telecommunications because of his transhumanist advocacy, but apparently nobody at The Economist thought that it would in any way embarrass them. He’s just another one of those global thinkers we admire so much.

Of course such news is also good for the critic. I first alluded to transhumanist anti-humanism in a book I published in 1994, so for some time now I’ve been dealing with the giggle and yuck factors that the transhumanist/extropian/Singularitarian visions of the future still provoke among the non-cognoscenti. Colleagues, friends and family alike don’t quite get why anybody would be seriously interested in that. I’ve tried to explain why I think these kinds of arguments are only going to grow in importance, but now I have some evidence that they are in fact growing.

The Emperor Has No Clothes
Which leads me to Mr. Anissimov’s question about what it is that I’m hoping to achieve. My purpose (and here I only speak for myself) is not to predict, develop, or advocate the specific public policies that will be appropriate to our growing powers over ourselves. In American liberal democracy, the success or failure of such specific measures is highly contingent under the best of circumstances, and my firm belief that on the whole people are bad at anticipating the forces that mold the future means that I don’t think we are operating under the best of circumstances. So my intention is in some ways more modest and in some ways less. Futurisms is so congenial to me because I share its desire to create a debate that will call into question some of the things that transhumanists regard as obvious, or at least would like others to regard as obvious. I’ve made it reasonably clear that I think transhumanism raises many deep questions without itself going very deeply into them, however technical its internal discussions might sometimes get. That’s the modest part of my intention. The less modest part is a hope that exposing these flaws will contribute to creating a climate of opinion where the transhumanist future is not regarded as self-evidently desirable even if science and technology develop in such a way as to make it ever more plausible. So if and when it comes time to make policies, I want there to be skeptical and critical ideas available to counterbalance transhumanist advocacy.

In short, I’m happy to be among those who are pointing out that the emperor has no clothes, even if, to those who don’t follow such matters closely, I might look like the boy who cried wolf.