Futurisms: Critiquing the project to reengineer humanity

Wednesday, December 30, 2009

An Ideal Model for WBE (or, I Can Haz Whole-Brain Emulation)

In case you missed the hubbub, IBM researchers last month announced the creation of a powerful new brain simulation, which was variously reported as being "cat-scale," an "accurate brain simulation," a "simulated cat brain," capable of "matching a cat's brainpower," and even "significantly smarter than [a] cat." Many of the claims go beyond those made by the researchers themselves — although they did court some of the sensationalism by playing up the cat angle in their original paper, which they even titled "The Cat is Out of the Bag."

Each of these claims is either false or so ill-defined as to be unfalsifiable — and those critics who pointed out the exaggerations deserve kudos.

But this story is really notable not because it is unusual but rather because it is so representative: journalistic sensationalism and scientific spin are par for the course when it comes to artificial intelligence and brain emulation. I would like, then, to attempt to make explicit the premises that underlie the whole-brain emulation project, with the aim of making sense of such claims in a less ad hoc manner than is typical today. Perhaps we can even evaluate them using falsifiable standards, as should be done in a scientific discipline.

How Computers Work
All research in artificial intelligence (AI) and whole-brain emulation proceeds from the same basic premise: that the mind is a computer. (Note that in some projects, the whole mind is presumed to be a computer, while in others, only some subset of the mind is so presumed, e.g. natural language comprehension or visual processing.)

What exactly does this premise mean? Computer systems are governed by layers of abstraction. At its simplest, a physical computer can be understood in terms of four basic layers:

1. the program
2. the instruction set architecture (ISA)
3. the processor
4. the underlying physical substrate

The layers break down into two software layers and two physical layers. The processor is the device that bridges the divide between software and the physical world: it offers a set of symbolic instructions, but it is also a physical object whose workings have been designed to correspond to those symbols. An abacus, for example, can be understood as "just" a wooden frame with beads, but it has been designed so that its beads represent numbers, and so it can perform arithmetic calculations.

Above the physical/software bridge provided by the processor is the program itself, which is written using instructions in the processor's programming language, also known as the Instruction Set Architecture (ISA). For example, an x86 processor can execute instructions like "add these two numbers," "store this number in that location," and "jump back four instructions," while a program written for the x86 will be a sequence of such instructions. Such programs could be as simple as an arithmetical calculator or as complex as a web browser.
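To make the layering concrete, here is a minimal sketch in Python of a made-up machine. The instruction names, the registers, and the encoding are all invented for illustration and correspond to no real ISA such as x86; the point is only that a "program" is nothing more than a sequence of such instructions, and a "processor" is whatever faithfully carries each one out.

```python
# A toy "processor" with a few invented instructions (purely illustrative;
# not x86 or any real instruction set architecture).
def run(program):
    registers = {"r0": 0, "r1": 0}
    memory = [0] * 8
    for op, *args in program:
        if op == "LOAD_CONST":       # put a constant into a register
            registers[args[0]] = args[1]
        elif op == "ADD":            # add two registers, result in the first
            registers[args[0]] += registers[args[1]]
        elif op == "STORE":          # copy a register's value into memory
            memory[args[1]] = registers[args[0]]
    return registers, memory

# A "program" written in this toy instruction set: compute 2 + 3 and store it.
program = [
    ("LOAD_CONST", "r0", 2),
    ("LOAD_CONST", "r1", 3),
    ("ADD", "r0", "r1"),
    ("STORE", "r0", 0),
]
print(run(program))   # ({'r0': 5, 'r1': 3}, [5, 0, 0, 0, 0, 0, 0, 0])
```

Whether those instructions are carried out by silicon, by relays, or by a patient clerk with pencil and paper is invisible at this level; all the program layer requires is that something below it execute each instruction faithfully.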

Below the level of the processor is the set of properties of the physical world that are irrelevant to the processor's operation. More specifically, it is the set of properties of the physical processor that do not appear in the scheme relating the ISA to its physical implementation in the processor. So, for example, a physical Turing Machine can be constructed using a length of tape on which symbols are represented magnetically. But one could also make the machine out of a length of paper tape painted different colors to represent different symbols. In each case, the machine has both magnetic and color properties, but which properties are relevant and which are irrelevant to its functioning as a processor depends on the scheme by which the physical/software divide is bridged.
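The substrate point can be sketched the same way. In the minimal Python sketch below (a single tape rule rather than a full Turing Machine, with both physical encodings invented for illustration), the rule operates only on abstract symbols, and whether a cell "really is" a magnetic polarity or a patch of paint matters only to the mapping that reads and writes the tape.

```python
# One simple tape rule ("flip every symbol"), run over two physically
# different tapes. The encodings are invented for illustration.
MAGNETIC = {"north": 0, "south": 1}   # symbols stored as magnetic polarity
PAINTED = {"white": 0, "black": 1}    # symbols stored as painted colors

def flip_all(tape, encoding):
    decode = encoding                              # physical state -> symbol
    encode = {v: k for k, v in encoding.items()}   # symbol -> physical state
    symbols = [decode[cell] for cell in tape]
    flipped = [1 - s for s in symbols]             # the machine's actual rule
    return [encode[s] for s in flipped]

print(flip_all(["north", "south", "north"], MAGNETIC))  # ['south', 'north', 'south']
print(flip_all(["white", "black", "white"], PAINTED))   # ['black', 'white', 'black']
```

The magnetic and the painted tapes differ in every physical respect, yet at the level of the rule they are the same machine; which of their properties count as "irrelevant" is fixed entirely by the encoding that bridges the physical and the symbolic.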

Note the nature of this layered scheme: each layer requires the layer below it, but could function with a different layer below it. Just like the Turing Machine, an ISA can be implemented on many different physical processors, each of which abstracts away different sets of physical properties as irrelevant to their functioning. And a program, in turn, can be written using many different ISAs.

An Ideal Model for Whole-Brain Emulation
In supposing that the mind is a computer, the whole-brain emulation project proceeds on the premise that the computational model thus outlined applies to the mind. That is, it posits a sort of Ideal Model that can, in theory, completely describe the functioning of the mind. The task of the whole-brain emulation project, then, is to "fill in the blanks" of this model by attempting, either explicitly or implicitly, to answer the following four questions:

1. What is the mind's program? That is, what is the set of instructions by which consciousness, qualia, and other mental phenomena arise in the brain?

2. In which instruction set is that program written? That is, what is the syntax of the basic functional unit of the mind?

3. What constitutes the hardware of the mind? That is, what is the basic functional unit of the mind? What structure in the brain implements the ISA of the mind?

4. Which physical properties of the brain are irrelevant to the operation of its basic functional unit? That is, which physical properties of the brain can be left out of a complete simulation of the mind?


We could restate the basic premise of AI as the claim that the mind is an instantiation of a Turing Machine, and then equivalently summarize these four questions by asking: (1) What is the Turing Machine of which the mind is an instantiation? And (2) What physical structure in the brain implements that Turing Machine? When, and only when, these questions can be answered will it be possible to program those answers into a computer and so to achieve whole-brain emulation.

Limitations of the Ideal Model
You might object that this analysis is far too literal in its treatment of the mind as a computer. After all, don't AI researchers now appreciate that the mind is squishy, indefinite, and difficult to break into layers (in a way that this smooth, ideal model and "Good Old-Fashioned AI" don't acknowledge)?

There are two possible responses to this objection. Either mental phenomena (including intelligence, but also consciousness, qualia, and so forth) and the mind as a whole are instantiations of Turing Machines and therefore susceptible to the model and to replication on a computer, or they are not.

If the mind is not an instantiation of a Turing Machine, then the objection is correct, but the highest aspirations of the AI project are impossible.

If the mind is an instantiation of a Turing Machine, then the objection misunderstands the layered nature of physical and computer systems alike. Specifically, the objection understands that AI often proceeds by examining the top layer of the model — the "program" of the mind — but then denies this layer's relationship to the layers below it. This objection essentially makes the same dualist error often attributed to AI critics like John Searle: it argues that if a computational system can be described at a high level of complexity bearing little resemblance to a Turing Machine, then it does not have some underlying Turing Machine implementation. (There is a deep irony in this objection — about which, more in a later post.)

There is a related question about this Ideal Model: Suppose we can ascertain the Turing Machine of which the mind is an instantiation. And suppose we then execute this program on a digital computer. Will the computer then be a mind? Will it be conscious? This is an open question, and a vexing and tremendously important one, but it is sufficient simply to note here that we do not know for certain whether such a scenario would result in a conscious computer. (If it would not, then certain premises of the Ideal Model would be false — but about this, more, also, in a later post.)

A third, and much more pressingly relevant, note about the model: for reasons similar to why we do not know whether simulating the brain at a low level will give rise to the high-level phenomena of the mind, it is also the case that even if and when we create a completely accurate model of the brain, we will not necessarily understand the mind. This is, again, because of the layered nature of physical and computational systems. It is just as difficult to understand a low-level simulation of a complex system as it is to understand the original physical system. In either case, higher-level behavior must be additionally understood — just as looking at the instructions executing on a computer processor allows you to completely predict the program's behavior but does not necessarily allow you to understand its higher-level structure; and just as Newton would not necessarily have discerned his mechanical theories by making a perfectly accurate simulation of an apple falling from a tree. (I explained this layering in more depth in this recent New Atlantis essay.)

Achieving and Approximating the Ideal Model
Again, the claim in this post is that the Ideal Model presented here is the implicit model on which the whole-brain emulation project proceeds. Which brings us back to the "cat-brain" controversy.

When we attempt to analyze how the paper's authors "fill in the blanks" of the Ideal Model, we see that they seem to define each of the levels (in some cases explicitly, in others implicitly) as follows: (1) the neuron is the basic functional unit of the mind; (2) everything below the level of the neuron is irrelevant; (3) the neuron's computational power can be accurately replicated by simulating only its electrical action potential; and (4) the program of the mind is encoded in the synaptic connections between neurons. The neuron-level simulation appears to be quite simple, omitting a great deal of detail without justifying the omissions or explaining whether the omitted details are relevant and, if they are, what the effects of leaving them out might be.
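For a sense of what a simplification of this kind looks like in practice, here is a minimal sketch in Python of a leaky integrate-and-fire point-neuron network. It is emphatically not the model used in the paper, whose details I am not reproducing; it only illustrates the general style, in which each neuron is reduced to a single membrane voltage and the "program" lives entirely in a matrix of synaptic weights.

```python
import numpy as np

# A toy leaky integrate-and-fire network, illustrative only (not the model
# used in the "cat-brain" paper). Each neuron is a single number (its membrane
# voltage); dendritic geometry, ion channels, glia, neuromodulators, and so on
# are simply left out.
rng = np.random.default_rng(0)
n = 100
weights = rng.normal(0.0, 0.1, size=(n, n))   # "the program": synaptic connections
voltage = np.zeros(n)
threshold, leak = 1.0, 0.95

for step in range(200):
    spikes = voltage > threshold                       # which neurons fired last step
    voltage[spikes] = 0.0                              # reset the neurons that fired
    external = (rng.random(n) < 0.05).astype(float)    # sparse random input
    voltage = leak * voltage + weights @ spikes.astype(float) + external

print("neurons above threshold at the end:", int((voltage > threshold).sum()))
```

In this style of model, the weight matrix plays the role of the "program," the point neuron plays the role of the basic functional unit, and everything else about the biological cell is declared irrelevant by omission; whether that declaration is justified is exactly what the paper does not establish.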

Aside from the underlying question of whether such an Ideal Model of the mind really exists — that is, of whether the mind is in fact a computer — the most immediate question is: How close have we come to filling in the details of the Ideal Model? As the "cat-brain" example should indicate, the answer is: not very close. As Sally Adee writes in IEEE Spectrum:
Jim Olds (who directs George Mason University's Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. "We need an Einstein of neuroscience," he says, "to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity." Here's what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?...

No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. "We do not have a definition of consciousness," says [Dartmouth Brain Engineering Laboratory Director Richard] Granger. "Or, worse, we have fifteen mutually incompatible definitions."
The sorts of approximation seen in the "cat-brain" case, then, are entirely understandable and unavoidable in current attempts at whole-brain emulation. The problem is not the state of the art, but the overconfidence in understanding that so often accompanies it. We really have no idea yet how close these projects come to replicating or even modeling the mind. Note carefully that the uncertainty exists particularly at the level of the mind rather than the brain. We have a rather good idea of how much we do and do not know about the brain, and, in turn, how close our models come to simulating our current knowledge of the brain. What we lack is a sense of how this uncertainty aggregates at the level of the mind.

Many defenders of the AI project argue that it is precisely because the brain has turned out to be so "squishy," indefinite, and unlike a computer that approximations at the low level are acceptable. Their argument is that the brain is hugely redundant, designed to give rise to order at a high level out of disorder at a low level. This may or may not be the case, but again, if it is, we do not know how this happens or which details at the low level are part of the "disorder" and thus safely left out of a simulation. The aggregate low-level approximations may simply be filtered out as noise at a high level. Alternatively, if the basic premise that the mind is a computer is true, then even minuscule errors in approximation of its basic functional unit may aggregate into wild differences in behavior at the high level, as they easily can when a computer processor malfunctions at a small but regular rate.
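The worry in that last sentence is easy to illustrate. In the sketch below, the logistic map stands in, purely hypothetically, for whatever the mind's basic functional unit turns out to be; an initial error of one part in a trillion, far smaller than any approximation a neuron model would plausibly make, swamps the trajectory within a hundred steps.

```python
# Two runs of the same simple iterated map, differing only by one part in a
# trillion at the start: a stand-in for a tiny, systematic error in the
# approximation of a system's basic functional unit.
def iterate(x, steps=100, r=3.9):
    history = []
    for _ in range(steps):
        x = r * x * (1 - x)        # logistic map in its chaotic regime
        history.append(x)
    return history

exact = iterate(0.2)
perturbed = iterate(0.2 + 1e-12)
for step in (10, 30, 60, 100):
    print(step, round(exact[step - 1], 4), round(perturbed[step - 1], 4))
# By roughly step 60 the two trajectories bear no resemblance to each other.
```

Whether the brain behaves more like this or more like a noise-tolerant system that washes such errors out is exactly what we do not yet know; the point of the sketch is only that the answer cannot be assumed.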

Until we have better answers to these questions, most of the claims such as those surrounding the "cat brain" should be regarded as grossly irresponsible. That the simulation in question is "smarter than a cat" or "matches a cat's brainpower" is almost certainly false (though to my knowledge no efforts have been made to evaluate such claims, even using some sort of feline Turing Test — which, come to think of it, would be great fun to dream up). The claim that the simulation is "cat-scale" could be construed as true only insofar as it is so vaguely defined. Such simulations could rather easily be altered to further simplify the neuron model, shifting computational resources to simulate more neurons, resulting in an "ape-scale" or "human-scale" simulation — and those labels would be just as meaningless.

When reading news reports like many of those about the "cat-brain" paper, the lay public may instinctively take the extravagant claims with a grain of salt, even without knowing the many gaps in our knowledge. But it is unfortunate that reporters and bloggers who should be well-versed in this field peddle baseless sensationalism. And it is unfortunate that some researchers should prey on popular ignorance and press credulity by making these claims. But absent an increase in professional sobriety among journalists and AI researchers, we can only expect, as Jonah Lehrer has noted, many more such grand announcements in the years to come.

Tuesday, December 29, 2009

Happy Birthday, Nanotechnology?

Fifty years ago today, on December 29, 1959, Richard P. Feynman gave an after-dinner talk in Pasadena at an annual post-Christmas meeting of the American Physical Society. Here is how Ed Regis describes the setting of the lecture in his rollicking book Nano:

In the banquet room [at the Huntington-Sheraton hotel in Pasadena], a giddy mood prevails. Feynman, although not yet the celebrity physicist he’d soon become, was already famous among his peers not only for having coinvented quantum electrodynamics, for which he’d later share the Nobel Prize, but also for his ribald wit, his clownishness, and his practical jokes. He was a regular good-time guy, and his announced topic for tonight was “There’s Plenty of Room at the Bottom” — whatever that meant.

“He had the world of young physicists absolutely terrorized because nobody knew what that title meant,” said physicist Donald Glaser. “Feynman didn’t tell anybody and refused to discuss it, but the young physicists looked at the title ‘There’s Plenty of Room at the Bottom’ and they thought it meant ‘There are plenty of lousy jobs in physics.’”

The actual subject of Feynman’s lecture was making things small and making small things.

What I want to talk about is the problem of manipulating and controlling things on a small scale.

As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord’s Prayer on the head of a pin. But that’s nothing; that's the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction....

Feynman went on to imagine fitting the entire Encyclopaedia Britannica on the head of a pin, and even storing all the information in all the world’s books “in a cube of material one two-hundredth of an inch wide — which is the barest piece of dust that can be made out by the human eye.” He then described the miniaturization of computers, of medical machines, and more. He deferred on the question of how these things would technically be accomplished:

I will not now discuss how we are going to do it, but only what is possible in principle — in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.

[Image: Richard Feynman, seen here on the cover of the February 1960 issue of 'Engineering and Science,' in which his 1959 talk 'There's Plenty of Room at the Bottom' was first published.]

And Feynman only barely touched on the question of why these things should be pursued — saying that it “surely would be fun” to do them. He closed by offering two prizes of a thousand dollars each. One would go to the first person to make a working electric motor that was no bigger than one sixty-fourth of an inch on any side; Feynman awarded that prize less than a year later. The other would go to the first person to shrink a page of text to 1/25,000 its size (the scale required for fitting Britannica on the head of a pin); Feynman awarded that prize in 1985.

Feynman’s lecture was published in Engineering and Science in 1960 —  see the cover image at right — and it’s available in full online here. The lecture is often described as a major milestone in the history of nanotechnology, and is sometimes even credited with originating the idea of nanotechnology — even though he never used that word, even though others had anticipated him in some of the particulars, and even though the historical record shows that his talk was largely forgotten for about two decades. A few historians have sought to clarify the record, and none has done so more definitively than Christopher Toumey, a University of South Carolina cultural anthropologist. (See, for instance, Toumey’s short piece here, which links to two of his longer essays, or his recent Nature Nanotechnology piece here [subscription required].) Relying on journal citations and interviews with researchers, Toumey shows just how little direct influence Feynman’s lecture had, and compares Feynman’s case to that of Gregor Mendel: “No one denies that Mendel discovered the principles of genetics before anyone else, or that he published his findings in a scientific journal ... but that ought not to be overinterpreted as directly inspiring or influencing the later geneticists” who rediscovered those principles on their own.

Toumey suggests that nanotechnology needed “an authoritative founding myth” and found it in Feynman. This is echoed by UC-Davis professor Colin Milburn in his 2008 book Nanovision. Milburn speaks of a “Feynman origin myth,” but then puts a slightly more cynical spin on it:

How better to ensure that your science is valid than to have one of the most famous physicists of all time pronouncing on the “possibility” of your field.... The argument is clearly not what Feynman said but that he said it.

Eric Drexler, whose ambitious vision of nanotechnology is certainly the one that has most captured the public imagination, has invoked the name of Feynman in nearly all of his major writings. This is not just a matter of acknowledging Feynman’s priority. As Drexler told Ed Regis, “It’s kind of useful to have a Richard Feynman to point to as someone who stated some of the core conclusions. You can say to skeptics, ‘Hey, argue with him!’”

How, then, should we remember Feynman’s talk? Fifty years later, it still remains too early to tell. The legacy of “Plenty of Room” will depend in large part on how nanotechnology — and specifically, Drexler’s vision of nanotechnology — pans out. If molecular manufacturing comes to fruition as Drexler describes it, Feynman will deserve credit for his imaginative prescience. If nothing ever comes of it — if Drexler’s vision isn’t pursued or is shown to be technically impossible — then Feynman’s lecture may well return to the quiet obscurity of its first two decades.

[UPDATE: Drexler himself offers some further thoughts on the anniversary of the Feynman lecture over on his blog Metamodern.]

Wednesday, December 23, 2009

Bad Humbug, Good Humbug, and Bah Humbug

Blogger Michael Anissimov does not believe in Santa Claus, but he does believe in the possibility, indeed the moral necessity, of overcoming animal predation. To put it another way, he does not believe in telling fantasy stories to children if they will take those stories to be true, but he has no compunctions about telling them to adults with hopes that they will be true.

An obvious difference Mr. Anissimov might wish to point out is that adults are more likely than children to be able to distinguish fantasy from reality. He can (and does) submit his thoughts to their critical appraisal. While that difference does not justify what Mr. Anissimov regards as taking advantage of children by telling them convincing fantasies, it does suggest something about the difference between small children and adults. Small children cannot readily distinguish between fantasy and reality. In fact, there is a great deal of pleasure to be had in the failure to make that distinction. It could even be true that not making it is an important prelude to the subsequent ability to make it. Perhaps those who are fed from an early age on a steady diet of the prosaic will have more trouble distinguishing between the world as it is and as they might wish it to be. But here I speculate.

In any case, surely if one fed small children on a steady diet of stories like the one Mr. Anissimov tells about overcoming predation, they might come to believe such stories as uncritically as other children believe in Santa Claus. I can easily imagine their disappointment upon learning the truth about the immediate prospects of lions lying down with lambs. We’d have to be sure to explain to them very carefully and honestly that such a thing will only happen in a future, more or less distant, that they may or may not live to see — even if small children are not all that good at understanding about long-term futures and mortality.

But in light of their sad little faces it would be a hard parent indeed who would not go on to assure them that a fellow named Aubrey de Grey is working very hard to make sure that they will live very long lives indeed so that maybe they will see an end to animal predation after all! But because “treating them as persons” (in Mr. Anissimov’s phrase) means never telling children stories about things that don’t exist without being very clear that these things don’t exist, it probably wouldn’t mean much to them if we pointed out that Mr. de Grey looks somewhat like an ectomorphic version of a certain jolly (and immortal) elf.

Friday, December 18, 2009

Arma virumque cano

Beneath Adam’s post “On Lizardman and Liberalism,” commenter Will throws down the gauntlet: “[F]ind one transhumanist who thinks we should be allowed to embed nuclear weapons in our bodies.” I for one am ready to concede that I know of no such case. But I’m moved to wonder, why not? Why should a libertarian transhumanist like Anders Sandberg — who believes that “No matter what the social circumstances are, it is never acceptable to overrule someone’s right to ... morphological freedom” — be unwilling to defend the right of an individual to embed a nuclear weapon? Assuming Sandberg would not be so willing, two alternatives occur to me. Either, like many people, he is more decent than his principles would lead one to believe, and/or he has not explored the real implications of his principles.

To some, this case may seem absurd — why would anyone want to turn himself into a bomb? Why indeed? But turning oneself into a bomb is already a reality in our world. And the underlying moral relativism of Mr. Sandberg’s absolute prohibition is of a piece with the progressive moral “wisdom” that asserts “one man’s terrorist is another man’s freedom fighter.” So if indeed Mr. Sandberg would flinch at the implantation of a bomb of any sort, it might be because he is living off moral capital that his own principle is busy degrading. He may be more decent than his principles, but his decency may not survive his principles.

[Image: Painting by Charles Bittinger of an atomic test at Bikini Atoll; courtesy U.S. Navy.]

The commenter Will steps into the breach with his own guiding idea: “Most transhumanists would probably advocate something along the lines of ‘complete morphological freedom as long as it doesn’t violate the rights of other conscious entities’” (emphasis added). But I don’t see how from this libertarian perspective the implantation of a bomb (properly shielded, if nuclear) violates the rights of any conscious entities any more than would carrying about a phial of poison. Will and I can agree that the use of that bomb in a public space would be a Bad Thing. But nothing in Will’s principle (other than a little fallout, perhaps) would prohibit some transhuman of the future from implanting the bomb, hopping into a boat, sailing to the mid-Atlantic outside of the shipping lanes, making sure there are no cetaceans nearby, calling in his coordinates to the by-then doubtless ubiquitous surveillance satellites, and going out in a blaze of glory on whatever will be the equivalents of Facebook or YouTube. Sounds potentially viral to me. Surely the right to blow oneself up under carefully controlled circumstances does not represent the aspirations of any large number of transhumanists, but surely their principles would require them to defend even this minority taste.

Tuesday, December 15, 2009

The View from the Dollhouse

[NOTE: From time to time we will invite guests to contribute to Futurisms. Our first guest post, below, comes from Brian J. Boyd, a graduate student at Oxford and a former New Atlantis intern.]

Fox has done the world two great injustices in canceling first Joss Whedon’s sublime series Firefly and now his intriguing show Dollhouse. Since the final few episodes of Dollhouse are now airing, this seems the right time to reconsider the show and what it suggests about human nature and the technologies of tomorrow.

This time around, Whedon takes us a handful of years into the future, to an America where things look familiar on the surface, but more and more of the people one meets are actually “dolls” — persons whose memories have been erased and identities overwritten by an organization that hires them out to very rich clients to be used as anything from sexual playthings to foster mothers. After each “engagement” the doll returns to be wiped clean and imprinted (that is, reprogrammed) for the next encounter. While the show does tacitly condemn this new form of slavery, Whedon is sensitive to the potential appeal of the imagined technology. In a piece about Dollhouse in the transhumanist magazine H+ a couple of months ago, Erik Davis noted that

The show’s ambivalence about such “posthuman” technologies is captured by the character who does all the wiping and remixing: a smug, immature, and charmingly nerdish wetware genius named Topher Brink [pictured at right above], whose simultaneously dopey and snarky incarnation by the actor Fran Kranz reflects the weird mix of arrogance and creative exuberance that inform so much manipulative neuroscience.

Despite the technology’s potential appeal, Dollhouse has also from the beginning emphasized the potential for abuse of the “doll” technology. In the third episode, Paul Ballard (our hero FBI agent) predicted disaster: “We split the atom; we make a bomb. We come up with anything new, the first thing we do is — destroy, manipulate, control. It’s human nature.” And the Season One finale gives us a glimpse of how things will turn out in Whedon’s fictional world: the episode shows a flashforward to 2019, when Los Angeles is in flames and a ragged band of survivors recoils in horror from any “tech” they find that might house a computer chip.

As the series progresses, the original “bad guys” of the L.A. dollhouse, whom we have been brought to see as complicated human beings with mostly good intentions, are increasingly pitted against their superiors in the aptly named Rossum Corporation, whose intoxication with the power of its technology has become total. Whedon strongly implies a slippery slope here: at first, the dolls are coerced but still nominally volunteers for a period of indentured servitude; in time, their masters grow reluctant to uphold their end of the bargain, unwilling to relinquish power.

Whereas Whedon’s Firefly, with its horses and handguns, turned to romanticism in its effort to reach an accord between modernity and tradition, Dollhouse explores the dangers of new technology without (so far) offering a way out. After all, “it’s human nature” that is the problem here; the technology is merely an expression and culmination of our natural desire to control. In his H+ piece, Davis says that “all of us are dolls sometimes, and dollhouse engineers other times” — in other words, manipulation, whether accomplished through political, theological, or emotional means, is part and parcel of the human experience. Davis has a point. But rather than suggesting that flawed human nature need be remade from scratch, Dollhouse compellingly depicts how the desire to remake our selves and our world can lead to a dismal deal with the devil: Topher sacrifices all the comforts of a normal life for the opportunity to pursue his research and refine his skills on live subjects. But when he comes to see his subjects not as toys but as people, his conscience leads him to join with those who are attempting to put the genie back into the bottle and contain the technology he helped create, vainly striving to undo the harm that has been done. “You’re human,” Topher says to one of his creations in an attempt to comfort her when she cannot cope with the discovery that she is a doll. Her rebuke also serves as a warning to those who think they can improve upon humanity: “Don’t flatter yourself.”
– Brian J. Boyd

Monday, December 7, 2009

On Lizardman and Liberalism

In a post called “Getting Used to Hideousness,” Mike Treder makes three points. Each is provocative — and flawed.

First, he says, until relatively recently, people “with gross disabilities” or deformities “were expected to stay out of sight of the general public,” a closeting that Mr. Treder attributes to “the Victorian preference for order and rectitude.” But nowadays, he says, we have become more tolerant of people who “have shocking appearances.” (By way of example, he includes several pictures.)

Second, he moves from those whose unusual appearance was not their choice to those who intentionally alter their looks. He describes a range of body modifications — from makeup to orthodontics to plastic surgery to this sort of thing — and says that nearly everybody modifies himself in some way. He then envisions far more radical body modifications and suggests that there is no moral difference between any of them — they all alter what nature has given us, the only difference is “a matter of degree.”

Third, Mr. Treder invokes, with hope, the transhumanist doctrine of “morphological freedom.” He envisions a day when we will understand that individuals who don’t look at all “normal” will nonetheless be understood to be not freaks but “human beings with normal human feelings.”

Let me briefly respond to each of Mr. Treder’s main points in turn.

First, it is far too simplistic to say that we are becoming more tolerant of the different, deformed, and disabled in our midst. Mr. Treder includes with his post this picture — the lovely face of a smiling young girl with Down syndrome. But faces like hers are becoming ever rarer. Some 90 percent of fetuses diagnosed with Down syndrome are being aborted. This is not the mark of a growing tolerance or compassion; it is a silent purge, enabled by modern technology, of a class of human beings deemed unworthy of life.

Second, Mr. Treder’s argument about body modification is just a simplistic equivalency. The reasoning seems to go like this: Makeup and orthodontics and breast implants and (someday) extra arms and implanted wings are all unnatural, and so if you approve of any body modification you have no standing to criticize any other body modification.

But of course we make moral distinctions between different kinds of body modifications all the time — not based on grounds of “naturalness,” but based on the modification itself (Is it temporary or permanent? Is it external or invasive? Is it therapeutic? What is its cost?), based on the person being modified (Man or woman? Young or old? Mentally healthy?), and based on social context (What is this modification meant to signal? Is it tied to a particular cultural or social setting?). There is no simple checklist for deciding whether a bod-mod is morally licit, but we all make such judgments now, we make them for complicated reasons that reach beyond reflexive repugnance, and we will continue to make them in future eras of modification.

What Mr. Treder is really after is greater tolerance, an acceptance of people who look different. And this brings us to his invocation of “morphological freedom,” a supposed right to modify one’s body however one wishes. Like its transhumanist twin sister “cognitive liberty,” the concept of morphological freedom is an attempt to push the tenets of modern liberalism to their furthest logical extreme. In a 2001 talk elucidating and advocating morphological freedom, Swedish transhumanist Anders Sandberg stressed the centrality of tolerance:

No matter what the social circumstances are, it is never acceptable to overrule someone’s right to ... morphological freedom. For morphological freedom — or any other form of freedom — to work as a right in society, we need a large dose of tolerance.... Although peer pressure, prejudices, and societal biases still remain strong forces, they are being actively battled by equally strong ideas of the right to “be oneself,” the desirability of diversity, and an interest in the unusual, unique, and exotic.

That little taste of Mr. Sandberg’s talk exposes the basic problem of “morphological freedom” (and more generally, the fundamental flaw of any extreme liberalism or libertarianism). The problem is that extreme liberalism destroys the foundations upon which it depends.

Consider: Mr. Sandberg scorns shared social and civic values. He derides them as “peer pressure, prejudices, and societal biases” and observes with satisfaction that they are being “actively battled” by an expansion of tolerance. But tolerance is itself a shared value, one that must be inculcated and taught and reinforced and practiced. A freedom so extreme that it rejects all norms, wipes away shared mores, and undoes social bonds is a freedom that erodes tolerance — and thus topples itself.

Tuesday, December 1, 2009

The Mainstreaming of Transhumanism

Congratulations to Nick Bostrom, Jamais Cascio, and Ray Kurzweil for being recognized as three of Foreign Policy magazine’s “Top 100 Global Thinkers.” Once upon a time this kind of notoriety might not have helped the reputation of an Oxford don in the Senior Common Room (do such places still exist?), but even were that still the case, it must be tremendously satisfying for Professor Bostrom qua movement builder to get such recognition. The mainstreaming of transhumanism, noted (albeit playfully) by Michael Anissimov, proceeds apace. Ray Kurzweil did not win The Economist’s Innovation Award for Computing and Telecommunications because of his transhumanist advocacy, but apparently nobody at The Economist thought that it would in any way embarrass them. He’s just another one of those global thinkers we admire so much.

Of course such news is also good for the critic. I first alluded to transhumanist anti-humanism in a book I published in 1994, so for some time now I’ve been dealing with the giggle and yuck factors that the transhumanist/extropian/Singularitarian visions of the future still provoke among the non-cognoscenti. Colleagues, friends and family alike don’t quite get why anybody would be seriously interested in that. I’ve tried to explain why I think these kinds of arguments are only going to grow in importance, but now I have some evidence that they are in fact growing.

[Image: The Emperor Has No Clothes]

Which leads me to Mr. Anissimov’s question about what it is that I’m hoping to achieve. My purpose (and here I only speak for myself) is not to predict, develop, or advocate the specific public policies that will be appropriate to our growing powers over ourselves. In American liberal democracy, the success or failure of such specific measures is highly contingent under the best of circumstances, and my firm belief that on the whole people are bad at anticipating the forces that mold the future means that I don’t think we are operating under the best of circumstances. So my intention is in some ways more modest and in some ways less. Futurisms is so congenial to me because I share its desire to create a debate that will call into question some of the things that transhumanists regard as obvious, or at least would like others to regard as obvious. I’ve made it reasonably clear that I think transhumanism raises many deep questions without itself going very deeply into them, however technical its internal discussions might sometimes get. That’s the modest part of my intention. The less modest part is a hope that exposing these flaws will contribute to creating a climate of opinion where the transhumanist future is not regarded as self-evidently desirable even if science and technology develop in such a way as to make it ever more plausible. So if and when it comes time to make policies, I want there to be skeptical and critical ideas available to counterbalance transhumanist advocacy.

In short, I’m happy to be among those who are pointing out that the emperor has no clothes, even if, to those who don’t follow such matters closely, I might look like the boy who cried wolf.

Monday, November 30, 2009

The New Bioethics Commission

Last week, the White House announced the formation of a new Presidential Commission for the Study of Bioethical Issues. It will have a chairman and vice chairman — and at least at first, both will be university administrators: Amy Gutmann, the president of U Penn, and James W. Wagner, the president of Emory.

The executive order formally creating the commission — what you might think of as the charter explaining the commission’s purpose and powers — was published today. It emphasizes policy-relevance: the commission is tasked with “recommend[ing] legal, regulatory, or policy actions” related to bioethics. This stands in contrast to its immediate predecessor, the President’s Council on Bioethics, the charter for which emphasized exploring and discussing over recommending. Since the former council’s website (bioethics.gov) has been taken down, we are pleased to announce that we have archived all of its publications here on the New Atlantis site. (The Council’s impressive website, which included transcripts of all its public meetings, will hopefully be restored somewhere online in its entirety soon; in the meantime, interested parties will have to make do with the incomplete record in the Internet Archive.)

The former council’s report that is most relevant to this blog is Beyond Therapy, a 2003 consideration of human enhancement. Perhaps most striking about that report is its modus operandi: instead of beginning with an analysis of novel and controversial enhancement technologies, the council chose to begin by examining human functions and activities that have been targeted for enhancement. “By structuring the inquiry around the desires and goals of human beings, we adopt the perspective of human experience and human aspiration, rather than the perspective of technique and power. By beginning with long-standing and worthy human desires, we avoid premature adverse judgment on using biotechnologies to help satisfy them.” Beyond Therapy is a powerful document, and it rewards careful attention. (We published a symposium of essays in response to the book.)

We will have more to say about the former council in the months ahead. But for now, one final amusing observation about the new commission: If you look closely at the executive order creating it, you will see that among the issues it is invited to discuss is “the application of neuro- and robotic sciences.” That’s right — President Obama’s new bioethics commission has been explicitly invited to take a look at robotics. Just the latest indication that the administration is worried about the looming robot threat.

Sunday, November 29, 2009

Looking for a Serious Debate

Over on his blog Accelerating Future, Michael Anissimov has a few criticisms of our blog. Or at least, a blog sharing our blog’s name; he gets so many things wrong that it seems almost as though he’s describing some other blog. And Mr. Anissimov’s comments beneath his own post range from ill-informed and ill-reasoned to ill-mannered and practically illiterate. They are beneath response — except to note that Mr. Anissimov should know better. But putting aside those comments and the elementary errors that were likely the result of his general carelessness in argument — like misattributing to Charlie something that I wrote — some of the broader strokes of Mr. Anissimov’s ignorant and crude post deserve notice.

First, Mr. Anissimov’s post is intellectually lazy. To label an argument “religious” or “irreligious” does not amount to a refutation. Nor can you refute an argument by claiming to expose the belief structures that undergird it.

Second, Mr. Anissimov’s post is intellectually dishonest. He approvingly quotes an article that claims that “all prominent anti-transhumanists — [Francis] Fukuyama, [Leon] Kass, [and Bill] McKibben — are religious.” But anyone who has read those three thinkers’ books and essays will know that they make only publicly-accessible arguments that do not rely upon or even invoke religion. And more to the point, it is an indisputable matter of public fact that none of us here at Futurisms has made the arguments that Mr. Anissimov is imputing to us. None of us has ever argued that we object to transhumanism because “through suffering [we] will enter paradise after [we] are dead.” Not even close.

Once Mr. Anissimov has (falsely) established that those of us who disagree with him do so for religious reasons, he claims that we “want the same damn thing” that he wants. Except that while he wants to achieve immortality through science, his critics “think they can get it through magic.”

To the contrary, our arguments have in fact been humanistic and what you might call earthly — hardly magical thinking or appeals to paradise. The very distinction between humanists and transhumanists should make plain whose beliefs are grounded in earthly affairs and whose instead depend on appeals to fantasy. We are skeptical of transhumanist promises of paradise because their arguments are, by and large, based on faith and fantasy instead of reason and fact; because what they hope to deliver would likely be something quite other than paradise if it became reality; and because the promise of paradise can be used to justify things that ought not be tolerated.

It is too much to ask for Mr. Anissimov to be a charitable reader of our arguments, but if he wants to be taken seriously he should make an effort to seem capable of at least comprehending them. Until he does, it is a peculiar irony that a transhumanist would invoke religion in order to avoid engaging in a substantive debate with his critics.

Monday, November 23, 2009

The Significance of Man

Over at Gizmodo, Jesus Diaz has called attention to a genuinely lovely animation of Earth’s weather from August 17-26, 2009. He notes in passing, “It also shows how beautiful this planet is, and how insignificant we are.”

[Image: Scene from '2001: A Space Odyssey']

There is something about pictures of Earth from space that seems to call forth this judgment all the time; it is equivalent, I suppose, to the “those people look like ants” wonderment that used to be so common when viewing a city from the top of its tallest building. That humans are insignificant is a particularly common idea among those environmentalists and atheists who consider that their opinions are founded in a scientific worldview. It is also widely shared by transhumanists, who use it all the time, if only implicitly, when they debunk such pretensions as might make us satisfied with not making the leap to posthumanity.

But in fact, just as those people were not really ants, so it is not clear that we are so insignificant, even from the point of view of a science that teaches us that we are a vanishingly small part of what Michael Frayn, in his classic novel Sweet Dreams, called “a universe of zeros.” Let’s leave aside all the amazing human accomplishments in science and technology (let alone literature and the arts) that are required for Mr. Diaz to be able to call our attention to the video, and the amazing human accomplishments likewise necessary to produce the video. The bottom line is, we are the only beings out there observing what Earth’s weather looks like from space. Until we find alien intelligence, there is arguably no “observing” at all without us, and certainly no observations that would culminate in a judgment about how beautiful something is. At the moment, so far as we know (that is, leaving aside faith in God or aliens) we are the way in which the universe is coming to know itself, whether through the lens of science or aesthetics. That hardly seems like small potatoes.

Sometimes transhumanists play this side of the field, too. Perhaps we are the enlivening intelligence of a universe of otherwise dead matter, and it is the great task of humanity to spread intelligence throughout the cosmos, a task for which we are plainly unsuited in our present form. So onward, posthuman soldiers, following your self-willed evolutionary imperative! Those of us left behind may at least come to find some satisfaction that we were of the race that gave birth to you dancing stars.

It is interesting how quickly we come back to human insignificance; in this case, it is transhumanism’s belief in our vast potential to become what we are not, which makes what we are look so small.

Tuesday, November 17, 2009

The “Anti-Progress” Slur

Adam already noted my brief response to a charge frequently made against “bioconservatives”: that we are against progress and for suffering. I’d like to say a little more in the hope of putting this tired rhetorical trope to rest. So let me just list, in no particular order and without any effort to be comprehensive or predictive, a dozen areas in which I personally think science and technology are contributing to genuine incremental improvements in the material conditions of human life — i.e., progress:

1. Agriculture: decreased/more focused inputs, increased crop yields, quality, diversity, and reliability
2. Water: increased quality of water supply and more reliable and efficient distribution
3. Energy: more efficient energy production, transport and consumption, greater diversity of energy supplies
4. Transportation: increased speed and safety, greater energy efficiency
5. Food supply: improved quality and diversity along with increased safety and storage time
6. Space travel: reduced cost of routine manned space operations, increased capacity in exploratory efforts
7. Construction: more durable materials, increased simplicity and speed of commercial, residential and infrastructure construction
8. Military: increased ability to detect explosives and weapons of mass destruction, and to preempt their use or contain their consequences; increased reliability and precision of munitions
9. Medicine: increased safety, scope and availability of vaccination; less invasive and more precise surgery; personalized and/or narrowly targeted medical treatments; simplified diagnostics and treatments; better prosthetic devices for physical and neurological disabilities (and yes, I know about the thin line between therapy and enhancement, but we'll deal with that another time).
10. Nature: improved ability to predict extreme weather and geological events
11. Waste: treating waste products as resources
12. Communication: continued increases in speed, bandwidth and information connectivity

I’m not trying to be controversial or surprising here, nor to suggest that transhumanists are against any of these developments. But I am pointing out that progress is not the same as “the latest thing” or the most outré imaginings. Progress is not about being at the bleeding edge for its own sake, or having an idea that only a few people believe in, or being attracted to what is strange and unique. Let’s try not to confuse being for some less-than-controversial kinds of progress with being against progress simply. Any and all change is not progress, and somebody’s claim that a given change is “progress” should be taken as an invitation to critical thinking about what would make human life better — and not as the last word.

Monday, November 16, 2009

Quick Links: Singularity University, Neuro-Trash, and more

• Imagine the frat parties: Ted Greenwald, a senior editor of the print edition of Wired magazine, has been attending and covering Singularity University for Wired.com. We’ll have more on this in the days ahead. Meanwhile, Nick Carr suggests some mascots for Singularity U.

• Squishy but necessary: Last month, Athena Andreadis, the author of the book The Biology of Star Trek, had a piece in H+ Magazine throwing cold water on some visions of brain uploading and downloading. Money quote: “It came to me in a flash that many transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons.... The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.”

• Thanks, guys: We’re pleased to have fans over at the “Fight Aging” website, where they say we “write well.” The praise warms our hearts, it truly does. We only wish that those guys were capable of reading well. Their post elicited this response from our Futurisms coauthor Charles Rubin: “Questioning what look to us to be harebrained ideas of progress does not make us ‘against progress.’ Nor does skepticism about ill-considered notions of the benefits of immortality make us ‘for suffering’ or ‘pro-death.’ It may be that the transhumanists really cannot grasp those distinctions, perhaps because of their apparently absolute (yet completely unjustified) confidence in their ability to foretell the future. Only if they have a reliable crystal ball — if they can know with certainty that their vision of the future will come to pass — does opposition to their vision of progress make us ‘anti-progress’ and does acknowledging the consequences of mortality make us ‘pro-death.’” Indeed. And I might add that such confidence in unproven predictive powers seems less like the rationality transhumanists claim to espouse than like uncritical faith.

• A sporting chance: Gizmodo has an essay by Aimee Mullins — an actress, model, former athlete, and double amputee — about technology, disability, and competition. Her key argument: “Advantage is just something that is part of sports. No athletes are created equal. They simply aren’t, due to a multitude of factors including geography, access to training, facilities, health care, injury prevention, and sure, technology.” Mullins concedes that it might be appropriate to keep certain technological enhancements out of sport, but she is “not sure” where to draw the line, and she advises not making any decisions about technologies before they actually exist.

• On ‘Neuro-Trash’: A remarkable essay in the New Humanist by Raymond Tallis on the abuse of brain research. Tallis starts off by describing how neuroscience is being applied to ever more aspects of human affairs. “This might be regarded as harmless nonsense, were it not for the fact that it is increasingly being suggested ... that we should use the findings of neurosciences to guide policymakers. The return of political scientism, particularly of a biological variety, should strike a chill in the heart.” Beneath this trend, Tallis writes, lies the incorrect “fundamental assumption” that “we are our brains.” (Vaughan over at MindHacks describes Tallis’s essay as “barnstorming and somewhat bad-tempered.” Readers looking for more along these lines might also enjoy our friend Matt Crawford’s New Atlantis essay on “The Limits of Neuro-Talk.”)

• Calling Ringling Bros.: We’ve known for a long time that people talking on cell phones get so distracted that they can become oblivious to what’s physically around them — entering a state sometimes called “absent presence.” In the October issue of Applied Cognitive Psychology, a team of researchers from Western Washington University reported the results of an experiment observing and interviewing pedestrians to see if they noticed a nearby clown wearing “a vivid purple and yellow outfit, large shoes, and a bright red nose” as he rode a bicycle. As you would expect, cell phone users were pretty oblivious. Does this suggest that we’ll suffer from increasing “inattentional blindness” as we are bombarded with ever more stimuli from increasingly ubiquitous gadgets? Not necessarily: it turns out that pedestrians listening to music tended to notice the clown more than those walking in silence. The cohort likeliest to see the clown consisted of people walking in pairs.

• Metaphor creep: “If the brain is like a set of computers that control different tasks,” says an SFSU psychology professor, then “consciousness is the Wi-Fi network that allows different parts of the brain to talk to each other and decide which action ‘wins’ and is carried out.”

• Another kind of ‘Futurism’: This year marks the centenary of the international Futurist art movement. The 1909 Futurist Manifesto that kicked it all off is explicitly violent and even sexist in its aims (“we want to exalt movements of aggression, feverish sleeplessness, the double march, the perilous leap, the slap and the blow with the fist ... we want to glorify war — the only cure for the world...”) and critical of any conservative institutions (professors and antiquaries are called “gangrene”; museums, libraries, and academies are called “cemeteries of wasted effort, calvaries of crucified dreams, registers of false starts”). Central to the Futurist vision was a love of new technologies — and of all the speed, noise, and violence of the machine age.

Saturday, November 14, 2009

The Human Factor

There is a poll over at Gizmodo asking readers, “What Percentage of Our Body Would Have To Be Replaced Before We Ceased Being Human?”

The options that the poll offers are mostly percentages — 10 percent, 20 percent, and so on — which is pretty silly, since it suggests that percentages are a useful way of talking about the human organism. (What is “20 percent” of a human body? Is that by mass? Or volume? Or perhaps surface area?) The logic of the poll’s options leads the respondent to think about the question along the lines of the Sorites paradox: “Well,” a respondent might think, “I don’t see what the big difference would be between 30 percent and 40 percent... or between 40 and 50 percent... or between 50 and 60...”

Given those options, it’s no surprise that three quarters of the respondents (as of this writing) have instead picked the following choice: “You can take away nearly everything, but if our brains are replaced by machines, we cease being human.”

In the comments beneath the poll, some readers objected to the options they were given. Commenter “newgalactic” argues that the correct answer to the question is a figure lower than any among the poll’s options: “The ‘body,’” he writes, “has more ties to the ‘mind/soul’ than we realize.” (He also wonders whether the poll results might be skewed by the fact that most Gizmodo readers are “dorks/nerds” who are “less physically blessed,” while someone with a body like Brad Pitt might “be more inclined to attach his humanity to his physical body.”)

The comments also suggest that any attempt to take the poll’s silly question seriously must start by asking a deeper question: What does it mean to be human? In the first issue of The New Atlantis, bioethicist Gilbert Meilaender described some of the difficulties that the deeper question entails:

We might try to think of human beings (or the other animals) [as collections of parts], and, indeed, we are often invited to think of them as collections of genes (or as collections of organs possibly available for transplant), but we might also wonder whether doing so loses a sense of ourselves as integrated, organic wholes.

Even if we think of the human being as an integrated organism, the nature of its unity remains puzzling in a second way. The seeming duality of person and body has played a significant role in bioethics. As the language of “personhood” gradually came to prominence in bioethical reflection, attention has often been directed to circumstances in which the duality of body and person seems pronounced. Suppose a child is born who, throughout his life, will be profoundly retarded. Or suppose an elderly woman has now become severely demented. Suppose because of trauma a person lapses into a permanent vegetative state. How shall we describe such human beings? Is it best to say that they are no longer persons? Or is it more revealing to describe them as severely disabled persons? Similar questions arise with embryos and fetuses. Are they human organisms that have not yet attained personhood? Or are they the weakest and most vulnerable of human beings?

Related questions arise when we think of conditions often, but controversially, regarded as disabilities.... Notice that the harder we press such views the less significant becomes any normative human form. A head, or a brain, might be sufficient, if it could find ways to carry out at a high level the functions important to our life.

Such puzzles are inherent in the human condition, and they are sufficiently puzzling that we may struggle to find the right language in which to discuss that aspect of the human being which cannot be reduced to body. Within the unity of the human being a duality remains, and I will here use the language of “spirit” to gesture toward it. As embodied spirits (or inspirited bodies) we stand at the juncture of nature and spirit, tempted by reductionisms of various sorts. We have no access to the spirit — the person — apart from the body, which is the locus of personal presence; yet, we are deeply ill at ease in the presence of a living human body from which all that is personal seems absent. It is fair to say, I think, that, in reflecting upon the duality of our nature, we have traditionally given a kind of primacy to the living human body. Thus, uneasy as we might be with the living body from which the person seems absent, we would be very reluctant indeed to bury that body while its heart still beat.

A definition of human being based only on biological parts will fail to capture the unique nature of the living human. A definition based on biological functions will fail to include human beings who lack those specific functions. Indeed, any strictly biological definition will miss the qualitative aspects of what it means to be human — how we live and behave over the course of our lives; what we do and are capable of doing; what we feel and experience; and how all of it changes — in short, the phenomena of life. And so a rich understanding of what it means to be human might start with science but must go beyond it, seeking wisdom especially in the disciplines rightly called “the humanities.”

There can be no honest answer to the Gizmodo poll as it is phrased, and there is no easy answer to the deeper question of what it means to be human. But the search is rewarding — and, in a way, the search may itself be part of the answer.

(h/t Instapundit)

Friday, November 13, 2009

The more you know... (about radical life extension)

Keep your eyes peeled when you're using Hulu and Vimeo these days and you may notice the latest step in the life-extension crowd's attempt to march into the mainstream. The Methuselah Foundation has created four "public service announcements" that are now in rotation on the two sites.
The four spots are here, here, here, and here.
NOTE: The original ending to this post has been removed, as it referred to possibly misleading identifying details from a previous post, which have also been removed. See the postscript to that post here.

Thursday, November 12, 2009

Long Live the King

Aubrey de Grey, a great advocate of immortality, is not worried about “immortal tyrants” for three reasons. First, because tyrannicide will still be possible. Second, because the spread of democracy will preemptively forestall tyranny. Third, because one immortal tyrant may not be so bad as a succession of tyrants, where the next guy is worse than the last. Each argument shows characteristic limits of the transhumanist imagination.

As far as tyrannicide goes, like many transhumanists de Grey stops well short of thinking through the possible consequences of the change he proposes (we are all speculating here, but we can try to be thorough speculators). Remember that tyrants already tend to be fairly security-conscious, knowing that whatever happens they are still mortal. Why would the prospect of having power and immortality to lose make them less risk-averse? It seems rather more likely that the immortal tyrant will be extremely risk-averse and hence security-conscious, and therefore represent a very “hard target” for the assassin — who will have just as much to lose if his mission is unsuccessful. As it is, most people living under a tyrant just do their best to keep their heads down; tyrannicides are rare. Throw immortality into the mix, and they are likely to be rarer still.

As far as democracy goes, de Grey exhibits a confidence characteristic of transhumanists generally: he knows what the future holds. I would certainly join him in hoping that democracy is here to stay and increasingly the wave of the future, but I don’t know that to be true and I don’t know how anyone could know that to be true. The victory of democracy over tyranny in the twentieth century was a near thing. History tells us that good times readily give way to bad times. The belief that democracy represents a permanent cure to the problem of tyranny is facile, in the way that all easy confidence about the direction of history is facile.

Finally, de Grey falls back on the proposition ‘better the devil you know than the devil you don’t’ — better Lenin than Stalin, to use his example. Leaving aside the question of how different the two leaders actually were, here de Grey is apparently trying to be hard-headed: It may not be all sweetness and light when we’re all immortal after all! Like many transhumanists, he is not very good at moral realism. You have to wonder: would the character of the immortal tyrant really stay the same over time? If, as the old maxim holds, absolute power corrupts absolutely, it would seem very much more likely that life under an immortal tyrant would get worse.

Ultimately, the problem is not really just tyranny; it is evil. In his Wisconsin State Fair speech of 1859, Lincoln notes, “It is said an Eastern monarch once charged his wise men to invent him a sentence, to be ever in view, and which should be true and appropriate in all times and situations. They presented him the words: ‘And this, too, shall pass away.’ How much it expresses! How chastening in the hour of pride! — how consoling in the depths of affliction!” Immortal evil means a world where the prideful will never be chastened, and the afflicted only consoled by giving up the very boon that de Grey promises us.

The problem with defending death

Todd May has a short essay on death at the New York Times's Happy Days blog. The argument is age-old (so to speak), but he reiterates it in a concise, compelling, and beautiful way:
Immortality lasts a long time. It is not for nothing that in his story “The Immortal” Jorge Luis Borges pictures the immortal characters as unconcerned with their lives or their surroundings. Once you’ve followed your passion — playing the saxophone, loving men or women, traveling, writing poetry — for, say, 10,000 years, it will likely begin to lose its grip. There may be more to say or to do than anyone can ever accomplish. But each of us develops particular interests, engages in particular pursuits. When we have been at them long enough, we are likely to find ourselves just filling time. In the case of immortality, an inexhaustible period of time.

And when there is always time for everything, there is no urgency for anything. It may well be that life is not long enough. But it is equally true that a life without limits would lose the beauty of its moments. It would become boring, but more deeply it would become shapeless. Just one damn thing after another.

This is the paradox death imposes upon us: it grants us the possibility of a meaningful life even as it takes it away. It gives us the promise of each moment, even as it threatens to steal that moment, or at least reminds us that some time our moments will be gone. It allows each moment to insist upon itself, because there are only a limited number of them. And none of us knows how many.
Well put. But wouldn't Todd May's argument about the importance of omnipresent death in shaping our lives become somewhat twisted and strained if it actually were possible to halt aging (as life extension advocates believe will someday be possible)? It is one thing to argue for the wisdom of accepting death when it is an inevitability. But it would be very different to make a positive case for death when it is no longer inevitable.

In his blog post, May notes that "it is precisely because we cannot control when we will die, and know only that we will, that we can look upon our lives with the seriousness they merit." But, although we can already decide to die if we so choose, might it not be much harder to look upon our lives with the same seriousness if we had to control when we died? Whatever the choice, our lives would take on a farcical quality, either from the emptiness of living without limits or the tragic absurdity of choosing to die rather than face that prospect.

(Hat tip: Brian Boyd)

[Image: "Q" from Star Trek: The Next Generation, portrayed by John de Lancie]

Wednesday, November 11, 2009

Robotic sports writers

The Singularity Hub posts on a new program that can churn out sports news stories:
Called Stats Monkey, the new computer software analyzes the box scores, and play by plays to automatically generate the news article. It highlights key players and clutch plays and will even write an appropriate headline and find a matching photo for a [key] player!

... [I]t could work for every sport humans like to read about. Moreover, Stats Monkey could be adapted to write business stories, or conference updates, or other forms of professional journalism that rely heavily on numbers and analytics. Writing, it seems, is no longer immune from automation.
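
The Singularity Hub piece describes Stats Monkey only at a high level, but the recipe it hints at (pull structured numbers out of a box score, apply a few heuristics to pick a "key player" and a "clutch play," then pour the results into sentence templates) is simple enough to sketch. The snippet below is a toy illustration of that recipe, not the actual Stats Monkey code; every field name and heuristic in it is invented for the purpose.

```python
# A toy sketch of template-based game-story generation, in the spirit of the
# Stats Monkey description above. These box-score fields and the "key player"
# and "clutch play" rules are invented for illustration; they are not taken
# from the actual Stats Monkey system.

def write_game_story(box_score, plays):
    """Turn a box score and a play-by-play list into a headline and a lede."""
    winner, loser = sorted(box_score["teams"], key=lambda t: -t["runs"])
    # "Key player": whoever drove in the most runs for the winning team.
    key_player = max(winner["batters"], key=lambda b: b["rbi"])
    # "Clutch play": the last go-ahead play credited to the winning team.
    clutch = next((p for p in reversed(plays)
                   if p["team"] == winner["name"] and p["go_ahead"]), None)

    headline = f"{key_player['name']} lifts {winner['name']} past {loser['name']}"
    lede = (f"{winner['name']} beat {loser['name']} "
            f"{winner['runs']}-{loser['runs']}, "
            f"as {key_player['name']} drove in {key_player['rbi']} runs.")
    if clutch:
        lede += f" The decisive blow came in the {clutch['inning']} inning."
    return headline, lede

box_score = {"teams": [
    {"name": "Sharks", "runs": 5,
     "batters": [{"name": "J. Ortiz", "rbi": 3}, {"name": "M. Lee", "rbi": 1}]},
    {"name": "Owls", "runs": 3,
     "batters": [{"name": "T. Park", "rbi": 2}]},
]}
plays = [{"team": "Sharks", "inning": "7th", "go_ahead": True}]

print(*write_game_story(box_score, plays), sep="\n")
```

Even this crude version suggests why numbers-driven beats are the first to be automated: once the analysis has been reduced to sorting and thresholding, the prose is a fill-in-the-blanks exercise.
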
Robotic sports journalists will make a nice complement to robotic athletes. Now all we need are robotic spectators! Human inefficiency could be removed from sports altogether, and then we could wonder at the technical prowess of games just as we marvel at the skills of server stacks:



And hey, if things really work out, soon we won't need humans for writing Singularity blogs either.

Tuesday, November 10, 2009

Someone is WRONG on the Internet!

À la XKCD, several recent posts here on Futurisms have stirred up some lively debate in comment threads. In case you missed the action, the "Transhumanist Resentment Watch" has led to a deeper exploration of some of this blog's major themes — resentment, disease, and normalcy. A post on magic pills has sparked a discussion on medical economics. The question of libertarian enhancement continues to bounce back and forth. And my rather innocuous posting of an Isaac Asimov story has led to tangents on hedonism and accomplishment.

Unmanning the Front Lines

The recent incident at Fort Hood recalls to mind a proposition that has become a great truism since the terrorist attacks of September 11, one that should never be allowed to be merely a truism: how grateful we, usually civilians, should be to the first responders who run toward danger rather than away from it. The guardian virtues our military, police, and public-safety organizations inculcate are all the more to be admired because they stand in such stark contrast to the virtues centering on comfortable self-preservation that are the stock-in-trade of our deeply bourgeois regime. In other times and places, courage, discipline, and honor might have been seen as among the highest expressions of our humanity; but I dare say most of us most of the time see them as instrumentally useful to the peaceable pursuits that are the real business of life.

Courage of course requires being in harm’s way, and the normal qualms we might hope a decent chain of command would have about putting people in harm’s way are only heightened in our particular cultural environment. This point was brought home to me with great force when a student directed me to a recruitment video at the United States Navy Memorial website with the tagline “working every day to unman the front lines,” featuring the Navy’s remotely piloted drone technology. It would be churlish and wrongheaded to deny that such marvels are a wonderful way to avoid putting the lives of our sons and daughters at risk. But it would be foolish to ignore the double entendre as well. With the front lines unmanned, there will be less need of nerve, courage, and spiritedness — manly virtues that Officer Kimberly Munley, who took down the Fort Hood shooter, reminds us are not exclusively the province of men. And it is not the Navy alone, of course. The push to replace human soldiers and first responders with robotic devices is well underway in nearly all services (I don’t know that much is happening on the fire or emergency medicine fronts).

Battling ’bots may still be only a distant prospect, and right at the moment we plainly have no lack of fellow citizens willing and able to serve as our guardians (although some first responders, like volunteer fire services, might be an exception). But in her provocative book Systems of Survival, Jane Jacobs warns that the guardian virtues hang together, and if you tamper with one you risk undermining them all — a point Plato might well agree with. So we should be asking ourselves: What happens to virtues like honor, loyalty, or discipline when they are not only challenged from without by the bourgeois virtues, but from within; when a need for courage is seen by the guardians themselves as a sign of a defect in their ability to protect us without putting themselves in harm’s way? It is an awesome task to be responsible for the lives of others at the risk of one’s own life, and through the guardian virtues the terrible power of that task is directed and constrained. As much as we hope for a day when all men will live in peace, we are entitled to wonder whether that day will be brought closer by replacing the traditional terrors of battle with innovative methods of cold-blooded killing.

Monday, November 9, 2009

Singularity Summit videos

Videos of the talks from the 2009 Singularity Summit, which we covered extensively on this blog, are now available here. A few videos are still missing, but most of them are up.

The best videos (IMHO, as the kids say) are:
  • David Chalmers on principles of simulation and the Singularity (video / post)
  • Peter Thiel making the economic case for the Singularity (video / post)
  • And the discussion with Stephen Wolfram on the Singularity at the cosmic scale (video / post)

Also worthwhile, revealing, or at least entertaining:
  • Brad Templeton's talk was one of the most entertaining, ambitious, and plausible; the audience question segment was also particularly good (video / post)
  • Juergen Schmidhuber's talk on digitizing creativity was lively and engaging, if silly (video / post)
  • The segment of Michael Nielsen's talk where he describes the principles of quantum computing (video / post)

The Myth of Libertarian Enhancement, Cont'd

Our recent post about libertarian enhancement has received some pushback. For example, commenter Kurt9 says he believes that “the highest moral value in the universe is to pursue one’s own happiness and love of life.” (He says that anyone who disagrees with him is engaging in “sophistry for totalitarianism” — his attempt at peremptorily ending all debate.) For Kurt9, a libertarian, pursuing happiness and love-of-life means super-longevity: “I want radical life extension (multiple 1000 year life span). I want to cure aging and get free of it. I fail to see why you should have a problem with this.”

This gives us another opportunity to discuss libertarianism and transhumanism. Consider: If he were to pursue radical life extension in strict accordance with libertarian principles, he would have to do so free from restrictions imposed by others and without in turn restricting the choices of others. This might be possible if he were a brilliant scientist living out in a solitary shack in the woods without contact with human civilization.

"A Hunstman and Dogs" by Winslow Homer. Courtesy WorldVisitGuide.But living alone in a shack isn’t usually conducive to major medical advances. The actual realization of our commenter Kurt9’s dream would require a society in which thousand-year lifespans were not just attainable but available to guys like him. That would be a society very different from our own. To put it simply, consider all the social, cultural, and economic changes that accompanied the doubling of human life expectancy over the last century and a half, and imagine the turmoil that would be involved in suddenly adding another nine centuries to the lifespan. The changes involved would be radical, complex, far from uniformly good or bad, and extremely difficult to predict beforehand.

This actually hints at a deeper truth connected to how we think about the future. All human beings live in a particular time and place. Moreover, all our choices and our ways of life presuppose a particular society, culture, and set of institutions in which they can be realized. However much fun it might be for fiction or for a thought experiment, in real life it makes no sense to talk about how “the world as it is now” will be different from “the world as it is now with the single modification that one individual can choose to live for a thousand years.” It is a meaningless proposition, as much a practical absurdity as it would be for an ancient Roman to insist that it concerns no one else if he wants to invent and drive an automobile.

The commenter Kurt9 also says that “We have no desire to impose our dreams and choices on other[s]. We seek only the freedom to do our own thing.”

This libertarian “freedom to do our own thing” implies that each individual should be equally free to pursue his or her chosen way of life no matter what that choice is, so long as no one else is harmed. Yet as much as you might not want to impose your own choices on others, it is an inescapable fact of life that our choices do impinge and often impose on others. Consider the old saw about liberty — that your freedom to swing your fist ends at the tip of my nose. But in real life, a guy flailing around with clenched fists is going to alter the behavior of everyone in sight. Or, to pick a different example, my neighbor’s choice to mine coal in his backyard obtrudes upon my freedom to choose to live in a quiet neighborhood with unpolluted groundwater and high property values. Or, to offer an example more relevant to some of our readers, your personal choice to develop an artificial intelligence that can write useful computer programs will impinge on my freedom to choose to enjoy a fulfilling and lucrative career as a computer programmer.

Libertarian transhumanists don’t really seem to be interested in protecting each individual’s equal freedom to do and be what he wants. Rather, they are interested in defending their own prerogatives to pursue their particular choices to enhance themselves without any encumbrance or criticism. But even this narrower and more solipsistic version of libertarianism will ultimately have to contend with the fact that as individuals gain powers — particularly powers of the sort that would be available to hypothetical posthumans — they will also gain the ability to exercise those powers in spite of, and over, other individuals in ways that will be far more difficult to prevent, stop, or even detect.

Liberty, rightly understood, does need to be defended. But we must also recognize that the image of liberty that some libertarians hold — epitomized by the iconic rugged frontiersman — depends on a self-reliance and self-constitution that are quite alien to today’s world. We are now more socially, politically, and technologically enmeshed than ever before. And while government tyranny remains a serious concern, there are other kinds of tyranny — including freely-chosen technological tyranny — that we should remain vigilant against.

Friday, November 6, 2009

Defining ‘Cyborg’ Down

Wired Science has a story by Brandon Keim featuring the work of University of Chicago geoscientist Patrick McGuire. McGuire is working on “wearable AI systems and digital eyes that see what human eyes can’t.” So equipped, “space explorers of the future could be not just astronauts, but ‘cyborg astrobiologists.’” That phrase — “cyborg astrobiologist” — comes from the title McGuire and his team gave to the paper reporting their early results. In their paper, they describe developing a “real-time computer-vision system” that has helped them successfully to identify “lichens as novel within a series of images acquired in semi‑arid desert environments.” Their system also quickly learned to distinguish between familiar and novel colored samples.
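
What the paper calls novelty detection is, in rough outline, a matter of summarizing the color statistics of each new image, comparing them against the statistics of samples already seen, and flagging the outliers. The sketch below is a guess at the flavor of such a system, not McGuire’s actual algorithm; the histogram size, distance measure, and threshold are all invented for illustration.

```python
# A minimal sketch of color-based novelty detection, in the spirit of the
# "cyborg astrobiologist" system described above. This is NOT McGuire's
# algorithm: the histogram size, distance measure, and threshold below are
# all invented for illustration.
import numpy as np

def color_histogram(image, bins=8):
    """Flattened, normalized RGB histogram for an image array of shape (H, W, 3)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(bins, bins, bins), range=[(0, 256)] * 3)
    hist = hist.flatten()
    return hist / hist.sum()

class NoveltyDetector:
    def __init__(self, threshold=0.5):
        self.known = []          # histograms of familiar samples
        self.threshold = threshold

    def is_novel(self, image):
        """Flag an image as novel if it is far from every familiar sample."""
        h = color_histogram(image)
        if not self.known:
            return True
        distances = [np.abs(h - k).sum() for k in self.known]  # L1 distance
        return min(distances) > self.threshold

    def remember(self, image):
        self.known.append(color_histogram(image))

# Usage: teach the detector a familiar desert-soil patch, then test a green one.
rng = np.random.default_rng(0)
soil = rng.integers(100, 160, size=(32, 32, 3))   # dull brownish patch
lichen = rng.integers(0, 256, size=(32, 32, 3))
lichen[..., 1] = 220                              # strongly green patch

detector = NoveltyDetector()
detector.remember(soil)
print(detector.is_novel(soil))    # False: close to what it has already seen
print(detector.is_novel(lichen))  # True: its color statistics are unfamiliar
```

Whether summaries of this kind track what a trained field scientist would actually find interesting is, of course, the hard part.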

According to Keim, McGuire admits there is a long way to go before we get to the cyborg astrobiologist stage — a point that seems to have been missed by the folks at Wired Science, who gave Keim’s piece the headline “AI Spacesuits Turn Astronauts Into Cyborg Biologists” (note the present tense). But it’s true that the meaning of “cyborg” is contested ground. If Michael Chorost in his fine book Rebuilt (which I reviewed here) can decide that he is a cyborg because he has a cochlear implant, then perhaps those merely testing McGuire’s system are cyborgs, too.

But my point now isn’t to be one of those sticklers who tries to argue with Humpty Dumpty that it is better if words don’t mean whatever we individually want them to mean. Rather, I’m wondering why McGuire should have used this phrase, “cyborg astrobiologists,” in this recent paper and a number of earlier ones. The word “cyborg” was originally used to describe something similar to what McGuire is attempting, as Adam Keiper has noted:

In 1960, at the height of interest in cybernetics, the word cyborg—short for “cybernetic organism”—was coined by researcher Manfred E. Clynes in a paper he co-wrote for the journal Astronautics. The paper was a theoretical consideration of various ways in which fragile human bodies could be technologically adapted and improved to better withstand the rigors of space exploration. (Clynes’s co-author said the word cyborg “sounds like a town in Denmark.”)

But McGuire doesn’t seem to be aware of the word’s original connection to space exploration — he doesn’t acknowledge it anywhere, as far as I can tell — and instead he seems to be using the word “cyborg” in its more recent and sensationalistic science-fiction-ish sense of part-man, part-machine. So why use that word? The simple answer, I suppose, is that academics are far from immune to the lure of attention-getting titles for their work. But it is still noteworthy that for McGuire and his audience, “cyborg” is apparently something to strive for, not a monstrous hybrid like most iconic cyborgs (think Darth Vader, the Borg, or the Terminators). Deliberately or not, McGuire is engaged in a revaluation of values. One wonders whether in a transhumanist future there will be any “monsters” at all; perhaps that word will share the fate of other terms of distinction that have become outmoded or politically incorrect. “Monster,” after all, implies some norm or standard, and transhumanism is in revolt against norms and standards.

Or perhaps the unenhanced human being will become the monster, the literal embodiment of all that right-thinking intelligence rebels against, a dead-end abortion of mere nature. Their obstinate persistence would be fearful if they themselves were not so pitiful. We came from that?