Futurisms: Critiquing the project to reengineer humanity

Friday, February 25, 2011

“What Talking with Computers Teaches Us About What It Means to Be Alive”

I have to admit that the cover of this month’s Atlantic, proclaiming “Why Machines Will Never Beat the Human Mind,” left me rather uninterested in reading the article, since claims to have proven such a thing almost never hold up. And, indeed, to the extent that the article implies that it has provided a case against artificial general intelligence (AGI), it really hasn’t (for my money, it’s an open question whether AGI is possible).

Nonetheless, Brian Christian’s article is easily the most insightful non-technical commentary on the Turing Test I’ve ever read, and one of the best pieces on artificial intelligence in general I’ve read. If he hasn’t disproved AGI, he has done much to show just what a difficult task it would be to achieve it — just how complicated and inscrutable is the subject that artificial intelligence researchers are attempting to duplicate, imitate, and best; and how the AI software we have today is not nearly as comparable to human intelligence as researchers like to claim.

Christian recounts his experience participating as a human confederate in the Loebner Prize competition, the annual event in which the Turing Test is carried out upon the software programs of various teams. Although no program has yet passed the Turing Test as originally described by Alan Turing, the competition awards some consolation prizes, including the Most Human Computer and the Most Human Human, for, respectively, the program able to fool the most judges that it is human and the person able to convince the most judges that he is human.

Christian makes it his goal to win the Most Human Human prize, and from his studies and efforts to win, offers a bracing analysis of human conversation, computer “conversation,” and what the difference between the two teaches us about ourselves. I couldn’t do justice to Christian’s nuanced argument if I attempted to boil it down here, so I’ll just say that I can’t recommend this article highly enough, and will leave you with a couple of excerpts:

One of the first winners [of the Most Human Human prize], in 1994, was the journalist and science-fiction writer Charles Platt. How’d he do it? By “being moody, irritable, and obnoxious,” as he explained in Wired magazine — which strikes me as not only hilarious and bleak, but, in some deeper sense, a call to arms: how, in fact, do we be the most human we can be — not only under the constraints of the test, but in life?...

We so often think of intelligence, of AI, in terms of sophistication, or complexity of behavior. But in so many cases, it’s impossible to say much with certainty about the program itself, because any number of different pieces of software — of wildly varying levels of “intelligence” — could have produced that behavior.

No, I think sophistication, complexity of behavior, is not it at all. For instance, you can’t judge the intelligence of an orator by the eloquence of his prepared remarks; you must wait until the Q&A and see how he fields questions. The computation theorist Hava Siegelmann once described intelligence as “a kind of sensitivity to things.” These Turing Test programs that hold forth may produce interesting output, but they’re rigid and inflexible. They are, in other words, insensitive — occasionally fascinating talkers that cannot listen.

Christian’s article is available here, and is adapted from his forthcoming book, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. The New Atlantis also has two related essays worth reading: “The Trouble with the Turing Test” by Mark Halpern, which makes many similar arguments but goes more deeply into the meaning of the “intelligence” we seem to see in conversational software, and “Till Malfunction Do Us Part,” Caitrin Nicol’s superb essay on sex and marriage with robots, which features some of the same AI figures discussed in Christian’s article.

Tuesday, February 22, 2011

Setting the Record Straight

Kyle Munkittrick, the transhumanist blogger with whom we have tussled before, has a newish perch over on one of Discover magazine’s blogs. In a post today, Munkittrick tries to zing Peter Lawler, a contributing editor to The New Atlantis. For now I won’t comment on the substance of Munkittrick’s post; I just want to focus on a prefatory paragraph. He mentions that Professor Lawler served on the President’s Council on Bioethics, then offers this smorgasbord of smears and demonstrable falsehoods:

For those of you unfamiliar with Bush’s President’s Council on Bioethics, they were the brilliant minds behind halting stem cell research, focusing on it-worked-for-Bristol-Palin abstinence-only sex education and being generally terrible philosophers and thinkers. Charles Krauthammer was asked his opinion of ethical issues, I kid you not. In short, the PCBE happily rubber-stamped the backwards and anti-science decrees of Bush and Cheney in an effort to supplicate the deranged Christian base of the Republican party. I tell you all of this lovely information so you have a working context for the luminary Big Think has decided to employ.

Let’s look at these claims one by one.

Was the Council “behind halting stem cell research”? No. First of all, stem cell research never “halted” — in fact, it received funding from the federal government for the first time during the Bush administration, and it flourished in the United States during the Bush years. Second, President Bush’s stem cell funding policy was announced on August 9, 2001, in the same speech in which the president announced he was going to create the Council. The Council didn’t even have its first meeting until January 2002, after the policy was already in place. (The Council did, however, publish an extremely useful report in 2004 explaining the state of stem cell research, as well as a white paper in 2005 analyzing some proposed means of obtaining pluripotent stem cells that wouldn’t require the intentional destruction of human embryos.)

Did the Council focus on “abstinence-only sex education”? No. The Council never addressed that subject. Mr. Munkittrick is either mistaken or lying. (Go ahead and search the Council’s publications and meeting transcripts for yourself. In fact, the only mention of the subject in all the Council’s work comes from neuroscientist Patricia Churchland, an avowed secular humanist who, in contributing a chapter to one report, criticizes abstinence education in passing.)

Was the Council composed of “generally terrible philosophers and thinkers”? I am happy to concede Mr. Munkittrick’s intimate familiarity with terrible philosophers and thinkers, not to mention terrible thinking. But this is a grossly unfair characterization of the Council. Among its members were medical doctors, accomplished scientists, philosophers, theologians, and lawyers, with a wide range of views. It also solicited testimony and contributions from many accomplished and esteemed figures, also with a very wide range of views. The Council’s members were very accomplished people who often disagreed with one another on the subjects the Council debated — disagreements that were sometimes very illuminating. (As for Dr. Krauthammer, Mr. Munkittrick may dislike his views on national security policy, but that has little bearing on his service on the Council.)

Did the Council “rubber-stamp the backwards and anti-science decrees of Bush and Cheney in an effort to supplicate the deranged Christian base of the Republican party”? The latter part of this statement is just inflammatory nonsense; the former part shows a plain ignorance of the Council’s work. The Council was certainly not a rubber stamp, starting with its first report, on cloning policy, in 2002. It was such a diverse group of scholars with such divided views that it couldn’t have been a mere rubber stamp for any administration’s policies.

But policy wasn’t the Council’s chief concern anyway. As Council member Gilbert Meilaender wrote in an excellent essay for The New Atlantis a year ago, “exploring and examining competing goals” was the primary task of the Council. “Such exploration is unlikely to result in a large number of policy recommendations, but that is not its aim. The aim, rather, is to help the public and its elected representatives think about the implications of biotechnological advance for human life.” This is the assessment any reasonable person would reach after reading the Council’s reports, all of which were philosophically deep in their attempts to understand difficult bioethical issues but generally went lightly on policy recommendations. One gets the sense from this post that Mr. Munkittrick is wholly unfamiliar with the reports issued by the body he so quickly dismisses.

Finally, back to Lawler. A respected professor of political philosophy, Lawler is the author of several wise books about modernity, postmodernity, technology, and faith. I heartily recommend his latest book, Modern and American Dignity, as well as his previous book Stuck with Virtue; they both grapple with bioethical questions, and they both reward careful reading.

Racism, Humanism, and Speciesism: The Irony of the Censored “Huck Finn”

For reasons unrelated to the controversy that has lately surrounded the book, I’ve recently been rereading Adventures of Huckleberry Finn (the Norton Critical Edition, uncensored). Nonetheless, the controversy has been sharp in my mind as I’ve been reading, and it’s striking how deeply the change undermines some key passages from the book. Take this one, from the end of Chapter XIV:
“Why, Huck, doan de French people talk de same way we does?”
“No, Jim; you couldn’t understand a word they said — not a single word.”
“Well, now, I be ding-busted! How do dat come?”
“I don’t know; but it’s so. I got some of their jabber out of a book. Spose a man was to come to you and say Polly-voo-franzy — what would you think?”
“I wouldn’ think nuff’n; I’d take en bust him over de head — dat is, if he warn’t white. I wouldn’t ’low no nigger to call me dat.”
“Shucks, it ain’t calling you anything. It’s only saying, do you know how to talk French.”
“Well, den, why couldn’t he say it?”
“Why, he is a-saying it. That’s a Frenchman’s way of saying it.”
“Well, it’s a blame’ ridicklous way, en I doan’ want to hear no mo’ ’bout it. Dey ain’ no sense in it.”
“Looky here, Jim, does a cat talk like we do?”
“No, a cat don’t.”
“Well, does a cow?”
“No, a cow don’t, nuther.”
“Does a cat talk like a cow, or a cow talk like a cat?”
“No, dey don’t.”
“It’s natural and right for ’em to talk different from each other, ain’t it?”
“ ’Course.”
“And ain’t it natural and right for a cat and a cow to talk different from us?”
“Why, mos’ sholy it is.”
“Well, then, why ain’t it natural and right for a Frenchman to talk different from us? — you answer me that.”
“Is a cat a man, Huck?”
“No.”
“Well, den, dey ain’t no sense in a cat talkin’ like a man. Is a cow a man? — er is a cow a cat?”
“No, she ain’t either of them.”
“Well, den, she ain’t got no business to talk like either one er the yuther of ’em. Is a Frenchman a man?”
“Yes.”
“Well, den! Dad blame it, why doan’ he talk like a man? — you answer me dat!”
I see it warn’t no use wasting words — you can’t learn a nigger to argue. So I quit.

Huck is right, of course, that Jim fails to grasp a basic piece of knowledge: the existence of multiple human languages. But the genius of this passage lies in how Jim, in refuting the notion that it is “natural and right” to think that a Frenchman is different from an American in the same way a cow is, actually expresses a much deeper truth that Huck fails to grasp. Moreover, the passage conveys this truth not just in spite of Huck being our narrator, but through the way Huck reveals his ignorance and Jim does not. Huck dismisses Jim’s argument using an epithet that asserts that a black man is less than a man, when of course Jim, a black man, has just shown a truth deeper than differences of language or understanding: a man is a man. Jim’s point is both argument and — in showing Jim’s intellect — demonstration of what is wrong with the epithet. (This is true despite the fact that Jim himself continues to use the epithet, and appears on some level to believe it.)

The motivation behind replacing the “n-word” with the word “slave” is understandable: I feel uncomfortable even repeating it in the excerpt here. But the power of this passage, and other similar ones in the novel, would be completely lost if the word were changed to “slave.” Huck would seem to be dismissing Jim’s argument based on his terrible lot in life — which deprived him, perhaps, of Huck’s educational opportunities — rather than based on the idea that Jim’s race makes him subhuman.

This change has the advantage of appealing to our modern understanding of why Jim seems ignorant in many respects. But the central purposes of passages like this one are then lost: the meaning of Jim’s point itself, its significance in relation to Huck’s dismissal of it, and the fact that Jim’s ability even to have such an insight is itself evidence of how wrong and cruel Huck’s use of that term is. Writing thirty years ago in the New York Times about efforts to ban the book, Russell Baker noted:
The people [whom Huck and Jim] encounter are drunkards, murderers, bullies, swindlers, lynchers, thieves, liars, frauds, child abusers, numbskulls, hypocrites, windbags and traders in human flesh. All are white. The one man of honor in this phantasmagoria is black Jim, the runaway slave. “Nigger Jim,” as Twain called him to emphasize the irony of a society in which the only true gentleman was held beneath contempt.

As Twain wrote the book, Jim is a living refutation, through his evident sensitivity, intelligence, and honor, of that terrible term Huck uses to dismiss him. But in the reformulation, many of those qualities become less evident, so that ironically, “Slave Jim” seems much more like a minstrel-show caricature than does “Nigger Jim.” Worse still, the irony of Jim’s name is lost too, so that where Twain’s book shows how wrong it is to think of Jim as subhuman — and suggests why the source of our equality was still of pressing importance to the book’s readers in 1885 — the new version instead brings us to see Jim as merely an object of pity.

Transhumanists co-opt civil rights rhetoric, warping it in the process. Photo via flickr/ThinkVegan.
I hope the meaning of this passage, and others like it from Huck Finn — particularly the astonishing chapter after the one cited above, in which Huck plays a cruel trick on Jim — will survive. And I believe that transhumanist theorists and activists could learn a thing or two about rights, equality, persons, and (if they are interested) human beings by revisiting Twain’s great book. One transhumanist group, the Institute for Ethics and Emerging Technologies, recently announced a program to promote the idea of the “Non-Human Person.” I strongly support efforts to better our treatment of animals and the environment, and to reevaluate our historical attitude towards both as mere matter for manipulation, devoid of any moral status. But the IEET’s new program, in stating that “the general thrust of human history is toward the progressive inclusion of previously marginalized individuals and groups,” continues the transhumanist trope of claiming that the movement is carrying on the work that freed the slaves and brought civil rights to minorities — and so it would do well to acknowledge the historical facts about how civil rights advanced, and about where our equality has been understood to come from.

Memphis sanitation workers strike in 1968. Photo copyright Richard L. Copley.


Friday, February 18, 2011

Watson reax from an AI researcher, Ken Jennings, and others

I’d like to point our readers to a couple of articles of note about IBM’s Watson and its Jeopardy! win.

First, transhumanist and AI researcher Ben Goertzel, writing at, of all places, KurzweilAI.net, seems to agree with my overall assessment of the significance of Watson:
Ray Kurzweil has written glowingly of Watson as an important technology milestone:

Indeed no human can do what a search engine does, but computers have still not shown an ability to deal with the subtlety and complexity of language. Humans, on the other hand, have been unique in our ability to think in a hierarchical fashion, to understand the elaborate nested structures in language, to put symbols together to form an idea, and then to use a symbol for that idea in yet another such structure. This is what sets humans apart.

That is, until now. Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.

I understand where Kurzweil is coming from, but nevertheless, this is a fair bit stronger statement than I’d make. As an AI researcher myself I’m quite aware of all the subtlety that goes into “thinking in a hierarchical fashion,” “forming ideas,” and so forth. What Watson does is simply to match question text against large masses of possible answer text — and this is very different than what an AI system will need to do to display human-level general intelligence. Human intelligence has to do with the synergetic combination of many things, including linguistic intelligence but also formal non-linguistic abstraction, non-linguistic learning of habits and procedures, visual and other sensory imagination, creativity of new ideas only indirectly related to anything heard or read before, etc. An architecture like Watson barely scratches the surface!

Next, the wittiest and most astute piece I’ve read on Watson comes from Ken Jennings himself. It turns out that the grace and good humor (in both senses of the word) Jennings displayed on Jeopardy! wasn’t a fluke:
Indeed, playing against Watson turned out to be a lot like any other Jeopardy! game, though out of the corner of my eye I could see that the middle player had a plasma screen for a face. Watson has lots in common with a top-ranked human Jeopardy! player: It's very smart, very fast, speaks in an uneven monotone, and has never known the touch of a woman.

Jennings’s short article is well worth your time to read, as is his equally funny and insightful Q&A session at the Washington Post.

Finally, don’t miss Hijinks Ensue’s comic and Slate’s hilarious video about Watson. Parts of the latter are factual, but its premise is Watson’s quest to “make it” on other game shows, such as Who Wants to Be a Millionaire?, The Newlywed Game, and Survivor. Good stuff (though a warning that both have PG-13 elements).

Thursday, February 17, 2011

Watson, Can You Hear Me? (The Significance of the Jeopardy! AI Win)

Yesterday, on Jeopardy!, a computer handily beat its human competitors. Stephen Gordon asks, “Did the Singularity Just Happen on Jeopardy?” If so, then I think it’s time for me and my co-bloggers to pack up and go home, because the Singularity is damned underwhelming. This was one giant leap for robot publicity, but only a small step for robotkind.

Unlike Deep Blue, the IBM computer that defeated world chess champion Garry Kasparov in 1997, Watson gave me no indication that its Jeopardy! victory constituted any remarkable innovation in artificial intelligence methods. IBM’s Watson computer is essentially search engine technology with some basic natural language processing (NLP) capability sprinkled on top. Most Jeopardy! clues contain definite, specific keywords associated with the correct response — such that you could probably Google those keywords, and the correct response would be contained somewhere in the first page of results. The game is already very amenable to what computers do well.

In fact, Stephen Wolfram shows that you can get a remarkable amount of the way to building a system like Watson just by putting Jeopardy! clues straight into Google:


Once you’ve got that, it only requires a little NLP to extract a list of candidate responses, some statistical training to weight those responses properly, and then a variety of purpose-built tricks to accommodate the various quirks of Jeopardy!-style categories and jokes. Watching Watson perform, it’s not too difficult to imagine the combination of algorithms used.
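
To make that pipeline concrete, here is a minimal sketch, in Python, of the search-and-rank approach just described. Everything in it is illustrative: the tiny hand-written “corpus” stands in for a search index, and the keyword and entity heuristics are deliberately crude — this is the general shape of the technique, not a description of Watson’s actual architecture.

from collections import Counter
import re

# A toy stand-in for a search index; a real system would query a web-scale corpus.
DOCUMENTS = [
    "O'Hare International Airport in Chicago is named for Butch O'Hare, "
    "a World War II flying ace and Medal of Honor recipient.",
    "Chicago Midway International Airport takes its name from the Battle "
    "of Midway, a pivotal World War II naval battle.",
    "Toronto Pearson International Airport is named for Lester B. Pearson.",
]

def keywords(text):
    # Lowercase the text and keep only words long enough to be informative.
    return {w for w in re.findall(r"[a-z']+", text.lower()) if len(w) > 3}

def candidate_responses(clue, documents=DOCUMENTS):
    # Score each document by keyword overlap with the clue, then treat
    # capitalized phrases in matching documents as candidate responses,
    # weighted by how well their document matched the clue.
    clue_words = keywords(clue)
    scores = Counter()
    for doc in documents:
        overlap = len(clue_words & keywords(doc))
        if overlap == 0:
            continue
        for entity in re.findall(r"(?:[A-Z][\w']+ ?)+", doc):
            scores[entity.strip()] += overlap
    return scores.most_common(3)

clue = ("Its largest airport is named for a World War II hero; "
        "its second largest, for a World War II battle.")
print(candidate_responses(clue))

Run on the Chicago Final Jeopardy clue discussed below, even this toy version surfaces “Chicago” among its candidates — but its top-ranked phrase is “World War II,” which is not a U.S. city at all. Keyword association gets you surprisingly far, and category blindness comes free with it.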

Compiling Watson’s Errors

On that large share of search-engine-amenable clues, Watson almost always did very well. What’s more interesting to note is the various types of clues on which Watson performed very poorly. Perhaps the best example was the Final Jeopardy clue from the first game (which was broadcast on the second of three nights). The category was “U.S. Cities,” and the clue was “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Both of the human players correctly responded Chicago, but Watson incorrectly responded Toronto — and the audience audibly gasped when it did.


Watson performed poorly on this Final Jeopardy because there were no words in either the clue or the category that are strongly and specifically associated with Chicago — that is, you wouldn’t expect “Chicago” to come up if you were to stick something like this clue into Google (unless you included pages talking about this week’s tournament). But there was an even more glaring error here: anyone who knows enough about Toronto to know about its airports will know that it is not a U.S. city.

There were a variety of other instances like this of “dumb” behavior on Watson’s part. The partial list that follows gives a flavor of the kinds of mistakes the machine made, and can help us understand their causes.
  • With the category “Beatles People” and the clue “‘Bang bang’ his ‘silver hammer came down upon her head,’” Watson responded, “What is Maxwell’s silver hammer.” Surprisingly, Alex Trebek accepted this response as correct, even though the category and clue were clearly asking for the name of a person, not a thing.
  • With the category “Olympic Oddities” and the clue “It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904,” Watson responded, “What is leg.” The correct response was, “What is he was missing a leg.”
  • In the “Name the Decade” category, Watson at one point didn’t seem to know what the category was asking for. With the clue “Klaus Barbie is sentenced to life in prison & DNA is first used to convict a criminal,” none of its top three responses was a decade. (Correct response: “What is the 1980s?”)
  • Also in the category “Name the Decade,” there was the clue, “The first modern crossword puzzle is published & Oreo cookies are introduced.” Ken responded, “What are the twenties.” Trebek said no, and then Watson rang in and responded, “What is 1920s.” (Trebek came back with, “No, Ken said that.”)
  • With the category “Literary Character APB,” and the clue “His victims include Charity Burbage, Mad Eye Moody & Severus Snape; he’d be easier to catch if you’d just name him!” Watson didn’t ring in because his top option was Harry Potter, with only 37% confidence. His second option was Voldemort, with 20% confidence.
  • On one clue, Watson’s top option (which was correct) was “Steve Wynn.” Its second-ranked option was “Stephen A. Wynn” — the full name of the same person.
  • With the clue “In 2002, Eminem signed this rapper to a 7-figure deal, obviously worth a lot more than his name implies,” Watson’s top option was the correct one — 50 Cent — but its confidence was too low to ring in.
  • With the clue “The Schengen Agreement removes any controls at these between most EU neighbors,” Watson’s first choice was “passport” with 33% confidence. Its second choice was “Border” with 14%, which would have been correct. (Incidentally, it’s curious to note that one answer was capitalized and the other was not.)
  • In the category “Computer Keys” with the clue “A loose-fitting dress hanging from the shoulders to below the waist,” Watson incorrectly responded “Chemise.” (Ken then incorrectly responded “A,” thinking of an A-line skirt. The correct response was a “shift.”)
  • Also in “Computer Keys,” with the clue “Proverbially, it’s ‘where the heart is,’” Watson’s top option (though it did not ring in) was “Home Is Where the Heart Is.”
  • With the clue “It was 103 degrees in July 2010 & Con Ed’s command center in this N.Y. borough showed 12,963 megawatts consumed at 1 time,” Watson’s first choice (though it did have enough confidence to ring in) was “New York City.”
  • In the category “Nonfiction,” with the clue “The New Yorker’s 1959 review of this said in its brevity & clarity it is ‘unlike most such manuals, a book as well as a tool.’” Watson incorrectly responded “Dorothy Parker.” The correct response was “The Elements of Style.”
  • For the clue “One definition of this is entering a private place with the intent of listening secretly to private conversations,” Watson’s first choice was “eavesdropper,” with 79% confidence. Second was “eavesdropping,” with 49% confidence.
  • For the clue “In May 2010 5 paintings worth $125 million by Braque, Matisse & 3 others left Paris’ museum of this art period,” Watson responded, “Picasso.”
We can group these errors into a few broad, somewhat overlapping categories:
  • Failure to understand what type of thing the clue was pointing to, e.g. “Maxwell’s silver hammer” instead of “Maxwell”; “leg” instead of “he was missing a leg”; “eavesdropper” instead of “eavesdropping.”
  • Failure to understand what type of thing the category was pointing to, e.g., “Home Is Where the Heart Is” for “Computer Keys”; “Toronto” for “U.S. Cities.”
  • Basic errors in worldly logic, e.g. repeating Ken’s wrong response; considering “Steve Wynn” and “Stephen A. Wynn” to be different responses.
  • Inability to understand jokes or puns in clues, e.g. 50 Cent being “worth” “more than his name implies”; “he’d be easier to catch if you’d just name him!” about Voldemort.
  • Inability to respond to clues lacking keywords specifically associated with the correct response, e.g. the Voldemort clue; “Dorothy Parker” instead of “The Elements of Style.”
  • Inability to correctly respond to complicated clues that require inference, combining facts in successive stages rather than relying on independent keyword associations; e.g. the Chicago airport clue.
What these errors add up to is that Watson really cannot process natural language in a very sophisticated way — if it did, it would not suffer from the category errors that marked so many of its wrong responses. Nor does it have much ability to perform the inference required to integrate several discrete pieces of knowledge, as required for understanding puns, jokes, wordplay, and allusions. On clues involving these skills and lacking search-engine-friendly keywords, Watson stumbled. And when it stumbled, it often seemed not just ignorant, but completely thoughtless.

I expect you could create an unbeatable Jeopardy! champion by allowing a human player to look at Watson’s weighted list of possible responses, even if the weights were far less accurate than Watson’s. While Watson assigns percentage-based confidence levels, any moderately educated human will immediately be able to sort potential responses into three relatively discrete categories: “makes no sense,” “yes, that sounds right,” and “don’t know, but maybe.” Watson hasn’t come close to touching this.
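
Here is an equally rough sketch of that hybrid arrangement. The weights and the plausibility test are invented for illustration — nothing below reflects Watson’s real decision logic:

def machine_buzz(candidates, threshold=0.5):
    # Watson-style rule: ring in only if the top weight clears a fixed threshold.
    best, weight = candidates[0]
    return best if weight >= threshold else None

def human_assisted_buzz(candidates, makes_sense):
    # Human-in-the-loop rule: scan the machine's ranked list and take the
    # first candidate that survives the human's "makes no sense" filter.
    for candidate, _ in candidates:
        if makes_sense(candidate):
            return candidate
    return None

# The Final Jeopardy example, with hypothetical weights.
candidates = [("Toronto", 0.30), ("Chicago", 0.14)]
is_us_city = lambda c: c == "Chicago"  # any human judge knows Toronto isn't one

print(machine_buzz(candidates))                     # None: not confident enough
print(human_assisted_buzz(candidates, is_us_city))  # Chicago

The point of the sketch is only that the human contribution — the coarse three-way triage — is cheap for us and, so far, out of the machine’s reach.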

The Significance of Watson’s Jeopardy! Win

In short, Watson is not anywhere close to possessing true understanding of its knowledge — neither conscious understanding of the sort humans experience, nor unconscious, rule-based syntactic and semantic understanding sufficient to imitate the conscious variety. (Stephen Wolfram’s post accessibly explains his effort to achieve the latter.) Watson does not bring us any closer, in other words, to building a Mr. Data, even if such a thing is possible. Nor does it put us much closer to an Enterprise ship’s computer, as many have suggested.

In the meantime, of course, there were some singularly human characteristics on display in the Jeopardy! tournament — evident only in the human participants. Particularly notable was the affability, charm, and grace of Ken Jennings and Brad Rutter. But the best part was the touches of genuine, often self-deprecating humor from the two contestants as they tried their best against the computer. This culminated in Ken Jennings’s joke on his last Final Jeopardy response:


Nicely done, sir. The closing credits, which usually show the contestants chatting with Trebek onstage, instead showed Jennings and Rutter attempting to “high-five” Watson and show it other gestures of goodwill:


I’m not saying it couldn’t ever be done by a computer, but it seems like joking around will have to be just about the last thing A.I. will achieve. There’s a reason Mr. Data couldn’t crack jokes. Because, well, humor — it is a difficult concept. It is not logical. All the more reason, though, why I can’t wait for Saturday Night Live’s inevitable “Celebrity Jeopardy” segment where Watson joins in with Sean Connery to torment Alex Trebek.

Celebrating self-mutilation, Ctd.

In response to my last post about transhumanist celebration of the self-harming behavior of one young woman, tlcraig comments:
I have to say, I am tempted by the view that Lepht Anonym is simply more clear-sighted and thorough-going in her rejection of 'the given', or, more sharply put, her hatred of the body, than her fellow transhumanists. Like the body-builder, or the cosmetic surgery patient, she at least recognizes the necessity of risking the good that goes with our presently limited bodies in order to get FOR HERSELF the thought-to-be-possible good of a deliberately remade body. Her fellow transhumanists are willing, even eager, to risk the goods available to presently limited bodies FOR FUTURE GENERATIONS. The fact that they are willing to risk nothing themselves must be somewhat telling, no? Indeed, from the vantage point of L.A., it looks a bit like cowardice masquerading as generosity.

Of course, this is not to deny that there may be a confusion, even a kind of mental illness, behind her 'daring', and that the actions of the more 'timid' transhumanists in fact points to a prudence. But making that explicit would oblige thinking their way past ridiculous arguments like "searching on Google makes us all cyborgs already" and "aging is a disease no different than cancer".

Tlcraig is right, of course, that one could view Lepht Anonym’s behavior as simply following transhumanist principles without timidity. But now that we have an example of those principles in action, we can vividly see their shortcomings. From a theoretical standpoint, one could argue that we only consider her sort of self-modification to be caused in part by mental illness because of our outdated normative principles — or even that we’re all actually mentally ill for accepting our frail, decaying bodies. But then, as we’ve seen in this case, one becomes unable to distinguish between healthy and unhealthy states of mind — in particular, one loses the capacity to judge any self-modification behavior as unhealthy, or as motivated by unhealthy impulses.

Perhaps there is such a thing as a perfectly adjusted, psychologically balanced, and untroubled person simply deciding, for philosophical reasons, to cut himself or herself up. But it is striking that none of the transhumanist-friendly discussions I’ve seen about Lepht Anonym have mentioned even the possibility that her behavior might be motivated in part by disturbed psychological states, feelings of self-loathing, or suicidal ideation. Nor, of course, have they noted the easily available confirmation that her behavior actually is motivated by these things. Nor have they discussed whether this might bring into question the praising of self-modification — much less have they discussed whether it might be unethical to encourage it in this one individual.

All of this points to the conclusion that transhumanism has some profound shortcomings in its ability (and desire) to understand the human subject it claims to be so interested in bettering.

Tuesday, February 15, 2011

Celebrating self-mutilation

I had a look today at the disturbing, fascinating blog of Lepht Anonym, the young woman who recently caused a stir on transhumanist-leaning sites by performing various “enhancement” surgeries on herself at home. These surgeries typically involved implanting small devices in herself, such as magnets under her fingertips, meant to give her extra sensory abilities — often with medical complications resulting.

There’s actually something strangely refreshing about Anonym’s blog: it may be the only transhumanist writing I’ve seen that seems to be written by an actual person, one clearly possessed of a complicated inner life. Transhumanists usually seem to lose interest in expressing their inner lives when they give their thoughts over to the boundlessly incoherent muddle of transhumanist theorizing.

Here’s just one example of Anonym’s distinctive relationship to transhumanism:

i would very much like it if the uneducated masses who like to call me an idiot would disavail themselves of the following precepts:...

3. that you are just as much a "cyborg" as i am because you use an iPhone and wear glasses. [****] off if you are going to tell me that what i do is pointless, and i do not want to debate the definition of cybrog with any normal.

Anonym is here rejecting one of the most familiar and empty transhumanist tropes (employed just yesterday in a blog post by Philippe Verdoux, who says that “the cyborg is already among us”).

Lepht Anonym delivers a lecture.

There is much else that could be said about Anonym’s very personal chronicle. Most notable, sadly, is the confirmation, in a post dated eight days before Wired.com ran its story about her, that Anonym is a diagnosed sufferer of borderline personality disorder (BPD). One of the main symptoms of BPD is deliberate self-harm — formerly known as self-mutilation.

Transhumanists love to repeat the idea that life as we know it, inextricable as it is from aging, is inherently a state of disease (for which transhumanism is the cure). Whatever you think of the aims of that idea, it makes it difficult to distinguish among various diseased states as better or worse. The only easily recognizable good is resisting the disease — rebelling against the bounds of biology.

Consequently, transhumanists have no conception of what it means for beings alive today to flourish, and thus none of which acts and states of mind constitute a profound failure to flourish. And so it’s sad, if not at all surprising, to find transhumanists not only lacking the faculties to evaluate self-mutilation as the self-destructive behavior of a person in need of help, but encouraging it — both by reporting on it so enthusiastically, and by fostering a subculture in which it could be understood as a laudable act of creation and self-expression.

It’s not psychological distress: it’s “morphological freedom” through “DIY bio.” This is the terminology transhumanists use to anoint their attitude as the highest and bravest sort of enlightenment. Except, read a few of Anonym’s posts describing her self-surgeries and the complications following them, and get a sense of the motivation behind them, and those terms begin to seem like cruel euphemisms — and yet another indication that transhumanist ideology represents a step backwards, not forwards, in our betterment and self-understanding. Wired.com should seriously reexamine its decision to run this piece in the way it did. And — although I know that the moral invoked here is itself scoffed at as unenlightened — the transhumanist community should be ashamed of its role in this.

Lepht Anonym certainly has a distinctive voice and presence on her blog. I can’t help but enjoy that she has twenty-six blog posts tagged “that is illogical captain.” She can be clever, witty, and charmingly self-deprecating. Her self-description says that she “likes people,” and it shows: even in posts in which she describes her pain and confusion, there is an obvious and admirable warmth and love for her friends and family.

I hope Lepht Anonym will stick around, and will find an outlet for her energy and talents that is better for her.

UPDATE: See my follow-up post here.

Friday, February 11, 2011

Man Achieves True Clarity of Hindsight

Engadget reports that Wafaa Bilal is having trouble with the camera anchored to the back of his head. The post is a little vague, but to Laura June, it is "not really a surprise" that he should be removing the camera, and that the experience has been pretty painful. Still and all, she thinks the project can be judged a success if he just wanted to be known as "the guy who had a camera implanted in the back of his head."

When Engadget contributor Sean Hollister covered the same story back in December, he was considerably more upbeat. While the version now posted is headlined "NYU prof sticks camera on the back of his head, just as promised," the version I have archived in my Google Reader says, "Man sticks camera in the back of his head, fulfills our childhood fantasies." The tone of the post that follows is anything but skeptical, even if there was a tongue-in-cheek aspect to the original headline. (Although if there was, why change it?)

Everything is so much clearer, after the fact!