Futurisms: Critiquing the project to reengineer humanity

Saturday, October 31, 2009

The stars our destination

Forget performance enhancement, optical implants, and all the other "upgrades" that the coming decades of progress towards the Singularity are supposed to bring. What about the distant (or at least remote) future, after we've transcended? Many transhumanists believe that our destiny is to continue expanding outward from the Earth, consuming the Solar System, the Galaxy, and eventually the entire Universe with our being. The exact nature of that being is still a matter of dispute — it may be bodies like our own but made to live much longer, or bodies that have been enhanced through mechanization, or robotic surrogates, or perhaps even Consciousness itself expanding on a computational substrate (see Ray Kurzweil's The Singularity Is Near for a depiction) — but the general idea of our inevitable expansion into the cosmos is the same.

Of course, long before transhumanism and even before space travel, science fiction writers were speculating on the implications of just such a notion of posthuman destiny. In his 1956 short story "The Last Question," Isaac Asimov considers the inevitability of limits, ends, endings, and beginnings. The story presages the metaphysical spiritualism of Arthur C. Clarke's 2001 and related science fiction, as well as that of many of the later transhumanists. Read it for the provocation of thought (and the hilarious anachronism of planet-sized computers).

(Hat tip: Mark Reitblatt.)

Friday, October 30, 2009

Life blahg, cont'd

On a somewhat related note, enjoy this illustration from the cover of the latest issue of The New Yorker:


Life blahg

CNN recently ran an article about Microsoft researcher Gordon Bell's efforts to record every aspect of his life (which they credulously dub "converting his brain into 'e-memory'"). And over at the Singularity Hub, Keith Kleiner notes an advance in the technology:
Lifelogging – recording every single minute of your life (or as much of it as possible) – continues its unstoppable march towards the mainstream with the announcement that Vicon will soon release a life recording device called the Revue. The device is worn around your neck and automatically takes photos up to every 30 seconds.
Despite Kleiner's use of the rhetoric of inevitability — a standard device among transhumanists — lifelogging is a complicated subject, with a lot of ins, outs, and what-have-yous; it's a subject we'll return to on this blog. But in the meantime, XKCD concisely and beautifully gets at one of the core problems:



Of course, the sort of issues raised by the comic have been with us as long as we have had both the technologies to record — photography, video, journal-writing, portraiture, and other media — and the impulse to create narratives of one's life for oneself and for others. But just because that impulse is venerable doesn't mean that it has not changed over time; today, as we are able to indulge that impulse ever more easily, there is a growing sense that our technologies and habits can impede the very experiences they are meant to safely seal away for later remembrance.

Thursday, October 29, 2009

The economics of magic pills: Questions for Methuselists

In its 2003 report Beyond Therapy (discussed in a symposium in the Winter 2004 New Atlantis), the President's Council on Bioethics concludes that "the more fundamental ethical questions about taking biotechnology 'beyond therapy' concern not equality of access, but the goodness or badness of the things being offered and the wisdom of pursuing our purposes by such means." That is certainly right, and it is why this blog chiefly focuses on the deeper questions related to the human meaning of our technological aspirations. That said, the question of equality of access is still worth considering, not least because it is one of the few ethical questions considered legitimate by many transhumanists, and so it might provide some common ground for discussion.

In the New York Times, the economist Greg Mankiw, while discussing health care, offers a fascinating thought experiment that sheds some light on the issue of access:

Imagine that someone invented a pill even better than the one I take. Let’s call it the Dorian Gray pill, after the Oscar Wilde character. Every day that you take the Dorian Gray, you will not die, get sick, or even age. Absolutely guaranteed. The catch? A year’s supply costs $150,000.

Anyone who is able to afford this new treatment can live forever. Certainly, Bill Gates can afford it. Most likely, thousands of upper-income Americans would gladly shell out $150,000 a year for immortality.

Most Americans, however, would not be so lucky. Because the price of these new pills well exceeds average income, it would be impossible to provide them for everyone, even if all the economy’s resources were devoted to producing Dorian Gray tablets.
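Mankiw's impossibility claim holds up to a back-of-the-envelope check. (The GDP and population figures below are rough 2009-era approximations, assumed here purely for illustration.)

```python
# Could the U.S. supply everyone with a $150,000/year Dorian Gray pill?
# Rough 2009-era figures, assumed for illustration only.
US_GDP = 14e12           # ~$14 trillion in total annual output
US_POPULATION = 307e6    # ~307 million people
PILL_COST_PER_YEAR = 150_000

total_cost = US_POPULATION * PILL_COST_PER_YEAR
print(f"Total annual cost: ${total_cost:,.0f}")
print(f"Multiple of GDP:   {total_cost / US_GDP:.2f}")
```

On these assumptions, universal coverage would run to roughly $46 trillion a year, more than three times the economy's entire output — which is exactly Mankiw's point.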

The standard transhumanist response to this problem is voiced by Ray Kurzweil in The Singularity Is Near: "Drugs are essentially an information technology, and we see the same doubling of price-performance each year as we do with other forms of information technology such as computers, communications, and DNA base-pair sequencing"; because of that exponential growth, "all of these technologies quickly become so inexpensive as to become almost free."
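Kurzweil's arithmetic, taken on its own terms, is simple compound decay: if price-performance doubles every year, the effective price of a fixed capability halves every year. Here is a toy projection of Mankiw's hypothetical $150,000 pill under that assumption — not a prediction, just Kurzweil's premise made explicit:

```python
# Toy projection: what annual halving does to a $150,000 price tag.
# Assumes Kurzweil's yearly price-performance doubling holds exactly.
initial_cost = 150_000.0
for year in (0, 5, 10, 15, 20):
    cost = initial_cost / 2 ** year
    print(f"Year {year:2d}: ${cost:>12,.2f}")
```

On that premise the pill costs less than a cup of coffee within two decades; the real dispute is less about the shape of that curve than about whether rising demand and newer, costlier technologies outrun it.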

Though my cell phone bill begs to differ, Kurzweil's point may well be true. And yet if that were the whole picture, we might expect one of the defining trends of the past half century to have been the steady decline in the cost of health care. Instead, as Mankiw notes:

These questions may seem the stuff of science fiction, but they are not so distant from those lurking in the background of today’s health care debate. Despite all the talk about waste and abuse in our health system (which no doubt exists to some degree), the main driver of increasing health care costs is advances in medical technology. The medical profession is always figuring out new ways to prolong and enhance life, and that is a good thing, but those new technologies do not come cheap. For each new treatment, we have to figure out if it is worth the price, and who is going to get it.

However quickly the costs for a given set of medical technologies fall, the rate at which expensive new technologies are developed grows even faster — as, more significantly, does our demand for them. In the case of medicine, what begins as a miraculous cure comes in time to be expected as routine, and eventually even to be considered a right (think of organ transplantation, for example). What Kurzweil and the like fail to grasp is that, absent some wise guiding principles about the purpose of our biotechnical power, as we gain more of it we paradoxically become less satisfied with it and only demand more still.

But if our biotechnical powers were to grow to the point that the "defeat" of death truly seemed imminent, the demand for medicine would grow right along with them. The advocates of radical life extension already believe death to be a tragedy that inflicts incalculable misery; heightened demand would only magnify that perceived injustice (why must my loved one die, when So-and-So, by surviving one year more, can live forever?), and could create such a sense of urgency that desperate measures — demeaning research, economy-endangering spending — would seem justified.

For believers in the technological convulsion of the Singularity, the question of access and distribution is even more pointed, since the gap between the powers of the post-Singularity "haves" and "have-nots" would dwarf present-day inequality — and the "haves" might well want to keep the upper hand. To paraphrase the Shadow, "Who knows what evil lurks in the hearts of posthumanity?"

(Hat tip: David Clift-Reaves via Marginal Revolution.)
[Photo credit: Flickr user e-magic]

Wednesday, October 28, 2009

Text and the City

When you're reading some rapturous talk about the glorious future in which enhancement will allow us to bend the world to our will, it often seems like a remote (if not necessarily distant) fantasyland, something to be contemplated only in the abstract. It's easy to forget that many of the most anticipated enhancements are already with us in some early stage of development, and we can look to the effects they have already had as a guide to how they might manifest in a more extreme form.

Human sexuality is frequently cited as a potential subject of futuristic enhancement, with predictions about the future ranging from alterations to sexual organs, to brain implants that allow for enhanced intimacy, to entirely virtual interactions. (The funniest description may be from this H+ Magazine article, where Natasha Vita-More imagines posthuman sex as "multiple exchanges of digitized codes reaching a crescendo.") Well, it is worth remembering that human sexuality has already been transformed, first very radically with the advent of the birth-control pill a half-century ago, and more recently in strange new ways as a result of the last two decades of advances in communication technologies.

The latter transformation is explored in a new article in New York magazine. Since April 2007, that magazine has published weekly "sex diaries" written by a series of anonymous New Yorkers. The author of the article read through all hundred-plus diaries and noted especially the role of information technology:

Virtually everyone under the age of 30 has grown up with their sexuality digitally enhanced, and the rest of us are rapidly forgetting the world before we all were hooked into the same erotically charged network of instantaneously transmitted messages and images.

"Enhanced," of course, is a vague and slippery word in this context, and the author observes in the diaries a series of anxieties and fears about sexuality: the anxiety of too much choice, the anxiety of appearing overly sincere, the anxiety of appearing prudish. This remarkable snippet comes from his description of the anxiety of not being chosen:

Among active Diarists, the worry that they will make the wrong choice is surpassed by the fear that they might find themselves without one. To guard against this disaster, everybody is on somebody’s back burner, and everybody has a back burner of their own, which they maintain through open-ended texts, sporadic Facebook messages, G-chats, IM’s, and terse e-mails. The Diarists appear to do this regardless of whether or not they are in a committed, or even a contractually sealed, relationship....

A Diarist with any game at all has unlimited opportunity.... They use their cell phone to disaggregate, slice up, and repackage their emotional and physical needs, servicing each with a different partner, and hoping to come out ahead. This can get complicated quickly, however, and can lead to uneasy situations.... This compulsive toggling between options winds up inflicting the very damage it was designed to protect against.

These self-selected diarists from The City That Never Sleeps may not be representative of the broader population; nor can we assume that their experiences will directly show us anything of what to expect in the future. But we can certainly wonder whether these people's sex lives — or their lives on the whole — are better as a result of the digital "enhancement" described in the article. At least as portrayed there, the diarists seem bewildered and lost. They are unsure of what they want out of love and life other than control — and yet as digital technologies allow them to more easily indulge their immediate impulses, their control only seems to wane as they become more unsure about what they want and more unsatisfied with what they get.

Tuesday, October 27, 2009

Are psychologists humans too?

Via Mind Hacks, psychologist Norbert Schwarz gives a revealing answer when asked what nagging things he still doesn't understand about himself:
I don’t understand ... why I’m still fooled by incidental feelings. Some 25 years ago Jerry Clore and I studied how gloomy weather makes one’s whole life look bad – unless one becomes aware of the weather and attributes one’s gloomy mood to the gloomy sky, which eliminates the influence. You’d think I learned that lesson and now know how to deal with gloomy skies. I don’t, they still get me.
Schwarz claims that the tendency he describes can be counteracted, even though his own experience suggests otherwise. It's fascinating to hear him ask, in essence, "Why is it that my awareness of facts about human psychology does not automatically exempt me from those facts?" Or, in other words, "Why must I be bound to behave humanly simply because I am human?"

This attitude — not uncommon among behavioral scientists — is extended to its logical end in the tenets of transhumanism. Consider Michael Anissimov's notion that "It is a physical fact about our brains that the connections between stimuli and pleasure/displeasure are arbitrary and exist mostly for evolutionary reasons.... [W]e will eventually modify them if we wish, because the mind is not magical, it’s 'just' a machine."

If the psychological fact that gloomy weather makes for gloomy moods is meaningless and ought to be nonbinding, why not make it so that gloomy weather makes for cheery moods? Why, after all, shouldn't we reprogram ourselves so that gloomy weather makes us feel like we're eating ice cream, having sex, or riding across moonbeams on a unicorn fed by marshmallows? (One wonders how then a description of weather as 'gloomy' could retain any communicable meaning — how, indeed, a word like 'gloomy' could remain intelligible at all — but no matter, for of course these too are but disposable artifacts.)

Thursday, October 22, 2009

Transhumanism and the Escape from the Everyday

I don’t often turn to French Marxists for wisdom about the world, but a passage in Declan Kiberd’s Ulysses and Us: The Art of Everyday Life in Joyce’s Masterpiece called my attention to something Henri Lefebvre wrote about everydayness that is relevant to a theme introduced in an earlier post by Ari. In his 1947 book Critique of Everyday Life, Lefebvre argues passionately that modern philosophy, art, and politics alike are alienated (a key concept for him) from the everyday. Only Marxism, he thought, provides the proper perspective from which to appreciate the profound significance of everyday life, and how it has deteriorated under the rule of the bourgeoisie. His view of the crisis of everyday life is not the same as Ari’s or Yuval Levin’s, but I think at least one of his observations does not stand or fall on the truth of Lefebvre’s Marxist foundations:

Escape from life or rejection of life, recourse to outmoded or exhausted ways of life, nostalgia for the past or dreams of a superhuman future, these positions are basically identical.... Make the rejection of everyday life — of work, of happiness — a mass phenomenon ... and you end up with the Hitlerian ‘mystique.’

[Image: The Palace of Soviets]

Transhumanism is of course not (yet) “a mass phenomenon,” nor by its own lights does it reject happiness (even if it does reject mere human happiness). But it quite possibly rejects work and, as Ari pointed out, it certainly rejects everyday life. On the other hand, transhumanists in effect accuse their critics of adherence to, or nostalgia for, outmoded ways of life.

Were Lefebvre alive today, he might be content with viewing these charges and countercharges as simply more proof of the decay that defines what he would probably call neoliberalism. I don’t think that point of view is correct, and in any case, my point here is not exactly to make his argumentum ad Hitlerum. But looked at more broadly, I think Lefebvre’s warning about rejecting the everyday has merit, in two respects.

First, even if it would be wrong to say that transhumanism has a Hitlerian mystique, it most certainly has a mystique. Its mystique forms at the intersection of the modern academy and the Web, which is to say, a good deal of transhumanist advocacy is sufficiently long, jargon-ridden, and impenetrable to ensure that only the initiates will follow it. These high barriers to entry mean that it is easy to hold the comforting belief that anyone who disagrees does not actually understand. And when concepts are not in fact all that hard to understand — “the singularity” being a noteworthy instance — there is a fetishistic attention to details that only those on the inside are likely to care much about. Like sectarian movements generally (see Douglas and Wildavsky’s classic, Risk and Culture), transhumanism’s first concern is its internal cohesion; the mystique both encourages and enforces unity. Mystique as such also turns sectarians away from the everyday, in the sense of things that can be experienced in common between those inside and outside the sect. On the outside, “everybody poops,” and that’s that. On the inside, “we” have the more sophisticated understanding that allows us to ask why we should have to poop if we don’t want to.

Second, as Lefebvre suggests, rejecting the everyday is playing with dynamite. Most of the time we make our way decently in the world because of the power of the everyday, not out of high principle, rational decision-making, noble characters, perfect faith, creative brilliance, or any of the other high-toned qualities to which we might aspire. If we had to depend on the best in us, we would be lost. To paraphrase a thought in Koestler’s Darkness at Noon, to reject the everyday is to sail without ballast on a stormy sea. It is not guilt by association to remind ourselves that the totalitarian movements of the twentieth century, sailing without ballast, self-consciously set out to create a new kind of man, with terrible results.

Of course, for all too many human beings, terrible things are very much part of the everyday. We can certainly be grateful, even if not slavishly grateful, that we live in a time and place so different from the norm, the millennia in which the everyday was literally every day. Nothing about taking the everyday seriously requires us to accept every aspect of the everyday that characterizes a given time and place. But if we want to be serious about progress, we have to start from where we are, and why we are here. A clear-eyed view of the everyday, however prosaic, is a better guide to what progress might mean than a starry-eyed view of fantastic futures.

Wednesday, October 21, 2009

Transhumanist Resentment Watch

[NOTE: This post has been edited since it was first published. See the postscript below.]

In a recent post, I discussed the combative rhetoric of transhumanists, and concluded that their resentment is directed not so much against their critics as against their own human nature. Given how widespread this resentment is, I think it would be worthwhile to start chronicling it.

With apologies for the vulgarity, here is the first in the series (via Michael Anissimov at Accelerating Future):
Found on [XXXXX]’s Facebook.... [XXXXX is a member of the board of directors of the] Methuselah Foundation.
Towards whom is this "bold gesture" directed? It doesn't seem to be directed at anyone in particular. Again, [XXXXX] seems literally to be giving the finger to his own human nature. Beyond the strangeness of that self-loathing, the transhumanists bizarrely seem to personify human nature itself in order to antagonize it.

POSTSCRIPT (January 14, 2013): The image above has been altered to block out the face of the man flipping the bird. The original image appeared in a post by Michael Anissimov, which has since been removed, but you can still find it in the Internet Archive. We have now altered the picture to obscure the man's face and edited the post to remove his name, since he says that he was not responsible for the picture (he says his face was photoshopped in) or the words. Although it seems that he endorsed the picture and sentiment at least halfheartedly — he either posted it on Facebook or left it up there for some time after someone else did — we are happy to take it down at his request.
Since one of my comments to this post mentioned the man by name, I've also removed it (our platform offers no option to edit comments). It appears below with the name redacted. This comment originally appeared third in order, in response to the comments by “The Boss” and “citizencyborg,” and was dated October 22, 2009 at 8:25 PM:
@The Boss: Yes, [XXXXX] is, as you say, giving “aging the finger.” But as I noted a while back, giving the finger to aging means giving the finger to something written into our very nature. [XXXXX] is giving the finger not to some abstract enemy called “aging,” but to an aspect of who he is — from his distinguished graying hair to the crow’s feet beside his eyes.
@citizencyborg: I don’t see any mention of “aging-related diseases” in [XXXXX]’s caption, just a gesture directed at the human aging process writ large. This is not a frivolous point. There is, in general, a distinction to be made between (on one hand) wanting longer lives and wanting to prevent premature death, and (on the other hand) wanting to abolish entirely the process of aging unto death. To be sure, the distinction between therapy and enhancement is imperfect. But it does shed light on what it means to live warring against your own nature.

Monday, October 19, 2009

A human on/off switch

There was a fascinating article on CNN last week about an experimental medical technique:
She turns a dial, and the sealed enclosure starts to fill with poison gas — hydrogen sulfide. An ounce could kill dozens of people.

The rat sniffs the air a few times, and within a minute, his naturally twitchy movements are almost still. On a monitor that shows his rate of breathing, the lines look like a steep mountain slope, going down.

At first glance, that looks bad. We need oxygen to live. If you don't get it for several minutes — for example, if you suffer cardiac arrest or a bad gunshot wound — you die. But something else is going on inside this rat. He isn't dead, isn't dying. The reason why, some people think, is the future of emergency medicine.

You see, [Mark] Roth thinks he's figured out the puzzle. "While it's true we need oxygen to live, it's also a toxin," he explains. Scientists are starting to understand that death isn't caused by oxygen deprivation itself, but by a chain of damaging chemical reactions that are triggered by sharply dropping oxygen levels.

The thing is, those reactions require the presence of some oxygen. Hydrogen sulfide takes the place of oxygen, preventing those reactions from taking place. No chain reaction, no cell death. The patient lives.
Okay, so this is more like a form of suspended animation than an on/off switch. As the article notes, metabolic reactions continue, just at an extremely slowed rate. But it's quite similar in effect; in rats, at least, it appears that immersing them in this gas is like turning them off without killing them, and putting them back in regular air seems to revive them as if nothing had happened.

If it can be safely used by humans, this apparent state of suspended animation would be very different from the related techniques that are now available. Barbiturate-induced comas, increasingly used in medicine during the last two decades, only shut down the brain, not the whole body — and their safety is debatable. And cryonic freezing is not currently reversible.

The potential therapeutic and not-so-therapeutic applications of this new technique — again, assuming that it works on and is relatively safe for humans — boggle the mind. The CNN article sticks strictly to the therapeutic applications. It would allow for new ease in surgeries, like open-heart operations, that temporarily disrupt metabolic processes. You can also imagine the potential in emergency situations, to shut someone down until he can be rushed to the best facilities. Or you could put someone into suspended animation while she awaits a donor organ.

The notion of a human on/off switch really reminds me — to put my nerd cards fully on the table here — of an episode of Star Trek: The Next Generation, "The Measure of a Man." In that episode, Data, an android, is put on trial to determine whether he is a sentient being with rights (a person), or property of the Federation (a machine). In order to demonstrate that he is not a person, Commander Riker first removes Data's arm and then, as a closing argument, abruptly walks up behind him and flips a switch on his back that shuts him off. Data falls forward onto the table, rendered a mere hunk of metal (or "bricked," to use the parlance of our times), and Riker proclaims, "Pinocchio is broken; its strings have been cut."

Aside from the obvious ethical problems of the human on/off switch, I wonder on what grounds we can maintain our sense of personhood once we come to understand ourselves as mechanisms rather than living beings.

Wednesday, October 14, 2009

The Myth of Libertarian Enhancement

In the previous post here on Futurisms, my co-blogger Charles T. Rubin argues that one can only have a libertarian stance towards transhumanism “if one believes that all ‘lifestyle’ choices are morally incommensurable, that the height of moral wisdom is ‘do your own thing’ (and for as long as possible).” This is certainly right, but I worry that most transhumanists would in fact happily agree with this statement. They would see it not as a condemnation of their moral disarmament, but a celebration of their moral enlightenment through radical self-determination. Charlie concludes that “[w]hat is really at stake here is not whether some people want to boss others around, but whether technological change is worth thinking about at all.” I’d like to expand on this point — that is, to argue that technological change must be thought about, even and especially by libertarians.

While Charlie was discussing just one particular comment thread, it is worth noting that there is a strong, perhaps even dominant, libertarian strain among transhumanists. As Woody Evans noted in H+ Magazine, “Take it as a given that most supporters of transhumanism trend toward advocating for more personal freedom: keep the government out of our bedrooms and biologies please.” This certainly matches my own observations: try exploring with a transhumanist the wisdom of any possible restriction on enhancement and you are very likely to hear a similar refrain.

Strangely, this discussion-ending response is not characteristic just of transhumanists. Ask someone who is skeptical of — or even opposed to — enhancing himself or herself, and you are likely to hear expressions of tolerance similar to those proffered by participants in a recent study on cognitive enhancement in academia: “I see it more as a lifestyle. You are making this choice to find the easy way out and morally I think that that is someone’s lifestyle choice.” And, “I don’t feel comfortable about the word ‘acceptable’ because I don’t think that I am able to judge someone.... I think it is a matter of your own conscience if it is acceptable or not.”

The “to each his own” argument against governmental restrictions of personal freedom is shaky for several reasons, not the least of which is that government is not the only force that restricts personal freedom. The widespread use of enhancement creates tremendous social pressures to compete and conform; these pressures, too, can be said to restrict personal freedom. One need look only to the history of professional baseball over the last ten years to see a clear example. And beyond the world of competitive sports, the use of cognitive-enhancing drugs like Ritalin for nontherapeutic purposes is soaring among working professionals and among high school and college students (as shown in the study cited above, and as discussed in this sobering article by our New Atlantis colleague Matt Crawford). The specific choices — Should I start doping during the off-season? Should I take this pill to help me study? — may have been made by individuals, but they were influenced by others and their impact was collective. There is a sort of prisoner's dilemma at work here, with decisions made for the individual good having a detrimental effect on the larger whole.

(To be sure, much the same point holds for other technological changes that create social pressures. Take cell phones, for example — which some transhumanists consider a primitive form of enhancement: the advantages gained by early adopters of cell phones created pressures that led the rest of us to get cell phones, too.)

The point that technological change is not just a matter of individual concern is made perfectly clear in the transhumanists' own rhetoric, rife with grand talk of ushering in the next phase of human evolution, doing away with antiquated social constructs, and so forth. They promise not just to remake humanity but to thoroughly remake civilization. And yet, when confronted with questions about how societies ought to decide which technologies are good or bad, they often duck behind appeals to personal choice. The only way to reconcile this seeming contradiction is by recognizing that transhumanists do not value unrestricted individual liberty so much as unrestricted individual power.

Those who worry that government tyranny might rob them of their freedom are right to do so. But they would do well also to consider the other ways freedom can be diminished.

[Photo source: Fly Navy (CC)]

Monday, October 12, 2009

Moral relativism and the future of technology

There are aspects of the arguments of advocates of human re-engineering that, for what it’s worth, I agree with. One is that nanotechnology, or more specifically molecular manufacturing, holds the potential (if it is possible at all) to alter a great many things that we currently take for granted about the shape of human life. It may not yet be clear how exactly we might find ourselves in a world where something like the replicators from Star Trek is possible, but a fair amount of research and development is currently pointed directly or indirectly in that direction, and I would not want to bet against it. I’m not sure whether this belief makes me a technological optimist or a technological pessimist, which is one reason why I don’t find those terms very helpful when we try to think seriously about the future of technology.

A while back I did a phone interview with a reporter from the Miami Herald about the nano-future, and recently I found his story online. The comments that follow the story are worth noting, because they are common responses to the point I tried to make to the reporter — that not all the potential of nanotechnology is for the good, and that some of the things that sound good may not really, on reflection, be good.

At first glance, the criticisms in the comments section sound contradictory. One commenter notes that all technologies have good uses and bad uses, and that since there is nothing new about that, the Herald, as a newspaper, should not bother to feature stories that make this point. But a second commenter notes in effect that since molecular manufacturing could put an end to scarcity, it will have the very good effect of putting an end to all conflict, and the only evil left will be people (like me, apparently) who want to tell other people what they should or should not do. So from the first point of view we should just go ahead with nanotechnology because it doesn’t really change anything, and from the second point of view we should go ahead because it will change (nearly) everything.

The link between these two arguments is moral relativism. The second author speaks in quasi-nonrelativistic terms of a “fundamental right” to life, but he or she seems to mean by that a right not to die or a right to do whatever one wants with one’s life. That is quite a distance from the meaning of those who articulated a natural right to life. What is so attractive about libertarian utopianism except if one believes that all “lifestyle” choices are morally incommensurable, that the height of moral wisdom is “do your own thing” (and for as long as possible)?

On the other hand, the truism that all technology has good and bad uses is only trite if one believes that being able to judge between them is uninteresting — that such judgments are nothing more than matters of subjective opinion. Otherwise one might think it very important indeed to find ways to maximize the good and minimize the bad.

What is really at stake here is not whether some people want to boss others around, but whether technological change is worth thinking about at all. Moral relativism makes it easy not to think about it — to just sit back and let things happen while reserving the right to protest when some arbitrary personal line is crossed. I'm skeptical that disarming our moral judgment is the best way to deal with the challenges of our ever-increasing powers over nature.

Friday, October 9, 2009

The Crisis of Everyday Life

Over at The Speculist, Phil Bowermaster has fired a volley across our bow. His post contains a few misrepresentations of The New Atlantis and our contributors. However, we think our body of work speaks for itself, and so rather than focusing on Mr. Bowermaster's sarcastic remarks, I'd like to comment on the larger substantive point in his post. In covering a talk at the Singularity Summit last weekend, I wrote the following:
[David] Rose says the FDA is regulating health, but he says "everyone in this room is going to hell in a handbasket, not because of one or two genetic diseases," but because we're getting uniformly worse through aging. And that, he says, is what they're trying to stop. Scattered but voracious applause and cheering. It's that same phenomenon again — this weird rally attitude of yeah, you tell 'em! Who is it that they think they're sticking it to? Or what?
Bowermaster responds, "Gosh, I can't imagine," and contends that my question arises from the fact that "the New Atlantis gang ... ha[s] a difficult time even imagining that the positions they routinely take on issues — being manifestly and self-evidently correct — could be seriously opposed by anyone, much less in a vocal and enthusiastic way." He adds that my question appeared to be one of "genuine puzzlement."

In the haste of blogging in real time, I may have failed to make clear that my question wasn't expressing "genuine puzzlement," but was rhetorical. But now, with the leisure to spell out my concerns more fully, I'd like to expand on the point I was trying to make — and thereby to address Mr. Bowermaster's post.

The combative rhetoric of transhumanists
I posed my question — Who is it that they think they're sticking it to? — not just in response to the specific scene I had just described, but because of the pervasive rally-like attitude at the conference. That sense of sticking it to an unnamed opponent was part of the way many presenters spoke. Their statements — however technical, mundane, or uncontroversial — were often phrased as jabs instead of simple declarations. They spoke as if in defiance — but of adversaries who were not named, not present, and may not even have existed. (The worst example of this was in the stage appearances by Eliezer Yudkowsky, as I noted here and here. Official videos of the conference are not yet available, but the point will quickly become evident in any video of his talks you can find online.)

This combative tendency demands examination because it is so typical of transhumanist rhetoric in general. To take just one egregious example, consider this excerpt from a piece in H+ Magazine entitled "The Meaning of Life Lies in Its Suckiness." This piece is more sarcastic and vulgar than most transhumanist writings, but its combativeness and resentment are fairly representative:
[Bill] McKibben will put on his tombstone: “I’m dead. Nyah-nyah-nyah. Have a nice eternal enhanced life, transhumanist suckers.” Ray Kurzeill [sic] will be sitting there with his nanotechnologically enhanced penis and wikipedia brain feeling like a chump. Whose life has meaning now, bitches? That’s right, the dead guy.
The combativeness of transhumanist rhetoric might be more justifiable if it emerged chiefly in arguments with critics dubious of the transhumanist project to remake humanity (or to "save the world," or whatever the preferred rendering). But their combativeness extends far beyond direct responses to their critics. It is rather a fundamental aspect of their stance toward the world.

Take, for instance, the discussion I was blogging about in the first place. A member of the audience asked whether the FDA should revisit its definition of health; the speaker's rally-like attitude (and the audience's corresponding response) could not have been directed at anybody in particular, for the FDA has nothing to do with what either the questioner or the speaker was talking about. Both the question and the answer were detached from reality, but the speaker acted as if the FDA were really shafting the American people, and he nursed the audience's sense of grievance at their perceived loss.

The fault, dear Brutus...
Against whom, then, is their grievance directed? Or — as I suggested in my initial post — against what is it directed? The ultimate target of the unhappy conferencegoers' ire was not the FDA. Nor does the H+ Magazine author I quoted above have much of a case against Bill McKibben. Rather, the grievance of the transhumanists is against human nature and all of its limitations. As my co-blogger Charles T. Rubin wrote of prominent transhumanists Hans Moravec and Ray Kurzweil, they "share a deep resentment of the human body: both the ills of fragile and failing flesh, and the limitations inherent to bodily life, including the inability to fulfill our own bodily desires."

Despite tremendous advances in our health, longevity, and prosperity, man's given nature keeps us in bondage — and the sense of urgency in the effort to slip loose those bonds paradoxically grows as we comprehend ever greater means of doing so.

Transhumanism's combative stance derives from this sense of constant urgency — what Yuval Levin has dubbed "the crisis of everyday life." The main target of the combativeness, then, is man's limited nature; the transhumanists are warring against what they themselves are. Any anger directed at critics like Bill McKibben or the FDA is rather incidental.

The transhumanists' stance might become clearer — or at least more honest — if they acknowledged that their resentment is directed more at their own human nature than at any particular humans. But to do so might imperil their position. For they might realize — if the history of which they are exemplary is any guide — that as their power grows, their resentment at the remaining limits will only deepen, and will increase their hunger for ever more power to chase those limits away.

If their power did allow them to vanquish the last of their limitations — if "man's estate," to borrow Francis Bacon's phrase, were fully relieved — to what purposes would these posthumans then turn their power? What purpose would they find in their existence when the central reason they have now for living was at last fulfilled? Through what struggle would they flourish when their struggle against struggle itself was complete?

Wednesday, October 7, 2009

Make your brain USB-compatible

From XKCD last week:



This comic (aside from the nice dig at Linux) gets right at some of the core difficulties of treating the mind as a purely functional input/output device. Just imagine the huge technical challenge of connecting our nervous system to a relatively simple interface — USB. Not only do you have to treat the brain as a computer: you have to make it run the same protocols that we have on modern computers.

Biofeedback mechanisms might be suitable in theory, but if their history thus far is any guide, there are severe limitations on the bit rate they can achieve. Brains may seem like computers filled with information-dense electrical signals — but we have not yet figured out how to turn those signals into streams of data that are both dense and sufficiently well-defined to interface with a computer. For now, if you want to communicate with a computer, you'd have better results chucking baseballs at a keyboard than relying on any brain-scanning technology.

As for the comic's joke about patching the "software" of the mind, half of the humor in it (the half that's not about Linux) is in the meaninglessness of the proposition. It's commonplace among strong-A.I. enthusiasts to describe the brain as hardware and the mind as software, but which part of the brain is the hardware, and how is the software encoded into it? Suppose you want to change that encoding to "patch" the mind. You could treat the software as encoded in the pattern of neurons and synapses; but then, assuming you know how to change that pattern precisely, what exactly do you change? What is the mapping from a neuron-and-synapse encoding to a high-level feature of the mind?

Perhaps neuroscientists will, in time, discover new compatibilities between minds and machines. But it is striking that our attempts so far to interface brains and computers have had to cope with the mind's stubborn and peculiar patterns and organizations, rather than finding it like the computers we know — divisible into distinct modules, submodules, and sub-submodules that are subject to our direct, well-defined, and predictable control. We have found some success, that is, in attempting to make computers more like minds, but not much in treating minds as computers.

Tuesday, October 6, 2009

"The means to make the man of the future"

The impulse to redesign humanity is not new, and turns up in surprising places once you start paying attention to it. Consider the following passage from a very long speech given in 1891 by Woman’s Christian Temperance Union founder Frances Willard, one of the giants of turn-of-the-century progressivism:

It may be that in some better day the world will see a human being gifted with the best powers of what we are wont to call the "lower orders of creation;" keen sighted and swift of motion as a bird, sharp scented as a greyhound, faithful as a dog and full of wisdom as an elephant. It may be, too, that we shall see a human being who has not only these powers, but is made up of the best physical graces, mental gifts and graciousness of all generations; one who shall gain knowledge, not by the present slow process of acquisition, but instantaneously, through magnetic currents, from the books and brains about him. One who will be such a thinker as Kepler or Kant; such a poet as Shakespeare or Tennyson; such an artist as Da Vinci; such a sculptor as Phidias; such a musician as Beethoven; such a statesman as Gladstone; such a philanthropist as Shaftesbury; such a saint as Guyon. Naturally the unintelligent and the unimaginative will declare this impossible, but everything helps forward the advent of just such a being as that. All arts, inventions, philanthropies, religions, are but tentaculæ put forth, searching for the means to make the man of the future, who shall be what all who have the vision and faculty divine have always prophesied he would yet be — a microcosm, the mirror of the universe. We in our little corner, doing our work well-nigh unnoted by the world at large, are helping by our small increments of power to create this complete human being — the goal of all desire and hope. The coral zoophyte builds not more surely on the unseen reef that yet shall rise in gleaming beauty above the deep sea's level blue than we are building for universal and perfected human nature. Nothing less is in our thought, and nothing else; for by ideals we live, and this ideal has been upon our consciousness from the beginning. The brain is but a stained glass window now, we wish to change it to a crystal pure and brilliant.
The total abstinence [from alcohol] pledge is but one strand in the cable of our organised endeavour, for we have seen that to make man as God would have him be, the student of perfection must study his heredity, must hover like an unseen guardian about his cradle, his desk at school, his happy playground, his thoughtless and endangered youth, his tempted manhood, and must guard, not only against beginnings of ill in his own separate career, but their organised form in the habits, customs and laws of his nation and his world. For "it is easier to prevent than to undo."

This excerpt is Willard at her most utopian, and to that extent does not do justice to the strong bent of practical reform that is the dominant tone of her speech. Yet it is not disconnected from that dominant tone; Willard evidently realized that reform has to aim at something or else it is mere change, and here she adumbrates her ultimate goal.

Willard’s description of her hoped-for human future could serve as a manifesto for contemporary transhumanism, but for a few key distinctions. The first is her sense, explicitly mentioned later in the speech, that the wisdom of science was on a convergent course with the truth of religion, which (in good Spinozistic fashion) she defines as doing good, not doctrine. The stir James Hughes made in transhumanist circles by even approaching the Catholic Church on transhumanism confirms the obvious, which is that for most transhumanists, their project is a substitute for religion.

A second distinction is her willingness to acknowledge not only animal bodily superiorities, but high points of human culture as well — which count as little or nothing for those in quest of the Singularity. What is Da Vinci, in comparison with some imagined super-intelligence? A dabbler and dauber, a constructor of termite mounds.

Finally, Willard looks to a perfected humanity. Her notion of that perfection is ultimately hard to fathom (mirror of the universe?) and perhaps somewhat mystical. But it is a united perfected humanity, not the for-the-most-part libertarian transhumanist visions of diverse forms of do-your-own-thing posthumanity. For Willard, we are all in this together.

On their merits, Willard’s speculations may be superior to the transhumanist norm, but at the very least they are rhetorically superior precisely for the differences just noted: she has a vision of human progress not cut off from all previous human history, but flowing from it. In context, that would be something like a common touch. She apparently thought nothing of laying out her vision to the second biennial convention of the World Woman’s Christian Temperance Union, and of doing so in a section of her speech devoted to abstinence, the organization’s key issue. We don’t know, of course, how her audience, representatives of a mass movement, reacted to this passage. We do know that transhumanists talk mostly to... other transhumanists. But perhaps in principle their separation from the human mass should not be a problem; after all, if they are correct about the future, they’ll be all right, Jack.

Monday, October 5, 2009

The Revolution Will Be PowerPointed

A panel discussion during the 2009 Singularity Summit in New York City.

The 2009 Singularity Summit wrapped up in New York City yesterday. The whole thing was something of a blur — two days of back-to-back talks, milling about with conferencegoers, and frenzied posting.

As you can see here, the attendees were predominantly male, and almost exclusively nerds of various flavors: long-haired, disheveled programmers; smoothly dressed, New-Age types looking for transcendence but not immune to the need to constantly check their iPhones; jargon-slinging, bespectacled academics; and gel-haired, polo-shirt-wearing, young social entrepreneurs. (Pop Sci shows a similar sampling.) Basically, the conference felt like being back in my college computer science department.

Everyone I met was quite inquisitive and friendly. There was an excitement in the air, a sense of being in the presence of great people working together towards a great cause (about which, more in a moment).

The content of the conference itself, however, was rather underwhelming. Most of the talks were highly technical but too short and delivered too rapidly to convey much substance in a way that would last. Only a few of the speakers gave presentations both insightful and clear enough to be truly informative or persuasive. (For my money, the best talks were those by David Chalmers and Peter Thiel, and the discussion with Stephen Wolfram.)

The conference also lacked an overarching message. Certainly a diversity of opinion and interests in such a conference is inevitable, even good. But the problem was that the presenters treated it like a scientific or technical conference (indeed, some of the presentations seemed to have been written for technical conferences, with only a coda tacked on to justify their relevance to this one) when in fact the Singularity, transhumanism, and the related subjects that attracted the audience this weekend are not, strictly speaking, scientific subjects.

To put it another way, while its means may be technical and scientific, the ends of Singularitarianism, as disparate and even incoherent as they may be, are rather like those of a spiritual movement. I kept waiting for the presenters to make grand statements about the moral imperatives of the movement and about the awe-inspiring new things we will do and be. There were a few, but those larger ideas were mostly taken for granted. I thought, in particular, that we might get some of these first principles from Anna Salamon, who gave the opening and closing talks, or from Ray Kurzweil, who presides as the de facto spiritual leader (and head coach) of the movement.

But for a movement that aspires to such revolutionary things, the summit was in fact rather conventional: dry talks, PowerPoint slides, and lectures in rapid succession. (I should note that the organizers kept the whole thing impeccably on schedule, except for allowing Kurzweil to go well over his time at the end of the first day.) It seemed that many of the attendees were most excited during the breaks between presentations. They huddled around the superstar presenters. I heard more than a few conferencegoers ask each other, "Have you seen Ray? Where is he? I want to talk to him." Many were excited just to be in the presence of fellow-travelers (since, as some of them told me, many of the attendees only knew of the Singularitarian movement through the Internet).

And this was where the organizers oddly seemed both to understand why people were really there and to fail to structure the event to reflect that. The proceedings rang of celebrity worship. The M.C. revved up the excitement before the big-name speakers. The final panel discussion was, unfortunately, about nothing substantive, just a sort of "behind the scenes with the boys of the Singularity," an interview focusing on personalities instead of ideas. And Kurzweil didn't deign to give a coherent presentation. For the first day, he literally came up on stage with a pad of paper and offered his ad hoc thoughts and pronouncements on the previous speakers. On the second day, he gave what one Twitterer described as his "stump speech" — a laundry list of responses to critics, mostly taken verbatim from his book on the Singularity. His talks just seemed to serve the purpose of assuring the crowd that the coach was still in control of the game and there was no need to worry (as another blogger has suggested).

But my impression was that there wasn't nearly enough discussion and interaction to really suit most conferencegoers (myself included). And I heard attendees again and again expressing their wish to interact more with the presenters, and many expressing frustration at not having been able to ask questions.

I don't really fault the organizers for this. Putting together a large conference is a demanding task, and this one was impressively smooth in its operation. Perhaps on some level it made sense to stick to the tried-and-true format of a professional, academic, or scientific conference. But that's the problem: this is not a business, it is not an academic discipline, and it is not a science. It is a movement, one with goals it seeks to accomplish. I have the sense that the attendees were interested less in simply hearing facts — many of which are better conveyed in print and online anyway — than in discussing what it is they are all engaged in. Perhaps in the future, these conferences might be run more like seminars instead of lectures, or might find other ways of incorporating give-and-take conversations.

Many of the conferencegoers want humanity to become more virtual, with our frail bodies supplanted and our minds uploaded. To apply that logic, perhaps future conferences will move wholly online to avoid the logistical constraints of meeting in the physical world. But for this year, at least, the attendees seemed largely to take satisfaction in physicality: in encountering their leaders, in being in the presence of others who agree with them, and just in chatting over coffee with the fellow members of their movement.

Scenes from the Singularity Summit

Here are a few images from this past weekend's Singularity Summit, now that it has drawn to a close.

Here's Aubrey de Grey autographing a book for a conferencegoer:




And de Grey wasn't the only fellow sporting such ample facial hair; here's an attendee:



These beard photos might make you wonder about the demographics of the conference. Judge for yourself:



Not that there were no women's faces to be seen. The presentation by Juergen Schmidhuber gave us a big one:



There was only one woman presenter at the conference, Anna Salamon — although she got to speak both first and last. Here she is chatting with a conferencegoer.



And next, a shot of a woman attendee: Ilana Pregen, asking the question that Brad Templeton so brusquely dismissed.



Stay tuned for more conference wrap-up today.

Sunday, October 4, 2009

"How much it matters to know what matters"

Anna Salamon, the first speaker at the 2009 Singularity Summit, is also the last: "How much it matters to know what matters: A back-of-the-envelope calculation." (Abstract and bio here.)

Salamon starts off by highlighting the apparently stupid reasons people do what they do — habit, culture, etc. — and says they could achieve their goals much more efficiently with a little strategic thinking. Humans tend to act from roles, she says, not goals. For example, people spend four years in medical school because they find the role of doctor important, rather than doing basic comparative research on salaries. Apparently roles cannot be goals. (Hmm, I wonder why Salamon does things like speak at conferences? Purely because it was the course of action that maximized her finances?)

Anna Salamon at the 2009 Singularity Summit

Salamon continues to lament the way people don't think strategically when making decisions. She's extolling the virtues of writing down estimates and using them to set goals. This is a strangely long wind-up, going on and on about why making back-of-the-envelope calculations is good. (Does Salamon think she invented utilitarianism?)

Okay, now she's finally going for it: Her back-of-the-envelope calculations of the aggregate value and risk from A.I research. The risk from A.I., she says, is 7 percent. I guess she means a 7 percent chance of the world ending. The number of lives affected: about 7 billion. She breezes through more calculations, and manages to come up with some dollar amount of increased value through life. (Such estimates always have a touch of the absurd about them, no matter the context; here they seem especially silly.)
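For what it's worth, the arithmetic behind such an estimate is trivial to reproduce. Here is a minimal sketch of the general form of the calculation; the inputs are illustrative placeholders of my own choosing, not Salamon's actual figures:

```python
# A sketch of a back-of-the-envelope expected-value calculation of the
# kind Salamon described. All inputs are illustrative placeholders.

p_risk = 0.07          # her claimed probability of AI-driven catastrophe
lives_affected = 7e9   # roughly the world's population
value_per_life = 5e6   # a common (and contestable) dollar figure per life

expected_loss = p_risk * lives_affected * value_per_life
print(f"Expected loss: ${expected_loss:.2e}")
```

Of course, the whole dispute is over whether numbers like `p_risk` mean anything at all, not over the multiplication.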

She breezes through the rest of the talk, too. Her conclusion is that we should think "damn hard" about the benefits and risks of the Singularity. And we should fund A.I. research and the Singularity Institute. A very underwhelming end to the summit, and quite an anticlimax after the previous panel.

And that's it for the conference. I'll have a final wrap-up later tonight (or possibly tomorrow), and will be going back and inserting a few more pictures into some of the earlier posts. Check back soon, and stay tuned: this coverage marks just the beginning of our discussions here on "Futurisms."

On persuasion and saving the world

The penultimate item on the agenda of the 2009 Singularity Summit is a panel discussion, on no particular topic, involving Aubrey de Grey, Eliezer Yudkowsky, and Peter Thiel. The moderator is Michael Vassar of the Singularity Institute. And it is in that order, from left to right, that the four men appear in this picture:
From left: Aubrey de Grey, Eliezer Yudkowsky, Peter Thiel, and Michael Vassar.
Vassar starts with a question about when each of the panelists realized they wanted to change the world. Thiel says he knew when he was young he wanted to be an entrepreneur, and once he found out about the Singularity, it was just natural to get on board with it and "save the world."

Yudkowsky says, "Once I realized there was a problem, it never occurred to me not to save the world," with a shrug and arms in the air. (Very scattered laughter and applause. The audience seems uncomfortable with him. I am, anyway. As I noted earlier, everything the guy says seems to drip with condescension, even in this room filled with people overwhelmingly on his side. He keeps having to invent straw men to put down as he talks.)

De Grey says he knows exactly when he realized he wanted to make a difference. It was when he was young and wanted to be a great pianist, but then realized that he'd spend all this time practicing — and then what? He'd just be another pianist, and there are tons of those. So he decided he wanted to change the world. Later he discovered that no one was looking at stopping aging, and he was horrified, so he decided to do that.

The moderator asks what each man would be working on if not the Singularity. De Grey says other existential risks besides aging. Yudkowsky says studying human rationality. (If only he would. A Twitterer seems to share my sentiments.) But he says it's not about doing what you're good at or want to do, but what you need to do. Thiel would be studying competition. Competition can be extremely good, he says, but can go way too far, and crush people. He says it was better for him as a youth that computers became better than humans at chess, because he realized he shouldn't be stressing himself so much over being a super-achieving chess player.

They get into talking about achievement a bit more later, and Thiel says he thinks it's really important for people to have ways to persevere that aren't necessarily about public success.

De Grey highlights the importance of "embarrassing people" to make them realize how wrong they are. We're all aware of some of the things people say in defense of aging, he says. Thiel says his own personal bias is that that's not a good approach, because there are so many different ways of looking at things, people have so many different cultural and value systems, and there may be deep-seated reasons they believe what they do. He says he likes to try hard to explain his points to people.

The rest of the discussion is not especially noteworthy. A bit of celebrity worship and ego stroking. Peter Thiel easily takes the cake for charm on this stage.

Rationalism, risk, and the purpose of politics

[Continuing coverage of the 2009 Singularity Summit in New York City.]
Eliezer Yudkowsky at the 2009 Singularity Summit in New York City

Eliezer Yudkowsky, a founder of the Singularity Institute (organizer of this conference), is up next with his talk, "Cognitive Biases and Giant Risks." (Abstract and bio.)
He starts off by talking about how stupid people are. Or, more specifically, how irrational they are. Yudkowsky runs through lots of common logical fallacies. He highlights the "Conjunction Fallacy," in which people find a story more plausible when it includes more details, even though each added detail in fact makes the story less probable. I find this a ridiculous example. Plausible does not mean probable; people are simply more willing to believe something happened when they are told there are reasons it happened, because they understand that effects have causes. That's very rational. (The Wikipedia explanation, linked above, differs from Yudkowsky's and makes a lot more sense.)
Yudkowsky is running through more and more of these examples. (Putting aside the content of his talk for a moment, he comes across as unnecessarily condescending. Something I've seen a bit of here — the "yeah, take that!" attitude — but he's got it much more than anyone else.)

He's bringing it back now to risk analysis. People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.

This is particularly important with the risk of extinction, because it's subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it's hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it's hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).
[One of Yudkowsky's slides.]


Yudkowsky concludes by asking, why are we as a nation spending millions on football when we're spending so little on all different sorts of existential threats? We are, he concludes, crazy.

That seems at first to be an important point: We don't plan on a large scale nearly as well or as rationally as we might. But just off the top of my head, Yudkowsky's approach raises three problems. First, we do not all agree on what existential threats are; that is what politics and persuasion are for; there is no set of problems that everyone thinks we should spend money on; scientists and technocrats cannot answer these questions for us since they inherently involve values that are beyond the reach of mere rationality. Second, Yudkowsky's depiction of humans, and of human society, as irrational and stupid is far too simplistic. And third, what's so wrong with spending money on football? If we spent all our money on forestalling existential threats, we would lose sight of life itself, and of what we live for.

Thus ends his talk. The moderator notes that video of all the talks will be available online after the conference; we'll post links when they're up.

Methuselah speaks

[Continuing coverage of the 2009 Singularity Summit in New York City.]
Aubrey de Grey

The conference's last batch of talks is now underway, leading off with one of the Singularity movement's most colorful characters, Aubrey de Grey, whose talk is titled "The Singularity and the Methuselarity: Similarities and Differences." (Abstract and bio.) De Grey has a stuffy British accent, long hair, and a beard down to his mid-chest. (I imagine this is meant to point to longevity in some way or another, though how precisely is difficult to discern. Is he showcasing how long he's been alive? Or maybe trying to get us thinking about longevity by looking older than his forty-six years?)

De Grey is running through the standard gamut of life-extension medical technology. Gerontology, he says, is becoming an increasingly difficult and pointless pursuit as it attempts to treat the inevitable damage of old age. But if we reverse the damage, he says, we might be able to extend our remaining life expectancy at a rate approaching the passage of time itself.

He goes through more math than is really necessary for us to get the concept that we can increase the rate at which we're slowing aging. He mentions the concept of the Longevity Escape Velocity (LEV), which is the rate at which rejuvenation therapies must improve in order to stay one step ahead of aging. De Grey offers a somewhat-awkward neologism: the point at which we reach LEV, he says, is the "Methuselarity." This is when we're not quite immortal but we're battling aging fast enough to be effectively immortal. (I have in mind an image of a cartoon character sprinting across a river and laying down the planks of a bridge in front of him as he goes.)

De Grey claims that we double our therapy rate every forty-two years, and that this pace, if sustained, is more than good enough to reach LEV. Also, he notes, LEV decreases as our rejuvenation powers get better and better. He's building a case here for maintenance technologies, like the massive cocktails of supplements and drugs that Kurzweil takes in hopes of slowing his aging.*

There are some interesting implications of his calculations. One of them, he notes, is that once we increase average longevity past the current maximum (about 120 years), the hardest part is over (since LEV will steadily decrease). This means that, he says, the first thousand-year-old will probably be not much more than twenty years older than the first 150-year-old. And the first million-year-old will probably only be a couple years older than the first thousand-year-old.
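De Grey's escape-velocity arithmetic is easy to play with numerically. Here is a minimal sketch (my own toy model with invented parameters, not de Grey's actual math) of the cliff he describes: once therapies improve fast enough, a modest difference in birth year separates a normal lifespan from an effectively unbounded one.

```python
# Toy model of "longevity escape velocity." This is my own illustration
# with made-up parameters, NOT de Grey's model: damage accrues at one unit
# per year, offset by therapies whose repair power doubles every 42 years,
# and death comes when total damage crosses a fixed threshold.

CAP = 10_000  # treat anyone alive this long as having "escaped" aging

def lifespan(birth_year, threshold=80.0, base_repair=0.01, doubling=42):
    """Years lived by someone born in `birth_year` under the toy model."""
    damage, age = 0.0, 0
    while damage < threshold and age < CAP:
        # Therapy strength depends on the calendar year, not the person's age.
        repair = min(1.0, base_repair * 2 ** ((birth_year + age) / doubling))
        damage += 1.0 - repair
        age += 1
    return age

# Cohorts born a couple of centuries apart fall on opposite sides of a cliff:
# the earlier one dies at a recognizably human age, while for the later one
# repair reaches full strength before the damage threshold is ever crossed.
print(lifespan(0))    # finite: dies at a roughly familiar age
print(lifespan(200))  # hits CAP: effectively unbounded lifespan
```

In this parameterization the later cohort simply never dies in the model, which is the discontinuity behind the claim that the first thousand-year-old will be scarcely older than the first 150-year-old.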

De Grey concludes by pointing out a tension between his project and the goals of some of the others in the room: He claims that after the Methuselarity, there will be no need to be uploaded. "Squishy stuff will be fine." He notes, however, that this may significantly increase our risk aversion.

A questioner asks about his personal stake in the Singularity. De Grey says he's not selfish because all of this travelling takes a toll on his health and longevity, and his work benefits others much more than himself (presumably, in an aggregate utilitarian sense that their combined increase in longevity outweighs his).

De Grey really breezed through that talk. The audience and the Twittersphere seemed to love it, though.

[One of de Grey's slides.]


[* As originally written, this post stated that Aubrey de Grey is on a diet-supplement regime similar to the one Ray Kurzweil is on. Upon examination, we have no reason to think that is true; in fact, this interview seems to suggest that it is not. We have amended the text and apologize for the confusion. -ed.]

Investing in the Singularity?

[Continuing coverage of the 2009 Singularity Summit in New York City.]
The last session before the final break of the conference is a venture-capital panel, moderated by CNBC's Robert Pisani and including Peter Thiel, David Rose, and Mark Gorenberg.
Thiel mentions that many companies take a very long time to become profitable. He says that the first five or six investors in FedEx lost money, but it was the seventh who made a lot. So, he says, he likes to invest in companies that expect to lose money for a long time. They tend to be undervalued.

[From left: Peter Thiel, Mark Gorenberg, David S. Rose, and moderator Bob Pisani]
The moderator asks how venture capitalists deal with the Singularity in making their decisions. One of the panelists responds that they're all bullish about technology, echoing Thiel: if technology does not advance, they're all screwed. But he seems to be saying, in effect, that they keep it in mind but that it doesn't really affect investing; he doesn't look farther out than ten years. Thiel says he does think that there are some impacts — among other things, it's a good time to invest in biotech. ("Yes!" says the woman next to me, in a duh voice.)

A questioner asks about why none of the panelists have mentioned investing in A.I. The guy has a very annoyed tone, as he did when he asked a question in Thiel's talk. Thiel doesn't seem enthused:

Peter Thiel

But another panelist says yes, good, let's invest more in high-tech companies! Rapturous applause.

Peter Thiel on the Singularity and economic growth

[Continuing coverage of the 2009 Singularity Summit in New York City.]
Peter Thiel is a billionaire, known for cofounding PayPal and for his early involvement in Facebook. He also may be the largest benefactor of the Singularity Summit and longevity-related research. His talk today is on "Macroeconomics and Singularity." (Abstract and bio.)
Thiel begins by outlining common concerns about the Singularity, and then asks the members of this friendly audience to raise their hands to indicate which they are worried about:
1. Robots kill humans (the Skynet scenario). Maybe 10% raise their hands.
2. Runaway biotech scenario. 30% raise hands.
3. The "gray goo" scenario. 5% raise hands.
4. War in the Middle East, augmented by new technology. 20% raise hands.
5. Totalitarian state using technology to oppress people. 15% raise hands.
6. Global warming. 10% raise hands. (Interesting divergence again between transhumanism and environmentalism.)
7. Singularity takes too long to happen. 30% raise hands — and there is much laughter and applause.
Thiel says that, although it is rarely talked about, perhaps the most dangerous scenario is that the Singularity takes too long to happen. He notes that several decades ago, people expected American real wages to skyrocket and the amount of time working to decrease. Americans were supposed to be rich and bored. (Indeed, Thiel doesn't mention it, but the very first issue of The Public Interest, back in 1965, included essays that worried about this precise concern, under the heading "The Great Automation Question.") But it didn't happen — real wages have stayed the same since 1973 and Americans work many more hours per year than they used to.

Thiel says we should understand the recent economic problems not as a housing crisis or a credit crisis but as a technology crisis. All forms of credit involve claims on the future. Credit works, he says, if you have a background of growth: if everything grows every year, you won't have a credit crisis. A credit crisis means that claims on the future cannot be met.

He says that if we want to keep society stable, we have to keep growing; otherwise we cannot make good on all the future growth we have already borrowed against. Global stability, he says, depends on a "Good Singularity."

In essence, we have to keep growing because we've already bet on the promise that we'll grow. (I tried this argument in a poker game once for why a pair of threes should trump a flush — I already allocated my winnings for this game to pay next month's rent! — but it didn't take.)
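Thiel's point that credit embeds a bet on future growth can be put in back-of-envelope terms. A minimal sketch, with invented rates that are mine and not Thiel's: if lenders price claims assuming one growth rate while the real economy delivers a lower one, the ratio of promises to output compounds away from 1.

```python
# Back-of-envelope illustration of credit as a claim on future growth.
# The growth rates here are invented for illustration, not Thiel's figures.

def claims_vs_output(years, g_assumed=0.04, g_actual=0.01):
    """Ratio of credit claims (priced at g_assumed) to actual output."""
    claims = (1 + g_assumed) ** years
    output = (1 + g_actual) ** years
    return claims / output  # > 1 means promises exceed the real economy

# The mismatch compounds: modest over a decade, unpayable over a generation.
for t in (10, 30):
    print(t, round(claims_vs_output(t), 2))
```

On Thiel's telling, a credit crisis is the moment this ratio can no longer be rolled forward, and a "Good Singularity" is the scenario in which actual growth keeps pace with what the claims assume.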

Thiel wraps up only halfway into his forty-minute slot. He is an engaging speaker with a fascinating thesis. The questioners are lining up quickly — far more than for any other speaker so far, including Kurzweil.

In response to the first question, about the current recession, Thiel predicts there will be no more bubbles in the next twenty years: either the economy will boom continuously or stay bust, but people are too aware now, and the cycle pattern has been broken. The next questioner asks about regulation and government involvement: should all this innovation happen in the private sector, or should the government fund it? Thiel says that the government isn't anywhere near focused enough on science and technology right now, and he doesn't think it has any role to play in innovation.
Peter Thiel
Another questioner asks about Francis Fukuyama's book, Our Posthuman Future, in which he argues that once we create superhumans, there will be a superhuman/human divide. (Fukuyama has also called transhumanism one of the greatest threats to the welfare of humanity.) Thiel says it's implausible — technology filters down, just like cell phones. He says that it's a non-argument and that Fukuyama is hysterical, to rapturous applause from the audience.

After standing in line, holding my laptop in one hand and blogging with the other, I step up and ask Thiel about the limits of his projection: if we're constantly leveraging against the future, what happens when growth reaches its limits? Will we hit some sort of catastrophic collapse? He says that we may reach some point in the future where we have, basically, a repeat of the last two years, when we couldn't meet growth and had another collapse. So are there no limits to growth, I ask? He says that if we hit other road bumps we'll just have to deal with them then. I try again, but the audience becomes restless and Thiel essentially repeats his point, so I sit down.

What I should have asked was: Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable, whether it happens now or later?

Remembering, forgetting, and improving our minds

[Continuing coverage of the 2009 Singularity Summit in New York City.]
Gary Marcus, an NYU psychologist, is underway with the first of the afternoon talks, "The Fallibility and Improvability of the Human Mind." (Abstract and bio.)

Marcus starts off discussing natural language and the common argument that it is highly imperfect. Many linguists, such as Noam Chomsky, he says, have argued the opposite: that natural language is very close to an ideal system — that is, one that a set of super-engineers would have designed had they built it from scratch. Marcus says he doesn't want to argue that things that look like bugs in humans are necessarily features, but it's something we should keep in mind.

Next Marcus moves on to aspects of "the human system" that are problematic (he mentions the spinal column), and asks about the limitations of the human mind. He starts with basic things, like how easily we forget where we left our keys. Computers, he notes, store all memory in specific locations; our brains don't. Minds are also susceptible to irrationality, like "framing effects" (where the same issue can yield very different opinions depending on how it's described).

Now he's launching into just the sort of systematic description of the human mind that he seemed to warn us against earlier: he says this last problem is due to something like "garbage in, garbage out" — that is, we remember best the last thing we have heard, and that's why we are susceptible to framing. Marcus doesn't seem to have considered his own advice about whether this apparent "bug" might actually be a "feature."

Marcus concludes by saying that the current state of human biology is an accident, and there is room for improvement, "if we dare." At least as far as this talk goes, that claim is short on evidence and long on opinion.

During the questions, an audience member asks about the case of those rare individuals who seem to remember everything. Marcus says this is more a disorder in which people obsess about recording and documenting their lives, which enhances their memory. He wrote a Wired article earlier this year about Jill Price, the most famous recent non-forgetter. Marcus says that while the media has focused on the sad parts of Price's story — she cannot forget bad things — he knows of another case where the non-forgetter is a DJ who seems quite happy. (More data points, please!)

Robo-cars and energy independence

[Continuing coverage of the 2009 Singularity Summit in New York City.]

The last talk of the morning is from Brad Templeton of the Electronic Frontier Foundation on "The Finger of AI: Automated Electrical Vehicles and Oil Independence." (Abstract and bio.)

He starts off bemoaning the horrors of human driving, mostly for reasons of safety. Then he complains that on the way to the conference today he heard workers under the sidewalk installing honest-to-goodness nineteenth-century transportation technology! Subways! This is what governments today are spending their money on! The future, he says, is in robot-driven or autonomous cars. (Yeah, replacing mass transit with cars for everyone will work great here in Manhattan.)
Brad Templeton of the Electronic Frontier Foundation
He's going over the DARPA autonomous vehicle project now — showing an extended clip of a documentary video. Really? I think everyone here already gets the idea. (I have very conflicted feelings about autonomous cars, incidentally. I have a few friends who participated in one of the DARPA contests, and the technical challenge is amazing and must have been really fun to work on. But man, do I love driving a car myself, and that's something now pretty thoroughly integrated into the modern American psyche. It may be interesting to watch how the A.I. and environmental movements converge or diverge on this topic; indeed, the tension is already evident from Templeton's previous comment.)

Now Templeton is touting autonomous vehicles as a political necessity because they will eliminate our dependence on foreign oil. If robots are driving, we don't need to own cars ourselves anymore, and cars can become specialized and lightweight. We press a button on our cell phones and near-instantly a robot taxi pulls up. He says this will bring about drastic increases in efficiency — so much so that it will beat even mass transit.

Templeton is talking about potential problems with this vision. He notes, "People are very scared, for some reason, of being killed by robots." That may be the single best quote of the conference so far. But he also noted earlier that robots will be better drivers because they don't get drunk. Oh really?

A questioner asks what will happen to the car insurance industry when robots are driving and crashes disappear. Templeton says, "Yeah, f*** 'em."
Brad Templeton takes questions at the 2009 Singularity Summit in New York City
Another questioner asks how a society increasingly dependent on robotics will be affected in terms of its perception and concentration capabilities. She seemed serious and inquisitive, but Templeton sort of starts badgering her, thinking she's disguising a critique as a question (she says she was just asking). His answer is just that he's surprised to hear that question in this town, and that he thinks of these innovations as advancements, not "bugs." At least one Twitterer agrees that he should have handled it differently. [UPDATE: A picture of the questioner asking Templeton.]

I'm skeptical of Templeton's plan, but I must admit that his was one of the more entertaining and engaging talks of the conference. He speaks in a rapid-fire way, yet everyone seems to follow him completely. It's like he's trying to sell us a car...

And that's a wrap on the morning talks.