Futurisms: Critiquing the project to reengineer humanity

Wednesday, June 23, 2010

Humanity’s Last Breath

In Ray Kurzweil’s 2005 tome The Singularity Is Near, he has a section rebutting what he calls “the criticism from holism” — the idea that “machines are organized as rigidly structured hierarchies of modules, whereas biology is based on holistically organized elements in which every element affects every other.” His response is that “It’s true that biological design represents a profound set of principles ... [but] there is nothing that restricts nonbiological systems from harnessing the emergent properties of the patterns found in the biological world.”

For the sake of argument, let’s suppose that Kurzweil is correct in claiming that all of the phenomena of the human being can be replicated on machines. Let’s instead consider a different proposition: that the transhumanist understanding of humans is by its nature shallow and incomplete — in particular, that its methodology blinds its adherents to aspects of human nature apparent only when the human being is considered as a whole, and in relation to society, culture, and environment. If so, then transhumanists are not able to recognize many of the defining characteristics of that “pattern” known as the human being, and so by their approach won’t be able to fully replicate and modify us — even if such a feat is in principle possible.

Kurzweil’s description of the replacement of the human circulatory and respiratory systems perfectly exemplifies this myopic methodology. Kurzweil notes what impressive “machines” the heart and lungs are but highlights their vulnerability to failure, and argues that we can replace them with machines that perform the same functions but with much greater efficiency and reliability. Soon a runner might only need to take a single breath to sprint a mile, and
Eventually... there will be no reason to continue with the complications of actual breathing and the burdensome requirement of breathable air everywhere we go. If we find breathing itself pleasurable, we can develop virtual ways of having this sensual experience.

This argument gets to the heart (a phrase that may lose its meaning if this scheme is carried out) of the transhumanist approach to the human being as a sort of primitive production economy just waiting for its own Henry Ford to break it into processes fit for assembly lines. At first blush (another phrase that draws its meaning from human respiration and circulation) the approach seems sensible enough, particularly in a case like this: breathing is simply a bodily function for providing oxygen for respiration, with the apparent epiphenomenon of a pleasurable sensation. Why not separate the two, maximizing both by making the respiratory function more efficient, and the respiratory sensation more pure and not dependent on the function?

But since Kurzweil here at least implicitly claims to be interested in replicating and improving all of the “patterns” of human existence, his scheme for replicating breathing should capture all of its goods before it sets about improving them. So let’s take a look at how his ostensibly complete account of breathing stacks up against other commonly available accounts.

Just to name a few:
  • A quick look at the scientific literature shows that breathing is not simply a respiratory process but, as a function of the autonomic nervous system, is integrally connected to other bodily processes. For example, as yoga instructors have long known, proper breathing is strongly correlated with overall physical wellbeing: labored breathing can contribute to, and breathing therapy can alleviate, stress and stress-related diseases such as hypertension.

  • In a New Atlantis essay from last year, Alan Rubenstein notes that “The activity of breathing demonstrates very nicely how action on the world can be initiated by an organism either deliberately, as in conscious breathing (think yoga, or simply ‘take a deep breath’) or ‘unconscious’ breathing (think breathing while we sleep or, in fact, most of the time that we are awake and not paying attention).”

    Further, he writes, “Breathing is an activity of the whole organism, an action taken by the organism, toward the world, and spurred by the organism’s felt need. The body of an animal needs what the world has to give and works constantly in its own interests to obtain it.”

    Rubenstein suggests that the absence of an organism’s impulse to breathe, its drive to continue its existence through a basic engagement with its environment, ought to be considered alongside the absence of heartbeat, brain activity, and awareness as one of the basic markers of death.

  • For Alexi Murdoch and Radiohead, to remember to breathe is to remember to be grounded in the world, to maintain sense and clarity in the face of confusion, alienation, and suffering. For R.E.M., to stop breathing is to surrender to these forces.

  • For Laika, breathing signifies a connection to wind and the seasons, the breath of nature.

  • For The Prodigy, Frou Frou, and The Police, to feel the breath of another is to have one’s being wrapped up in theirs. For Telepopmusik, to breathe is to be grounded in the world or taken out of it through another.

  • For The Corrs (among many others), to be in awe is to be breathless.

  • For Margaret Atwood, to love and be loved, to live for another, is to wish “to be the air that inhabits you for a moment only...to be that unnoticed & that necessary.”

  • For Roger Ebert, the feelings we have towards other human beings — as equal or lesser beings — are something we breathe.

  • For geography professor Yi-Fu Tuan, in Space and Place: The Perspective of Experience, “The real is the familiar daily round, unobtrusive like breathing.”

  • For Lydia Peelle, the Reasons for and Advantages of Breathing include a rootedness in existence that allows us the possibility of catching “a glimpse of the infinite.”

  • For Walker Percy, breathing is the first force of gravity that grounds a person in his own existence when he attempts to fly away from it entirely through scientific detachment: “I stood outside of the universe and sought to understand it.... The only difficulty was that though the universe had been disposed of, I myself was left over. There I lay in my hotel room with my search over yet still obliged to draw one breath and then the next.”
Just to name a few.

One may dismiss some of these understandings of breathing as unreal or unimportant. But if any of these aspects are deemed integral to our experience, it must be noted that none will survive the transhumanist decomposition of the human in general and breathing in particular into function and sensation. Just in the attempt to isolate the respiratory function of breathing, the place of breathing within the whole human body — its autonomic connections to other bodily functions — will make the task of decomposition far more practically difficult than its proponents suggest. But that’s only part of the picture.

In the basic act of breathing, there is not simply a feeling of pleasure and a co-incidental act of sustenance, but a feeling of pleasure as an act of sustenance. The sensation of rhythmic breathing during a long jog, or gasping for breath after surfacing from the bottom of a river, is not simply a feeling of pleasure as pleasure, like eating a sweet dessert, but the feeling that comes from the being’s act of sustaining its own life. No matter how accurate a virtual simulation of breathing, the sensation when divorced from function can never be the full phenomenon, the phenomenon of breathing as the act of a being working for its existence from the surrounding world. None of the other aspects of breathing — its connection to love, to spirit, to nature, to the experience of being — could survive either.

Transhumanists find the relationships between the various components of human existence quixotic, and best ignored. It’s easy to pick us apart, and so, they assume, it must be easy to put us back together. Even when it comes to a feature of our existence as basic as breathing, they cannot grasp that there might be some purposeful relationship worth preserving between what it is, what it is like, and what it is for. Transhumanists may succeed in making us into some new being, but it will be one bereft of all the everyday depths of experience to which they are now so blind.

[Image credit: "breathe" by deviantart user sibayak.]

Tuesday, June 22, 2010

Futurisms and ideas of goodness and human excellence

In a recent post over on his Pop Transhumanism blog, Kyle Munkittrick makes four points against what we do here at Futurisms. A few quick responses:

1) The ideas of goodness of the sort we profess to be interested in change over time. This point is undeniable but trivial, unless one adheres dogmatically to the historicism upon which Mr. Munkittrick's chosen areas of study (feminism, science studies, and critical theory) are largely founded.

2) Ideas of goodness have tended to focus on goodness in relation to intelligent, rational adults, and transhumanism merely extends the boundary conditions for these traits, already an ongoing historical tendency. As a claim about the history of moral ideas the first part of this assertion is a simplification, but the characterization of transhumanism in relation to that simplification is, as far as I am concerned, hardly controversial. That is to say, transhumanism reifies some simplified moral ideas. Congratulations!

3) We could debate what is good about Audrey Hepburn. Mr. Munkittrick is writing in response to this post of mine showing a picture of Audrey Hepburn, and the lively comment thread it provoked. About Ms. Hepburn, he writes: “She was a fantastic human being and remains iconic, but why? Is it because she is beautiful? Smart? Kind? A humanitarian? Because she was a great actress? Her fashion sense? She was a smoker, is that good? She had miscarriages, would remedying that situation lessen her? Not only would there be a debate over what actually makes her good, any agreement (say, her fashion) would lead to debates over someone who is better at that aspect (Jackie O, Gaga, Coco Chanel).” While my interest in fashion is minimal, I would enjoy having the kind of debate about what makes a good human being that these questions point to — that’s why I’m blogging at Futurisms. But, as I will note below, I’m not convinced Mr. Munkittrick really wants to join me.

4) Futurisms privileges a “late 20th century version of humanism” and in so doing is “willfully ignorant.” This claim is at least refreshing in comparison with Michael Anissimov’s ongoing effort to winkle out the hidden theological agenda behind this blog. But speaking only for myself, while I admire much of late twentieth-century humanism (mostly those aspects of it rooted in the eighteenth century), I think it could learn a great deal from humanists like Thomas More or Montaigne or Plato. As could transhumanists.

Back to point three. I posted the Hepburn picture to see if it would prompt debate, and it did. Mr. Munkittrick found the result “largely uninteresting.” That’s odd, because the responses in the comments thread certainly touched on the question of “what made her good.” So my speculation is that when Mr. Munkittrick presents a list of questions about “what makes her good,” he is suggesting they have no rational answers, and that when he speaks of debate what he really means is something like: “we could debate it, but what would be the point?” I think that he, like a great many transhumanists, has little interest in understanding human excellence, for two reasons. First, because increasing human power — celebration of which is at the core of such “humanism” as transhumanism can reasonably claim — means that human excellence is on its way to being passé. Second, because all ideas about human excellence are in any case little more than historically conditioned opinions, also to be molded by increasing human power as we take hold of our own evolution.

In short, wishing that Audrey Hepburn had no miscarriages and hadn’t died might make Mr. Munkittrick a nice guy, but it is hardly evidence that transhumanists are in any serious sense humanists.

Monday, June 21, 2010

Final thoughts on the H+ Summit

Well, so much for liveblogging, but I wanted to share some final thoughts on the H+ Summit at Harvard that I recently attended.

A rushed conference

The lineup at the summit included several dozen presenters over the course of two days, most given only ten minutes to speak. As a result, almost all of the talks felt rushed. Also, most of the speakers seemed rather unpracticed, and few were very focused or informative. There didn’t seem to be much of an overarching structure or purpose to the conference, and its “citizen scientist” theme ended up barely a footnote. There was both too much and too little going on to really engage the audience’s attention.

Whereas last year’s Singularity Summit had a clear eye towards grandeur and popularization — with better organization; longer, more focused lectures; and a stronger sense of audience-presenter relationship — the H+ Summit had a very collegiate feel. While this could have worked to its advantage, it mostly didn’t. The conference was held in a college lecture hall, where the presenters stood almost directly beneath the slide screen and had to crane over their shoulders awkwardly to appear engaged with their slides. The lack of a stage further diminished the presenters. These factors, combined with the overall lack of focus, made the proceedings feel rather like sitting through a series of informal college lectures by teaching assistants whom the students haven’t been given much reason to respect or pay attention to.

Moreover, because the proceedings were constantly running about twenty minutes late, every scheduled Q&A session was skipped. Every single one. The organizers had decided to have attendees submit questions solely via Twitter (a strange idea to begin with, considering that the conference was held in person and that some people had traveled quite a distance to attend) but even all of those Twitter questions went unasked and unanswered. Instead, audience members often took to shouting out comments at the presenters as they talked. This was mainly instigated by the person who did the most shouting — who, weirdly, I believe was Alex Lightman, the executive director of Humanity+ (I might be wrong that it was him, but it was definitely one of the organizers). He clearly meant it to be friendly, but given how rushed the presenters already were, these shouted interjections mostly had the effect of throwing them further off rather than creating a sense of audience participation, and I felt it led the audience to take the talks less seriously.

Organization of the talks

Among the other reasons for my sense that the audience didn’t take the proceedings entirely seriously was the large number of low-level presenters. A remarkable number of them gave talks plugging some small project or product of theirs while attempting, because of the context, to force some sheen of larger significance onto it. I talked to Kevin Jain, a conference presenter and founding president of the Harvard College Future Society. He claimed that his group handled much of the organizing for the conference. If that’s true, the conference was actually quite an impressive logistical accomplishment for such a small organization (though it’s strange that the Humanity+ organization would put its name on it without doing the work itself).

Jain said that the idea had been to try to open up the conference to include a wide variety of presenters, instead of just the usual luminaries. That’s a worthy goal, but in the future, the conference would benefit greatly from emphasizing quality over quantity, by choosing presenters and the lengths of their talks based on substance rather than the speakers’ reputations, and by more clearly separating big-picture talks from low-level project and research presentations. I would have liked to see a lot more time given to the likes of James Hughes, George Dvorsky, Patrick Lin, and Lauren Silbert to develop their ideas, whereas Stephen Wolfram and Ray Kurzweil would have benefited greatly from shorter time slots that forced them to focus their talks. Without naming any names, several of the other talks might have been safely left off the schedule until they were better developed.

Transhumanist Kegger

One way that the collegiate feel actually improved the proceedings was in the planned extra-conference activities. The most heavily promoted was a party held after the first day of the conference at a private off-site location, what appeared to be a converted-garage workspace for a small company. The place was a sort of paradise for a particular species of tech nerd of the Radio Shack variety, complete with a wide variety of electronics equipment for designing circuits, and whiteboards with diagrams of circuits and finite-state machines. And, best of all, there was a keg and plenty of cans and pitchers of beer, and, as David Brent would say, el vino did flow.

The party was packed with all manner of conference attendees, organizers, and presenters. I saw Ben Goertzel, Natasha Vita-More, Jessica Scorpio, and Patrick Lin hanging around. I talked to young Harvard undergraduates in the Future Society, who seemed to have joined more out of curiosity and excitement than a dead-set belief in a posthuman vision. (One fellow I talked to, wearing a Zildjian cymbals shirt, had handled the sound equipment at the conference, and between the shirt and his constant running about on stage, he looked a bit like an H+ roadie.) I asked Aubrey de Grey how long it took him to grow his beard. Two years, he said, though that was a long time ago, and he fidgets with it enough that it’s reached an equilibrium where he doesn’t have to trim it.

I spent much of that party in long conversations with a few people — presenter Ramez Naam and a smattering of other attendees who may not want to be named. (Naam amiably pointed out to me that I mischaracterized his talk in my post; see my update to it.) There seems to be so little common ground and so much mutual suspicion that these conversations can be difficult at first. But once I made it clear that not everything I say is a front for a secret desire to pass laws and regulations restricting other peoples’ freedom, I found that the conversations opened up remarkably.

Of course, it was still slow going in interrogating each other’s ideas and getting down to basics, much less to basics on which we might agree. But I did arrive at two tangible if somewhat random agreements with two separate interlocutors. With one, I came to agree that there might be value in not living in permanent pharmacologically-induced bliss. With the other (Naam), I came to agree that there are higher and lower ways of living — in particular, that the life of Albert Einstein is a better sort of life to lead than the life of a person who simply eats ice cream, even if Einstein’s work had never been shared with society and so never benefited any other people. Hey, common ground! As Carl Sagan put it: small moves.

The human transhumanist

As at last year’s Singularity Summit, all of these conversations reminded me of just how human the transhumanists and their proceedings are. Throughout all of this, I was struck by the contrasts between their pristine vision and the earthliness of their lives. There was, first of all, the mundanity of the conference itself. There was running into Ray Kurzweil and Aubrey de Grey in the men’s room. There was Kurzweil, spry and ruddy, his face part anticipatory, part fatigued, distractedly checking his e-mail as he stood a few feet away from me waiting to take the stage and deliver a speech that has clearly become routine. There was a conference organizer plugging his iPod into the sound system during a break and playing, of all lofty things, Fleetwood Mac.

There was sharing burritos, salads, sandwiches, and conversation at an Au Bon Pain over the lunch break with several conference presenters and organizers. When I noted how funny it was that we still had to take time at a transhumanist conference to eat, there was an organizer who agreed that, indeed, we still haven’t solved the problem of sustenance. (Don’t you like eating?, I asked. Yeah, he said, but I like learning and improving myself more, and I could be spending my time doing that.) There was watching conference presenters mingle about the beer party, looking with a hint of nervousness for someone to talk to. There was the sheer enjoyment everyone clearly took in interacting face-to-face with members of a movement usually connected only online, with hanging around someone’s cool workspace garage, munching barbecue and glugging beer and getting to talk to luminaries and leaders, and even being pleasantly surprised to discover that people they disagree with are nice and personable but just have different ideas.

Of course, these are all very ordinary human aspects of just about any conference. But the strange thing is that this particular conference and its attendees are devoted to doing away with this sort of humanity. Perhaps not all the conferencegoers — maybe some of them endorse only intermediate stages of enhancement. But the posthuman world of beings who don’t need to eat, drink, travel, engage in the trials of conversation, experience the peculiar anxieties and joys of attempting to know another person, or participate in anything that at all stinks of the everyday — that world is not one in which any of the experiences had at the conference could occur, or in which the concepts we use to understand them could even retain any coherence or meaning.

It’s hard to believe the conferencegoers and I could both inhabit the same world, both seem to discern the same pleasures in it, and yet they want it to end. There are greater joys to be had, I know they will say, over the horizon — a grab-bag of every fulfilled wish you could dream of. But it’s hard to believe they really understand clearly what those are, and just what it would and wouldn’t be like if they got them.

Saturday, June 19, 2010

Assorted impressions and scenes from the H+ Summit

[A few more posts about last weekend’s H+ Summit at Harvard.]

Before I get to my final thoughts on the 2010 H+ Summit, I’d like to share some images from the conference, as well as a smattering of impressions I had on the large number of talks to which I wasn’t able to devote full posts:

One presenter — Alex Backer, I believe — suggested that living longer will force us to focus more on the future. But this seems contrary to the ethos of transhumanism, for much as it promises great might and riches in the future, the future matters only inasmuch as it holds the possibility for greater pleasures in the present. As William Gibson recently wrote: “If you’re fifteen or so, today, I suspect that you inhabit a sort of endless digital Now, a state of atemporality enabled by our increasingly efficient communal prosthetic memory.... The Future, capital-F, be it crystalline city on the hill or radioactive post-nuclear wasteland, is gone. Ahead of us, there is merely...more stuff. Events.”

Yuval Levin’s distinction between the “anthropology of innovation” and the “anthropology of generations” is instructive here: the future Backer has in mind is not posterity, or the future for our children — two concepts that transhumanism pushes to the margins — but one that happens to us. So, Backer is saying, we will finally have a reason to worry about the effect of our actions on the future, because we personally will be around to suffer the consequences. Self-interest tempering self-interest.

James Hughes and Natasha Vita-More focus on their breathing, at the request of presenter M. A. Greenstein.

Andrea Kuszewski spoke about how to increase your cognitive capacity. Sometimes, she says, you need to use technology less. For example, GPS makes it easier to get around but weakens your cognitive abilities relative to finding your way around by yourself. Technological enhancement versus the benefits of self-improvement and self-reliance: there’s a fraught pair of transhumanist imperatives if I’ve ever heard of one.

The Harvard Mark I electro-mechanical computer, outside the conference hall.

Speaking of fraught transhumanist imperatives, Andrew Hessel’s talk reminds me that it’s really quite silly that transhumanists still adopt the pretense of being environmentalists. Modern-day environmentalism owes a huge debt to Romantic thinkers, who elevated sublime experiences of nature above the scientific hyper-rationalist view of nature prevalent in their day. Transhumanists, of course, can mount no such defense of nature. Their core views are either indifferent to nature in their focus on virtuality, or else revolted by nature in its original sins of death and suffering.

The best defense they can mount of nature consistent with their ideology is to talk vaguely about “preserving our biological heritage,” which calls to mind the Joni Mitchell lyrics: “They took all the trees and put them in a tree museum, then they charged the people a dollar and a half to see them.”

David Orban, a conferencegoer, and her little dog too.

Melanie Swan asked the audience how many of them have gotten genetic tests; to my eye, about a quarter of the conferencegoers raised their hands.

Ron Bailey
In his talk at the conference, Reason magazine science writer Ron Bailey used a common transhumanist trope, comparing the end of laws discriminating against racial minorities to the end of laws discriminating against another supposed minority — the enhanced. Bailey only does this implicitly, but it’s funny how often criticism of transhumanism gets explicitly compared to chauvinism for white males, since most transhumanists are, as most of the attendees at this conference were, males and predominantly white.

Aside from Bailey’s disdain for democracy, it’s worth pointing out that he also groups legal restrictions on embryonic stem cell research under the umbrella of “democratic tyranny,” yet evinces no concern for exercising tyranny over the rights of these beings.

Bailey also noted that it’s hard for this movement to form political coalitions among minorities as previous civil rights movements have, because of the small problem that there aren’t any enhanced humans. Whoops. This trifling fact indicates much bigger problems for the ontological anarchy of transhumanism than Bailey lets on.

Ron Bailey and a conference attendee, through the looking-glass.

Speaking of the dominant male representation among transhumanists, it’s worth pointing out that there were many women speakers at this conference — far more than the lone one at the last Singularity Summit.

One of the most hilariously earnest tweets of the conference came in response to Bailey’s talk. Twitterer anarchytweet (!) wrote, “To Ronald Bailey: Do you think some form of anarchy could combat the tyranny of the majority without destroying civilization?” (This gets me to wondering if anyone has ever un-ironically tweeted “Anarchy in the UK!” from an AT&T iPhone.)

Scenes from the after-party, in someone’s awesome converted-garage nerd-paradise workspace.

During a break, one of the organizers tested out the sound system by playing Fleetwood Mac’s “Rhiannon” off of his iPod. Great song. Do you think posthumans will ever still sit around listening to Fleetwood Mac? Is this an activity beneath beings who will spend their time having sex with 10,000 other consciousnesses at once? Or does this just mean they’ll have to switch to listening to Darude and John Digweed?

More scenes from the after-party. Aubrey de Grey at right.

Millie Ray gave a brief overview of embryonic stem cells versus induced pluripotent stem (iPS) cells. She gave only the slightest mention in passing to the fact that “a lot of the ethical concerns” are bypassed via iPS cells — but, typical of the focus of this conference, didn't mention at all what those concerns might be.

The talks on the first day were plagued by various technical problems, particularly on Apple computers, that delayed the presentations. The organizers joked this off by noting that at least it wasn’t as bad as Steve Jobs’s recent embarrassment with Apple products not working at an Apple conference. Yeah, except Steve Jobs is only suggesting that we purchase his computers, not that we literally live in them.

During one of the longer stretches where the audience was sitting and waiting for a technical issue to be resolved, a woman sitting near me turned to her friend and remarked, apparently unsarcastically, “I hate technology.”

Heather Knight charms organizers and attendees.

Much of this conference was just a hodgepodge of people presenting whatever random research or project they’re working on, and attempting to puff it up in significance. One of the worst/best examples of this is Morris Johnson, one of the first presenters on Day 2. He was plugging some project of his, but seemed to have no idea of what it was or how it worked. Tweeters described it as “unwitting comedy,” asked “What is this?,” and wondered “wtf Morris Johnson is saying to us at #hplus: is this an ISO-9001 process talk? An AmWay presentation? Hemp advocacy? Don’t get it.”

A cockroach suffers indignity during Timothy Marzullo’s talk.

Relatedly, Russell Whitaker, aka @OrthoNormalRuss, had some wonderful tweets (I can't believe I am using those words) at this conference. Just a few here, here, here, and here.

Tweeter @mrgarlic noted, “Ran into Ray K by sink in the mens room. Not how I imagined our first meeting.” Same thing happened to me with both Kurzweil and Aubrey de Grey, and I had the same thought. They’ve made themselves out to be so above the earthly concerns of humanity that you don’t quite expect it. Which reminds me of Milan Kundera:

When I was small and would leaf through the Old Testament retold for children and illustrated in engravings by Gustave Doré, I saw the Lord God standing on a cloud. He was an old man with eyes, nose, and a long beard, and I would say to myself that if He had a mouth, He had to eat. And if He ate, He had intestines. But that thought always gave me a fright, because even though I come from a family that was not particularly religious, I felt the idea of a divine intestine to be sacrilegious.... In the second century, the great Gnostic master Valentinus resolved the damnable dilemma by claiming that Jesus “ate and drank, but did not defecate.”

As for Kurzweil, well, I am sure I am not the first to observe this, but given the number of vitamins he takes every day, he must have the world’s most expensive urine.

Audience members rapt at the end of Kurzweil’s talk.

There have been several improbable names at this conference. We already know that Natasha Vita-More and Max More changed their names, as have others in the transhumanist camps (“Wrye Sententia,” anyone?). But what about people like Jessica Scorpio and Hank Hyena?

Conference farewells.

The scene outside the conference, in Harvard meatspace.

Thursday, June 17, 2010

The Master Stumpeth

[A few more posts about last weekend's H+ Summit at Harvard.]

The last and keynote speaker of the 2010 H+ Summit was, of course, the big daddy of transhumanism, Ray Kurzweil (bio, on-the-fly transcript).

Kurzweil, batting clean-up.

In introducing him, the organizers noted that he flew into town that morning from Colorado, where he was filming his movie, and that he would be zipping out from the conference right after his talk to catch a flight to Los Angeles. This little detail is pretty emblematic of the conference in general: whereas Kurzweil hovered around last year's Singularity Summit and descended intermittently to comment upon it like the head priest issuing edicts to his votaries, here he attended none of the conference and just stopped by to deliver his stump speech and head back out.

Given that this is the main event, I should probably try to outline it in detail, but just like his talk at SingSum, there was neither any core message to this talk nor anything remotely new about it. He hits all of his standard talking points. And I don't just mean the same themes, but the very same details he lays out in The Singularity Is Near: the same graphs about Moore's Law and about the exponential progress of technology in general and of specific technologies in particular. The main reason for his being here, it seems, is his celebrity. Though he does have the shiniest slides of anyone here; his presentation is polished, if not new or focused.

In keeping with Kurzweil's own unfocused approach, here are a few random notes about the talk and follow-up Q&A:

-- I wasn't the only one underwhelmed by Ray the K's presentation. George Dvorsky, a conference presenter, tweeted about how it was all boilerplate. Tweeter Samuel H. Kenyon complained about the warm reception: "Seriously people, why does Kurzweil deserve a standing ovation but the other presenters don't? Idol worshiping is not my bag." The best tweet was a tweak, joking about Kurzweil's obsession with exponential curves: "Why is this talk now not 5 minutes long and 1000 times as interesting as it was 5 years ago?"

-- Here's Kurzweil on human DNA: "We're walking around with software — and this is not a metaphor, it's very literal — that we've had for thousands or millions of years." My jaw was on the floor. Literally, not metaphorically.

-- Kurzweil is working on a book about reverse-engineering the brain, called How the Mind Works and How to Build One (see the Singularity Hub's recent article on this). Someone alert Steven Pinker that he's been one-upped. Also, this literal/metaphorical biological software business doesn't bode well for the metaphysical clarity of this book.

-- He makes an important admission, which is that there is no scientific test we can conceive of to determine whether an entity is conscious — and this means in particular that the Turing Test does not definitively demonstrate consciousness. His conclusion is that consciousness may continue to elude our philosophical understanding, and we should just set those questions aside and focus on what we can practically do.

-- He claims that we are "not going to be transcending our humanity, we're going to be transcending our biology." Uh oh, they're going to need to add a few items to the agenda for the next staff meeting:

(1) Time to change the name to "transbiologism"? And "H+" to "B+"?
(2) Figure out how in the world humanity is separate from its biology.
(3) Come up with a plan to deal with some very put-out materialists.

-- As part of the great transhumanist benevolence outreach, Kurzweil makes the bold claim that "old people are people too." Of course, what this really means — aside from "if you at all question the wisdom of extreme longevity, then you hate old people" — is "we should turn our revulsion for getting old into pity for the elderly." Somehow I don't think respecting the dignity of the elderly as we do the young and able-bodied is really what he's getting at here.

And that's it for the last presentation of the 2010 H+ Summit. Stay tuned for a couple of wrap-up posts.

David Pearce takes the meat out of meatspace

[A few more posts about last weekend's H+ Summit at Harvard.]

Pearce: pleasure, no pain.

Another of the transhumanist movement's more prominent figures, David Pearce (bio, slides), spoke at the conference about what he considers a moral imperative: the abolition of "suffering in all sentient life." As with much of the rest of the conference, this was another rehashing of ideas already widely discussed, with little new added.

The first big project that Pearce has in mind to unsuffer the world is ending the slaughter of farm animals. The problem of our continuing taste for meat is supposed to be solved not by making all of us into vegetarians but rather, at least in the short term, by creating artificial meat in a laboratory without slaughtering animals. My biggest concern here is that with the need for fresh, real meat removed, the plots of future Jurassic Park films would be ruined, and that is a horror we can never allow. (Except, as Dr. Grant knows, T-Rex doesn't want to be fed, he wants to hunt.)

I kid, but the other big project Pearce mentions is ending predation, and one of the ways he suggests doing this actually is by feeding animals artificially-produced meats. This, he thinks, will be inadequate for the whole problem, and what we'll really need to do is to redesign predators themselves so as not to be predatory — a notion we have discussed here on Futurisms before. This is one of the most striking illustrations of the heart of the transhumanist attitude, for make no mistake about it: Pearce here is calling for the destruction of Earth's biosphere, as surely as if he were to call for animals to be re-engineered so as not to emit carbon dioxide.

Among the highlights from this little bit of sort-of well-intentioned lunacy was an assertion from Pearce that lions are the same thing as serial killers, and so we have just as much obligation to stop them. Does it need saying that lions, unlike serial killers, lack the capacities for empathy, understanding right and wrong, and choosing whether or not to kill, and so are amoral rather than immoral creatures? Apparently it does.

It shows, again, the radicalism characteristic of this movement that the call for such a project is greeted with so many yawns at this conference.

Wednesday, June 16, 2010

Natasha Vita-More and the enhancement ethos

[A few more posts about last weekend's H+ Summit at Harvard.]

After taking a little time to recuperate from the H+ Summit this weekend at Harvard, I'll be finishing up my coverage over the next day or so.

One of the bigger names speaking at the conference was Natasha Vita-More (bio, slides, on-the-fly transcript). The wife of Max More, she says she adopted the surname "Vita-More" because it means "more life."

Vita-More launches into some grandstanding about how transhumanists think all life is precious, no matter what its form. Why are transhumanists then apparently untroubled by the practice of destroying human embryos for embryonic stem cell research, for the mere possibility of lengthening their own lives? Shouldn't they consider this a form of tyranny? [NOTE: See the update below.]

She uses the terms "plastination" and "plasticity" interchangeably, apparently not knowing that they are two completely different concepts, like lightning and lightning bugs (to borrow from Mark Twain). [NOTE: See the update below.]

Vita-More's talk is scattered and not especially substantive. Lots of transhumanist buzzwords, and boilerplate about empowerment and seizing control of one's own health and body. She's more here to project an attitude than a set of clear ideas. At one point, one of the conference organizers informs her she has two minutes left in her talk, and she responds shortly that she's going to go longer. Empowered she is.

This one slide of hers, an attempt to add an artsy sheen to that ethos while cramming in as much transhumanist mumbo jumbo as possible, actually sums up well the whole presentation, in both form and content (that's Vita-More in the middle):

(click to enlarge)

UPDATE: Natasha Vita-More comments that I was mistaken about her confusing “plasticity” and “plastination,” saying that she had “used the term plasticity and meant it. Someone else used the term plastination, which I did refer to but it was not my focus.”

Here is my transcription of the video from that portion of her talk:

Looking at a single cell in my body, and building this primo post-human prototype, and then going to where I am now, which is the very issue of our brain and plasticity. Not the plasticity that was talked about earlier, which is a very wise and smart idea, or cryonics plasticity with vitrification possibilities with nano-medicine, but the plasticity of who we are right now and our knowledge base, and how we’re acquiring information, and how we’re assessing the information we acquire...

It appears that she might have just misspoken without misunderstanding: she used the term “plasticity” to refer to plastination, but made a point of saying that they’re different. Still, I think this is symptomatic of the shtick of talks like this, which involves blurring the distinctions between somewhat related terms and concepts.

For example, her first reference to plasticity presumably is to neuroplasticity, but its description as “a very wise and smart idea” makes it sound like a design principle rather than a discovery or theory of natural science. And “the plasticity of who we are right now and our knowledge base” is not useful to describe as some distinct form of plasticity, both because it’s essentially just a manifestation of neuroplasticity, and because it’s less descriptive than just stating directly what she’s talking about: the openness and eagerness to learn new things. Blurring these terms together in this way has more the muddling effect of jargon than the clarifying effect of metaphor. (A cynic might even say that this blurring was intended to lend the appearance of scientific authority to unscientific concepts and proposals.)

I should also note that Ms. Vita-More sent me an email to note that she is “not an advocate of plastination (outside of dramatic artistic sculpture), although it has its intrigue” and that she is “deeply opposed to creating embryos as marketing kits for stem cells.”

Sunday, June 13, 2010

Patrick Lin on the military's push for human enhancement

[Continuing coverage of the 2010 H+ Summit at Harvard.]

After James Hughes came Patrick Lin (bio). Lin noted correctly that something that's gone basically unremarked at this conference so far is how much of the push towards futuristic technologies and human enhancement is driven by the military. The military has strong reasons to try to engineer soldiers with superhuman strength, who can climb walls, who don't need to eat and sleep, and so forth.

Instead of "Be All You Can Be," he says, the Army's recruiting slogan might become "Be More Than You Can Be." Nice line. Though I wonder what this would do to the Army's two more recent slogans, "Army of One" and "Army Strong."

Lin notes that the primacy of military interests in human enhancement raises all sorts of ethical issues. For instance, he wonders whether society might become more warlike and wars become more frequent. Lin doesn't have time to go into a lot more detail about these questions given the cramped ten-minute time slot, but it's good to hear them raised. Needless to say, military technology — today's and tomorrow's — is a subject often broached in the pages of The New Atlantis, including P.W. Singer's recent essay "Military Robots and the Laws of War." And the future of military technology was also the focus of a big conference in D.C. three weeks ago.

Whatever the ethical implications, the current fronts of military innovation seem to give us a glimpse into our technological future. It may be true, as Robert Wright argued at the conference just mentioned (skip to about 41:50 in this video), that DARPA, the military's advanced-research group, really only creates things that would have come along soon anyway. But by and large, war (understood to include preparation and deterrence) is, as it always has been, a tremendous catalyst of technological innovation.

James Hughes, the Enlightenment, and the radiant future

[Continuing coverage of the 2010 H+ Summit at Harvard.]

James Hughes had the morning talk after Patrick Hopkins. He basically did a rapid-fire ten-minute version of a mini-essay he published earlier this year on transhumanism's inheritance of Enlightenment problems. That mini-essay was supposed to be part of a seven-essay series, although it looks like only five have been published. We have discussed a few of these essays on this blog (here, here, here, and here).
James Hughes, far right, enlightened by his laptop's glow.
Because of the short time slot, Hughes compressed his talk into a thesis with which I'm generally in agreement: that transhumanists don't usually realize that very many of their debates recapitulate Enlightenment debates, and they have a responsibility to learn about and engage with those arguments. We part ways with Hughes on the details, though (as evidenced by our series of responses), and in particular I'm skeptical about the idea that transhumanism's fractured Enlightenment inheritance spells positive things for its coherence and goodness, even when that inheritance is recognized and engaged with.

Here's just one example. Hughes notes that continental Enlightenment thinkers laid the foundations for the utopianism we find alive today in transhumanism. In particular, they pioneered the idea that pure reason would liberate us from the shackles of death and tyranny, a notion that Hughes more or less embraces (albeit with a huge caveat).

Hughes calls particular attention to the Marquis de Condorcet, who "wrote one of the most remarkably utopian essays in the history of the Enlightenment, which proposed that reason would eventually liberate us all from the church and the state, that there would be women's suffrage eventually, that we would get rid of slavery eventually, and that we would get rid of unnecessary involuntary death." He's referring here to the Sketch for a Historical Picture of the Progress of the Human Spirit.

Although Hughes notes the difficult conditions under which Condorcet wrote his Sketch — he "was part of the French Revolution but was being hunted down by the Jacobins" when he wrote it — Hughes misses the significance of that fact. As Charles Taylor explains in his book Sources of the Self:

Certainly the greatest and fullest statement of the philosophy of history of the unbelieving Enlightenment is Condorcet's Esquisse [Sketch], taking us through ten ages of human existence, the tenth being the anticipated radiant future of mankind.... This passage takes on an additional poignancy when one reflects that it was written in 1793, when its author was in hiding in Paris, with a warrant for his arrest by the Jacobin-controlled Committee of Public Safety as a suspected Girondin, and that he in fact had only a few months more to live. There were, indeed, "errors, crimes, injustices" for which he needed consolation. And it adds to our awe before his unshaken revolutionary faith when we reflect that these crimes were no longer those of an ancien régime, but of the forces who themselves claimed to be building the radiant future. [Emphasis added.]

One can only hope that transhumanists will heed the darker lessons of the Enlightenment in their call for a radiant future incomprehensibly brighter than the one dreamed of by the Jacobins. But that would require levels of responsibility and restraint that are not only not in evidence among transhumanists, but basically inimical to transhumanism's goals.

[NOTE: I'll have a few more posts tonight or tomorrow, catching up on other presentations from the conference, along with some more pictures and a few concluding thoughts.]

Open mic night at H+ Summit

Up next is Brian Malow (bio), self-described "transhumorist." I think that means he transcends the boundaries of what's humorous, because yowza:

-- "We are going to die, and that is a spoiler."
-- (after a joke bombs) "That was an endothermic joke. It required the addition of a little energy from you."
-- "I'm not saying I don't have hair, it's just outside the visible spectrum."

Brian Malow.

This guy just flew in, and boy are his surgically-implanted wings tired. But it's okay, I think the guy is sort of an entertainer first, informer second, like the transhumanist version of Michael Scott.

Sorry, I'm obliged to heckle. The bestiality-and-transhumanism jokes actually weren't bad. Well, you know, they were bad, but they weren't bad. Don't forget to tip your waiters, folks.

Patrick Hopkins on why uploading won't work

[Continuing coverage of the 2010 H+ Summit at Harvard.]

Next up is Patrick Hopkins on "Why Uploading Will Not Work" (bio, slides). A few days ago on this blog, guest-poster Mark Gubrud previewed Hopkins's presentation at length.

As Gubrud described, Hopkins looks at the language used to describe mind uploading. What are the metaphors we use when speaking about it? The first is location: the mind is "in" or "within" a brain, and can be put "onto" a computer. The second is motion: the mind can be "moved," "transferred," "put" into a computer. And the third is substance: the mind is a thing that can be moved from one "receptacle" to another. But, Hopkins asks, do these metaphors really work? Is the mind truly an object that is housed "inside" a brain and can be "moved" to another "receptacle"? According to naturalist theories of mind, no. The positions that do think this, Hopkins says, are basically religious, relying on notions of souls, spirits, and ghosts.

Hopkins tries to absolve uploading advocates from blame; he says that they have just inherited this language from religion. I think it's far more likely that they're inheriting language and concepts from the discipline that gives rise to the notion of "uploading" in the first place: computer science. Computers are heavily dualistic systems, and transhumanists think the mind/brain is a computer, so they treat it as dualistic too.

Hopkins anticipates the rebuttal that this language is just metaphorical. But, he says, central to the idea of uploading is that personal identity is preserved. So the question is, does copying preserve identity? Is copying the same thing as transferring, as literally moving a mind? He says no: copying creates something that is exactly structurally and behaviorally similar to the original, but that is not the same as identity. The copied mind has a different history, and is made of different matter; we can metaphysically tell the difference (as usual, see SMBC). If you want to believe that the mind is a pattern, he says, then it's important to know that a pattern is not an object that can be plucked out and moved; it's a way of organizing matter.

He describes a familiar scenario from the philosophy of mind: You're sitting in a room and someone holds a gun to your head and says he's about to shoot you, but before he does that he's going to copy your mind into the other room. You'd still be unsettled, but maybe you'd be okay because you'd think that you would just go to sleep in one room and wake up in another. But what if the gunman then said, "Just kidding, I'm not going to shoot you, but I still made the copy"? It couldn't be you in the other room then, could it? Well, your relationship to the mind in the other room is no different than it was a moment earlier; the only difference is that the gun is no longer at your temple. Mind uploading, Hopkins concludes, will not work as we like to think it will. (He doesn't say it explicitly, but basically what he's demonstrated is that psychological continuity is not all that is required for personal identity.)

Patrick Hopkins provides what is easily the best talk of the conference so far — he manages to convey sophisticated ideas effectively and concisely in a ten-minute slot that few other speakers have been able to own. And his message is convincing. Again, I wish the conference had put far more emphasis on talks of this level of thoughtfulness and speakers who were this effective.

I do have a few quibbles, though. First, Hopkins either misrepresents or misunderstands the significance of the argument he presents. To say that "uploading won't work" makes it sound like he's presenting a philosophical case for why we couldn't have machines that are conscious, and whose consciousness very closely resembles that of existing persons. But his argument is based on the premise that we could. His conclusion is just that the results wouldn't be as clean and transparent as everyone assumes.

So Hopkins's claim is that a mind cannot be separated from a body and continued. But that is not quite the same as claiming that a mind cannot be copied. What if it could — what if a duplication were possible? Hopkins gives no consideration to the huge moral dilemmas that would arise if such beings were somehow created. If it were technically possible, such duplicate beings might well consider themselves to have a continuous personal identity, complete with memories, thoughts, and feelings — only their memories, thoughts, and feelings about their own history of self would be false. The identity of the original being would be thrown into chaos just by the fact of its duplicate's existence. How then would we treat such beings? Could we hold the copy responsible for crimes that it remembers having committed, but did not commit? Could we deny it credit for accomplishments it believes it made but did not? These questions would become impossible to answer, and many of the bases for our legal and social order would be similarly thrown into chaos.

Maintaining continuous personal identity (and other really fundamental aspects of consciousness and mind) is not simply a philosophical matter of recognizing the necessary components, but a practical matter of maintaining them, socially and as lived lives. The conclusion Hopkins should arrive at by the end of his talk is not "this is why uploading won't work" but "this is why we shouldn't do it."

Day 2 at H+ Summit: George Dvorsky gets serious

The 2010 H+ Summit is back underway here at Harvard, running even later than yesterday. After the first couple of talks, the conference launches into a more philosophical block, which promises a break in the doldrums of most of these talks so far. First up in this block is George Dvorsky (bio, slides, on-the-fly transcript), who rightly notes that ethical considerations have largely gone unmentioned so far at this conference. And how. He also notes in a tweet that "The notion that ethicists are not needed at a conference on human enhancement is laughable." Hear hear.

Dvorsky's presentation is primarily concerned with machine consciousness, and ensuring the rights of new sentient computational lifeforms. He's not talking about robots, he says, like the ones we have today that are not sentient but are anthropomorphized to evoke our responses as if they were. (Again, see Caitrin Nicol in The New Atlantis on this subject.) Dvorsky posits that these robots have no moral worth. For example, he says, you may have seen this video before — footage of a robot that looks a bit like a dog and is subjected to some abuse:

Even though many people want to feel sorry for the robot when it gets kicked, Dvorsky says, they shouldn't, because it has no moral worth. Only things with subjective awareness have moral worth. I'd agree that moral worth doesn't inhere in such a robot. But as for subjective awareness as the benchmark, what about babies and the comatose, even the temporarily comatose? Do they have any moral worth? Also, it is not a simple matter to say that we shouldn't feel sorry for the robot even if it doesn't have moral worth. Isn't it worth considering the effects on ourselves when we override our instincts and intuitions for empathy toward what seem to be other beings, however misdirected those feelings may be? Is protecting the rights of others entirely a matter of our rational faculties?

Dvorsky continues by describing problems raised by advancing the moral rights of machines. One, he says, is human exceptionalism. (And here the notion of human dignity gets its first brief mention at the conference.) Dvorsky derides human exceptionalism as mere "substrate chauvinism" — the idea that you must be made of biological matter to have rights.

He proposes that conscious machines be granted the same rights as human beings. Among these rights, he says, should be the right not to be shut down, and to own and control their own source code. But how does this fit in with the idea of "substrate chauvinism"? I thought the idea was that substrate doesn't matter. If it does matter — to the extent that these beings have special sorts of rights like owning their own source code that not only don't apply but have no meaning for humans — doesn't this mean that there is some moral difference for conscious machines that must be accounted for rather than scoffed off with the label "substrate chauvinism"?

George Dvorsky has a lot of work to do in resolving the incoherences in his approach to these questions. But he deserves credit for trying, and for offering the first serious, thoughtful talk at this conference. The organizers should have given far more emphasis and time to presenters like him. Who knows how many of the gaps in Dvorsky's argument might have been filled if he had been given more than the ten-minute slot that they're giving everybody else here with a project to plug.

Saturday, June 12, 2010

Ben Goertzel: "What you mean 'we,' human?"

[Continuing coverage of the 2010 H+ Summit at Harvard.]

Ben Goertzel (bio, slides) delivers the last talk of the day, rehashing lots of boilerplate stuff about artificial intelligence. Towards the end of the talk, he uses the word "ethics" for what I think must be the first time in the conference. The developments he's talking about, he says, raise lots of important ethical questions, and those might be an interesting subject for another talk, but he doesn't have time to go into them today. Nobody seems to.

He talks about the notion of a "global brain" and super-AIs that far surpass our own intelligence, and how the prospect is scary but not bad. An audience member shouts out, "Unless they decide to wipe us like parasites!" Someone else senses trouble and starts saying, "Don't... don't... don't..." Goertzel bunts and says we'll still be part of them, and someone else shouts out, "What you mean 'we,' human?" (I wonder how many in the room caught the Lone Ranger reference.)

That's the close of the talks today, folks. There will be another post or two later tonight with more notes and thoughts on the conference, and then stay tuned for day two of the conference tomorrow.

Ben Goertzel, partying after the end of the first day of the conference.

Darlene Cavalier on "citizen science"

Darlene Cavalier (bio, slides) is giving the most tangible talk they've had yet on the conference's theme of "The Citizen Scientist." She's citing a number of anecdotes about small, nonprofessional organizations conducting experiments on their own, or amateur astronomers and entomologists making little discoveries. All very nice, but it's hard to see how this constitutes a significant or new phenomenon, particularly since, despite lots of talk about its democratic nature, the most important science has increasingly become the purview exclusively of highly specialized and credentialed academics.

The one thing that Ms. Cavalier alluded to that would be noteworthy is the phenomenon of non-academic scientists getting regularly published in peer-reviewed journals, but she didn't give much indication that this is happening a lot. More importantly, it's still hard to see how the theme of "Citizen Science" is more than marginally related to transhumanism, and the talks haven't done much to clarify that.

Darlene Cavalier on citizen science

Kevin Jain thinks the Singularity might change things

[Continuing coverage of the 2010 H+ Summit at Harvard.]

Kevin Jain (bio), an undergrad at Harvard who is president of the school's Future Society, talks about "assumptions" behind academic disciplines. Each one is predicated on some assumed fact about human nature. Psychology assumes the existence of death; government assumes earthly bounds; economics assumes the existence of scarcity; computer science assumes a human/non-human divide.

Majoring in any of these disciplines will no longer be relevant, he says, as the assumptions on which they're based change. Jain says he's devising a new kind of textbook that takes account of the changes in these assumptions. That's quite a project. He's right to point out that much of our knowledge is based on the assumption of some shared nature between beings in society. Let's even grant for the sake of argument that he's right about those fundamental assumptions all being challenged. In that case, it certainly would be worth asking what would take their place. But it could not possibly be a matter of merely issuing new editions of our textbooks. If we lose the knowledge and wisdom we have accrued that is based on our shared experience and nature, what basis might there be for, say, communication, order, and society?

Here is Jain and a slide summarizing his thesis:
Kevin Jain, questioning assumptions.

Stephen Wolfram systematizes everything

[Continuing coverage of the 2010 H+ Summit at Harvard.]

Stephen Wolfram (bio) has a very unhelpful habit of describing disciplines in terms of the spaces of possible knowledge within them. So he asks whether we can say that art has progressed throughout history, and says it's not clear that it has, unless you note that the space of possible works of art has been progressively more filled in. This is a remarkably stupid thing to say, though. Aside from evincing zero understanding of the purpose of art, it also misunderstands how "spaces" work. You could define a "space" of, say, all possible 1000x1000-pixel JPG images and talk about those. But whether you define works of art as physical or conceptual objects, the possible space is not only infinite, but indefinite. A really useless way of thinking about anything related to the subject.

Stephen Wolfram going on and on and on about NKS, his 'new kind of science.'

I note this mostly because this example is emblematic of the approach Wolfram takes throughout his long presentation. He keeps talking about the space of possible programs, and what possible programs out there might have intelligent properties and whether we can find those. Now, unlike possible works of art, the space of possible programs actually is well-defined, to the extent that one can actually devise systems for enumerating each possible program. Even so, although this gives an interesting image of walking into a forest full of all possible programs searching for the ones that are alive and conscious, it's not a very useful way to think about intelligence since it tells us nothing about its salient features. Imagine applying it to something radically more simple, like writing a program to do your taxes. How is thinking about "the space of possible programs" remotely useful to that task?

I'm willing to bet that a lot of people will mistake Wolfram's talk for a very high-level talk when in fact it's far too low-level; if it were delivered as a lecture at a computer-science conference, it would be obvious to everyone in the room just how unserious it is. Moreover, Wolfram's talk is utterly detached from human concerns. There's no consideration at all of moral questions, just of fields of technical possibility. It's as if he's worried that he might sound like a member of the human race himself. This makes it particularly ironic when he comes back around at the end of his talk to say that his systems approach will eventually determine all human meaning.

The organizers really should have reined in this talk. Where the others have been too short to get anything done, this one, at fifty minutes — the length of five normal slots — was unfocused and uninteresting.

Heather Knight and the real boy

[Continuing coverage of the 2010 H+ Summit at Harvard.]
Fem and bot: Heather Knight faces her 'Star Wars'-performing robot, while H+ chairman David Orban looks on, kneeling.

Heather Knight (bio) had the first presentation after lunch. She's a young computer scientist, fresh out of M.I.T. undergrad, and she is interested in (even an evangelist for) socialized robotics. She goes through some of the standard stuff about making robots that can sense and imitate human emotions, and then starts in with attempting to contrive reasons for doing this, such as amusing kids who are waiting for their parents to pick them up. (A teddy bear won't suffice?) This echoes a chain of dialogue about socialized robotics going back at least to the 1960s and Joseph Weizenbaum's ELIZA, a simple text-based program that tricked people into thinking it could converse with them, and that many people seriously suggested be used as a psychological therapist.

Knight's presentation is low on content; even for socialized robotics, a field aimed at tricking people into believing there is complex behavior where there is not, the demonstration she puts on is very elementary — a prefabricated robot with minimal voice-recognition capability that summarizes Star Wars, complete with sound effects. The audience eats it up, though; this is clearly a presentation aimed, in many different ways, at style over substance. At least one Twitterer was a fan of the show, and another aptly noted, "I think Heather Knight thinks her performing robot is a real boy."

Knight has the same problem as every other social roboticist, which is the blithe belief that she can recreate through pure engineering a "system" (i.e., human interaction) that is as complex as anything we know — and, moreover, can recreate it from whole cloth, without any apparent engagement with or even awareness of the wealth of thought about social life. For more about this, see our New Atlantis colleague Caitrin Nicol's wonderful essay about the follies of social robotics.

I don't mean to pick on Heather Knight. Like most of the other presenters, she is just presenting her research here, not purporting to offer some grand unified theory; but few of the presenters seem to realize the intellectual burdens that claims like these must bear. She certainly has stage presence. Still, it's striking that her talk fits the pattern here: almost every presentation so far has been either on an obscure and relatively unimportant technical subject or a repetition of stock transhumanist ideas. There's been almost nothing new. I'm not sure whether to blame the presenters or the organizers, but hopefully things will pick up as the conference goes on.

Seth Lloyd on democratizing science

[Continuing coverage of the 2010 H+ Summit at Harvard.]
M.I.T. professor Seth Lloyd (bio) takes the stage to discuss the implications of science for democracy and the implications of democracy for science. Scientific knowledge, he says, can be defined as information that can be verified or tested by anybody. "Rats could do experiments." (Hmm...) He clarifies that anybody can verify scientific knowledge, and so research conducted in secret by governments and not made public shouldn't properly count as part of the public scientific enterprise. He urges the audience to write to their congressmen and insist that scientific knowledge not be kept secret.

Lloyd, whose presentation is fast and freewheeling, asks if anybody has questions. I ask him: What about making public such information as how to build a nuclear bomb? Good question, he says, but the point is that it shouldn't be scientists who make decisions about secrecy; such decisions should be made democratically. (And, in fact, in many cases, they already are.) Lloyd is devoted to democracy, quite the opposite of Ron Bailey, the libertarian science writer, whose presentation this morning argued that democracy is a threat to transhumanism.

Before leaving the subject of democracy, Lloyd makes a final point about how the government needs to go back to emphasizing basic research and leave applied research to private companies, which will find their own funding if applications are profitable.

He then launches into a frenzied chalkboard illustration of a lot of stuff about time travel and quantum mechanics. I have absolutely no idea what he was talking about (and not entirely due to ignorance on those subjects), but loved his presentation nonetheless. Nothing quite like watching a mad scientist jumping for joy over science.

You gotta fight for your right to plastinate your brain

Next up is John Smart, talking about chemopreservation, or "plastination" of the brain. (Bio, on-the-fly transcript.) Smart is the co-founder of the Brain Preservation Foundation, the purpose of which you can probably guess from the name. Its ultimate aims are not very different from those of the cryonics movement, although the processes that Smart describes are closer to the plastination techniques made famous in the "Body Worlds" exhibits. His foundation has just created a prize for research that successfully leads to such techniques, and has also just secured $100,000 of funding for that prize.

The next step, he says, is for a particular technique called Emergency Glutaraldehyde Perfusion to be made a legal postmortem choice, with priority over a state's right to destructive autopsy. Laws will need to be rewritten to accommodate these new rights, he argues, except in Switzerland and a few other "foresighted" states that apparently already allow it.

Why should we plastinate? Among other reasons Mr. Smart offers are "cultural preservation," "human experienceome" (sic), and "virtual memorials." But the main motivation, of course, is the possibility for uploading and reviving minds later. "Human beings," he says, "have not only biological but also primitive digital selves." Huh?

We should embrace a sort of Pascal's wager for brain preservation, he says: preserve brains now, and decide what we can do with them later. He keeps plugging the prize and asking for donations throughout the talk. It's a "very worthy cause," he says. One tweeter seems to think it's worth $20.

Harvard postdoc Ken Hayworth (bio, slides), one of Smart's collaborators on the Brain Preservation Foundation, is up next. He goes into a bit more detail about the specifics of mind scanning. He whirls through some stock photos of sliced-up mouse brains, and then starts asserting that cognitive science has basically figured out how consciousness works, or at least come up with comprehensive theories of it. He points in particular to the books How Can the Human Mind Occur in the Physical Universe? by John Anderson, Unified Theories of Cognition by Allen Newell, Consciousness Explained by Daniel Dennett, and The Ego Tunnel by Thomas Metzinger.

Hayworth says that we have now, or at least will have in the next few decades, "the technological and computational models that could make mind uploading a reality." He says it's almost certain that the mind can be extracted from a plastinated brain. The only thing stopping us is that brain preservation has yet to be developed into a reliable surgical procedure — though he goes out of his way to repeat the old canard about the illusion of the self, which would seem to make the main reason for preserving one's brain illusory as well.

"If you agree people should have the right [his emphasis] to have their brain[s] preserved and stored," then he points you to a petition on the Brain Preservation Foundation site to get hospitals to make this standard. So this is the grand political card up transhumanism's sleeve? Web petitions?