Futurisms: Critiquing the project to reengineer humanity

Monday, November 30, 2009

The New Bioethics Commission

Last week, the White House announced the formation of a new Presidential Commission for the Study of Bioethical Issues. It will have a chairman and vice chairman — and at least at first, both will be university administrators: Amy Gutmann, the president of U Penn, and James W. Wagner, the president of Emory.

The executive order formally creating the commission — what you might think of as the charter explaining the commission’s purpose and powers — was published today. It emphasizes policy-relevance: the commission is tasked with “recommend[ing] legal, regulatory, or policy actions” related to bioethics. This stands in contrast to its immediate predecessor, the President’s Council on Bioethics, the charter for which emphasized exploring and discussing over recommending. Since the former council’s website (bioethics.gov) has been taken down, we are pleased to announce that we have archived all of its publications here on the New Atlantis site. (The Council’s impressive website, which included transcripts of all its public meetings, will hopefully be restored somewhere online in its entirety soon; in the meantime, interested parties will have to make do with the incomplete record in the Internet Archive.)

The former council’s report that is most relevant to this blog is Beyond Therapy, a 2003 consideration of human enhancement. Perhaps most striking about that report is its modus operandi: instead of beginning with an analysis of novel and controversial enhancement technologies, the council chose to begin by examining human functions and activities that have been targeted for enhancement. “By structuring the inquiry around the desires and goals of human beings, we adopt the perspective of human experience and human aspiration, rather than the perspective of technique and power. By beginning with long-standing and worthy human desires, we avoid premature adverse judgment on using biotechnologies to help satisfy them.” Beyond Therapy is a powerful document, and it rewards careful attention. (We published a symposium of essays in response to the book.)

We will have more to say about the former council in the months ahead. But for now, one final amusing observation about the new commission: If you look closely at the executive order creating it, you will see that among the issues it is invited to discuss is “the application of neuro- and robotic sciences.” That’s right — President Obama’s new bioethics commission has been explicitly invited to take a look at robotics. Just the latest indication that the administration is worried about the looming robot threat.

Sunday, November 29, 2009

Looking for a Serious Debate

Over on his blog Accelerating Future, Michael Anissimov has a few criticisms of our blog. Or at least, a blog sharing our blog’s name; he gets so many things wrong that it seems almost as though he’s describing some other blog. And Mr. Anissimov’s comments beneath his own post range from ill-informed and ill-reasoned to ill-mannered and practically illiterate. They are beneath response — except to note that Mr. Anissimov should know better. But putting aside those comments and the elementary errors that were likely the result of his general carelessness in argument — like misattributing to Charlie something that I wrote — some of the broader strokes of Mr. Anissimov’s ignorant and crude post deserve notice.

First, Mr. Anissimov’s post is intellectually lazy. To label an argument “religious” or “irreligious” does not amount to a refutation. Nor can you refute an argument by claiming to expose the belief structures that undergird it.

Second, Mr. Anissimov’s post is intellectually dishonest. He approvingly quotes an article that claims that “all prominent anti-transhumanists — [Francis] Fukuyama, [Leon] Kass, [and Bill] McKibben — are religious.” But anyone who has read those three thinkers’ books and essays will know that they make only publicly-accessible arguments that do not rely upon or even invoke religion. And more to the point, it is an indisputable matter of public fact that none of us here at Futurisms has made the arguments that Mr. Anissimov is imputing to us. None of us has ever argued that we object to transhumanism because “through suffering [we] will enter paradise after [we] are dead.” Not even close.

Once Mr. Anissimov has (falsely) established that those of us who disagree with him do so for religious reasons, he claims that we “want the same damn thing” that he wants. Except that while he wants to achieve immortality through science, his critics “think they can get it through magic.”

To the contrary, our arguments have in fact been humanistic and what you might call earthly — hardly magical thinking or appeals to paradise. The very distinction between humanists and transhumanists should make plain whose beliefs are grounded in earthly affairs and whose instead depend on appeals to fantasy. We are skeptical of transhumanist promises of paradise because their arguments are, by and large, based on faith and fantasy instead of reason and fact; because what they hope to deliver would likely be something quite other than paradise if it became reality; and because the promise of paradise can be used to justify things that ought not be tolerated.

It is too much to ask Mr. Anissimov to be a charitable reader of our arguments, but if he wants to be taken seriously he should at least make an effort to comprehend them. Until he does, it is a peculiar irony that a transhumanist would invoke religion in order to avoid engaging in a substantive debate with his critics.

Monday, November 23, 2009

The Significance of Man

Over at Gizmodo, Jesus Diaz has called attention to a genuinely lovely animation of Earth’s weather from August 17-26, 2009. He notes in passing, “It also shows how beautiful this planet is, and how insignificant we are.”

[Image: Scene from '2001: A Space Odyssey']

There is something about pictures of Earth from space that seems to call forth this judgment all the time; it is equivalent, I suppose, to the “those people look like ants” wonderment that used to be so common when viewing a city from the top of its tallest building. That humans are insignificant is a particularly common idea among those environmentalists and atheists who consider that their opinions are founded in a scientific worldview. It is also widely shared by transhumanists, who use it all the time, if only implicitly, when they debunk such pretensions as might make us satisfied with not making the leap to posthumanity.

But in fact, just as those people were not really ants, so it is not clear that we are so insignificant, even from the point of view of a science that teaches us that we are a vanishingly small part of what Michael Frayn, in his classic novel Sweet Dreams, called “a universe of zeros.” Let’s leave aside all the amazing human accomplishments in science and technology (let alone literature and the arts) that are required for Mr. Diaz to be able to call our attention to the video, and the amazing human accomplishments likewise necessary to produce the video. The bottom line is, we are the only beings out there observing what Earth’s weather looks like from space. Until we find alien intelligence, there is arguably no “observing” at all without us, and certainly no observations that would culminate in a judgment about how beautiful something is. At the moment, so far as we know (that is, leaving aside faith in God or aliens) we are the way in which the universe is coming to know itself, whether through the lens of science or aesthetics. That hardly seems like small potatoes.

Sometimes transhumanists play this side of the field, too. Perhaps we are the enlivening intelligence of a universe of otherwise dead matter, and it is the great task of humanity to spread intelligence throughout the cosmos, a task for which we are plainly unsuited in our present form. So onward, posthuman soldiers, following your self-willed evolutionary imperative! Those of us left behind may at least come to find some satisfaction that we were of the race that gave birth to you dancing stars.

It is interesting how quickly we come back to human insignificance; in this case, it is transhumanism’s belief in our vast potential to become what we are not that makes what we are look so small.

Tuesday, November 17, 2009

The “Anti-Progress” Slur

Adam already noted my brief response to a charge frequently made against “bioconservatives”: that we are against progress and for suffering. I’d like to say a little more in the hope of putting this tired rhetorical trope to rest. So let me just list, in no particular order and without any effort to be comprehensive or predictive, a dozen areas in which I personally think science and technology are contributing to genuine incremental improvements in the material conditions of human life — i.e., progress:

1. Agriculture: decreased/more focused inputs, increased crop yields, quality, diversity, and reliability
2. Water: increased quality of water supply and more reliable and efficient distribution
3. Energy: more efficient energy production, transport and consumption, greater diversity of energy supplies
4. Transportation: increased speed and safety, greater energy efficiency
5. Food supply: improved quality and diversity along with increased safety and storage time
6. Space travel: reduced cost of routine manned space operations, increased capacity in exploratory efforts
7. Construction: more durable materials, increased simplicity and speed of commercial, residential and infrastructure construction
8. Military: increased ability to detect explosives and weapons of mass destruction, and to preempt their use or contain their consequences; increased reliability and precision of munitions
9. Medicine: increased safety, scope and availability of vaccination; less invasive and more precise surgery; personalized and/or narrowly targeted medical treatments; simplified diagnostics and treatments; better prosthetic devices for physical and neurological disabilities (and yes, I know about the thin line between therapy and enhancement, but we'll deal with that another time).
10. Nature: improved ability to predict extreme weather and geological events
11. Waste: treating waste products as resources
12. Communication: continued increases in speed, bandwidth and information connectivity

I’m not trying to be controversial or surprising here, nor to suggest that transhumanists are against any of these developments. But I am pointing out that progress is not the same as “the latest thing” or the most outré imaginings. Progress is not about being at the bleeding edge for its own sake, or having an idea that only a few people believe in, or being attracted to what is strange and unique. Let’s try not to confuse being for only these less-than-controversial kinds of progress with being against progress simply. Not all change is progress, and somebody’s claim that a given change is “progress” should be taken as an invitation to critical thinking about what would make human life better — and not as the last word.

Monday, November 16, 2009

Quick Links: Singularity University, Neuro-Trash, and more

• Imagine the frat parties: Ted Greenwald, a senior editor of the print edition of Wired magazine, has been attending and covering Singularity University for Wired.com. We’ll have more on this in the days ahead. Meanwhile, Nick Carr suggests some mascots for Singularity U.

• Squishy but necessary: Last month, Athena Andreadis, the author of the book The Biology of Star Trek, had a piece in H+ Magazine throwing cold water on some visions of brain uploading and downloading. Money quote: “It came to me in a flash that many transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons.... The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.”

• Thanks, guys: We’re pleased to have fans over at the “Fight Aging” website, where they say we “write well.” The praise warms our hearts, it truly does. We only wish that those guys were capable of reading well. Their post elicited this response from our Futurisms coauthor Charles Rubin: “Questioning what look to us to be harebrained ideas of progress does not make us ‘against progress.’ Nor does skepticism about ill-considered notions of the benefits of immortality make us ‘for suffering’ or ‘pro-death.’ It may be that the transhumanists really cannot grasp those distinctions, perhaps because of their apparently absolute (yet completely unjustified) confidence in their ability to foretell the future. Only if they have a reliable crystal ball — if they can know with certainty that their vision of the future will come to pass — does opposition to their vision of progress make us ‘anti-progress’ and does acknowledging the consequences of mortality make us ‘pro-death.’” Indeed. And I might add that such confidence in unproven predictive powers seems less like the rationality transhumanists claim to espouse than like uncritical faith.

• A sporting chance: Gizmodo has an essay by Aimee Mullins — an actress, model, former athlete, and double amputee — about technology, disability, and competition. Her key argument: “Advantage is just something that is part of sports. No athletes are created equal. They simply aren’t, due to a multitude of factors including geography, access to training, facilities, health care, injury prevention, and sure, technology.” Mullins concedes that it might be appropriate to keep certain technological enhancements out of sport, but she is “not sure” where to draw the line, and she advises not making any decisions about technologies before they actually exist.

• On ‘Neuro-Trash’: A remarkable essay in the New Humanist by Raymond Tallis on the abuse of brain research. Tallis starts off by describing how neuroscience is being applied to ever more aspects of human affairs. “This might be regarded as harmless nonsense, were it not for the fact that it is increasingly being suggested ... that we should use the findings of neurosciences to guide policymakers. The return of political scientism, particularly of a biological variety, should strike a chill in the heart.” Beneath this trend, Tallis writes, lies the incorrect “fundamental assumption” that “we are our brains.” (Vaughan over at MindHacks describes Tallis’s essay as “barnstorming and somewhat bad-tempered.” Readers looking for more along these lines might also enjoy our friend Matt Crawford’s New Atlantis essay on “The Limits of Neuro-Talk.”)

• Calling Ringling Bros.: We’ve known for a long time that people talking on cell phones get so distracted that they can become oblivious to what’s physically around them — entering a state sometimes called “absent presence.” In the October issue of Applied Cognitive Psychology, a team of researchers from Western Washington University reported the results of an experiment observing and interviewing pedestrians to see if they noticed a nearby clown wearing “a vivid purple and yellow outfit, large shoes, and a bright red nose” as he rode a bicycle. As you would expect, cell phone users were pretty oblivious. Does this suggest that we’ll suffer from increasing “inattentional blindness” as we are bombarded with ever more stimuli from increasingly ubiquitous gadgets? Not necessarily: it turns out that pedestrians listening to music tended to notice the clown more than those walking in silence. The cohort likeliest to see the clown consisted of people walking in pairs.

• Metaphor creep: “If the brain is like a set of computers that control different tasks,” says an SFSU psychology professor, then “consciousness is the Wi-Fi network that allows different parts of the brain to talk to each other and decide which action ‘wins’ and is carried out.”

• Another kind of ‘Futurism’: This year marks the centenary of the international Futurist art movement. The 1909 Futurist Manifesto that kicked it all off is explicitly violent and even sexist in its aims (“we want to exalt movements of aggression, feverish sleeplessness, the double march, the perilous leap, the slap and the blow with the fist ... we want to glorify war — the only cure for the world...”) and critical of any conservative institutions (professors and antiquaries are called “gangrene”; museums, libraries, and academies are called “cemeteries of wasted effort, calvaries of crucified dreams, registers of false starts”). Central to the Futurist vision was a love of new technologies — and of all the speed, noise, and violence of the machine age.

Saturday, November 14, 2009

The Human Factor

There is a poll over at Gizmodo asking readers: “What Percentage of Our Body Would Have To Be Replaced Before We Ceased Being Human?”

The options that the poll offers are mostly percentages — 10 percent, 20 percent, and so on — which is pretty silly, since it suggests that percentages are a useful way of talking about the human organism. (What is “20 percent” of a human body? Is that by mass? Or volume? Or perhaps surface area?) The logic of the poll’s options leads the respondent to think about the question along the lines of the Sorites paradox: “Well,” a respondent might think, “I don’t see what the big difference would be between 30 percent and 40 percent... or between 40 and 50 percent... or between 50 and 60...”

Given those options, it’s no surprise that three quarters of the respondents (as of this writing) have instead picked the following choice: “You can take away nearly everything, but if our brains are replaced by machines, we cease being human.”

In the comments beneath the poll, some readers objected to the options they were given. Commenter “newgalactic” argues that the correct answer to the question is a figure lower than any among the poll’s options: “The ‘body,’” he writes, “has more ties to the ‘mind/soul’ than we realize.” (He also wonders whether the poll results might be skewed by the fact that most Gizmodo readers are “dorks/nerds” who are “less physically blessed,” while someone with a body like Brad Pitt might “be more inclined to attach his humanity to his physical body.”)

The comments also suggest that any attempt to take the poll’s silly question seriously must start by asking a deeper question: What does it mean to be human? In the first issue of The New Atlantis, bioethicist Gilbert Meilaender described some of the difficulties that the deeper question entails:

We might try to think of human beings (or the other animals) [as collections of parts], and, indeed, we are often invited to think of them as collections of genes (or as collections of organs possibly available for transplant), but we might also wonder whether doing so loses a sense of ourselves as integrated, organic wholes.

Even if we think of the human being as an integrated organism, the nature of its unity remains puzzling in a second way. The seeming duality of person and body has played a significant role in bioethics. As the language of “personhood” gradually came to prominence in bioethical reflection, attention has often been directed to circumstances in which the duality of body and person seems pronounced. Suppose a child is born who, throughout his life, will be profoundly retarded. Or suppose an elderly woman has now become severely demented. Suppose because of trauma a person lapses into a permanent vegetative state. How shall we describe such human beings? Is it best to say that they are no longer persons? Or is it more revealing to describe them as severely disabled persons? Similar questions arise with embryos and fetuses. Are they human organisms that have not yet attained personhood? Or are they the weakest and most vulnerable of human beings?

Related questions arise when we think of conditions often, but controversially, regarded as disabilities.... Notice that the harder we press such views the less significant becomes any normative human form. A head, or a brain, might be sufficient, if it could find ways to carry out at a high level the functions important to our life.

Such puzzles are inherent in the human condition, and they are sufficiently puzzling that we may struggle to find the right language in which to discuss that aspect of the human being which cannot be reduced to body. Within the unity of the human being a duality remains, and I will here use the language of “spirit” to gesture toward it. As embodied spirits (or inspirited bodies) we stand at the juncture of nature and spirit, tempted by reductionisms of various sorts. We have no access to the spirit — the person — apart from the body, which is the locus of personal presence; yet, we are deeply ill at ease in the presence of a living human body from which all that is personal seems absent. It is fair to say, I think, that, in reflecting upon the duality of our nature, we have traditionally given a kind of primacy to the living human body. Thus, uneasy as we might be with the living body from which the person seems absent, we would be very reluctant indeed to bury that body while its heart still beat.

A definition of human being based only on biological parts will fail to capture the unique nature of the living human. A definition based on biological functions will fail to include human beings who lack those specific functions. Indeed, any strictly biological definition will miss the qualitative aspects of what it means to be human — how we live and behave over the course of our lives; what we do and are capable of doing; what we feel and experience; and how all of it changes — in short, the phenomena of life. And so a rich understanding of what it means to be human might start with science but must go beyond it, seeking wisdom especially in the disciplines rightly called “the humanities.”

There can be no honest answer to the Gizmodo poll as it is phrased, and there is no easy answer to the deeper question of what it means to be human. But the search is rewarding — and, in a way, the search may itself be part of the answer.

(h/t Instapundit)

Friday, November 13, 2009

The more you know... (about radical life extension)

Keep your eyes peeled when you're using Hulu and Vimeo these days and you may notice the latest step in the life-extension crowd's attempt to march into the mainstream. The Methuselah Foundation has created four "public service announcements" that are now in rotation on the two sites.
The four spots are here, here, here, and here.
NOTE: The original ending to this post has been removed, as it referred to possibly misleading identifying details from a previous post, which have also been removed. See the postscript to that post here.

Thursday, November 12, 2009

Long Live the King

Aubrey de Grey, a great advocate of immortality, is not worried about “immortal tyrants” for three reasons. First, because tyrannicide will still be possible. Second, because the spread of democracy will preemptively forestall tyranny. Third, because one immortal tyrant may not be so bad as a succession of tyrants, where the next guy is worse than the last. Each argument shows characteristic limits of the transhumanist imagination.

As far as tyrannicide goes, like many transhumanists de Grey stops well short of thinking through the possible consequences of the change he proposes (we are all speculating here, but we can try to be thorough speculators). Remember that tyrants already tend to be fairly security-conscious, knowing that whatever happens they are still mortal. Why would the prospect of having power and immortality to lose make them less risk-averse? It seems rather more likely that the immortal tyrant will be extremely risk-averse and hence security-conscious, and therefore represent a very “hard target” for the assassin — who will have just as much to lose if his mission is unsuccessful. As it is, most people living under a tyrant just do their best to keep their heads down; tyrannicides are rare. Throw immortality into the mix, and they are likely to be rarer still.

As far as democracy goes, de Grey exhibits a confidence characteristic of transhumanists generally: he knows what the future holds. I would certainly join him in hoping that democracy is here to stay and increasingly the wave of the future, but I don’t know that to be true and I don’t know how anyone could know that to be true. The victory of democracy over tyranny in the twentieth century was a near thing. History tells us that good times readily give way to bad times. The belief that democracy represents a permanent cure to the problem of tyranny is facile, in the way that all easy confidence about the direction of history is facile.

Finally, de Grey falls back on the proposition ‘better the devil you know than the devil you don’t’ — better Lenin than Stalin, to use his example. Leaving aside the question of how different the two leaders actually were, here de Grey is apparently trying to be hard-headed: It may not be all sweetness and light when we’re all immortal after all! Like many transhumanists, he is not very good at moral realism. You have to wonder: would the character of the immortal tyrant really stay the same over time? If, as the old maxim holds, absolute power corrupts absolutely, it would seem very much more likely that life under an immortal tyrant would get worse.

Ultimately, the problem is not really just tyranny; it is evil. In his Wisconsin State Fair speech of 1859, Lincoln notes, “It is said an Eastern monarch once charged his wise men to invent him a sentence, to be ever in view, and which should be true and appropriate in all times and situations. They presented him the words: ‘And this, too, shall pass away.’ How much it expresses! How chastening in the hour of pride! — how consoling in the depths of affliction!” Immortal evil means a world where the prideful will never be chastened, and the afflicted only consoled by giving up the very boon that de Grey promises us.

The problem with defending death

Todd May has a short essay on death at the New York Times's Happy Days blog. The argument is age-old (so to speak), but he reiterates it in a concise, compelling, and beautiful way:
Immortality lasts a long time. It is not for nothing that in his story “The Immortal” Jorge Luis Borges pictures the immortal characters as unconcerned with their lives or their surroundings. Once you’ve followed your passion — playing the saxophone, loving men or women, traveling, writing poetry — for, say, 10,000 years, it will likely begin to lose its grip. There may be more to say or to do than anyone can ever accomplish. But each of us develops particular interests, engages in particular pursuits. When we have been at them long enough, we are likely to find ourselves just filling time. In the case of immortality, an inexhaustible period of time.

And when there is always time for everything, there is no urgency for anything. It may well be that life is not long enough. But it is equally true that a life without limits would lose the beauty of its moments. It would become boring, but more deeply it would become shapeless. Just one damn thing after another.

This is the paradox death imposes upon us: it grants us the possibility of a meaningful life even as it takes it away. It gives us the promise of each moment, even as it threatens to steal that moment, or at least reminds us that some time our moments will be gone. It allows each moment to insist upon itself, because there are only a limited number of them. And none of us knows how many.
Well put. But wouldn't Todd May's argument about the importance of omnipresent death in shaping our lives become somewhat twisted and strained if it actually were possible to halt aging (as life extension advocates believe will someday be possible)? It is one thing to argue for the wisdom of accepting death when it is an inevitability. But it would be very different to make a positive case for death when it is no longer inevitable.

In his blog post, May notes that "it is precisely because we cannot control when we will die, and know only that we will, that we can look upon our lives with the seriousness they merit." But, although we can already decide to die if we so choose, might it not be much harder to look upon our lives with the same seriousness if we had to control when we died? Whatever the choice, our lives would take on a farcical quality, either from the emptiness of living without limits or the tragic absurdity of choosing to die rather than face that prospect.

(Hat tip: Brian Boyd)

[Image: "Q" from Star Trek: The Next Generation, portrayed by John de Lancie]

Wednesday, November 11, 2009

Robotic sports writers

The Singularity Hub posts on a new program that can churn out sports news stories:
Called Stats Monkey, the new computer software analyzes the box scores, and play by plays to automatically generate the news article. It highlights key players and clutch plays and will even write an appropriate headline and find a matching photo for a [key] player!

... [I]t could work for every sport humans like to read about. Moreover, Stats Monkey could be adapted to write business stories, or conference updates, or other forms of professional journalism that rely heavily on numbers and analytics. Writing, it seems, is no longer immune from automation.
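For a sense of how mechanical this kind of story generation can be, here is a minimal sketch, in Python, of a template-based recap generator working from a made-up box score. This is a toy of our own, not Stats Monkey's actual method; every team, player, and field name below is invented for illustration.

```python
# Toy sketch (not Stats Monkey's actual code): build a headline and a short
# recap from a box score by slotting statistics into fill-in-the-blank
# templates. All names and numbers are invented for illustration.

box_score = {
    "home": {"name": "Bears", "score": 24},
    "away": {"name": "Lions", "score": 17},
    "top_performer": {"name": "J. Smith", "team": "Bears",
                      "line": "132 rushing yards and 2 touchdowns"},
    "clutch_play": "a fourth-quarter interception with 1:12 remaining",
}

def recap(box):
    home, away = box["home"], box["away"]
    # Decide winner and loser from the final score.
    winner, loser = (home, away) if home["score"] > away["score"] else (away, home)
    star = box["top_performer"]
    headline = f"{winner['name']} beat {loser['name']} {winner['score']}-{loser['score']}"
    body = (f"{star['name']} led the {star['team']} with {star['line']}, "
            f"and {box['clutch_play']} sealed the win.")
    return headline + "\n" + body

print(recap(box_score))
```

The real system presumably does a great deal more (ranking plays, choosing angles, matching photos), but the basic move of slotting statistics into prose templates is no great mystery.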
Robotic sports journalists will make a nice complement to robotic athletes. Now all we need are robotic spectators! Human inefficiency could be removed from sports altogether, and then we could wonder at the technical prowess of games just as we marvel at the skills of server stacks:



And hey, if things really work out, soon we won't need humans for writing Singularity blogs either.

Tuesday, November 10, 2009

Someone is WRONG on the Internet!

À la XKCD, several recent posts here on Futurisms have stirred up some lively debate in comment threads. In case you missed the action, the "Transhumanist Resentment Watch" has led to a deeper exploration of some of this blog's major themes — resentment, disease, and normalcy. A post on magic pills has sparked a discussion on medical economics. The question of libertarian enhancement continues to bounce back and forth. And my rather innocuous posting of an Isaac Asimov story has led to tangents on hedonism and accomplishment.

Unmanning the Front Lines

The recent incident at Fort Hood recalls to mind a proposition that has become a great truism since the terrorist attacks of September 11, one that should never be allowed to be merely a truism: how grateful we, usually civilians, should be to the first responders who run toward danger rather than away from it. The guardian virtues our military, police, and public-safety organizations inculcate are all the more to be admired because they stand in such stark contrast to the virtues centering on comfortable self-preservation that are the stock-in-trade of our deeply bourgeois regime. In other times and places, courage, discipline, and honor might have been seen as among the highest expressions of our humanity; but I dare say most of us most of the time see them as instrumentally useful to peaceable pursuits that are the real business of life.

Courage of course requires being in harm’s way, and what we might hope would be the normal qualms of a decent chain of command about putting people in harm’s way are only heightened in our particular cultural environment. This point was brought home to me with great force when a student directed me to a recruitment video at the United States Navy Memorial website with the tagline “working every day to unman the front lines,” featuring the Navy’s remotely-piloted drone technology. It would be churlish and wrongheaded to deny that such marvels are a wonderful way to avoid putting the lives of our sons and daughters at risk. But it would be foolish to ignore the double entendre as well. With the front lines unmanned, there will be less need of nerve, courage, and spiritedness — manly virtues that Officer Kimberly Munley, who took down the Fort Hood shooter, reminds us are not exclusively the province of men. And it is not the Navy alone, of course. The push to replace human soldiers and first responders with robotic devices is well underway in nearly all services (I don’t know that much is happening on the fire or emergency medicine fronts).

Battling ’bots may still be only a distant prospect, and right at the moment we plainly have no lack of fellow citizens willing and able to serve as our guardians (although some first responders, like volunteer fire services, might be an exception). But in her provocative book Systems of Survival, Jane Jacobs warns that the guardian virtues hang together, and if you tamper with one you risk undermining them all — a point Plato might well agree with. So we should be asking ourselves: What happens to virtues like honor, loyalty, or discipline when they are not only challenged from without by the bourgeois virtues, but from within; when a need for courage is seen by the guardians themselves as a sign of a defect in their ability to protect us without putting themselves in harm’s way? It is an awesome task to be responsible for the lives of others at the risk of one’s own life, and through the guardian virtues the terrible power of that task is directed and constrained. As much as we hope for a day when all men will live in peace, we are entitled to wonder whether that day will be brought closer by replacing the traditional terrors of battle with innovative methods of cold-blooded killing.

Monday, November 9, 2009

Singularity Summit videos

Videos of the talks from the 2009 Singularity Summit, which we covered extensively on this blog, are now available here. A few videos are still missing, but most of them are up.

The best videos (IMHO, as the kids say) are:
  • David Chalmers on principles of simulation and the Singularity (video / post)
  • Peter Thiel making the economic case for the Singularity (video / post)
  • And the discussion with Stephen Wolfram on the Singularity at the cosmic scale (video / post)

Also worthwhile, revealing, or at least entertaining:
  • Brad Templeton's talk was one of the most entertaining, ambitious, and plausible; the audience question segment was also particularly good (video / post)
  • Juergen Schmidhuber's talk on digitizing creativity was lively and engaging, if silly (video / post)
  • The segment of Michael Nielsen's talk where he describes the principles of quantum computing (video / post)

The Myth of Libertarian Enhancement, Cont'd

Our recent post about libertarian enhancement has received some pushback. For example, commenter Kurt9 says he believes that “the highest moral value in the universe is to pursue one’s own happiness and love of life.” (He says that anyone who disagrees with him is engaging in “sophistry for totalitarianism” — his attempt at peremptorily ending all debate.) For Kurt9, a libertarian, pursuing happiness and love-of-life means super-longevity: “I want radical life extension (multiple 1000 year life span). I want to cure aging and get free of it. I fail to see why you should have a problem with this.”

This gives us another opportunity to discuss libertarianism and transhumanism. Consider: If he were to pursue radical life extension in strict accordance with libertarian principles, he would have to do so free from restrictions imposed by others and without in turn restricting the choices of others. This might be possible if he were a brilliant scientist living out in a solitary shack in the woods without contact with human civilization.

"A Hunstman and Dogs" by Winslow Homer. Courtesy WorldVisitGuide.But living alone in a shack isn’t usually conducive to major medical advances. The actual realization of our commenter Kurt9’s dream would require a society in which thousand-year lifespans were not just attainable but available to guys like him. That would be a society very different from our own. To put it simply, consider all the social, cultural, and economic changes that accompanied the doubling of human life expectancy over the last century and a half, and imagine the turmoil that would be involved in suddenly adding another nine centuries to the lifespan. The changes involved would be radical, complex, far from uniformly good or bad, and extremely difficult to predict beforehand.

This actually hints at a deeper truth connected to how we think about the future. All human beings live in a particular time and place. Moreover, all our choices and our ways of life presuppose a particular society, culture, and set of institutions in which they can be realized. However much fun it might be for fiction or for a thought experiment, in real life it makes no sense to talk about how “the world as it is now” will be different from “the world as it is now with the single modification that one individual can choose to live for a thousand years.” It is a meaningless proposition, as much a practical absurdity as it would be for an ancient Roman to insist that it concerns no one else if he wants to invent and drive an automobile.

The commenter Kurt9 also says that “We have no desire to impose our dreams and choices on other[s]. We seek only the freedom to do our own thing.”

This libertarian “freedom to do our own thing” implies that each individual should be equally free to pursue his or her chosen way of life no matter what that choice is, so long as no one else is harmed. Yet as much as you might not want to impose your own choices on others, it is an inescapable fact of life that our choices do impinge and often impose on others. Consider the old saw about liberty — that your freedom to swing your fist ends at the tip of my nose. But in real life, a guy flailing around with clenched fists is going to alter the behavior of everyone in sight. Or, to pick a different example, my neighbor’s choice to mine coal in his backyard obtrudes upon my freedom to choose to live in a quiet neighborhood with unpolluted groundwater and high property values. Or, to offer an example more relevant to some of our readers, your personal choice to develop an artificial intelligence that can write useful computer programs will impinge on my freedom to choose to enjoy a fulfilling and lucrative career as a computer programmer.

Libertarian transhumanists don’t really seem to be interested in protecting each individual’s equal freedom to do and be what he wants. Rather, they are interested in defending their own prerogatives to pursue their particular choices to enhance themselves without any encumbrance or criticism. But even this narrower and more solipsistic version of libertarianism will ultimately have to contend with the fact that as individuals gain powers — particularly powers of the sort that would be available to hypothetical posthumans — they will also gain the ability to exercise those powers in spite of, and over, other individuals in ways that will be far more difficult to prevent, stop, or even detect.

Liberty, rightly understood, does need to be defended. But we must also recognize that the image of liberty that some libertarians hold — epitomized by the iconic rugged frontiersman — depends on a self-reliance and self-constitution that are quite alien to today’s world. We are now more socially, politically, and technologically enmeshed than ever before. And while government tyranny remains a serious concern, there are other kinds of tyranny — including freely-chosen technological tyranny — that we should remain vigilant against.

Friday, November 6, 2009

Defining ‘Cyborg’ Down

Wired Science has a story by Brandon Keim featuring the work of University of Chicago geoscientist Patrick McGuire. McGuire is working on “wearable AI systems and digital eyes that see what human eyes can’t.” So equipped, “space explorers of the future could be not just astronauts, but ‘cyborg astrobiologists.’” That phrase — “cyborg astrobiologist” — comes from the title McGuire and his team gave to the paper reporting their early results. In their paper, they describe developing a “real-time computer-vision system” that has helped them successfully to identify “lichens as novel within a series of images acquired in semi‑arid desert environments.” Their system also quickly learned to distinguish between familiar and novel colored samples.

According to Keim, McGuire admits there is a long way to go before we get to the cyborg astrobiologist stage — a point that seems to have been missed by the folks at Wired Science, who gave Keim’s piece the headline “AI Spacesuits Turn Astronauts Into Cyborg Biologists” (note the present tense). But it’s true that the meaning of “cyborg” is contested ground. If Michael Chorost in his fine book Rebuilt (which I reviewed here) can decide that he is a cyborg because he has a cochlear implant, then perhaps those merely testing McGuire’s system are cyborgs, too.

But my point now isn’t to be one of those sticklers who tries to argue with Humpty Dumpty that it is better if words don’t mean whatever we individually want them to mean. Rather, I’m wondering why McGuire should have used this phrase, “cyborg astrobiologists,” in this recent paper and a number of earlier ones. The word “cyborg” was originally used to describe something similar to what McGuire is attempting, as Adam Keiper has noted:

In 1960, at the height of interest in cybernetics, the word cyborg—short for “cybernetic organism”—was coined by researcher Manfred E. Clynes in a paper he co-wrote for the journal Astronautics. The paper was a theoretical consideration of various ways in which fragile human bodies could be technologically adapted and improved to better withstand the rigors of space exploration. (Clynes’s co-author said the word cyborg “sounds like a town in Denmark.”)

But McGuire doesn’t seem to be aware of the word’s original connection to space exploration — he doesn’t acknowledge it anywhere, as far as I can tell — and instead he seems to be using the word “cyborg” in its more recent and sensationalistic science-fiction-ish sense of part-man, part-machine. So why use that word? The simple answer, I suppose, is that academics are far from immune to the lure of attention-getting titles for their work. But it is still noteworthy that for McGuire and his audience, “cyborg” is apparently something to strive for, not a monstrous hybrid like most iconic cyborgs (think Darth Vader, the Borg, or the Terminators). Deliberately or not, McGuire is engaged in a revaluation of values. One wonders whether in a transhumanist future there will be any “monsters” at all; perhaps that word will share the fate of other terms of distinction that have become outmoded or politically incorrect. “Monster,” after all, implies some norm or standard, and transhumanism is in revolt against norms and standards.

Or perhaps the unenhanced human being will become the monster, the literal embodiment of all that right-thinking intelligence rebels against, a dead-end abortion of mere nature. Their obstinate persistence would be fearful if they themselves were not so pitiful. We came from that?

Tuesday, November 3, 2009

In texted time

Three items today relevant to recent posts. First, following up on our series of posts on lifelogging, CNN has a very cursory but still-worth-excerpting article called "Do digital diaries mess up your brain?":
But recording everything you do takes people out of the "here and now," psychologists say. Constant documenting may make people less thoughtful about and engaged in what they're doing because they are focused on the recording process, Schwartz said.

Moreover, if these documented memories are available to others, people may actually do things differently.

"If we have experiences with an eye toward the expectation that in the next five minutes, we're going to tweet them, we may choose difference experiences to have, ones that we can talk about rather than ones we have an interest in," he said.

Similarly, a 1993 study led by researchers at the University of Virginia found that undergraduate students who were asked to think about their reasons for choosing posters chose differently and reported less satisfaction than those who did not have to justify their choices.
Second, from a recent New York Times column by David Brooks on texting and romance:
The opportunity to contact many people at once seems to encourage compartmentalization, as people try to establish different kinds of romantic attachments with different people at the same time.

It seems to encourage an attitude of contingency. If you have several options perpetually before you, and if technology makes it easier to jump from one option to another, you will naturally adopt the mentality of a comparison shopper.

It also seems to encourage an atmosphere of general disenchantment. Across the centuries the moral systems from medieval chivalry to Bruce Springsteen love anthems have worked the same basic way. They take immediate selfish interests and enmesh them within transcendent, spiritual meanings. Love becomes a holy cause, an act of self-sacrifice and selfless commitment.

But texting and the utilitarian mind-set are naturally corrosive toward poetry and imagination. A coat of ironic detachment is required for anyone who hopes to withstand the brutal feedback of the marketplace. In today’s world, the choice of a Prius can be a more sanctified act than the choice of an erotic partner.
Finally, Mariah Carey aside, can you believe this is intended as an advertisement for Blackberrys?:

Monday, November 2, 2009

On being in the world

Apropos the recent pair of posts here on lifelogging, I might recommend for further reading Christine Rosen's essay on multitasking from The New Atlantis last year, and Walter Kirn's 2007 essay on that subject in The Atlantic. From Kirn's piece:
Productive? Efficient? More like running up and down a beach repairing a row of sand castles as the tide comes rolling in and the rain comes pouring down. Multitasking, a definition: “The attempt by human beings to operate like computers, often done with the assistance of computers.” It begins by giving us more tasks to do, making each task harder to do, and dimming the mental powers required to do them.
Kirn's essay contains so many asides and parentheticals, yet builds to such a crescendo, that I think he must have intentionally crafted the form of the essay to be itself a sort of meditation on focus. He directs his ire not so much at the technologies of multitasking as at the ways they are used, and at the unquestioned premises behind the tools' design and promotion — premises that can produce effects quite the opposite of what is promised and intended.

Take e-readers, for example. Let's put aside the claims that reading is coming to an end and the counter-claims that reading is undergoing a renaissance; instead, let's focus on the e-reader technology itself. The difference between, say, the Kindle and printed books (playfully explored here by Alan Jacobs on one of our sister blogs) is of course partly a matter of comfort for the eye and the hand. But more importantly, screens are generally part of a series of technologies that immerse us in a vast web of constant connection to other things, people, and ideas — rather than just the things, people, and ideas right in front of us. In another New Atlantis article last year, Christine Rosen described her experience attempting to read Dickens's Nicholas Nickleby on a Kindle:
... I quickly adjusted to the Kindle’s screen and mastered the scroll and page-turn buttons. Nevertheless, my eyes were restless and jumped around as they do when I try to read for a sustained time on the computer. Distractions abounded. I looked up Dickens on Wikipedia, then jumped straight down the Internet rabbit hole following a link about a Dickens short story, “Mugby Junction.” Twenty minutes later I still hadn’t returned to my reading of Nickleby on the Kindle.
Maryanne Wolf wonders about the implications of that kind of distraction for children on the New York Times website:
The child’s imagination and children’s nascent sense of probity and introspection are no match for a medium that creates a sense of urgency to get to the next piece of stimulating information. The attention span of children may be one of the main reasons why an immersion in on-screen reading is so engaging, and it may also be why digital reading may ultimately prove antithetical to the long-in-development, reflective nature of the expert reading brain as we know it....

The habitual reader Aristotle worried about the three lives of the “good society”: the first life is the life of productivity and knowledge gathering; the second, the life of entertainment; and the third, the life of reflection and contemplation....

I have no doubt that the digital immersion of our children will provide a rich life of entertainment and information and knowledge. My concern is that they will not learn, with their passive immersion, the joy and the effort of the third life, of thinking one’s own thoughts and going beyond what is given.
E-readers wouldn't be nearly as problematic if they didn't — both explicitly by being Internet-enabled and implicitly through their digital and screeny natures — draw us into the mode of interaction that is characteristic of the digital world. Reading itself may not be going anywhere, but sustained and focused reading might become increasingly difficult.


And of course these concerns about screens and reading apply more broadly to our interactions with people, places, and the world around us in general. Just take a look at the pilots who recently not only overflew their airport by 150 miles but didn't even respond to frantic hails from airports and other nearby pilots, all because they were distracted by their laptops. Maybe the pilots are lying — maybe they were really asleep — but even then, the fact that they would use laptops as an excuse and that so many of us would find that excuse plausible suggests that we understand the great power that the screen can have over us. One shudders to imagine how our interaction with the world will shift if the medium of information immersion is slapped right onto our eyeballs.

(Hat tip: Justin Henderson)
[Photo credits: Parviz Research Group, University of Washington; Ryon Day via The Austin Map Project]

Sunday, November 1, 2009

Useful Singularity overview

Various people I know across the pro/anti-transhumanism spectrum have been looking for a while for good but concise introductory material to give to people who don't know about the Singularity. Ray Kurzweil's The Singularity is Near is probably now the standard introductory text, but not all people want to read a book that is in what Kurzweil might call an uncompressed format (which is to say, rather long and repetitive) on a subject that they don't know anything about in the first place.

Well, thank goodness for the variety of media formats. If you're looking for something short and clear, I found essentially a six-page version of Kurzweil's book that he did as an article for The Futurist, the magazine of the World Future Society. It's on pages 2-3 and 5-9 of the PDF here.