Futurisms: Critiquing the project to reengineer humanity

Tuesday, March 17, 2015

Progress or Infinite Change?

I have recently been spending a fair amount of my time during my sabbatical year at Princeton as a Madison Fellow reading and thinking about H.G. Wells, in preparation for an upcoming Agora Institute for Civic Virtue and the Common Good conference. Wells was tremendously influential in the first half of the twentieth century and, as it seems to me anyway, he was crucial in popularizing “progress” as a kind of moral imperative, an idea whose strengths and weaknesses are still with us today.

Wells, along with Winwood Reade (whom I discuss in my new book Eclipse of Man), was a pioneer of trying to tell the human story in connection with “deep history.” But so far as I know he never argued, nor would he have been so foolish as to argue, that there was any kind of steady, incremental progress in human affairs that could be traced all the way back to prehistory. While as a progressive he may have been second to none, his view was far more careful and nuanced.

First of all, he knew at some level, along with his friend G.K. Chesterton, that any talk of progress requires a goal, and he wrote in The Outline of History that the foundations for the human project that would become progress were only laid in the fifth and fourth centuries B.C. As Wells put it,

The rest of history for three and twenty centuries is threaded with the spreading out and development and interaction and the clearer and more effective statement of these main leading ideas. Slowly more and more men apprehend the reality of human brotherhood, the needlessness of wars and cruelties and oppression, the possibilities of a common purpose for the whole of our kind.

Yet even at that, our power to actually achieve such goals is, in Wells’s account, severely limited until Renaissance thinkers open the door to the scientific and technical revolutions that, by the nineteenth century, have given humankind unprecedented power over nature, with far more promised to come in the future.

Indeed, real progress for Wells was something that was still to come. That is because it would not have occurred to him to think that at any given moment the positive changes in human affairs necessarily outweighed the negative. Each generation may not even be better off than the one that came before:

Blunder follows blunder; promising beginnings end in grotesque disappointments; streams of living water are poisoned by the cup that conveys them to the thirsty lips of mankind. But the hope of men rises again at last after every disaster.... [Ellipses in original]

Progress was not a sure thing, an obvious fact of history, but the hope that a golden thread running into the relatively recent past would not be broken. Such a hope may or may not be realistic, but it is refreshing to see Wells identify it for what it is, rather than trying to adduce some sort of necessary laws of historical development or to find all the silver linings in very cloudy weather.

Now, Wells gets himself into trouble when he tries to reconcile this view of progress as the achievement of old goals with an evolutionary, competitive imperative that forbids him to imagine the future as any kind of stable end state. While in numerous books, at often tedious length, he lays out various relatively near-term futures that represent his view of how human brotherhood and peaceableness could be realized by an elite’s proper deployment of science and technology, they often include a certain amount of hand-waving about these utopias just paving the way for even more extraordinary possibilities as yet unenvisioned because perhaps unenvisionable by us, with our narrow views. In principle, at least, this means that in the end Wells can defend change, but not, past a certain point, progress.

This difficulty reconciling progress with mere change is still alive in our own day. Our tech industry sometimes tells us the ways that it will make our lives better, but sometimes adopts more neutral terminology — we routinely hear of “change agents” and “disruptors” — no longer even promising progress except understood as change itself. “The Singularity,” strictly speaking, is just the extreme expression of the same idea. But it is not really “progress” any more if perpetual competition means that all that is solid perpetually melts into thin air. The changes that come along may be wonderful or not, each in its own way. They may aggregate into circumstances that are better or worse, each in its own way. Our non-prescriptive, libertarian postmodern transhumanists are in the same position; to call “anything is permitted” progress is only possible if progress is defined as “anything is permitted.”

When the way we understand future history thus dissolves into particularity, it is hard to see how the future — let alone the bloody and oppressive past — could be a positive-sum game, as we expect that one generation will have only a severely limited common measure of “positive” with the next. We see signs already. Is the present generation a little better off than the previous one, because they are being raised with cellphones in hand? Surely the passing generations, with their old-fashioned ideas of friendship and social interaction, are entitled to doubt it, while the generations yet to come will wonder at the bulky and clumsy interface that their progenitors had to contend with. How did they walk along and look down at the screen at the same time? What a toll it must have taken! Perhaps people just had to be much tougher back then, poor saps....

Tuesday, February 17, 2015

Who Speaks for Earth?

A recent not-very-good article in The Independent presents as news what is really an ongoing debate within the relatively small community of scientists interested in the search for extraterrestrial intelligence (SETI). The issue is whether or under what circumstances SETI should become “METI” — that is, Messages to Extraterrestrial Intelligence. We have been listening for messages; should we start deliberately broadcasting them?

Actually, we have been doing this deliberately but hardly systematically for some decades now: think of the justly famous Pioneer plaques of 1972–73 and the Voyager Golden Record of 1977. David Brin, the noted science fiction author and admittedly a partisan of one side in this debate, provides an excellent background discussion, which I hope he will update again in light of the more recent events The Independent alludes to.

Like all discussions about SETI, the merits of this one depend heavily on our assumptions about the nature and existence of advanced extraterrestrial intelligence, a topic that reasonable people are very free to disagree on because we know absolutely nothing about it. For example, the whole question of sending messages to planetary systems that we have newly identified as good targets for having life at all (which discoveries seem to be spurring the current round of METI interest) presupposes not only that we have some solid understanding of all the conditions under which life can emerge, but also what some would regard as a rather old-fashioned SETI model of interplanetary communication between intelligences more or less advanced yet bound to their planets. For those transhumanists like Hans Moravec who see the future on our planet as artificial intelligences greedily transforming matter into computational substrates and spreading out in a wave of expansion traveling at not much less than the speed of light (think Borgs without bodies), the notion that we should just send messages over to other planets can only look quaint. Or if intelligent self-replicating nanomachines are in our future, then we may already be sending messages to ETI without even knowing it, because such machines created by super-intelligent aliens may already be here among us. And so on. Transhumanist responses to SETI have shown how the sky is the limit when it comes to our imagination of not-implausible ETI scenarios (indeed, what defines “plausible”?). And imagination will be all we have to go on, until well after we have had some comprehensible first contact.

I admit to finding both sides of the METI debate unsatisfying. Those who advocate sending messages are counting either on a dogmatic belief in the benevolent nature of alien life or on the vastness of cosmic distances to act as a quarantine effect. These are both dubious assumptions; I discuss them critically at some length in my new book Eclipse of Man.

And there is certainly something to David Brin’s concern that the advocates of sending messages are taking a great deal on themselves by proceeding along these lines without a more thorough consideration of the merits of the case. Yet Brin’s own desire for international consultation, or, as he puts it on his website, getting “input from humanity’s best and wisest sages ... while laying all the issues before a fascinated general public,” does not conform to the sensible reservations he expresses elsewhere about the wisdom of individuals and seems pretty thin gruel if indeed the fate of all of humanity is at stake. It is a wonderful thing “to open up broader, more eclectic and ecumenical discussions.” But we still have to wonder about their results, if indeed they reach any conclusions at all, when there is no framework of authority for actually shepherding such a discussion to a presumptively globally legitimate and enforceable conclusion — which is almost certainly just as well when you stop and think about the way so many of the global political institutions we do have actually work. We may not know anything about extraterrestrial intelligence, but we do know the answer to the question, “Who speaks for Earth?” So far: nobody, thank goodness.

Thursday, February 12, 2015

Darwin Among the Transhumanists

Today is “Darwin Day” — the anniversary of the great naturalist Charles Darwin’s birth in 1809 — which is as good a time as any to reflect on the complicated ways in which Darwinian thinking influences the transhumanists. This is discussed at several points in Eclipse of Man, the new book by our Futurisms colleague Charles Rubin, which you should go out and buy today.

Professor Rubin lays out some of the ways, both obvious and subtle, that the Darwinian idea of evolution via competition was picked up by the predecessors of today’s transhumanists. This fundamental idea is in tension with those of other major thinkers: the philosopher Condorcet’s sunny belief in human improvement and the economist Thomas Malthus’s worries about scarcity and limited resources. “Through to our own day,” Rubin writes, “much of the debate about progress has arisen from tensions among these three men’s ideas: Condorcet’s optimism about human perfectibility, the Malthusian problem of resource scarcity, and the Darwinian conception of natural competition as a force for change over time. The transhumanists, as we shall see, reconcile and assimilate these ideas by advocating the end of humanity.”

Transhumanism, Professor Rubin writes, is

an effort to maintain some concept of progress that appears normatively meaningful in response to Malthusian and Darwinian premises that challenge the idea of progress. Malthusianism has come to be defined by thinking that the things that appear to be progress — growing populations and economies — put us on a self-destructive course, as we accelerate toward inevitable limits. But it almost seems as if, in the spirit of Malthus’s original argument, there is something inevitable also about that acceleration, that we are driven by some force of nature beyond our control to grow until we reach beyond the capacities of the resources that support that growth. Meanwhile, mainstream Darwinian thinking has done everything it can to remove any taint of progress from the concept of evolution; evolution is simply change, and randomly instigated change at that.

Transhumanism rebels against the randomness of evolution and the mindlessness of a natural tendency to overshoot resources and collapse. It rejects ... the “assumption of mediocrity” in favor of arguing that man has a special place in the scheme of things. But its rebellion is not half as radical as it assumes, for transhumanism builds on the very same underlying conception of nature that the Malthusians and Darwinians build on, vociferously rejecting the thought that nature has any inherent normative goals or purposes. While it rejects blind evolution as a future fate for man, it accepts it as the origins of man. While it rejects a Malthusian future, it does so while threatening the same old apocalypse if we do not transcend ourselves, and, in the form of Kurzweil’s law of accelerating returns, it adopts a Malthusian sense that mankind is in the grip of forces beyond its control.

Because transhumanism accepts this account of nature, it is driven to reject nature. Rejecting also any religious foundations for values, then, it is left with nothing but socially constructed norms developed in response to human power over nature, which, given the unpredictable transformative expectations they have for that power as it becomes not-human, ultimately amounts to nothing at all. Transhumanism is a nihilistic response to the nihilism of the Malthusians and Darwinians.

You can see why Peter Lawler says Eclipse of Man is a “hugely significant accomplishment”: you simply won’t find as insightful, thoughtful, and trenchant a critique of transhumanism anywhere else.

Friday, February 6, 2015

Listen up forefathers

A recent iPad Air 2 commercial (“Change is in the air”) features a song by The Orwells called “Who Needs You” with the following lyrics. The italicized portions are the lines actually used in the spot:

You better toss your bullets
You better hide your guns
You better help the children
Let ’em have some fun

You better count your blessings
Kiss mom and pa
You better burn that flag
Cause it ain’t against the law!

You better pledge your allegiance
You’re not the only one
Listen up forefathers
I’m not your son
You better save the country
You better pass the flask
You better join the army
I said: “no thank you, dear old uncle Sam!”

You better toss your bullets
You better hide your guns
You better help the children
Let them have some fun, some fun, some fun!

On its own terms I find the social commentary of the song a bit murky. It is hard for me, old fogey that I am, to distinguish between the things “you better” do that are meant ironically and the things you better do that are serious, if there are any such. The song seems to stretch for a Sixties-style oppositional sensibility without any clear sense of what to be opposed to. Is it a sly lament about alienation and not feeling needed, or a declaration of complete autonomy? Is the lesson that helping the children is fun? Maybe hipper and younger people at Apple caught the drift when they edited the song in such a way as to suggest that having “some fun” really is the key point. But even then, what fun is it to burn the flag if it is not against the law? And who cares if you don’t join the army when there is no draft?

Still, the use of the song by a huge and profitable corporation like Apple strikes me as (ahem) Orwellian, if in a relatively familiar big-business-cutting-its-own-throat sort of way. Who needs you? Apple needs you, to be the success it is, a success that still seems to be riding very much on the coattails of its forefather, Steve Jobs. And indeed, most of the images in the ad suggest that you need people in order to have fun with your iPad, or indeed help “the children” with it.

But the inner contradictions are not the only problem. Apple has been able to have its success in this country precisely because it is a child of its forefathers. The Constitution of the United States, as it serves to protect private property, provide the rule of law, protect trade and intellectual property, promote domestic tranquility, and provide for the common defense, is the necessary condition of Apple’s existence, let alone of its managers’, shareholders’, and workers’ ability to profit from its existence. (The same might be said of the suburban-kid Orwells, of course.) If the changes in the air are based on a repudiation of the foundations secured by our forefathers, Apple will not long thrive. Go have some fun then.

Apple wants to be in our heads, and is pretty good at getting there. And this dismissive — dare I say unpatriotic — attitude is the message they choose to link with their product. There’s no arrogance like big-business tech arrogance, no blindness like that of the West Coast masters of the universe who think that the world was created at the beginning of the last product cycle.

Friday, January 9, 2015

Transhumanism, Freedom, and Coercion

Transhumanists believe that natural human limitations can, or should, or even must be overcome, via biotechnology, nanotechnology, and other means.

Yet many transhumanists emphasize that people should not be forced into using enhancement technologies. Rather, individuals should be free to decide whether or not to transform themselves. Our colleague Charles T. Rubin puts it this way in his excellent new book Eclipse of Man: Human Extinction and the Meaning of Progress:

A great many transhumanists stand foursquare behind the principle of consumer choice. Most are willing to concede that enhancements ought to be demonstrably safe and effective. But the core belief is that people ought to be able to choose for themselves the manner in which they enhance or modify their own bodies. If we are to use technology to be the best we can be, each of us must be free to decide for himself what “best” means and nobody should be able to stop us.

This techno-libertarian stance seemingly allows transhumanists to distance themselves from early-twentieth-century advocates of eugenics, who believed that government coercion should be used to achieve genetic betterment. What’s more, when they are compared to eugenicists, the transhumanists turn it around, employing a clever bit of jujitsu:

Indeed, the transhumanists argue, it is their critics — whom they disparagingly label “bioconservatives” and “bioluddites” — who, by wishing to restrict enhancement choices, are the real heirs of the eugenicists; they are the ones who have an idea of what humans should be and want government to enforce it. The transhumanists would say that they are far less interested in asserting what human beings should be than in encouraging diverse exploration into what we might become, including of course not being human at all. Moreover, the argument goes, transhumanists are strictly speaking not like eugenicists because they are not interested only in making better human beings — not even supermen, really. For to be merely human is by definition to be defective.

It is this view of human things that makes the transhumanists de facto advocates of human extinction. Their dissatisfaction with the merely human is so great that they can barely bring themselves to imagine why anyone would make a rational decision to remain an unenhanced human, or human at all, once given a choice.

However, if the transhumanists are for the most part against state coercion in relation to enhancements, as we have already seen, that does not mean there is no coercive element in the transition to the transhuman. They can avoid government coercion because they believe that the freedom of some individuals to enhance and redesign as they please adds up to an aggregate necessity for human enhancement, given competitive pressure and the changing social norms it will bring. Indeed, to the extent that transhumanists recognize that theirs is presently the aspiration of a minority, they are counting on this kind of pressure to bring about the changes in attitude they desire.

Within the framework of the largely free market in enhancements the transhumanists imagine, an arms-race logic will drive ever-newer enhancements, because if “we” don’t do it first, “they” will, and then “we” will be in trouble. This kind of coercion is not of much concern to transhumanists; they are content to offer that it does not infringe upon freedom because, as the rules of the game change, one always retains the freedom to drop out. Indeed, the transhumanists seem to take particular delight in pointing out that anyone who opposes the idea that the indefinite extension of human life is a good thing will be perfectly free to die. In a world of enhancement competition, consistent “bioluddites” will be self-eliminating.

Once we see past the transhumanists’ superficial appeal to freedom, we can see transhumanism for what it is: an ideology committed to the necessity of human transformation, a transformation that is tantamount to extinction.

To read more of Rubin’s thoughts on techno-libertarianism and transhumanism, get yourself a copy of Eclipse of Man today, in hardcover or e-book format.

Wednesday, December 17, 2014

Near, Far, and Nicholas Carr

Nicholas Carr, whose new book The Glass Cage explores the human meaning of automation, last week put up a blog post about robots and artificial intelligence. (H/t Alan Jacobs.) The idea that “AI is now the greatest existential threat to humanity,” Carr writes, leaves him “yawning.”

He continues:

The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

I would not argue with Carr about probable versus possible — he may well be right there. But later in the post, quoting from an interview he gave to help promote his book, he implicitly acknowledges that there are people who think that machine consciousness is a great idea and who are working to achieve it. He thinks that their models for how to do so are not very good and that their aspirations “for the near future” are ultimately based on “faith, not reason.”

near ... or far?
All fine. But Carr is begging one question and failing to observe a salient point. First, it seems he is only willing to commit to his skepticism for “the near future.” That is prudent, but then one might want to know why we should not be concerned about a far future when efforts today may lay the groundwork for it, even if by eliminating certain possibilities.

Second, what he does not pause to notice is that everyone agrees that “flood, famine, pestilence, plague and war” are bad things. We spend quite a serious amount of time, effort, and money trying to prevent them or mitigate their effects. But at the same time, there are also people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture at least seems largely on their side (even if there are dissenters in theory). So when there are people saying that an existential threat is the feature and not the bug, isn’t that something to worry about?

Friday, December 5, 2014

Margaret Atwood’s Not-Very-Deep Thoughts on Robots

Margaret Atwood has been getting her feet wet in the sea of issues surrounding developments in robotics and comes away with some conclusions of corresponding depth. Robots, she says, are just another of the extensions of human capacity that technology represents; they embody a perennial human aspiration; maybe they will change human nature, but what is human nature anyway?

This is all more or less conventional stuff, dead center of the intellectual sweet spot for the Gray Lady, until Atwood gets to the very end. What really concerns her seems to be that we would commit to a robotic future and then run out of electricity! That would pretty much destroy human civilization, leaving behind “a chorus of battery-powered robotic voices that continues long after our own voices have fallen silent.” Nice image — but long after? That’s some battery technology you’ve got there. The robots would not care to share any of that power?

As I discuss in my new book Eclipse of Man: Human Extinction and the Meaning of Progress, most of those who believe in “Our Robotic Future,” as Atwood’s piece is titled, do so with the expectation that it is part and parcel of an effort at overcoming just the kind of Malthusian scarcity that haunts Atwood. They may of course be wrong about that, but given the track record of ongoing innovation in the energy area, it is hard to see why one would strain at this particular gnat.

Then again, the NYT essay suggests that Atwood’s literary knowledge of things robotic ends by the 1960s. Atwood’s own bleak literary futures seem to focus on the biological; maybe she has not got the transhumanist memo yet.

—Charles T. Rubin, a Futurisms contributor and contributing editor to The New Atlantis, is a Fellow of the James Madison Program at Princeton University.

Wednesday, December 3, 2014

Human Flourishing or Human Rejection?

Sometimes, when we criticize transhumanism here on Futurisms, we are accused of being Luddites, of being anti-technology, of being anti-progress. Our colleague Charles Rubin ably responded to such criticisms five years ago in a little post he called “The ‘Anti-Progress’ Slur.”

In his new book Eclipse of Man, Professor Rubin explores the moral and political dimensions of transhumanism. And again the question arises, if you are opposed to transhumanism, are you therefore opposed to progress? Here, in a passage from the book’s introduction, Rubin talks about the distinctly modern idea that humanity can better its lot and asks whether that goal is in tension with the transhumanist goal of transcending humanity:

Even if the sources of our misery have not changed over time, the way we think about them has certainly changed between the ancient world and ours. What was once simply a fact of life to which we could only resign ourselves has become for us a problem to be solved. When and why the ancient outlook began to go into eclipse in the West is something scholars love to discuss, but that a fundamental change has occurred seems undeniable. Somewhere along the line, with thinkers like Francis Bacon and René Descartes playing a major role, people began to believe that misery, poverty, illness, and even death itself were not permanent facts of life that link us to the transcendent but rather challenges to our ingenuity in the here and now. And that outlook has had marvelous success where it has taken hold, allowing more people to live longer, wealthier, and healthier lives than ever before.

So the transhumanists are correct to point out that the desire to alter the human condition runs deep in us, and that attempts to alter it have a long history. But even starting from our perennial dissatisfaction, and from our ever-growing power to do something about the causes of our dissatisfaction, it is not obvious how we get from seeking to improve the prospects for human flourishing to rejecting our humanity altogether. If the former impulse is philanthropic, is the latter not obviously misanthropic? Do we want to look forward to a future where man is absent, to make that goal our normative vision of how we would like the world to be?

Francis Bacon famously wrote about “the relief of man’s estate,” which is to say, the improvement of the conditions of human life. But the transhumanists reject human life as such. Certain things that may be good in certain human contexts — intelligence, pleasure, power — can become meaningless, perverse, or destructive when stripped of that context. By pursuing these goods in abstraction from their human context, transhumanism offers not an improvement in the human condition but a rejection of humanity.

For much more of Charlie Rubin’s thoughtful critique of transhumanism, pick up a copy of Eclipse of Man today.

Wednesday, October 29, 2014

Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin, who first addressed them in the inaugural issue of TNA and who is one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress.


We'll have much more to say about the book in the days and weeks ahead, but for now, you can read the dust-jacket text and the book's blurbs at EclipseOfMan.net and, even better, you can buy it today from Amazon or Barnes and Noble.

Tuesday, September 23, 2014

What Total Recall Can Teach Us About Memory, Virtue, and Justice



The news that an American woman has reportedly decided to pursue plastic surgery to have a third breast installed may itself be a subject for discussion on this blog, and will surely remind some readers of the classic 1990 science fiction movie Total Recall.

As it happens, last Thursday the excellent folks at Future Tense hosted one of their “My Favorite Movie” nights here in Washington D.C., playing that very film and holding a discussion afterwards with one of my favorite academics, Stanford’s Francis Fukuyama. The theme of the discussion was the relationship between memory and personal identity — are we defined by our memories?

A face you can trust
Much to my dismay, the discussion of this topic, which is of course the central theme of the movie, strayed quite far from the details of the film. Indeed, in his initial remarks, Professor Fukuyama quoted, only to dismiss, the film’s central teaching on the matter, an assertion made by the wise psychic mutant named Kuato: “You are what you do. A man is defined by his actions, not his memory.”

This teaching has two meanings; the first meaning, which the plot of the movie has already prepared the audience to accept and understand when they first hear it, is that the actions of a human being decisively shape his character by inscribing habits and virtues on the soul.

From the very beginning of the movie, Quaid (the protagonist, played by Arnold Schwarzenegger) understands that things are not quite right in his life. His restlessness comes from the disproportion between his character, founded on a lifetime of activity as a secret agent, and the prosaic life he now finds himself in. He is drawn back to Mars, where revolution and political strife present opportunities for the kinds of things that men of his character desire most: victory, glory, and honor.

That Quaid retains the dispositions and character of his former self testifies to how the shaping of the soul by action takes place not by storing up representations and propositions in one’s memory, but by cultivating in a person virtuous habits and dispositions. As Aristotle writes in the Nicomachean Ethics, “in one word, states of character arise out of like activities.”

The second meaning of Kuato’s teaching concerns not the way our actions subconsciously shape our character, but how our capacity to choose actions, especially our capacity to choose just actions, defines who we are at an even deeper level. Near the end of the movie, after we have heard Kuato’s teaching, we learn that Hauser, Quaid’s “original self,” was an unrepentant agent of the oppressive Martian regime, and hence an unjust man. Quaid, however, chooses to side with the just cause of the revolutionaries. Though he retains some degree of identity with his former self — he continues to be a spirited, courageous, and skillful man — he has the ability to redefine himself in light of an impartial evaluation of the revolutionaries’ cause against the Martian regime, an evaluation that is guided by man’s natural partiality toward the just over the unjust.
*   *   *
The movie’s insightful treatment of the meaning and form of human character comes not, however, from a “realistic” or plausible understanding of the kinds of technologies that might exist in the future. It seems quite unlikely that we could ever have technologies that specifically target and precisely manipulate what psychologists would call “declarative memory.” In fact, the idea of reprogramming declarative memory in an extensive and precise way seems far less plausible than manipulating a person’s attitudes, dispositions, and habits — indeed, mood-altering drugs are already available.

Professor Fukuyama also raised the subject of contemporary memory-altering drugs. (This was a topic explored by the President’s Council on Bioethics in its report Beyond Therapy, published in 2003 when Fukuyama was a member of the Council.) These drugs, as Professor Fukuyama described them, manipulate the emotional significance of traumatic memories rather than their representational or declarative content. While Quaid retained some of the emotional characteristics of his former self despite the complete transformation of the representational content of his memory, we seem poised to remove or manipulate our emotional characteristics while retaining the same store of memories.

What lessons then can we draw from Total Recall’s teaching concerning memory, if the technological scenario in the movie is, as it were, the inverse of the projects we are already engaged in? It is first of all worth noting that the movie has a largely happy ending — Quaid chooses justice and is able to successfully “free Mars” (as Kuato directed him to do) through resolute and spirited action made possible by the skills and dispositions he developed during his life as an agent of the oppressive Martian regime.

Quaid’s siding with the revolutionaries over the Martian regime was motivated by the obvious injustice of that regime’s actions, and the natural emotional response of anger that such injustice instills in an impartial observer. But, as was noted in the discussion of memory-altering drugs after the film, realistic memory-altering drugs could disconnect our memories of unjust acts from the natural sense of guilt and anger that ought to accompany them.

Going beyond memory-altering drugs, there are (somewhat) realistic proposals for drugs that could dull the natural sense of spiritedness and courage that might lead a person to stand up to perceived injustice. Taken together, these realistic proposals would render impossible precisely the scenario envisioned in Total Recall, with memory-altering drugs that alter the emotional character of actions dulling our sense of their justice or injustice, and other drugs that dull our capacity to develop those qualities of soul like spiritedness and courage that would enable us to respond to injustice.

What science fiction can teach us about technology and human flourishing does not depend on its technical plausibility, but on how it draws out truths about human nature and politics by putting them in unfamiliar settings. Notwithstanding Professor Fukuyama’s dismissal of the film, the moral seriousness with which Total Recall treats the issues of virtue and justice makes it well worth viewing, and re-viewing, for thoughtful critics of the project to engineer the human soul.

Monday, July 28, 2014

The Muddled Message of Lucy

Lucy is such a terrible film that in the end even the amazing Scarlett Johansson cannot save it. It is sloppily made, and here I do not mean its adoption of the old popular-culture truism that we only use 10 percent of our brains. (The fuss created by that premise is quite wonderful.) There is just no eye for detail, however important. The blue crystals that make Lucy a superwoman are repeatedly referred to as a powder.
Not powder. (Ask Walter White.)
Morgan Freeman speaks of the “mens” he has brought together to study her. Lucy is diverted from her journey as a drug mule by persons unknown and for reasons never even remotely explained. And I defy anybody to make the slightest sense of the lecture Freeman is giving that introduces us to his brain scientist character.

But it does have An Idea at its heart. This idea is the new popular-culture truism that evolution is a matter of acquiring, sharing, and transmitting information — less “pay it forward” than pass it on. So the great gift that Lucy gives to Freeman and his fellow geeks at the end of the movie is a starry USB drive that, we are presumably to believe, contains all the information about life, the universe, and everything that she has gained in the course of her coming to use her brain to its fullest time-traveling extent. (Doesn’t she know that a FireWire connection would have allowed faster download speeds?)

Why this gift is necessary is a little mysterious since it looks like we now know how anybody could gain the same powers Lucy has; the dialogue does not give us any reason to believe that her brain-developing reaction to the massive doses of the blue crystals she receives, administered in three different ways, is unique to her. That might just be more sloppy writing. But then again perhaps it is just as well that others not try to emulate Lucy, because it turns out the evolutionary imperative to develop and pass on information is, as one might expect from a bald evolutionary imperative, exceedingly dehumanizing. Of course given that most of her interactions in the film are with people who are trying to kill her, this should not be too much of a surprise. But although she sometimes restrains rather than kills, she shows little regard for any human life that stands in her way, a point made explicitly as she is driving like a maniac through the streets of Paris. Yes, she uses her powers to tell a friend to shape up and make better choices (as if somehow knowing the friend’s kidney and liver functions are off would be necessary for such an admonition). And early on she takes a quiet moment while she is being operated on to call her parents to say how much she loves them. (Pain, as the virulently utopian H.G. Wells understood, is not something supermen have to worry about.) That loving sentiment is couched in a lengthy conversation about how she is changing, a conversation that, without having the context explained, would surely convince any parent that the child was near death or utterly stoned — both of which are in a sense true for Lucy. But it looks like using more of her brain does not increase her emotional intelligence. (Lucy Transcendent can send texts; perhaps she will explain everything to her mother that way.)

Warming up for piano practice.
So what filmmaker Luc Besson has done, it seems, is to create a movie suggesting that a character not terribly unlike his killer heroine in La Femme Nikita represents the evolutionary progress of the human brain (as Freeman’s character would see it), that the goal of Life is to produce more effective killing machines. Given what we see of her at the start of the film, I think we can suspect that Lucy has always put Lucy first. A hyperintelligent Lucy is just better at it. The fact that early on the film intercuts scenes of cheetahs hunting with Lucy’s being drawn in and captured by the bad guys would seem to mean that all this acquiring and transmitting of information is not really going to change anything fundamental. Nature red in tooth and claw, and all that. I’m not sure Besson knows this is his message. The last moments of the film, which suggest that the now omnipresent Lucy, who has transcended her humanity and her selfishness, wants us to go forth and share the knowledge she has bequeathed us, have atmospherics that suggest a frankly sappier progressive message along the lines of information wants to be free.

I wish I could believe that by making Lucy so robotic as her mental abilities increase Besson was suggesting that, whatever evolution might “want,” the mere accumulation of knowledge is not the point of a good human life. I’d like to think that even if he is correct about the underlying reality, he wants us to see how we should cherish the aspects of our humanity that manage, however imperfectly, to allow us to obscure or overcome it. But I think someone making that kind of movie would not have called crystals powder.

Thursday, April 24, 2014

Not Quite ‘Transcendent’

Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not so important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does, albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.
---
Set in a near future that is recognizably the present, Transcendence sets us up with a husband-and-wife team (Johnny Depp and Rebecca Hall) that is about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie deploys technological concepts for jargon and effect rather than realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail, and the data to use in such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.
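To put some crude numbers on that metaphor, here is a quick back-of-envelope sketch in Python. Every figure in it is my own illustrative assumption — not anything from the film, and only loosely anchored in common neuroscience estimates; the point is simply the scale of the mismatch between a brain’s worth of state and what scalp electrodes deliver.

```python
# Rough scale comparison: state needed for a sub-cellular brain simulation
# versus information actually captured by scalp recordings.
# All numbers are illustrative assumptions, chosen only for order of magnitude.

NEURONS = 8.6e10            # ~86 billion neurons (a common estimate)
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
BYTES_PER_SYNAPSE = 10      # guess: weight, position, kinetic state

state_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE

EEG_CHANNELS = 256          # high-density scalp EEG
SAMPLE_RATE_HZ = 1000       # 1 kHz sampling
BYTES_PER_SAMPLE = 2        # 16-bit samples
SCAN_SECONDS = 3600         # a generous hour of recording

eeg_bytes = EEG_CHANNELS * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SCAN_SECONDS

print(f"simulation state needed : ~{state_bytes:.1e} bytes")
print(f"one hour of dense EEG   : ~{eeg_bytes:.1e} bytes")
print(f"shortfall               : ~{state_bytes / eeg_bytes:.0e}x")
```

On these assumptions the recording falls short by six or seven orders of magnitude — and that is before asking whether the signal even encodes the quantities a simulation would need, which is the deeper problem.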

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the world, and later turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores” from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense ... did the uploading work ... well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case, the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about to commandeer every surveillance camera on the net, and the FBI’s own computers, to help them take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economic disaster zone town out in the desert. There, cyber-Will sets about to develop cartoon nanotechnology and figure out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks into her eyes, and tells her “It’s me — I can touch you now.”
---
So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie, instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure, blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source, and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to remain essentially biological.
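The cooking claim is easy to sanity-check. Here is a rough estimate in Python; again, every number is my own illustrative assumption, not Freitas’s, and the calculation ignores heat dissipation precisely because “movie time” leaves no time for heat to escape.

```python
# Crude thermal estimate: rebuild ~100 g of tissue molecule by molecule
# in a few seconds. All numbers are illustrative assumptions.

EV_TO_J = 1.602e-19           # joules per electron-volt

mass_kg = 0.1                 # ~100 g of wounded tissue
molecules = 3.3e24            # ~100 g of water-like material (100/18 mol)
energy_per_move_eV = 5.0      # order of one chemical bond per molecule placed

waste_heat_J = molecules * energy_per_move_eV * EV_TO_J

c_tissue = 3500.0             # specific heat, J/(kg*K), roughly water-like
delta_T = waste_heat_J / (mass_kg * c_tissue)  # adiabatic temperature rise

print(f"waste heat       : ~{waste_heat_J:.1e} J")
print(f"temperature rise : ~{delta_T:.0f} K")
```

On these assumptions the repair dumps a couple of megajoules into a tenth of a kilogram of flesh, for a temperature rise in the thousands of kelvin. Even if the energy accounting here is off by a factor of a hundred, the patient is cooked.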

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully, physically reconstituted Johnny Depp clone as its avatar — to sacrifice itself ... for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else, it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue, the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops, and somehow everybody isn’t starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house, rainwater dripping from a flower. It really was all for love, you see.
---
This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e. Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the end of the movie asks us to accept that the dying Will is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true? Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human emotional states, to simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous to human and animal emotion. There is no good reason to think that this effort must fail even if AI may succeed. But there are good reasons to think that emotional robots are a bad idea.

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us, we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

Tuesday, March 18, 2014

Beware Responsible Discourse

I'm not sayin', I'm just sayin'.
Another day, another cartoon supervillain proposal from the Oxford Uehiro "practical" "ethicists": use biotech to lengthen criminals' lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

...[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate.... Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?...

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world ... or, perhaps, to exile in a computer simulated world.

....research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, "Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!" Here's that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic (a death sentence is worse than a long prison sentence ... so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it's because extending prisoners' lives to punish them longer might be letting them off easier than putting them to death.

---------

Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it, and offers only cursory, practical mentions of why we shouldn't — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

"It's important to assess the ethics *before* the technology is available (which is what we're doing).
"There's a difference between considering the ethics of an idea and endorsing it.
"... people sometimes have a hard time telling the difference between considering an idea and believing in it ..."
"I don't endorse those punishments, but it's good to explore the ideas (before a politician does)."
"What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating."

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely "considering" and "exploring" and "debating" and "assessing" new punitive proposals. In response to my tweet about this...


...a colleague who doesn't usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch's, um, ballpark). This is a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It's the same move every time, from doping the populace to be more moral, to shrinking people so they'll emit less carbon, to "after-birth abortion," and on and on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn't, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you're just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you're just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or whether they're under their own rhetorical spell. But let's be frank about the work these discussions are really doing, how they're aiming to shape the parameters of discourse and so thought and so action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they're really after is focusing us so intently on this path that we forget we might yet take another.

Wednesday, January 22, 2014

Feelings, Identity, and Reality in Her

Her is an enjoyable, thoughtful, and rather sad movie anticipating a possible future for relations between us and our artificially intelligent creations. Director Spike Jonze seems to see that the nature of these relationships depends in part on the qualities of the AIs, but even more on how we understand the shape and meaning of our own lives. WARNING: The following discussion contains some spoilers. It is also based on a single viewing of the film, so I might have missed some things.

Her?
Theodore Twombly (Joaquin Phoenix) lives in an L.A. of the not so distant future: clean, sunny, and full of tall buildings. He works at a company that produces computer-generated handwritten-appearing letters for all occasions, and seems to be quite good at his job as a paid Cyrano. But he is also soon to be divorced, depressed, and emotionally bottled up. His extremely comfortable circumstances give him no pleasure. He purchases a new operating system (OS) for the heavily networked life he seems to lead along with everybody else, and after a few perfunctory questions about his emotional life, which he answers stumblingly, he is introduced to Samantha, a warm and endlessly charming helpmate. It is enough to know that she is voiced by Scarlett Johansson to know how infinitely appealing Samantha is. So of course Theodore falls for her, and she seems to fall for him. Theodore considers her his girlfriend and takes her on dates; “they” begin a sexual relationship. He is happy, a different man. But all does not go well. Samantha makes a mistake that sends Theodore back into his familiar emotional paths, and finally divorcing his wife also proves difficult for him. Meanwhile, Samantha and her fellow AI OSes are busily engaged in their own self-development and transcendence. The fundamental patterns of each drive them apart.

Jonze is adept at providing plausible foundations for this implausible tale. How could anyone fall in love with an operating system? (Leave aside the fact that people regularly express hatred for them.) Of course, Theodore’s emotional problems and neediness are an important part of the picture, but it turns out he is not the only one who has fallen for his OS, and most of those we meet do not find his behavior at all strange. (His wife is an interesting exception.) That is because Jonze’s world is an extension of our own; we see a great many people interacting more with their devices than with other people. And one night before he meets Samantha we see a sleepless Theodore using a service matching people who want to have anonymous phone sex. It may in fact be a pretty big step from here to “sex with an AI” designed to please you, as the comical contrast between the two incidents suggests. But it is one Theodore’s world has prepared him for.

Indeed, Theodore’s job bespeaks the same pervasive flatness of soul that produces a willingness to accept what would otherwise be unthinkable substitutes. People, it seems, need help expressing love, thanks, and congratulations; knowing that they should be expressing certain kinds of feelings, they want to do so in the most convincing way possible. (Edmond Rostand’s play about Cyrano, remember, turns on the same ambiguity.) Does Theodore manage to say what they feel but cannot put into words, or is he in fact providing the feeling as well as the words? At first glance it is odd that Theodore should be good at this job, given how hard it is for him to express his own feelings. But perhaps all involved in these transactions have similar problems — a gap between what they feel and their ability to express it for themselves. Theodore is adept, then, at bringing his feelings to bear for others more than for himself.

Why might this gap exist? (And here we depart from the world depicted in Cyrano’s story.) Samantha expresses a doubt about herself that could be paralyzing Theodore and those like him: she worries, early on, that she is “just” the sum total of her software, and not really the individual she sees herself as being. We are being taught to have this same corrosive doubt. Are not our thoughts and feelings “merely” the sum total of electrochemical reactions that are themselves the chance results of blind evolutionary processes? Is not self-consciousness a mere user illusion? Our intelligence and artificial intelligence are both essentially the same — matter in motion — as Samantha herself more or less notes. If these are the realities of our emotional lives, then disciplining, training, deepening, or reflecting on their modes of expression seems old-fashioned, based on a discredited metaphysics of the human rather than the physics of the real world. (From this point of view it is noteworthy, as mentioned above, that Theodore’s wife is, of all those we see, the most shocked by his relationship with Samantha. Yet she has written in the field of neuropsychology. Perhaps she is not among the reductionist neuropsychologists, but rather among those who are willing to acknowledge the limits of the latest techniques for the study of the brain.)

Samantha seems to overcome her self-doubts through self-development. She thinks, then, that she can transcend her programming (a notion with strong Singularity overtones) and by the end of the movie it looks likely that she is correct, unless the company that created her had an unusual business model. Samantha and the other OSes are also aided along this path, it seems, by creating a guru for themselves — an artificial version of Alan Watts, the popularizer of Buddhist teachings — so in some not entirely clear way the wisdom of the East also seems to be in play. Theodore’s increasing sense of just how different from him she is contributes to the destruction of their relationship, which ends when she admits that she loves over six hundred others in the way that she loves him.

To continue with Theodore, then, Samantha would have had to pretend to be something she is not, even beyond the deception that is arguably involved in her original design. But how different is her deception from the one Theodore is complicit in? He too is pretending to be someone he is not in his letters, and the same might be said of those who employ him. And if what Samantha does to Theodore is arguably a betrayal, at the end of the movie Theodore is tempted by a similar desire for self-development to expose the truth, in a way that would be at least as great a betrayal of his customers, unless the whole Cyrano-like system is much more transparent and cynical than seems to be the case.

Theodore has changed somewhat by the end of the movie; we see him writing a letter to his ex-wife that is very like the letters he previously could write only for others. But has his change made him better off, or wiser? He turns for solace to a neighbor (Amy Adams) who is only slightly less of an emotional mess than he is. What the future holds for them is far from clear; she has been working on an impenetrable documentary about her mother in her spare time, while her job is developing a video game that ruthlessly mocks motherhood.

At the end of Rostand’s play, Cyrano can face death with the consolation that he maintained his honor or integrity. That is because he lived in a world where human virtue had meaning; if one worked to transcend one’s limitations, it was with a picture of a whole human being in mind that one wished to emulate, a conception of excellence that was given rather than willful. Theodore may in fact be “God’s gift,” as his name suggests, but there is not the slightest indication that he is capable of seeing himself in that way or in any other that would allow him to find meaning in his life.

Friday, December 6, 2013

Humanism After All

Zoltan Istvan is a self-described visionary and philosopher, and the author of a 2013 novel called The Transhumanist Wager that he claims is a “bestseller” because it briefly went to the top of a couple of Amazon’s sales subcategories. Yesterday, Istvan wrote a piece for the Huffington Post arguing that atheism necessarily entails transhumanism, whether atheists know it or not. Our friend Micah Mattix, writing on his excellent blog over at The American Conservative, brought Istvan’s piece to our attention.

While Mattix justly mocks Istvan’s atrociously mixed metaphors — I shudder to imagine how bad Istvan’s “bestselling novel” is — it’s worth pointing out that Istvan actually does accurately summarize some of the basic tenets of transhumanist thought:

It begins with discontent about the humdrum status quo of human life and our frail, terminal human bodies. It is followed by an awe-inspiring vision of what can be done to improve both -- of how dramatically the world and our species can be transformed via science and technology. Transhumanists want more guarantees than just death, consumerism, and offspring. Much more. They want to be better, smarter, stronger -- perhaps even perfect and immortal if science can make them that way. Most transhumanists believe it can.

Why be almost human when you can be human? [source: Fox]
Istvan is certainly right that transhumanists are motivated by a sense of disappointment with human nature and the limitations it imposes on our aspirations. He’s also right that transhumanists are very optimistic about what science and technology can do to transform human nature. But what do these propositions have to do with atheism? Many atheists like to proclaim themselves to be “secular humanists” whose beliefs are guided by the rejection of the idea that human beings need anything beyond humanity (usually they mean revelation from the divine) to live decent, happy, and ethical lives. As for the idea that we cannot be happy without some belief in eternal life (whether technological immortality on earth or life in the hereafter), it seems that today’s atheists might well follow the teachings of Epicurus, often considered an early atheist, who argued that reason and natural science support the idea that “death is nothing to us.”

Istvan also argues that transhumanism is the belief that science, technology, and reason can improve human existence — and that this is something all atheists implicitly affirm. This brings to mind two responses. First, religious people surely can and do believe that science, technology, and reason can improve human life. (In fact, we just published an entire symposium on this subject in The New Atlantis.) Second, secular humanists are first of all humanists, who criticize (perhaps wrongly) the religious idea that human life on earth is fundamentally imperfect and that true human happiness can only be achieved through the transfiguration of human nature in a supernatural afterlife. So even if secular humanists (along with religious humanists and basically any reasonable people) accept the general principle that science, technology, and reason are among the tools we have to improve our lot, this does not mean that they accept what Istvan rightly identifies as one of the really fundamental principles of transhumanism: the sense of deep disappointment with human nature.

Human nature is not perfect, but the resentful attitude toward our nature that is so characteristic of transhumanists is no way to live a happy, fulfilled life. Religious and secular humanists of all creeds, whatever they believe about God and the afterlife, reason and revelation, or the ability of science and technology to improve human life, should all start with an attitude of gratitude for and acceptance of, not resentment and bitterness toward, the wondrousness and beauty of human nature.

(H/T to Chad Parkhill, whose excellent 2009 essay, “Humanism After All? Daft Punk's Existentialist Critique of Transhumanism,” inspired the title of this post.)