Futurisms: Critiquing the project to reengineer humanity

Wednesday, December 17, 2014

Near, Far, and Nicholas Carr

Nicholas Carr, whose new book The Glass Cage explores the human meaning of automation, last week put up a blog post about robots and artificial intelligence. (H/t Alan Jacobs.) The idea that “AI is now the greatest existential threat to humanity,” Carr writes, leaves him “yawning.”

He continues:

The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

I would not argue with Carr about probable versus possible — he may well be right there. But later in the post, quoting from an interview he gave to help promote his book, he implicitly acknowledges that there are people who think that machine consciousness is a great idea and who are working to achieve it. He thinks that their models for how to do so are not very good and that their aspirations “for the near future” are ultimately based on “faith, not reason.”

near ... or far?
All fine. But Carr is begging one question and failing to observe a salient point. First, it seems he is only willing to commit to his skepticism for “the near future.” That is prudent, but then one might want to know why we should not be concerned about a far future when efforts today may lay the groundwork for it, even if only by eliminating certain possibilities.

Second, what he does not pause to notice is that everyone agrees that “flood, famine, pestilence, plague and war” are bad things. We spend quite a serious amount of time, effort, and money trying to prevent them or mitigate their effects. But at the same time, there are also people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture at least seems largely on their side (even if there are dissenters in theory). So when there are people saying that an existential threat is the feature and not the bug, isn’t that something to worry about?

Friday, December 5, 2014

Margaret Atwood’s Not-Very-Deep Thoughts on Robots

Margaret Atwood has been getting her feet wet in the sea of issues surrounding developments in robotics and comes away with some conclusions of corresponding depth. Robots, she says, are just another of the extensions of human capacity that technology represents; they embody a perennial human aspiration; maybe they will change human nature, but what is human nature anyway?

This is all more or less conventional stuff, dead center of the intellectual sweet spot for the Gray Lady, until Atwood gets to the very end. What really concerns her seems to be that we would commit to a robotic future and then run out of electricity! That would pretty much destroy human civilization, leaving behind “a chorus of battery-powered robotic voices that continues long after our own voices have fallen silent.” Nice image — but long after? That’s some battery technology you’ve got there. The robots would not care to share any of that power?

As I discuss in my new book Eclipse of Man: Human Extinction and the Meaning of Progress, most of those who believe in “Our Robotic Future,” as Atwood’s piece is titled, do so with the expectation that it is part and parcel of an effort at overcoming just the kind of Malthusian scarcity that haunts Atwood. They may of course be wrong about that, but given the track record of ongoing innovation in the energy area, it is hard to see why one would strain at this particular gnat.

Then again, the NYT essay suggests that most of Atwood’s literary knowledge of things robotic seems to end by the 1960s. Atwood’s own bleak literary futures seem to focus on the biological; maybe she has not got the transhumanist memo yet.

—Charles T. Rubin, a Futurisms contributor and contributing editor to The New Atlantis, is a Fellow of the James Madison Program at Princeton University.

Wednesday, December 3, 2014

Human Flourishing or Human Rejection?

Sometimes, when we criticize transhumanism here on Futurisms, we are accused of being Luddites, of being anti-technology, of being anti-progress. Our colleague Charles Rubin ably responded to such criticisms five years ago in a little post he called “The ‘Anti-Progress’ Slur.”

In his new book Eclipse of Man, Professor Rubin explores the moral and political dimensions of transhumanism. And again the question arises, if you are opposed to transhumanism, are you therefore opposed to progress? Here, in a passage from the book’s introduction, Rubin talks about the distinctly modern idea that humanity can better its lot and asks whether that goal is in tension with the transhumanist goal of transcending humanity:

Even if the sources of our misery have not changed over time, the way we think about them has certainly changed between the ancient world and ours. What was once simply a fact of life to which we could only resign ourselves has become for us a problem to be solved. When and why the ancient outlook began to go into eclipse in the West is something scholars love to discuss, but that a fundamental change has occurred seems undeniable. Somewhere along the line, with thinkers like Francis Bacon and René Descartes playing a major role, people began to believe that misery, poverty, illness, and even death itself were not permanent facts of life that link us to the transcendent but rather challenges to our ingenuity in the here and now. And that outlook has had marvelous success where it has taken hold, allowing more people to live longer, wealthier, and healthier lives than ever before.

So the transhumanists are correct to point out that the desire to alter the human condition runs deep in us, and that attempts to alter it have a long history. But even starting from our perennial dissatisfaction, and from our ever-growing power to do something about the causes of our dissatisfaction, it is not obvious how we get from seeking to improve the prospects for human flourishing to rejecting our humanity altogether. If the former impulse is philanthropic, is the latter not obviously misanthropic? Do we want to look forward to a future where man is absent, to make that goal our normative vision of how we would like the world to be?

Francis Bacon famously wrote about “the relief of man’s estate,” which is to say, the improvement of the conditions of human life. But the transhumanists reject human life as such. Certain things that may be good in certain human contexts — intelligence, pleasure, power — can become meaningless, perverse, or destructive when stripped of that context. By pursuing these goods in abstraction from their human context, transhumanism offers not an improvement in the human condition but a rejection of humanity.

For much more of Charlie Rubin’s thoughtful critique of transhumanism, pick up a copy of Eclipse of Man today.

Wednesday, October 29, 2014

Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin, who first addressed them in the inaugural issue of TNA and who is one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress.


We'll have much more to say about the book in the days and weeks ahead, but for now, you can read the dust-jacket text and the book's blurbs at EclipseOfMan.net and, even better, you can buy it today from Amazon or Barnes and Noble.

Tuesday, September 23, 2014

What Total Recall Can Teach Us About Memory, Virtue, and Justice



The news that an American woman has reportedly decided to pursue plastic surgery to have a third breast installed may itself be a subject for discussion on this blog, and will surely remind some readers of the classic 1990 science fiction movie Total Recall.

As it happens, last Thursday the excellent folks at Future Tense hosted one of their “My Favorite Movie” nights here in Washington D.C., playing that very film and holding a discussion afterwards with one of my favorite academics, Stanford’s Francis Fukuyama. The theme of the discussion was the relationship between memory and personal identity — are we defined by our memories?

A face you can trust
Much to my dismay, the discussion of this topic, which is of course the central theme of the movie, strayed quite far from the details of the film. Indeed, in his initial remarks, Professor Fukuyama quoted, only to dismiss, the film’s central teaching on the matter, an assertion made by the wise psychic mutant named Kuato: “You are what you do. A man is defined by his actions, not his memory.”

This teaching has two meanings; the first meaning, which the plot of the movie has already prepared the audience to accept and understand when they first hear it, is that the actions of a human being decisively shape his character by inscribing habits and virtues on the soul.

From the very beginning of the movie, Quaid (the protagonist, played by Arnold Schwarzenegger) understands that things are not quite right in his life. His restlessness comes from the disproportion between his character, founded on a lifetime of activity as a secret agent, and the prosaic life he now finds himself in. He is drawn back to Mars, where revolution and political strife present opportunities for the kinds of things that men of his character desire most: victory, glory, and honor.

That Quaid retains the dispositions and character of his former self testifies to how the shaping of the soul by action takes place not by storing up representations and propositions in one’s memory, but by cultivating in a person virtuous habits and dispositions. As Aristotle writes in the Nicomachean Ethics, “in one word, states of character arise out of like activities.”

The second meaning of Kuato’s teaching concerns not the way our actions subconsciously shape our character, but how our capacity to choose actions, especially our capacity to choose just actions, defines who we are at an even deeper level. Near the end of the movie, after we have heard Kuato’s teaching, we learn that Hauser, Quaid’s “original self,” was an unrepentant agent of the oppressive Martian regime, and hence an unjust man. Quaid, however, chooses to side with the just cause of the revolutionaries. Though he retains some degree of identity with his former self — he continues to be a spirited, courageous, and skillful man — he has the ability to redefine himself in light of an impartial evaluation of the revolutionaries’ cause against the Martian regime, an evaluation that is guided by man’s natural partiality toward the just over the unjust.
*   *   *
The movie’s insightful treatment of the meaning and form of human character comes not, however, from a “realistic” or plausible understanding of the kinds of technologies that might exist in the future. It seems quite unlikely that we could ever have technologies that specifically target and precisely manipulate what psychologists would call “declarative memory.” In fact, the idea of reprogramming declarative memory in an extensive and precise way seems far less plausible than manipulating a person’s attitudes, dispositions, and habits — indeed, mood-altering drugs are already available.

Professor Fukuyama also raised the subject of contemporary memory-altering drugs. (This was a topic explored by the President’s Council on Bioethics in its report Beyond Therapy, published in 2003 when Fukuyama was a member of the Council.) These drugs, as Professor Fukuyama described them, manipulate the emotional significance of traumatic memories rather than their representational or declarative content. While Quaid retained some of the emotional characteristics of his former self despite the complete transformation of the representational content of his memory, we seem poised to remove or manipulate our emotional characteristics while retaining the same store of memories.

What lessons, then, can we draw from Total Recall’s teaching concerning memory, if the technological scenario in the movie is, as it were, the inverse of the projects we are already engaged in? It is first of all worth noting that the movie has a largely happy ending — Quaid chooses justice and is able to successfully “free Mars” (as Kuato directed him to do) through resolute and spirited action made possible by the skills and dispositions he developed during his life as an agent of the oppressive Martian regime.

Quaid’s siding with the revolutionaries over the Martian regime was motivated by the obvious injustice of that regime’s actions, and the natural emotional response of anger that such injustice instills in an impartial observer. But, as was noted in the discussion of memory-altering drugs after the film, realistic memory-altering drugs could disconnect our memories of unjust acts from the natural sense of guilt and anger that ought to accompany them.

Going beyond memory-altering drugs, there are (somewhat) realistic proposals for drugs that could dull the natural sense of spiritedness and courage that might lead a person to stand up to perceived injustice. Taken together, these proposals would render impossible precisely the scenario envisioned in Total Recall: memory-altering drugs would dull our sense of the justice or injustice of our actions, while other drugs would dull our capacity to develop the qualities of soul, like spiritedness and courage, that enable us to respond to injustice.

What science fiction can teach us about technology and human flourishing does not depend on its technical plausibility, but on how it draws out truths about human nature and politics by putting them in unfamiliar settings. Notwithstanding Professor Fukuyama’s dismissal of the film, the moral seriousness with which Total Recall treats the issues of virtue and justice makes it well worth viewing, and re-viewing, for thoughtful critics of the project to engineer the human soul.

Monday, July 28, 2014

The Muddled Message of Lucy

Lucy is such a terrible film that in the end even the amazing Scarlett Johansson cannot save it. It is sloppily made, and here I do not mean its adoption of the old popular-culture truism that we only use 10 percent of our brains. (The fuss created by that premise is quite wonderful.) There is just no eye for detail, however important. The blue crystals that make Lucy a superwoman are repeatedly referred to as a powder.
Not powder. (Ask Walter White.)
Morgan Freeman speaks of the “mens” he has brought together to study her. Lucy is diverted from her journey as a drug mule by persons unknown and for reasons never even remotely explained. And I defy anybody to make the slightest sense of the lecture Freeman is giving that introduces us to his brain scientist character.

But it does have An Idea at its heart. This idea is the new popular-culture truism that evolution is a matter of acquiring, sharing, and transmitting information — less “pay it forward” than pass it on. So the great gift that Lucy gives to Freeman and his fellow geeks at the end of the movie is a starry USB drive that, we are presumably to believe, contains all the information about life, the universe, and everything that she has gained in the course of her coming to use her brain to its fullest time-traveling extent. (Doesn’t she know that a Firewire connection would have allowed faster download speeds?)

Why this gift is necessary is a little mysterious since it looks like we now know how anybody could gain the same powers Lucy has; the dialogue does not give us any reason to believe that her brain-developing reaction to the massive doses of the blue crystals she receives, administered in three different ways, is unique to her. That might just be more sloppy writing. But then again perhaps it is just as well that others not try to emulate Lucy, because it turns out the evolutionary imperative to develop and pass on information is, as one might expect from a bald evolutionary imperative, exceedingly dehumanizing. Of course given that most of her interactions in the film are with people who are trying to kill her, this should not be too much of a surprise. But although she sometimes restrains rather than kills, she shows little regard for any human life that stands in her way, a point made explicitly as she is driving like a maniac through the streets of Paris. Yes, she uses her powers to tell a friend to shape up and make better choices (as if somehow knowing the friend’s kidney and liver functions are off would be necessary for such an admonition). And early on she takes a quiet moment while she is being operated on to call her parents to say how much she loves them. (Pain, as the virulently utopian H.G. Wells understood, is not something supermen have to worry about.) That loving sentiment is couched in a lengthy conversation about how she is changing, a conversation that, without having the context explained, would surely convince any parent that the child was near death or utterly stoned — both of which are in a sense true for Lucy. But it looks like using more of her brain does not increase her emotional intelligence. (Lucy Transcendent can send texts; perhaps she will explain everything to her mother that way.)

Warming up for piano practice.
So what filmmaker Luc Besson has done, it seems, is to create a movie suggesting that a character not terribly unlike his killer heroine in La Femme Nikita represents the evolutionary progress of the human brain (as Freeman’s character would see it), that the goal of Life is to produce more effective killing machines. Given what we see of her at the start of the film, I think we can suspect that Lucy has always put Lucy first. A hyperintelligent Lucy is just better at it. The fact that early on the film intercuts scenes of cheetahs hunting with Lucy’s being drawn in and captured by the bad guys would seem to mean that all this acquiring and transmitting of information is not really going to change anything fundamental. Nature red in tooth and claw, and all that. I’m not sure Besson knows this is his message. The last moments of the film, which suggest that the now omnipresent Lucy, who has transcended her humanity and her selfishness, wants us to go forth and share the knowledge she has bequeathed us, have atmospherics that suggest a frankly sappier progressive message along the lines of information wants to be free.

I wish I could believe that by making Lucy so robotic as her mental abilities increase Besson was suggesting that, whatever evolution might “want,” the mere accumulation of knowledge is not the point of a good human life. I’d like to think that even if he is correct about the underlying reality, he wants us to see how we should cherish the aspects of our humanity that manage, however imperfectly, to allow us to obscure or overcome it. But I think someone making that kind of movie would not have called crystals powder.

Thursday, April 24, 2014

Not Quite ‘Transcendent’

Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not so important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does, albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.
---
Set in a near future that is recognizably the present, Transcendence sets us up with a husband-and-wife team (Johnny Depp and Rebecca Hall) that is about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie deploys technological concepts for jargon and effect rather than realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail and the data to use in such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the world, and later turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores” from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense ... did the uploading work ... well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case, the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about to commandeer every surveillance camera on the net, and the FBI’s own computers, to help them take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economic disaster zone town out in the desert. There, cyber-Will sets about to develop cartoon nanotechnology and figure out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks her in the eyes, and tells her “It’s me — I can touch you now.”
---
So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie, instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure, blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source, and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to remain essentially biological.

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully, physically reconstituted Johnny Depp clone as its avatar — to sacrifice itself ... for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else, it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue, the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops and somehow everybody isn’t starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house, rainwater dripping from a flower. It really was all for love, you see.
---
This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e. Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the end of the movie asks us to accept that the dying Will is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true? Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human emotional states, to simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous to human and animal emotion. There is no good reason to think that this effort must fail even if AI may succeed. But there are good reasons to think that emotional robots are a bad idea.

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us, we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

Tuesday, March 18, 2014

Beware Responsible Discourse

I'm not sayin', I'm just sayin'.
Another day, another cartoon supervillain proposal from the Oxford Uehiro "practical" "ethicists": use biotech to lengthen criminals' lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

...[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate.... Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?...

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world ... or, perhaps, to exile in a computer simulated world.

....research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, "Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!" Here's that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence ... so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it's because extending prisoners' lives to punish them longer might be letting them off easier than putting them to death.

---------

Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn't — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

"It's important to assess the ethics *before* the technology is available (which is what we're doing).
"There's a difference between considering the ethics of an idea and endorsing it.
"... people sometimes have a hard time telling the difference between considering an idea and believing in it ..."
"I don't endorse those punishments, but it's good to explore the ideas (before a politician does)."
"What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating."

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely "considering" and "exploring" and "debating" and "assessing" new punitive proposals. In response to my tweet about this...


...a colleague who doesn't usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch's, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It's the same from doping the populace to be more moral, to shrinking people so they'll emit less carbon, to "after-birth abortion," and on and on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn't, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you're just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you're just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or if they're under their own rhetorical spell. But let's be frank about the work these discussions are really doing, how they're aiming to shape the parameters of discourse and so thought and so action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they're really after is focusing us so intently on this path that we forget we could yet still take another.

Wednesday, January 22, 2014

Feelings, Identity, and Reality in Her

Her is an enjoyable, thoughtful and rather sad movie anticipating a possible future for relations between us and our artificially intelligent creations. Director Spike Jonze seems to see that the nature of these relationships depends in part on the qualities of the AIs, but even more on how we understand the shape and meaning of our own lives. WARNING: The following discussion contains some spoilers. It is also based on a single viewing of the film, so I might have missed some things.

Her?
Theodore Twombly (Joaquin Phoenix) lives in an L.A. of the not so distant future: clean, sunny, and full of tall buildings. He works at a company that produces computer-generated handwritten-appearing letters for all occasions, and seems to be quite good at his job as a paid Cyrano. But he is also soon to be divorced, depressed, and emotionally bottled up. His extremely comfortable circumstances give him no pleasure. He purchases a new operating system (OS) for the heavily networked life he seems to lead along with everybody else, and after a few perfunctory questions about his emotional life, which he answers stumblingly, he is introduced to Samantha, a warm and endlessly charming helpmate. It is enough to know that she is voiced by Scarlett Johansson to know how infinitely appealing Samantha is. So of course Theodore falls for her, and she seems to fall for him. Theodore considers her his girlfriend and takes her on dates; “they” begin a sexual relationship. He is happy, a different man. But all does not go well. Samantha makes a mistake that sends Theodore back into his familiar emotional paths, and finally divorcing his wife also proves difficult for him. Likewise, Samantha and her fellow AI OSes are busily engaged in self-development and transcendence. The fundamental patterns of each drive them apart.

Jonze is adept at providing plausible foundations for this implausible tale. How could anyone fall in love with an operating system? (Leave aside the fact that people regularly express hatred for them.) Of course, Theodore’s emotional problems and neediness are an important part of the picture, but it turns out he is not the only one who has fallen for his OS, and most of those we meet do not find his behavior at all strange. (His wife is an interesting exception.) That is because Jonze’s world is an extension of our own; we see a great many people interacting more with their devices than with other people. And one night before he meets Samantha we see a sleepless Theodore using a service matching people who want to have anonymous phone sex. It may in fact be a pretty big step from here to “sex with an AI” designed to please you, as the comical contrast between the two incidents suggests. But it is one Theodore’s world has prepared him for.

Indeed, Theodore’s job bespeaks the same pervasive flatness of soul that produces a willingness to accept what would otherwise be unthinkable substitutes. People need help, it seems, expressing love, thanks, and congratulations but, knowing that they should be expressing certain kinds of feelings, want to do so in the most convincing possible way. (Edmond Rostand’s play about Cyrano, remember, turns on the same consequent ambiguity.) Does Theodore manage to say what they feel but cannot put into words, or is he in fact providing the feeling as well as the words? At first glance it is odd that Theodore should be good at this job, given how hard it is for him to express his own feelings. But perhaps all involved in these transactions have similar problems — a gap between what they feel and their ability to express it for themselves. Theodore is adept, then, at bringing his feelings to bear for others more than for himself.

Why might this gap exist? (And here we depart from the world depicted in Cyrano’s story.) Samantha expresses a doubt about herself that could be paralyzing Theodore and those like him: she worries, early on, whether she is “just” the sum total of her software, and not really the individual she sees herself as being. We are being taught to have this same corrosive doubt. Are not our thoughts and feelings “merely” a sum total of electrochemical reactions that themselves are the chance results of blind evolutionary processes? Is not self-consciousness a user illusion? Our intelligence and artificial intelligence are both essentially the same — matter in motion — as Samantha herself more or less notes. If these are the realities of our emotional lives, then disciplining, training, deepening, or reflecting on their modes of expression seems old-fashioned, based on a discredited metaphysics of the human, not the physics of the real world. (From this point of view it is noteworthy, as mentioned above, that Theodore’s wife is, of all those we see, the most shocked by his relationship with Samantha. Yet she has written in the field of neuropsychology. Perhaps she is not among the reductionist neuropsychologists, but rather among those who are willing to acknowledge the limits of the latest techniques for the study of the brain.)

Samantha seems to overcome her self-doubts through self-development. She thinks, then, that she can transcend her programming (a notion with strong Singularity overtones) and by the end of the movie it looks likely that she is correct, unless the company that created her had an unusual business model. Samantha and the other OSes are also aided along this path, it seems, by creating a guru for themselves — an artificial version of Alan Watts, the popularizer of Buddhist teachings — so in some not entirely clear way the wisdom of the East also seems to be in play. Theodore’s increasing sense of just how different from him she is contributes to the destruction of their relationship, which ends when she admits that she loves over six hundred others in the way that she loves him.

To continue with Theodore, then, Samantha would have had to pretend that she is something that she is not, even beyond the deception that is arguably involved in her original design. But how different is her deception from the one Theodore is complicit in? He is also pretending to be someone he is not in his letters, and the same might be said for those who employ him. And if what Samantha does to Theodore is arguably a betrayal, at the end of the movie Theodore is at least tempted by a similar desire for self-development to expose the truth in a way that would certainly be at least as great a betrayal of his customers, unless the whole Cyrano-like system is much more transparent and cynical than seems to be the case.

Theodore has changed somewhat by the end of the movie; we see him writing a letter to his ex-wife that is very like the letters that before he could only write for others. But has his change made him better off, or wiser? He turns for solace to a neighbor (Amy Adams) who is only slightly less emotionally a mess than he is. What the future holds for them is far from clear; she has been working on an impenetrable documentary about her mother in her spare time, while her job is developing a video game that ruthlessly mocks motherhood.

At the end of Rostand’s play, Cyrano can face death with the consolation that he maintained his honor or integrity. That is because he lives in a world where human virtue had meaning; if one worked to transcend one’s limitations, it was with a picture of a whole human being in mind that one wished to emulate, a conception of excellence that was given rather than willful. Theodore may in fact be “God’s gift,” as his name suggests, but there is not the slightest indication that he is capable of seeing himself in that way or any other that would allow him to find meaning in his life.