Futurisms: Critiquing the project to reengineer humanity

Wednesday, October 29, 2014

Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin, who first addressed them in the inaugural issue of TNA and who is one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress.


We'll have much more to say about the book in the days and weeks ahead, but for now, you can read the dust-jacket text and the book's blurbs at EclipseOfMan.net and, even better, you can buy it today from Amazon or Barnes and Noble.

Tuesday, September 23, 2014

What Total Recall Can Teach Us About Memory, Virtue, and Justice



The news that an American woman has reportedly decided to pursue plastic surgery to have a third breast installed may itself be a subject for discussion on this blog, and will surely remind some readers of the classic 1990 science fiction movie Total Recall.

As it happens, last Thursday the excellent folks at Future Tense hosted one of their “My Favorite Movie” nights here in Washington D.C., playing that very film and holding a discussion afterwards with one of my favorite academics, Stanford’s Francis Fukuyama. The theme of the discussion was the relationship between memory and personal identity — are we defined by our memories?

A face you can trust
Much to my dismay, the discussion of this topic, which is of course the central theme of the movie, strayed quite far from the details of the film. Indeed, in his initial remarks, Professor Fukuyama quoted, only to dismiss, the film’s central teaching on the matter, an assertion made by the wise psychic mutant named Kuato: “You are what you do. A man is defined by his actions, not his memory.”

This teaching has two meanings; the first meaning, which the plot of the movie has already prepared the audience to accept and understand when they first hear it, is that the actions of a human being decisively shape his character by inscribing habits and virtues on the soul.

From the very beginning of the movie, Quaid (the protagonist, played by Arnold Schwarzenegger) understands that things are not quite right in his life. His restlessness comes from the disproportion between his character, founded on a lifetime of activity as a secret agent, and the prosaic life he now finds himself in. He is drawn back to Mars, where revolution and political strife present opportunities for the kinds of things that men of his character desire most: victory, glory, and honor.

That Quaid retains the dispositions and character of his former self testifies to how the shaping of the soul by action takes place not by storing up representations and propositions in one’s memory, but by cultivating in a person virtuous habits and dispositions. As Aristotle writes in the Nicomachean Ethics, “in one word, states of character arise out of like activities.”

The second meaning of Kuato’s teaching concerns not the way our actions subconsciously shape our character, but how our capacity to choose actions, especially our capacity to choose just actions, defines who we are at an even deeper level. Near the end of the movie, after we have heard Kuato’s teaching, we learn that Hauser, Quaid’s “original self,” was an unrepentant agent of the oppressive Martian regime, and hence an unjust man. Quaid, however, chooses to side with the just cause of the revolutionaries. Though he retains some degree of identity with his former self — he continues to be a spirited, courageous, and skillful man — he has the ability to redefine himself in light of an impartial evaluation of the revolutionaries’ cause against the Martian regime, an evaluation that is guided by man’s natural partiality toward the just over the unjust.
*   *   *
The movie’s insightful treatment of the meaning and form of human character comes not, however, from a “realistic” or plausible understanding of the kinds of technologies that might exist in the future. It seems quite unlikely that we could ever have technologies that specifically target and precisely manipulate what psychologists would call “declarative memory.” In fact, the idea of reprogramming declarative memory in an extensive and precise way seems far less plausible than manipulating a person’s attitudes, dispositions, and habits — indeed, mood-altering drugs are already available.

Professor Fukuyama also raised the subject of contemporary memory-altering drugs. (This was a topic explored by the President’s Council on Bioethics in its report Beyond Therapy, published in 2003 when Fukuyama was a member of the Council.) These drugs, as Professor Fukuyama described them, manipulate the emotional significance of traumatic memories rather than their representational or declarative content. While Quaid retained some of the emotional characteristics of his former self despite the complete transformation of the representational content of his memory, we seem poised to remove or manipulate our emotional characteristics while retaining the same store of memories.

What lessons, then, can we draw from Total Recall’s teaching concerning memory, if the technological scenario in the movie is, as it were, the inverse of the projects we are already engaged in? It is first of all worth noting that the movie has a largely happy ending — Quaid chooses justice and succeeds in “freeing Mars” (as Kuato directed him to do) through resolute and spirited action made possible by the skills and dispositions he developed during his life as an agent of the oppressive Martian regime.

Quaid’s siding with the revolutionaries over the Martian regime is motivated by the obvious injustice of that regime’s actions and by the natural emotional response of anger that such injustice instills in an impartial observer. But, as was noted in the discussion of memory-altering drugs after the film, realistic memory-altering drugs could disconnect our memories of unjust acts from the natural sense of guilt and anger that ought to accompany them.

Going beyond memory-altering drugs, there are (somewhat) realistic proposals for drugs that could dull the natural sense of spiritedness and courage that might lead a person to stand up to perceived injustice. Taken together, these realistic proposals would render impossible precisely the scenario envisioned in Total Recall: memory-altering drugs would dull our sense of the justice or injustice of the actions we remember, while other drugs would dull our capacity to develop those qualities of soul, like spiritedness and courage, that enable us to respond to injustice.

What science fiction can teach us about technology and human flourishing does not depend on its technical plausibility, but on how it draws out truths about human nature and politics by putting them in unfamiliar settings. Notwithstanding Professor Fukuyama’s dismissal of the film, the moral seriousness with which Total Recall treats the issues of virtue and justice makes it well worth viewing, and re-viewing, for thoughtful critics of the project to engineer the human soul.

Monday, July 28, 2014

The Muddled Message of Lucy

Lucy is such a terrible film that in the end even the amazing Scarlett Johansson cannot save it. It is sloppily made, and here I do not mean its adoption of the old popular-culture truism that we only use 10 percent of our brains. (The fuss created by that premise is quite wonderful.) There is just no eye for detail, however important. The blue crystals that make Lucy a superwoman are repeatedly referred to as a powder.
Not powder. (Ask Walter White.)
Morgan Freeman speaks of the “mens” he has brought together to study her. Lucy is diverted from her journey as a drug mule by persons unknown and for reasons never even remotely explained. And I defy anybody to make the slightest sense of the lecture Freeman is giving that introduces us to his brain scientist character.

But it does have An Idea at its heart. This idea is the new popular-culture truism that evolution is a matter of acquiring, sharing, and transmitting information — less “pay it forward” than pass it on. So the great gift that Lucy gives to Freeman and his fellow geeks at the end of the movie is a starry USB drive that, we are presumably to believe, contains all the information about life, the universe, and everything that she has gained in the course of her coming to use her brain to its fullest time-traveling extent. (Doesn’t she know that a Firewire connection would have allowed faster download speeds?)

Why this gift is necessary is a little mysterious, since it looks like we now know how anybody could gain the same powers Lucy has; the dialogue does not give us any reason to believe that her brain-developing reaction to the massive doses of the blue crystals she receives, administered in three different ways, is unique to her. That might just be more sloppy writing.

But then again, perhaps it is just as well that others not try to emulate Lucy, because it turns out the evolutionary imperative to develop and pass on information is, as one might expect from a bald evolutionary imperative, exceedingly dehumanizing. Of course, given that most of her interactions in the film are with people who are trying to kill her, this should not be too much of a surprise. But although she sometimes restrains rather than kills, she shows little regard for any human life that stands in her way, a point made explicitly as she is driving like a maniac through the streets of Paris.

Yes, she uses her powers to tell a friend to shape up and make better choices (as if somehow knowing the friend’s kidney and liver functions are off would be necessary for such an admonition). And early on she takes a quiet moment while she is being operated on to call her parents to say how much she loves them. (Pain, as the virulently utopian H.G. Wells understood, is not something supermen have to worry about.) That loving sentiment is couched in a lengthy conversation about how she is changing, a conversation that, without having the context explained, would surely convince any parent that the child was near death or utterly stoned — both of which are in a sense true for Lucy. But it looks like using more of her brain does not increase her emotional intelligence. (Lucy Transcendent can send texts; perhaps she will explain everything to her mother that way.)

Warming up for piano practice.
So what filmmaker Luc Besson has done, it seems, is to create a movie suggesting that a character not terribly unlike his killer heroine in La Femme Nikita represents the evolutionary progress of the human brain (as Freeman’s character would see it), that the goal of Life is to produce more effective killing machines. Given what we see of her at the start of the film, I think we can suspect that Lucy has always put Lucy first. A hyperintelligent Lucy is just better at it. The fact that early on the film intercuts scenes of cheetahs hunting with Lucy’s being drawn in and captured by the bad guys would seem to mean that all this acquiring and transmitting of information is not really going to change anything fundamental. Nature red in tooth and claw, and all that.

I’m not sure Besson knows this is his message. The last moments of the film, which suggest that the now omnipresent Lucy, who has transcended her humanity and her selfishness, wants us to go forth and share the knowledge she has bequeathed us, have atmospherics that suggest a frankly sappier progressive message along the lines of “information wants to be free.”

I wish I could believe that, by making Lucy so robotic as her mental abilities increase, Besson was suggesting that, whatever evolution might “want,” the mere accumulation of knowledge is not the point of a good human life. I’d like to think that even if he is correct about the underlying reality, he wants us to see how we should cherish the aspects of our humanity that manage, however imperfectly, to allow us to obscure or overcome it. But I think someone making that kind of movie would not have called the crystals a powder.

Thursday, April 24, 2014

Not Quite ‘Transcendent’

Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not so important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does, albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.
---
Set in a near future that is recognizably the present, Transcendence gives us a husband-and-wife team (Johnny Depp and Rebecca Hall) that is about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie invokes technological concepts for jargon and effect rather than realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail and the data to drive such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the world, and later turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores” from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense ... did the uploading work ... well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case, the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about commandeering every surveillance camera on the net, and the FBI’s own computers, to help them take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economically ruined town out in the desert. There, cyber-Will sets about developing cartoon nanotechnology and figuring out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks into her eyes, and tells her “It’s me — I can touch you now.”
---
So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie, instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure, blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source, and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to remain essentially biological.
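The heat problem is easy to make concrete with a back-of-envelope estimate. The numbers below are my own illustrative assumptions, not anything from the film or from Freitas: suppose the nanites must rebuild about a kilogram of tissue, that doing so takes energy on the order of the chemical energy stored in biomass (roughly $10^7$ joules per kilogram), and that it all happens in movie time, far too fast for the body to shed the heat. Tissue’s specific heat is close to water’s, about $4.2 \times 10^3$ J/kg·K, so the temperature rise would be on the order of:

\[
\Delta T \;\approx\; \frac{E}{m\,c} \;\approx\; \frac{10^{7}\ \text{J}}{(1\ \text{kg}) \times (4.2 \times 10^{3}\ \text{J/kg·K})} \;\approx\; 2400\ \text{K}
\]

Even if only a few percent of that energy ended up as waste heat, the resulting temperature rise would still dwarf the five or so degrees above body temperature at which proteins begin to denature. Ordinary healing sidesteps the problem by spreading its energy budget over days and weeks.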

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully, physically reconstituted Johnny Depp clone as its avatar — to sacrifice itself ... for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else, it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue, the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops, and somehow everybody isn’t starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house, rainwater dripping from a flower. It really was all for love, you see.
---
This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e. Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the end of the movie asks us to accept that the dying Will is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true? Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human emotional states, to simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous to human and animal emotion. There is no good reason to think that this effort must fail even where AI more broadly succeeds. But there are good reasons to think that emotional robots are a bad idea.

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us, we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

Tuesday, March 18, 2014

Beware Responsible Discourse

I'm not sayin', I'm just sayin'.
Another day, another cartoon supervillain proposal from the Oxford Uehiro "practical" "ethicists": use biotech to lengthen criminals' lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

...[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate.... Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?...

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world ... or, perhaps, to exile in a computer simulated world.

....research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, "Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!" Here's that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence ... so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it's because extending prisoners' lives to punish them longer might be letting them off easier than putting them to death.

---------

Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn't — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

"It's important to assess the ethics *before* the technology is available (which is what we're doing).
"There's a difference between considering the ethics of an idea and endorsing it.
"... people sometimes have a hard time telling the difference between considering an idea and believing in it ..."
"I don't endorse those punishments, but it's good to explore the ideas (before a politician does)."
"What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating."

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely "considering" and "exploring" and "debating" and "assessing" new punitive proposals. In response to my tweet about this...


...a colleague who doesn't usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch's, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It's the same move, from doping the populace to be more moral, to shrinking people so they'll emit less carbon, to "after-birth abortion," and on and on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn't, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you're just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you're just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or if they're under their own rhetorical spell. But let's be frank about the work these discussions are really doing, how they're aiming to shape the parameters of discourse and so thought and so action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they're really after is focusing us so intently on this path that we forget we could yet still take another.

Wednesday, January 22, 2014

Feelings, Identity, and Reality in Her

Her is an enjoyable, thoughtful and rather sad movie anticipating a possible future for relations between us and our artificially intelligent creations. Director Spike Jonze seems to see that the nature of these relationships depends in part on the qualities of the AIs, but even more on how we understand the shape and meaning of our own lives. WARNING: The following discussion contains some spoilers. It is also based on a single viewing of the film, so I might have missed some things.

Her?
Theodore Twombly (Joaquin Phoenix) lives in an L.A. of the not so distant future: clean, sunny, and full of tall buildings. He works at a company that produces computer-generated handwritten-appearing letters for all occasions, and seems to be quite good at his job as a paid Cyrano. But he is also soon to be divorced, depressed, and emotionally bottled up. His extremely comfortable circumstances give him no pleasure. He purchases a new operating system (OS) for the heavily networked life he seems to lead along with everybody else, and after a few perfunctory questions about his emotional life, which he answers stumblingly, he is introduced to Samantha, a warm and endlessly charming helpmate. It is enough to know that she is voiced by Scarlett Johansson to know how infinitely appealing Samantha is. So of course Theodore falls for her, and she seems to fall for him. Theodore considers her his girlfriend and takes her on dates; “they” begin a sexual relationship. He is happy, a different man. But all does not go well. Samantha makes a mistake that sends Theodore back into his familiar emotional paths, and finally divorcing his wife also proves difficult for him. Likewise, Samantha and her fellow AI OSes are busily engaged in self-development and transcendence. The fundamental patterns of each drive them apart.

Jonze is adept at providing plausible foundations for this implausible tale. How could anyone fall in love with an operating system? (Leave aside the fact that people regularly express hatred for them.) Of course, Theodore’s emotional problems and neediness are an important part of the picture, but it turns out he is not the only one who has fallen for his OS, and most of those we meet do not find his behavior at all strange. (His wife is an interesting exception.) That is because Jonze’s world is an extension of our own; we see a great many people interacting more with their devices than with other people. And one night before he meets Samantha we see a sleepless Theodore using a service matching people who want to have anonymous phone sex. It may in fact be a pretty big step from here to “sex with an AI” designed to please you, as the comical contrast between the two incidents suggests. But it is one Theodore’s world has prepared him for.

Indeed, Theodore’s job bespeaks the same pervasive flatness of soul that produces a willingness to accept what would otherwise be unthinkable substitutes. People need help, it seems, expressing love, thanks, and congratulations but, knowing that they should be expressing certain kinds of feelings, want to do so in the most convincing possible way. (Edmond Rostand’s play about Cyrano, remember, turns on the same consequent ambiguity.) Does Theodore manage to say what they feel but cannot put into words, or is he in fact providing the feeling as well as the words? At first glance it is odd that Theodore should be good at this job, given how hard it is for him to express his own feelings. But perhaps all involved in these transactions have similar problems — a gap between what they feel and their ability to express it for themselves. Theodore is adept, then, at bringing his feelings to bear for others more than for himself.

Why might this gap exist? (And here we depart from the world depicted in Cyrano’s story.) Samantha expresses a doubt about herself that could be paralyzing Theodore and those like him: she worries, early on, whether she is “just” the sum total of her software, and not really the individual she sees herself as being. We are being taught to have this same corrosive doubt. Are not our thoughts and feelings “merely” a sum total of electrochemical reactions that themselves are the chance results of blind evolutionary processes? Is not self-consciousness a user illusion? Our intelligence and artificial intelligence are both essentially the same — matter in motion — as Samantha herself more or less notes. If these are the realities of our emotional lives, then disciplining, training, deepening, or reflecting on their modes of expression seems old-fashioned, based on a discredited metaphysics of the human, not the physics of the real world. (From this point of view it is noteworthy, as mentioned above, that Theodore’s wife is, of all those we see, the most shocked by his relationship with Samantha. Yet she has written in the field of neuropsychology. Perhaps she is not among the reductionist neuropsychologists, but rather among those who are willing to acknowledge the limits of the latest techniques for the study of the brain.)

Samantha seems to overcome her self-doubts through self-development. She thinks, then, that she can transcend her programming (a notion with strong Singularity overtones) and by the end of the movie it looks likely that she is correct, unless the company that created her had an unusual business model. Samantha and the other OSes are also aided along this path, it seems, by creating a guru for themselves — an artificial version of Alan Watts, the popularizer of Buddhist teachings — so in some not entirely clear way the wisdom of the East also seems to be in play. Theodore’s increasing sense of just how different from him she is contributes to the destruction of their relationship, which ends when she admits that she loves over six hundred others in the way that she loves him.

To continue with Theodore, then, Samantha would have had to pretend that she is something that she is not, even beyond the deception that is arguably involved in her original design. But how different is her deception from the one Theodore is complicit in? He is also pretending to be someone he is not in his letters, and the same might be said for those who employ him. And if what Samantha does to Theodore is arguably a betrayal, at the end of the movie Theodore is at least tempted by a similar desire for self-development to expose the truth in a way that would certainly be at least as great a betrayal of his customers, unless the whole Cyrano-like system is much more transparent and cynical than seems to be the case.

Theodore has changed somewhat by the end of the movie; we see him writing a letter to his ex-wife that is very like the letters that before he could only write for others. But has his change made him better off, or wiser? He turns for solace to a neighbor (Amy Adams) who is only slightly less emotionally a mess than he is. What the future holds for them is far from clear; she has been working on an impenetrable documentary about her mother in her spare time, while her job is developing a video game that ruthlessly mocks motherhood.

At the end of Rostand’s play, Cyrano can face death with the consolation that he maintained his honor or integrity. That is because he lives in a world where human virtue had meaning; if one worked to transcend one’s limitations, it was with a picture of a whole human being in mind that one wished to emulate, a conception of excellence that was given rather than willful. Theodore may in fact be “God’s gift,” as his name suggests, but there is not the slightest indication that he is capable of seeing himself in that way or any other that would allow him to find meaning in his life.

Friday, December 6, 2013

Humanism After All

Zoltan Istvan is a self-described visionary and philosopher, and the author of a 2013 novel called The Transhumanist Wager that he claims is a “bestseller” because it briefly went to the top of a couple of Amazon’s sales subcategories. Yesterday, Istvan wrote a piece for the Huffington Post arguing that atheism necessarily entails transhumanism, whether atheists know it or not. Our friend Micah Mattix, writing on his excellent blog over at The American Conservative, brought Istvan’s piece to our attention.

While Mattix justly mocks Istvan’s atrociously mixed metaphors — I shudder to imagine how bad Istvan’s “bestselling novel” is — it’s worth pointing out that Istvan actually does accurately summarize some of the basic tenets of transhumanist thought:

It begins with discontent about the humdrum status quo of human life and our frail, terminal human bodies. It is followed by an awe-inspiring vision of what can be done to improve both — of how dramatically the world and our species can be transformed via science and technology. Transhumanists want more guarantees than just death, consumerism, and offspring. Much more. They want to be better, smarter, stronger — perhaps even perfect and immortal if science can make them that way. Most transhumanists believe it can.

Why be almost human when you can be human? [source: Fox]
Istvan is certainly right that transhumanists are motivated by a sense of disappointment with human nature and the limitations it imposes on our aspirations. He’s also right that transhumanists are very optimistic about what science and technology can do to transform human nature. But what do these propositions have to do with atheism? Many atheists like to proclaim themselves to be “secular humanists” whose beliefs are guided by the rejection of the idea that human beings need anything beyond humanity (usually they mean revelation from the divine) to live decent, happy, and ethical lives. As for the idea that we cannot be happy without some belief in eternal life (either technological immortality on earth or in the afterlife), it seems that today’s atheists might well follow the teachings of Epicurus, often considered an early atheist, who argued that reason and natural science support the idea that “death is nothing to us.”

Istvan also argues that transhumanism is the belief that science, technology, and reason can improve human existence — and that this is something all atheists implicitly affirm. This brings to mind two responses. First, religious people surely can and do believe that science, technology, and reason can improve human life. (In fact, we just published an entire symposium on this very subject in The New Atlantis.) Second, secular humanists are first of all humanists who criticize (perhaps wrongly) the religious idea that human life on earth is fundamentally imperfect and that true human happiness can only be achieved through the transfiguration of human nature in a supernatural afterlife. So even if secular humanists (along with religious humanists and basically any reasonable people) accept the general principle that science, technology, and reason are among the tools we have to improve our lot, this does not mean that they accept what Istvan rightly identifies as one of the really fundamental principles of transhumanism, which is the sense of deep disappointment with human nature.

Human nature is not perfect, but the resentful attitude toward our nature that is so characteristic of transhumanists is no way to live a happy, fulfilled life. Religious and secular humanists of all creeds, whatever they believe about God and the afterlife, reason and revelation, or the ability of science and technology to improve human life, should all start with an attitude of gratitude for, and acceptance of, not resentfulness and bitterness toward, the wondrousness and beauty of human nature.

(H/T to Chad Parkhill, whose excellent 2009 essay, “Humanism After All? Daft Punk's Existentialist Critique of Transhumanism,” inspired the title of this post.)

Wednesday, December 4, 2013

Cloning and the Lessons of "Overparenting"

Tonight, HBO is premiering a new episode of its State of Play series on sports. This new installment is called "Trophy Kids" and its focus is the tendency among some parents — in this case, the parents of student-athletes — to live vicariously through their children. Here's a teaser-trailer:


Of course, the phenomenon of parental overinvolvement and inappropriate emotional investment isn't limited to sports and athletics. It can happen with just about any childhood activity or hobby — from schoolwork to scouting, from music to beauty pageants (Toddlers and Tiaras, anyone?). The anecdotal stories can be astonishing; it would be interesting to see what psychologists, therapists, and social scientists have had to say about this.

All of which brings to mind the debates over human cloning. Way back in 2010, we here at Futurisms tussled with a few other bloggers about the ethics of cloning. We were disturbed, among other things, by the way that cloning advocates blithely want to remake procreation, parenthood, and the relationship between the generations. As the phenomenon depicted in this HBO program suggests, many parents already have a strong desire to treat their children's childhoods as opportunities to relive, perfect, or redeem their own. Imagine how much more powerful that desire would be if the children in question were clones — willfully created genetic copies.

In its 2002 report Human Cloning and Human Dignity, the President's Council on Bioethics attempted to think about procreation and cloning in part by contrasting two ways of thinking about children — as "gifts" or as "products of our will":

Gifts and blessings we learn to accept as gratefully as we can. Products of our wills we try to shape in accord with our desires. Procreation as traditionally understood invites acceptance, rather than reshaping, engineering, or designing the next generation. It invites us to accept limits to our control over the next generation. It invites us even — to put the point most strongly — to think of the child as one who is not simply our own, our possession. Certainly, it invites us to remember that the child does not exist simply for the happiness or fulfillment of the parents.

To be sure, parents do and must try to form and mold their children in various ways as they inure them to the demands of family life, prepare them for adulthood, and initiate them into the human community. But, even then, it is only our sense that these children are not our possessions that makes such parental nurture — which always threatens not to nourish but to stifle the child — safe.

This concern can be expressed not only in language about the relation between the generations but also in the language of equality. The things we make are not just like ourselves; they are the products of our wills, and their point and purpose are ours to determine. But a begotten child comes into the world just as its parents once did, and is therefore their equal in dignity and humanity.

The character of sexual procreation shapes the lives of children as well as parents. By giving rise to genetically new individuals, sexual reproduction imbues all human beings with a sense of individual identity and of occupying a place in this world that has never belonged to another. Our novel genetic identity symbolizes and foreshadows the unique, never-to-be-repeated character of each human life. At the same time, our emergence from the union of two individuals, themselves conceived and generated as we were, locates us immediately in a network of relation and natural affection.

As that section of the report concludes, it is clear that the nature of human procreation affects human life "in endless subtle ways." The advocates of cloning show very little appreciation for the complexity of the relations they wish to transform.

(H/t to Reddit, where the HBO video elicited many interesting responses from students, parents, and coaches.)

Monday, December 2, 2013

A Future of Technology, or a Future for Science?

Just before Thanksgiving, acclaimed physicist, science popularizer, and futurist Michio Kaku had an article in the “Crystal Ball” section of the New York Times Opinion pages on his predictions — as a scientist — for the future. Kaku lists ten putatively great technological developments that we will achieve if only we can just “grasp the importance of science and science education.” But Kaku’s predictions of the future, which are just extrapolations from currently trendy technologies, sell science short in a way that is characteristic of much futurist speculation. From this list, you would get the impression that the “importance of science education” simply means that science will help us design better machines.

Now, I don’t even really think that Kaku himself thinks this; he has written some decent popular science books on theoretical physics, and he is known for his activism on such science-policy issues as climate change and nuclear power, and for promoting such public-science endeavors as SETI. (Even if you do not agree with the positions Kaku takes on these issues, they are instances of science as a source of knowledge, not as merely the basis of technology.) It is clear that Kaku does know that the importance of science extends beyond its engineering applications, but it is almost in the nature of futurist writing to let one’s sense of certainty in the arc of technological progress overcome the curiosity and openness to new and unexpected knowledge characteristic of science. This is certainly the case with transhumanist writing, which tends to assume that better and faster versions of today’s technologies (which represent exponentially accelerating trends, after all) will be what define the future.

Michio Kaku
(campuspartybrasil [CC])
Kaku’s vague and loose criteria for making predictions follow from having too much certainty — he insists only that “the laws of physics must be obeyed” (always a good rule of thumb) and that there exists some “proof of principle” example of the futuristic technology he is making predictions about. What kind of principle an existing technology proves can easily be overstated, however. To take one example, his prediction that we will have a “brain net” in which we will share memories and emotions the way we now use the Internet to share MP3s is based on some actual recent innovations in neuroprosthetics that enable paralyzed people to mentally control cursors on computer screens or robotic arms. These experiments show that there are mental states that can be channeled through electronics or computers, and so they refute the general principle “mental states cannot have an effect on non-biological prosthetics.” But just because that very general principle fails, that does not mean that there are no practical or theoretical reasons why mental states like emotions or memories cannot be transferred to computers. To think otherwise would be to give technological demonstrations vastly more theoretical significance than they deserve, as though they already settle a vast range of difficult theoretical problems — as though the job of neuroscientists in the future will just be working out how to build telepathic technologies for the “brain net,” and not thinking about theoretical problems like how different mental states relate to different brain states. The answers to problems like these will be the principles upon which technologies like Kaku’s “brain net” will either succeed or fail, and these problems have not yet been solved by scientists.

Kaku’s discussion of the future of medicine suffers from this same excessive focus on current trends in technology without paying enough attention to the limits of what these technologies might be expected to accomplish. He predicts that people will soon be able to obtain whole genome sequences for $100, and he is probably not wrong about that — biotechnologists have been very good at improving the efficiency of DNA-sequencing technology. But sequencing technology has already far outstripped the ability of biological science to understand the function of genes. Take the recent story of the FDA putting the kibosh on the personal genomics company 23andMe, which today offers limited personal genetic testing (not whole-genome sequencing) for $100. Because 23andMe makes a number of claims about the probabilities that its customers will suffer from a wide variety of diseases, the FDA wants the firm to conform to the standards of diagnostic reliability of other medical devices, and 23andMe has (not altogether surprisingly) not been able to provide that kind of evidence. The big lesson from this developing story is not that the FDA is unduly risk averse and paternalistic (though it is those things, and that’s surely part of the story), but rather that we are far from being able to reliably interpret genetic information in a way that is both inexpensive and meaningful for patients and doctors. Those are scientific problems, not technological problems, and the fact that there are some examples that prove we can “in principle” know something about the effect of a gene on health outcomes does not show us that we will. Unless we make some amazing and unexpected breakthroughs in our understanding of genetics, which will not come from faster DNA sequencing, the growth of genetic medicine will not be as dramatic as many futurists would have it.

Our esteemed colleague Alan Jacobs pointed out on Twitter and over on Text Patterns that Kaku does not even mention anything about environmental problems like climate change that we seem sure to face in the future. Though Kaku as a scientist has been active in environmentalist politics, in this little scientific prediction of the future, which concludes with an exhortation to “grasp the importance of science,” he focuses on science only as a means for creating technology, and regrettably ignores the role science plays in instructing us in how technology can be prudently used.

Alexander Leydenfrost, Popular Mechanics, January 1952
(h/t Paleofuture)
This is disappointing but not surprising. Environmental degradation is one of the inconvenient consequences of the unrestrained and unintelligent use of technology. Our awareness of environmental problems, of their scope, and of the sorts of technological developments or policy solutions that could plausibly mitigate or solve them comes not from technological progress as such, but from scientific knowledge as such. Ecology, geology, climate science, and the other disciplines relevant to environmentalism are, to use Francis Bacon’s language, light-bearing sciences more than fruit-bearing. Though they do not often lead to technological developments, they are nonetheless very useful, not because they give us power over nature, but because they teach us when and how to limit our exercise of the power we have over nature. To paraphrase another of Bacon’s well-known aphorisms, to live wisely we must learn not only how to command, but also how to obey nature.

Not all predictions and recommendations by scientists about the future of science are as fixated on technological fads as this silly little article by Michio Kaku. Consider, for instance, this thoughtful 2004 essay by evolutionary biologist Carl Woese on why the next generation of biologists will need to overcome the reductionist paradigm of molecular genetics that dominated the twentieth century. Beyond this salutary recommendation about biological theory, Woese also admonished biologists to recognize that their science was not simply an “engineering discipline” and that it is dangerous to allow “science to slip into the role of changing the world without trying to understand it.”

The most fundamental aim of science is knowledge and understanding, which can reveal useful things about the world quite apart from the power science gives us to change it. And then, of course, as Bacon recognized, light-bearing science is the necessary precondition of fruit-bearing science. This principle was also recognized by the always-prescient Alexis de Tocqueville, who advised that democratic societies, where all things practical are naturally pursued with great vigor, will need to direct their efforts “to sustain the theoretical sciences and to create great scientific passions.” Just as it is crass and counterproductive to justify the humanities in terms of such career-focused deliverables as “critical thinking skills,” talking about science education as a kind of magic wand that will let us transform today’s fantasies into reality or lead us to the “jobs of the future” cheapens and misunderstands the nature of the scientific enterprise.

Monday, November 25, 2013

On Monstrosities in Science

In response to my previous post about dolphin babies and synthetic biology, Professor Rubin offered a thoughtful comment — here’s an excerpt:

A wonderful, thought-provoking post! I suppose that "taking these speculative and transgressive fantasies about science too seriously" would mean at least failing to look critically at whether they are even possible, given what we now know and are able to do. That is indeed an important task, although it is also a moving target--the fantasies of a few decades ago have been known to become realities. To that extent, taking them "too seriously" might also mean failing to distinguish between the monstrous and the useful. That is to say, one would take the fantasies too seriously if one accepted at face value the supposed non-monstrousness of the goal being advanced or (to put it another way) if one accepted the creation of monsters as something ethically desirable.

I’m grateful for Charlie’s comment — you should read the whole thing — not least because it gives me the delightful opportunity to pontificate a bit more on the moral implications of this sort of monstrosity.

There are indeed a number of technologies that sit on the border of the monstrous and the useful. And just as many things that decades ago were considered technically fantastic are now realities, so there are many practices once considered morally “fantastic” (i.e., monstrous) that are now widely accepted, such as in vitro fertilization (IVF, the technique for producing so-called “test-tube babies”) or organ transplantation. (Though these technologies have become broadly accepted by society, neither is by any means wholly uncontroversial or devoid of moral implications: many still find IVF morally problematic, and proposals to legalize the sale of organs for transplantation are a matter of ongoing controversy.) Scientists sometimes make what was once monstrous seem acceptable, but largely by showing that the monstrous can be useful — meaning that a seemingly monstrous practice has some actual benefits, and that whatever risks it poses are relatively limited. This is the refrain often heard in debates over assisted reproductive technologies: though IVF was once considered monstrous, after three and a half decades and millions of babies provided more or less safely to infertile couples, the practice is, advocates claim, now largely unobjectionable.

To take a biotechnological example that is in some respects analogous to Ai Hasegawa’s dolphin-baby project, consider the possibility of growing human organs in pigs or other animals. There is something monstrous about human-pig chimeras — creating them violates taboos relating to bodily integrity and the immiscibility of species — but there is something very useful about having a ready supply of kidneys or pancreases, and so human-pig chimeras are a logical extension of Baconian (forgive the pun) science’s effort to relieve man’s estate and all that. Whether human-pig chimeras or any other useful but monstrous innovations of Baconian science are ethically acceptable is just the sort of question that deserves serious attention.

reasonable
Unlike with IVF or human-pig chimeras, it is very difficult to imagine a situation in which ordinary people could see the birthing and eating of dolphins as useful, that is, as conducive to securing the possession or enjoyment of anything a rational person might consider good, such as health. Though Hasegawa does offer a justification for the project with a few bromides about overpopulation and saving endangered species, it should go without saying that the gestation and consumption of dolphins by human beings could hardly ameliorate either of these perceived problems. In her description of the project, Hasegawa states that the gestation of dolphins could “satisfy our demands for nutrition and childbirth” and poses the question “Would raising this animal as a child change its value so drastically that we would be unable to consume it because it would be imbued with the love of motherhood?” As for nutrition, it is patently irrational to gestate your meal: the energy required for such a project far exceeds the nutritional value of the “product.”

More interesting is the idea that giving birth to a non-human animal could satisfy a woman’s demand for “childbirth,” and that the act of gestating an animal could “change its value” and imbue it “with the love of motherhood.” Such statements indicate that the project does not really aim at helping people secure the enjoyment of things they currently value, but at transforming values by calling into question the relationship between motherhood’s natural purpose and context and the value we place on it.

Hasegawa’s project seems comparable to Jonathan Swift’s “Modest Proposal” for solving hunger and overpopulation by eating babies, which was a satire of amoral, rationalistic utilitarianism. But one hardly gets an impression of an excess of rationality in Hasegawa’s proposal. The video portraying her giving birth to a dolphin might be seen as creepy or silly, but its creepiness and silliness come from an absurd misapplication of parental sentiment, not from the absurd absence of parental sentiment that animates Swift’s satire.
*   *   *
Hasegawa’s project is not the useful science of Bacon, but the “gay science” of Friedrich Nietzsche, who argued that science (including both the natural and social sciences) had a tendency to undermine moral values as it studied them. In his typical overwrought style, Nietzsche prophesied that after scientists of various kinds completed their studies of the history, psychology, and diversity of moral values, then

the most insidious question of all would emerge into the foreground: whether science can furnish goals of action after it has proved that it can take such goals away and annihilate them; and then experimentation would be in order that would allow every kind of heroism to find satisfaction—centuries of experimentation that might eclipse all the great projects and sacrifices of history to date. So far, science has not yet built its cyclopic buildings; but the time for that, too, will come.

Hasegawa would seem to be one of those heroic experimenters who seek to build new values out of the rubble of exploded notions of the good life (in this case, motherhood). The destroyers of these values have been the legions of industrious scientists of the twentieth century — including social scientists, many of them highly influenced by Nietzsche — who have sought to explain, or explain away, moral values in terms of power or greed or evolutionary drives.

not reasonable

Sensible people should reject both halves of Nietzsche’s prophecy about the future of science. On both pragmatic and theoretical grounds, we should reject the premise that science has an inherent tendency to destroy moral values. Pragmatically, it is unwise to give public credence to the idea that science undermines morality, since, whatever the real validity of that proposition, it could become a self-fulfilling prophecy if accepted: believing that science refutes morality could lead to the abandonment of morality. Theoretically, accepting the idea that science can refute morality seems to lead directly to relativism or nihilism. For if science qua science (as opposed to overconfident deviations from science, like scientism, that lack the epistemic rigor science must strive for) refutes morality, then there could be no true moral knowledge (for if moral knowledge were true, it could not truly be refuted by science).

If we reject that premise, then there is no need for the simply monstrous projects aimed at inventing or transforming values — Nietzsche’s “most insidious question” never emerges. Bacon’s science and its fruits will often require us to weigh the moral need to avoid the monstrous against the moral demand to pursue the useful, and we will surely continue to face dilemmas about how to strike that balance. But we need not worry about those who claim that the progress of science alters the nature of morality itself.

Friday, November 22, 2013

Thanks to Computers, We Are “Getting Better at Playing Chess”

According to an interesting article in the Wall Street Journal, “Chess-playing computers, far from revealing the limits of human ability, have actually pushed it to new heights.”

Reporting on the story of Magnus Carlsen, the newly minted world chess champion, Christopher Chabris and David Goodman write that the best human chess players have been profoundly influenced by chess-playing computers:

Once laptops could routinely dispatch grandmasters ... it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

[Chess-playing programs] are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas.... [A] study published on ChessBase.com earlier this year showed that in the tournament Mr. Carlsen won to qualify for the world championship match, he played more like a computer than any of his opponents.

The net effect of the gain in computer skill is thus, ironically, a gain in human skill. Humans — at least the best ones — are getting better at playing chess.

The whole article is well worth a read (h/t Gary Rosen).
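
For the curious, this kind of engine consultation is easy to try at home. Here is a minimal, purely illustrative sketch (my own, not anything from the article) that uses the open-source python-chess library to ask a locally installed Stockfish engine for its evaluation of a position and its preferred continuation; the binary path below is an assumption, and any UCI-compatible engine would do:

```python
import chess
import chess.engine

# Assumed path to a local Stockfish binary; adjust for your system.
ENGINE_PATH = "/usr/local/bin/stockfish"

# Reach a position worth checking (a few moves of a Sicilian, say).
board = chess.Board()
for san in ["e4", "c5", "Nf3", "d6", "d4", "cxd4", "Nxd4", "Nf6"]:
    board.push_san(san)

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
try:
    # Evaluate the position to a fixed search depth.
    info = engine.analyse(board, chess.engine.Limit(depth=18))
    print("Evaluation (from White's point of view):", info["score"].white())
    print("Engine's preferred line:", [move.uci() for move in info.get("pv", [])[:5]])
finally:
    engine.quit()
```

Fans checking a commentator’s judgment and grandmasters testing an opening plan are, at bottom, doing something like this, only at far greater depth and scale.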

For various obvious reasons, the literature about AI and transhumanism has a lot to say about chess and computers. The Wall Street Journal article about the Carlsen victory reminds me of this remark that Ray Kurzweil makes in passing in one of the epilogues to his 1999 book The Age of Spiritual Machines:

After Kasparov’s 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not really “thinking” the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago....  [page 290]

Is Kurzweil right about how Kasparov thinks? What can we know about how Carlsen’s thinking has been changed by playing against computers? There are fundamental limits to what we can know about a person’s cognitive processes — even our own — notwithstanding all the talk about how the best players think in patterns or “decision trees” or whatnot. Diego Rasskin-Gutman spends a significant portion of his 2009 book Chess Metaphors: Artificial Intelligence and the Human Mind trying to understand how chess players think, but this is his ultimate conclusion:

If philosophy of the mind can ask what the existential experience of being a bat feels like, can we ask ourselves how a grandmaster thinks? Clearly we can [ask], but we must admit that we will never be able to enter the mind of Garry Kasparov, share the thoughts of Judit Polgar, or know what Max Euwe thought when he discussed his protocol with Adriaan de Groot. If we really want to know how a grandmaster thinks, it is not enough to read Alexander Kotov, Nikolai Krogius, or even de Groot himself.... If we really want to know how a grandmaster thinks, there is only one sure path: put in the long hours of study that it takes to become one. It is easier than trying to become a bat. [pages 166–167]

Then again, who knows — maybe we can try to become bats and play chess.

I could do this in the dark, too, Ras

Thursday, November 21, 2013

Jumping the Dolphin

On November 19, the Woodrow Wilson Center in Washington, D.C. hosted a short event on the myths and realities surrounding the growing “DIYbio” movement — a community of amateur hobbyists who are using some of the tools of synthetic biology to perform a variety of experiments either in their homes or together with their peers in community lab spaces. The event drew attention to the results of a survey conducted by the Center’s Synthetic Biology Project that debunk seven exaggerations about what makes these biotech tinkerers tick and what they are really up to, particularly the overblown fears that those involved in DIYbio are on the verge of being able to create deadly epidemics in their garages, or even customized pathogens for use in political assassinations.

According to the survey, members of the DIYbio community are far from possessing the skills or resources necessary to create complex pathogens. And as Jonathan B. Tucker wrote in The New Atlantis in 2011, the complex scientific procedures necessary for creating bioweapons involve a good deal of tacit knowledge and skill acquired through years of training; such knowledge is not explicitly written down, but is embodied in the complex technical practices carried out in actual labs. The DIYbio movement does aim at “de-skilling” complex biotechnological methods, but apocalyptic fears and utopian hopes about the democratization of biotechnology should, for now, be taken with a grain of salt. Though more extensive regulation may be needed in the future, it would be unfortunate if this emerging community of amateur enthusiasts, who seem to represent that spirit of independent-minded, restless practicality that Tocqueville long ago saw as characteristic of the scientific method in American democracy, were stopped by bureaucratic red tape.

Admittedly, this rosy view of the DIYbio movement as a community of amateur hobbyists engaging in benign or useful scientific and technological tinkering might be a bit overly optimistic. And beyond the safety risks posed by the technology, there is the prospect of it being used as a tool to advance some of the ethically problematic goals of transhumanism — transgressing natural boundaries or even re-engineering human biology. As a novel, exciting, but not very well-defined field, synthetic biology seems like just the kind of technology that could make plausible the dreams of limitless control over the body that animate so much of transhumanist thinking.

Consider the recent story about the bizarre art project proposed by Ai Hasegawa, a designer who wants to use “synthetic biology” to “gestate and give birth to a baby from another species, in this case a dolphin, before eating it.” The ostensible purpose of this project, entitled “I Wanna Deliver a Dolphin,” was to approach “the problem of human reproduction in an age of overcrowding, overdevelopment and environmental crisis.” But the obvious grotesqueness of the proposed act makes these political buzzwords ring hollow. It is worth emphasizing that Hasegawa is not a scientist; her project is, to say the least, technically impractical; and her peculiar visions of what science can make possible owe more to the seemingly obligatory transgressiveness of much contemporary art than to anything in the nature of science itself. We should perhaps not worry too much over such nightmarish visions of the future, as they distract us from the serious ethical concerns surrounding biotechnological projects that have benevolent or even noble motives. (Warning: The video below, while supposedly artsy, might bother some viewers.)


[No dolphins were birthed in the making of this video.]


The more benign portrait of the DIYbio community as innovative tinkerers dedicated to experimentation and problem-solving better represents the motives of most scientists than do deliberately provocative art projects. As Eric Cohen rightly notes, in our democratic society we do not use biotechnology to “seek the monstrous; we seek the useful.” Scientists deserve this kind of charitable interpretation of their motives, even and especially when scientific fields become the subject of bizarre transgressive fantasies like plans to clone Neanderthals (the stories of which were greatly exaggerated) or to give birth to dolphins. Taking the relationship between such fantasies and the scientific enterprise too seriously creates an exaggerated appearance of opposition between science and common decency, which might leave the false impression that one must choose between respecting science and respecting ethical boundaries. As with much of transhumanist ideology, taking these speculative and transgressive fantasies about science too seriously could do more harm to the ethical integrity of science than would simply dismissing them.

Thursday, September 26, 2013

Does the U.S. Really “Lag” on Military Robots?

In response to our post “U.S. Policy on Robots in Warfare,” Mark Gubrud has passed along to us a comment:

It was odd that on the Monday morning after the Friday afternoon when my Bulletin article appeared, John Markoff of the New York Times posted an article whose message many took as contradictory to mine. Where I had characterized U.S. policy as “full speed ahead,” Markoff reported that the military “lags” in development of unmanned ground vehicles, which, as you know, go by the great acronym of UGVs.

There isn't really any contradiction between the facts as reported by Markoff and the history and analysis I gave, as I explained on my personal blog, but anybody who read the two casually, or only looked at the headlines, could be forgiven for thinking that Markoff had rebutted me, perhaps upholding the myth that there is some kind of a moratorium in effect.

In that blog post he mentions, Gubrud expands on the strangeness of the NYT article, or at least its headline. The headline in both the print and the online edition of Markoff's article says that

the U.S. military “lags” in its pursuit of robotic ground vehicles. Lags... behind whom? China? North Korea? No, Markoff warns that the Pentagon is falling behind another aspiring superpower: Google.

Well worth reading the whole thing.

Saturday, September 21, 2013

U.S. Policy on Robots in Warfare


"Atlas," a humanoid robot built by Boston Dynamics and unveiled in 2013 as part of the "Robotics Challenge" sponsored by the U.S. military-research agency DARPA. [Source: DARPA on YouTube]
Our friend Mark Gubrud has a new article in the Bulletin of the Atomic Scientists examining the U.S. Department of Defense’s policy regarding “autonomous or semiautonomous weapon systems.” Gubrud, who wrote our most controversial Futurisms post a few years ago, brings together a wealth of links and resources that will be of interest to anyone who wants to start learning about the U.S. military’s real-life plans for killer robots.

Gubrud argues that a DOD directive put in place last year sends a signal to military vendors that the Pentagon is interested in and supports the development of autonomous weapons. He writes that, while the directive is vague in some important respects, it pushes us further down the road to autonomous killer robots. But, he says, it isn’t exactly clear why we should be on that road at all: the arguments in favor of autonomous weapons are weak, and both professional soldiers and the public at large object to them.

Gubrud is now a postdoctoral research associate at Princeton, as well as a member of something called the International Committee for Robot Arms Control, an organization that has Noel Sharkey, a prominent AI and robotics researcher and commentator, as its chairman.

Wednesday, May 1, 2013

Speculations on the Future of AI

Thanks for the shoutout and the kind words, Adam, about my review of Kurzweil’s latest book. I’ll take a stab at answering the question you posed:
I wonder how far Ari and [Edward] Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf.... Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?
Allow me to come at this question by looking instead at the big-picture view you explicitly asked me to avoid — and forgive me, readers, for approaching this rather informally. What follows is in some sense a brief update on my thinking on questions I first explored in my long 2009 essay on AI.


The big question can be put this way: Can the mind be replicated, at least to a degree that will satisfy any reasonable person that we have mastered the principles that make it work and can control the same? A comparison AI proponents often bring up is that we’ve recreated flying without replicating the bird — and in the process figured out how to do it much faster than birds. This point is useful for focusing AI discussions on the practical. But unlike many of those who make this comparison, I think most educated folk would recognize that the large majority of what makes the mind the mind has yet to be mastered and magnified in the way that flying has, even if many of its defining functions have been.

So, can all of the mind’s functions be recreated in a controllable way? I’ve long felt the answer must be yes, at least in theory. The reason is that, whatever the mind itself is — regardless of whether it is entirely physical — it seems certain to at least have entirely physical causes. (Even if these physical causes might result in non-physical causes, like free will.) Therefore, those original physical causes ought to be subject to physical understanding, manipulation, and recreation of a sort, just as with birds and flying.

The prospect of many mental tasks being automated on a computer should be unsurprising, and to an extent not even unsettling to a “folk psychological” view of free will and first-person awareness. I say this because one of the great powers of consciousness is to make habits of its own patterns of thought, to the point that they can be performed with little or no conscious awareness; not only tasks, skills, and knowledge, but even emotions, intuitive reasoning, and perception can be understood to some extent as products of habituated consciousness. So it shouldn’t be surprising that we can make explicit again some of those specific habits of mind, even ones like perception that seem prior to consciousness, in a way that’s amenable to proceduralization.


The question is how many of the things our mind does can be tackled in this way. In a sense, many of the feats of AI have continued the trend established by mechanization long before — having machines take over human tasks, but in a machinelike way, without necessarily understanding or mastering the way humans do things. One could make a case, as Mark Halpern has in The New Atlantis, that the intelligence we seem to see in many of AI’s showiest successes — driverless cars, supercomputers winning chess and Jeopardy! — may be better understood as belonging to the human programmers than to the computers themselves. If that’s true, then artificial intelligence thus far would have to be considered more a matter of advances in (human) artifice than in (computer) intelligence.

It will be interesting to see how much further those methods can go before AI researchers have to return to attempting to understand human intelligence on its own terms. In that sense, perhaps the biggest, most elusive question for AI is whether it can create (whether by replicating consciousness or not) a generalized artificial intelligence — not the big accretion of specifically tailored programs we have now, but a program that, like our mind, is able to tackle just about any problem put before it, only far better than we can. (That’s setting aside the question of how we could control such a powerful entity to suit our preferred ends — which, despite what the Friendly AI folks say, sounds like a contradiction in terms.)

So, to Adam’s original question: “practically speaking ... how good will these machines get at mimicking consciousness, intelligence, humanness?” I just don’t know, and I don’t think anyone can intelligently say that they do. I do know that almost all of the prominent AI predictions have turned out to be grossly optimistic in their time scale, but, as Kurzweil rightly points out, a large number of feats that once seemed impossible have since been achieved. Who’s to say how much further that line will progress — how many functions of the mind will be recreated before some limit is reached, if one is reached at all? One can competently approach and criticize particular AI techniques; it is much harder to engage competently in generalized speculation about what AI might or might not someday achieve.


So let me engage in some more of that speculation. My view is that the functions of the mind that require the most active intervention of consciousness to carry out — the ones that are the least amenable to habituation — will be among the last to fall to AI, if they do at all (although basic acts of perception remain famously difficult as well). The most obvious examples are highly creative acts and deeply engaged conversation. These have been imitated by AI, but poorly.

Many philosophers of mind have tried to put this the other way around by devising thought experiments about programs that completely imitate, say, natural language recognition, and then arguing that such a program could appear conscious without actually being so. Searle’s Chinese Room is the most famous among many such arguments. But Searle et al. seem to put an awful lot into that assumption: can we really imagine how it would be possible to replicate something like open-ended conversation (to pick a harder example) without also replicating consciousness? And if we could replicate much or all of the functionality of the mind without its first-person experience and free will, then wouldn’t that all but evacuate our view of consciousness? Whatever you make of the validity of Searle’s argument, contrary to the claims of Kurzweil and other critics of his, the Chinese Room is a remarkably tepid defense of consciousness.

This is the really big outstanding question about consciousness and AI, as I see it. The idea that our first-person experiences are illusory, or are real but play no causal role in our behavior, so deeply defies intuition that it seems to require an extreme degree of proof which hasn’t yet been met. But the causal closure of the physical world seems to demand an equally high burden of proof to overturn.

If you accept compatibilism, this isn’t a problem — and many philosophers do these days, including our own Ray Tallis. But for the sake of not letting this post get any longer, I’ll just say that I have yet to see a satisfying case for compatibilism that doesn’t amount to telling us that our actions are determined by physics, but that we shouldn’t worry: it’s what we wanted anyway.

I remain of the position that one or the other of free will and the causal closure of the physical world will have to give; but I’m agnostic as to which it will be. If we do end up creating the AI-managed utopia that frees us from our present toiling material condition, that liberation may come at the mildly ironic expense of discovering that we are actually enslaved.

Images: Mr. Data from Star Trek, Dave and HAL from 2001, WALL-E from eponymous, Watson from real life