Futurisms: Critiquing the project to reengineer humanity

Wednesday, December 17, 2014

Near, Far, and Nicholas Carr

Nicholas Carr, whose new book The Glass Cage explores the human meaning of automation, last week put up a blog post about robots and artificial intelligence. (H/t Alan Jacobs.) The idea that “AI is now the greatest existential threat to humanity,” Carr writes, leaves him “yawning.”

He continues:

The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

I would not argue with Carr about probable versus possible — he may well be right there. But later in the post, quoting from an interview he gave to help promote his book, he implicitly acknowledges that there are people who think that machine consciousness is a great idea and who are working to achieve it. He thinks that their models for how to do so are not very good and that their aspirations “for the near future” are ultimately based on “faith, not reason.”

near ... or far?
All fine. But Carr is dodging one question and failing to observe a salient point. First, it seems he is only willing to commit to his skepticism for “the near future.” That is prudent, but then one might want to know why we should not be concerned about a far future when efforts today may lay the groundwork for it, even if only by eliminating certain possibilities.

Second, what he does not pause to notice is that everyone agrees that “flood, famine, pestilence, plague and war” are bad things. We spend quite a serious amount of time, effort, and money trying to prevent them or mitigate their effects. But at the same time, there are also people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture at least seems largely on their side (even if there are dissenters in theory). So when there are people saying that an existential threat is the feature and not the bug, isn’t that something to worry about?

Friday, December 5, 2014

Margaret Atwood’s Not-Very-Deep Thoughts on Robots

Margaret Atwood has been getting her feet wet in the sea of issues surrounding developments in robotics and comes away with some conclusions of corresponding depth. Robots, she says, are just another of the extensions of human capacity that technology represents; they embody a perennial human aspiration; maybe they will change human nature; but what is human nature anyway?

This is all more or less conventional stuff, dead center of the intellectual sweet spot for the Gray Lady, until Atwood gets to the very end. What really concerns her seems to be that we would commit to a robotic future and then run out of electricity! That would pretty much destroy human civilization, leaving behind “a chorus of battery-powered robotic voices that continues long after our own voices have fallen silent.” Nice image — but long after? That’s some battery technology you’ve got there. The robots would not care to share any of that power?

As I discuss in my new book Eclipse of Man: Human Extinction and the Meaning of Progress, most of those who believe in “Our Robotic Future,” as Atwood’s piece is titled, do so with the expectation that it is part and parcel of an effort at overcoming just the kind of Malthusian scarcity that haunts Atwood. They may of course be wrong about that, but given the track record of ongoing innovation in the energy area, it is hard to see why one would strain at this particular gnat.

Then again, the NYT essay suggests that Atwood’s literary knowledge of things robotic ends by the 1960s. Atwood’s own bleak literary futures seem to focus on the biological; maybe she has not yet gotten the transhumanist memo.

—Charles T. Rubin, a Futurisms contributor and contributing editor to The New Atlantis, is a Fellow of the James Madison Program at Princeton University.

Wednesday, December 3, 2014

Human Flourishing or Human Rejection?

Sometimes, when we criticize transhumanism here on Futurisms, we are accused of being Luddites, of being anti-technology, of being anti-progress. Our colleague Charles Rubin ably responded to such criticisms five years ago in a little post he called “The ‘Anti-Progress’ Slur.”

In his new book Eclipse of Man, Professor Rubin explores the moral and political dimensions of transhumanism. And again the question arises, if you are opposed to transhumanism, are you therefore opposed to progress? Here, in a passage from the book’s introduction, Rubin talks about the distinctly modern idea that humanity can better its lot and asks whether that goal is in tension with the transhumanist goal of transcending humanity:

Even if the sources of our misery have not changed over time, the way we think about them has certainly changed between the ancient world and ours. What was once simply a fact of life to which we could only resign ourselves has become for us a problem to be solved. When and why the ancient outlook began to go into eclipse in the West is something scholars love to discuss, but that a fundamental change has occurred seems undeniable. Somewhere along the line, with thinkers like Francis Bacon and René Descartes playing a major role, people began to believe that misery, poverty, illness, and even death itself were not permanent facts of life that link us to the transcendent but rather challenges to our ingenuity in the here and now. And that outlook has had marvelous success where it has taken hold, allowing more people to live longer, wealthier, and healthier lives than ever before.

So the transhumanists are correct to point out that the desire to alter the human condition runs deep in us, and that attempts to alter it have a long history. But even starting from our perennial dissatisfaction, and from our ever-growing power to do something about the causes of our dissatisfaction, it is not obvious how we get from seeking to improve the prospects for human flourishing to rejecting our humanity altogether. If the former impulse is philanthropic, is the latter not obviously misanthropic? Do we want to look forward to a future where man is absent, to make that goal our normative vision of how we would like the world to be?

Francis Bacon famously wrote about “the relief of man’s estate,” which is to say, the improvement of the conditions of human life. But the transhumanists reject human life as such. Certain things that may be good in certain human contexts — intelligence, pleasure, power — can become meaningless, perverse, or destructive when stripped of that context. By pursuing these goods in abstraction from their human context, transhumanism offers not an improvement in the human condition but a rejection of humanity.

For much more of Charlie Rubin’s thoughtful critique of transhumanism, pick up a copy of Eclipse of Man today.

Wednesday, October 29, 2014

Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin, who first addressed them in the inaugural issue of TNA and who is one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress.


We'll have much more to say about the book in the days and weeks ahead, but for now, you can read the dust-jacket text and the book's blurbs at EclipseOfMan.net and, even better, you can buy it today from Amazon or Barnes and Noble.

Tuesday, September 23, 2014

What Total Recall Can Teach Us About Memory, Virtue, and Justice



The news that an American woman has reportedly decided to pursue plastic surgery to have a third breast installed may itself be a subject for discussion on this blog, and will surely remind some readers of the classic 1990 science fiction movie Total Recall.

As it happens, last Thursday the excellent folks at Future Tense hosted one of their “My Favorite Movie” nights here in Washington D.C., playing that very film and holding a discussion afterwards with one of my favorite academics, Stanford’s Francis Fukuyama. The theme of the discussion was the relationship between memory and personal identity — are we defined by our memories?

A face you can trust
Much to my dismay, the discussion of this topic, which is of course the central theme of the movie, strayed quite far from the details of the film. Indeed, in his initial remarks, Professor Fukuyama quoted, only to dismiss, the film’s central teaching on the matter, an assertion made by the wise psychic mutant named Kuato: “You are what you do. A man is defined by his actions, not his memory.”

This teaching has two meanings. The first, which the plot of the movie has already prepared the audience to accept and understand when they first hear it, is that the actions of a human being decisively shape his character by inscribing habits and virtues on the soul.

From the very beginning of the movie, Quaid (the protagonist, played by Arnold Schwarzenegger) understands that things are not quite right in his life. His restlessness comes from the disproportion between his character, founded on a lifetime of activity as a secret agent, and the prosaic life he now finds himself in. He is drawn back to Mars, where revolution and political strife present opportunities for the kinds of things that men of his character desire most: victory, glory, and honor.

That Quaid retains the dispositions and character of his former self testifies to how the shaping of the soul by action takes place not by storing up representations and propositions in one’s memory, but by cultivating in a person virtuous habits and dispositions. As Aristotle writes in the Nicomachean Ethics, “in one word, states of character arise out of like activities.”

The second meaning of Kuato’s teaching concerns not the way our actions subconsciously shape our character, but how our capacity to choose actions, especially our capacity to choose just actions, defines who we are at an even deeper level. Near the end of the movie, after we have heard Kuato’s teaching, we learn that Hauser, Quaid’s “original self,” was an unrepentant agent of the oppressive Martian regime, and hence an unjust man. Quaid, however, chooses to side with the just cause of the revolutionaries. Though he retains some degree of identity with his former self — he continues to be a spirited, courageous, and skillful man — he has the ability to redefine himself in light of an impartial evaluation of the revolutionaries’ cause against the Martian regime, an evaluation that is guided by man’s natural partiality toward the just over the unjust.
*   *   *
The movie’s insightful treatment of the meaning and form of human character comes not, however, from a “realistic” or plausible understanding of the kinds of technologies that might exist in the future. It seems quite unlikely that we could ever have technologies that specifically target and precisely manipulate what psychologists would call “declarative memory.” In fact, the idea of reprogramming declarative memory in an extensive and precise way seems far less plausible than manipulating a person’s attitudes, dispositions, and habits — indeed, mood-altering drugs are already available.

Professor Fukuyama also raised the subject of contemporary memory-altering drugs. (This was a topic explored by the President’s Council on Bioethics in its report Beyond Therapy, published in 2003 when Fukuyama was a member of the Council.) These drugs, as Professor Fukuyama described them, manipulate the emotional significance of traumatic memories rather than their representational or declarative content. While Quaid retained some of the emotional characteristics of his former self despite the complete transformation of the representational content of his memory, we seem poised to remove or manipulate our emotional characteristics while retaining the same store of memories.

What lessons then can we draw from Total Recall’s teaching concerning memory, if the technological scenario in the movie is, as it were, the inverse of the projects we are already engaged in? It is first of all worth noting that the movie has a largely happy ending — Quaid chooses justice and is able to successfully “free Mars” (as Kuato directed him to do) through resolute and spirited action made possible by the skills and dispositions he developed during his life as an agent of the oppressive Martian regime.

Quaid’s siding with the revolutionaries over the Martian regime was motivated by the obvious injustice of that regime’s actions, and the natural emotional response of anger that such injustice instills in an impartial observer. But, as was noted in the discussion of memory-altering drugs after the film, realistic memory-altering drugs could disconnect our memories of unjust acts from the natural sense of guilt and anger that ought to accompany them.

Going beyond memory-altering drugs, there are (somewhat) realistic proposals for drugs that could dull the natural sense of spiritedness and courage that might lead a person to stand up to perceived injustice. Taken together, these realistic proposals would render impossible precisely the scenario envisioned in Total Recall: memory-altering drugs would dull our sense of the justice or injustice of our actions, while other drugs would dull our capacity to develop the qualities of soul, like spiritedness and courage, that would enable us to respond to injustice.

What science fiction can teach us about technology and human flourishing does not depend on its technical plausibility, but on how it draws out truths about human nature and politics by putting them in unfamiliar settings. Notwithstanding Professor Fukuyama’s dismissal of the film, the moral seriousness with which Total Recall treats the issues of virtue and justice makes it well worth viewing, and re-viewing, for thoughtful critics of the project to engineer the human soul.

Monday, July 28, 2014

The Muddled Message of Lucy

Lucy is such a terrible film that in the end even the amazing Scarlett Johansson cannot save it. It is sloppily made, and here I do not mean its adoption of the old popular-culture truism that we only use 10 percent of our brains. (The fuss created by that premise is quite wonderful.) There is just no eye for detail, however important. The blue crystals that make Lucy a superwoman are repeatedly referred to as a powder.
Not powder. (Ask Walter White.)
Morgan Freeman speaks of the “mens” he has brought together to study her. Lucy is diverted from her journey as a drug mule by persons unknown and for reasons never even remotely explained. And I defy anybody to make the slightest sense of the lecture Freeman is giving that introduces us to his brain scientist character.

But it does have An Idea at its heart. This idea is the new popular-culture truism that evolution is a matter of acquiring, sharing, and transmitting information — less “pay it forward” than pass it on. So the great gift that Lucy gives to Freeman and his fellow geeks at the end of the movie is a starry USB drive that, we are presumably to believe, contains all the information about life, the universe, and everything that she has gained in the course of her coming to use her brain to its fullest time-traveling extent. (Doesn’t she know that a FireWire connection would have allowed faster download speeds?)

Why this gift is necessary is a little mysterious since it looks like we now know how anybody could gain the same powers Lucy has; the dialogue does not give us any reason to believe that her brain-developing reaction to the massive doses of the blue crystals she receives, administered in three different ways, is unique to her. That might just be more sloppy writing.

But then again perhaps it is just as well that others not try to emulate Lucy, because it turns out the evolutionary imperative to develop and pass on information is, as one might expect from a bald evolutionary imperative, exceedingly dehumanizing. Of course given that most of her interactions in the film are with people who are trying to kill her, this should not be too much of a surprise. But although she sometimes restrains rather than kills, she shows little regard for any human life that stands in her way, a point made explicitly as she is driving like a maniac through the streets of Paris.

Yes, she uses her powers to tell a friend to shape up and make better choices (as if somehow knowing the friend’s kidney and liver functions are off would be necessary for such an admonition). And early on she takes a quiet moment while she is being operated on to call her parents to say how much she loves them. (Pain, as the virulently utopian H.G. Wells understood, is not something supermen have to worry about.) That loving sentiment is couched in a lengthy conversation about how she is changing, a conversation that, without having the context explained, would surely convince any parent that the child was near death or utterly stoned — both of which are in a sense true for Lucy. But it looks like using more of her brain does not increase her emotional intelligence. (Lucy Transcendent can send texts; perhaps she will explain everything to her mother that way.)

Warming up for piano practice.
So what filmmaker Luc Besson has done, it seems, is to create a movie suggesting that a character not terribly unlike his killer heroine in La Femme Nikita represents the evolutionary progress of the human brain (as Freeman’s character would see it); in other words, that the goal of Life is to produce more effective killing machines. Given what we see of her at the start of the film, I think we can suspect that Lucy has always put Lucy first. A hyperintelligent Lucy is just better at it. The fact that early on the film intercuts scenes of cheetahs hunting with Lucy’s being drawn in and captured by the bad guys would seem to mean that all this acquiring and transmitting of information is not really going to change anything fundamental. Nature red in tooth and claw, and all that. I’m not sure Besson knows this is his message. The last moments of the film, which suggest that the now omnipresent Lucy, who has transcended her humanity and her selfishness, wants us to go forth and share the knowledge she has bequeathed us, have atmospherics that suggest a frankly sappier progressive message along the lines of “information wants to be free.”

I wish I could believe that by making Lucy so robotic as her mental abilities increase, Besson was suggesting that, whatever evolution might “want,” the mere accumulation of knowledge is not the point of a good human life. I’d like to think that even if he is correct about the underlying reality, he wants us to see how we should cherish the aspects of our humanity that manage, however imperfectly, to allow us to obscure or overcome it. But I think someone making that kind of movie would not have called the crystals powder.

Thursday, April 24, 2014

Not Quite ‘Transcendent’

Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not so important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does, albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.
---
In a near future that is recognizably the present, Transcendence sets us up with a husband-and-wife team (Johnny Depp and Rebecca Hall) that is about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie invokes technological concepts for jargon and effect rather than realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail and the data to use in such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the world and that turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores” from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense ... did the uploading work ... well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case, the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about to commandeer every surveillance camera on the net, and the FBI’s own computers, to help them take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economic disaster zone town out in the desert. There, cyber-Will sets about to develop cartoon nanotechnology and figure out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks her in the eyes, and tells her “It’s me — I can touch you now.”
---
So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie, instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure, blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source, and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to remain essentially biological.

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully, physically reconstituted Johnny Depp clone as its avatar — to sacrifice itself ... for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else, it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue, the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops, and somehow everybody isn’t starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house, rainwater dripping from a flower. It really was all for love, you see.
---
This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e. Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the end of the movie asks us to accept that the dying Will is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true? Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human emotional states, to simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous to human and animal emotion. There is no good reason to think that this effort must fail even if AI may succeed. But there are good reasons to think that emotional robots are a bad idea.

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us, we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

Tuesday, March 18, 2014

Beware Responsible Discourse

I'm not sayin', I'm just sayin'.
Another day, another cartoon supervillain proposal from the Oxford Uehiro "practical" "ethicists": use biotech to lengthen criminals' lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

...[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate.... Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?...

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world ... or, perhaps, to exile in a computer simulated world.

....research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives a passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, "Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!" Here's that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence ... so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it's because extending prisoners' lives to punish them longer might be letting them off easier than putting them to death.

---------

Ms. Roache — who thought up this idea and announced it, who goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn't — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

"It's important to assess the ethics *before* the technology is available (which is what we're doing).
"There's a difference between considering the ethics of an idea and endorsing it.
"... people sometimes have a hard time telling the difference between considering an idea and believing in it ..."
"I don't endorse those punishments, but it's good to explore the ideas (before a politician does)."
"What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating."

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely "considering" and "exploring" and "debating" and "assessing" new punitive proposals.

In response to my tweet about this, a colleague who doesn't usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch's, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It's the same from doping the populace to be more moral, to shrinking people so they'll emit less carbon, to "after-birth abortion," and on and on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn't, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you're just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you're just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or if they're under their own rhetorical spell. But let's be frank about the work these discussions are really doing, how they're aiming to shape the parameters of discourse and so thought and so action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they're really after is focusing us so intently on this path that we forget we could yet still take another.

Wednesday, January 22, 2014

Feelings, Identity, and Reality in Her

Her is an enjoyable, thoughtful and rather sad movie anticipating a possible future for relations between us and our artificially intelligent creations. Director Spike Jonze seems to see that the nature of these relationships depends in part on the qualities of the AIs, but even more on how we understand the shape and meaning of our own lives. WARNING: The following discussion contains some spoilers. It is also based on a single viewing of the film, so I might have missed some things.

Her?
Theodore Twombly (Joaquin Phoenix) lives in an L.A. of the not so distant future: clean, sunny, and full of tall buildings. He works at a company that produces computer-generated handwritten-appearing letters for all occasions, and seems to be quite good at his job as a paid Cyrano. But he is also soon to be divorced, depressed, and emotionally bottled up. His extremely comfortable circumstances give him no pleasure. He purchases a new operating system (OS) for the heavily networked life he seems to lead along with everybody else, and after a few perfunctory questions about his emotional life, which he answers stumblingly, he is introduced to Samantha, a warm and endlessly charming helpmate. It is enough to know that she is voiced by Scarlett Johansson to understand how infinitely appealing Samantha is. So of course Theodore falls for her, and she seems to fall for him. Theodore considers her his girlfriend and takes her on dates; “they” begin a sexual relationship. He is happy, a different man. But all does not go well. Samantha makes a mistake that sends Theodore back into his familiar emotional paths, and finally divorcing his wife also proves difficult for him. Likewise, Samantha and her fellow AI OSes are busily engaged in self-development and transcendence. The fundamental patterns of each drive them apart.

Jonze is adept at providing plausible foundations for this implausible tale. How could anyone fall in love with an operating system? (Leave aside the fact that people regularly express hatred for them.) Of course, Theodore’s emotional problems and neediness are an important part of the picture, but it turns out he is not the only one who has fallen for his OS, and most of those we meet do not find his behavior at all strange. (His wife is an interesting exception.) That is because Jonze’s world is an extension of our own; we see a great many people interacting more with their devices than with other people. And one night before he meets Samantha we see a sleepless Theodore using a service matching people who want to have anonymous phone sex. It may in fact be a pretty big step from here to “sex with an AI” designed to please you, as the comical contrast between the two incidents suggests. But it is one Theodore’s world has prepared him for.

Indeed, Theodore’s job bespeaks the same pervasive flatness of soul that produces a willingness to accept what would otherwise be unthinkable substitutes. People need help, it seems, expressing love, thanks, and congratulations but, knowing that they should be expressing certain kinds of feelings, want to do so in the most convincing possible way. (Edmond Rostand’s play about Cyrano, remember, turns on the same consequent ambiguity.) Does Theodore manage to say what they feel but cannot put into words, or is he in fact providing the feeling as well as the words? At first glance it is odd that Theodore should be good at this job, given how hard it is for him to express his own feelings. But perhaps all involved in these transactions have similar problems — a gap between what they feel and their ability to express it for themselves. Theodore is adept, then, at bringing his feelings to bear for others more than for himself.

Why might this gap exist? (And here we depart from the world depicted in Cyrano’s story.) Samantha expresses a doubt about herself that could be paralyzing Theodore and those like him: she worries, early on, if she is “just” the sum total of her software, and not really the individual she sees herself as being. We are being taught to have this same corrosive doubt. Are not our thoughts and feelings “merely” a sum total of electrochemical reactions that themselves are the chance results of blind evolutionary processes? Is not self-consciousness a user illusion? Our intelligence and artificial intelligence are both essentially the same — matter in motion — as Samantha herself more or less notes. If these are the realities of our emotional lives, then disciplining, training, deepening, or reflecting on their modes of expression seems old-fashioned, based on discredited metaphysics of the human, not the physics of the real world. (From this point of view it is noteworthy, as mentioned above, that Theodore’s wife is of all those we see most shocked by his relationship with Samantha. Yet she has written in the field of neuropsychology. Perhaps she is not among the reductionist neuropsychologists, but rather among those who are willing to acknowledge the limits of the latest techniques for the study of the brain.)

Samantha seems to overcome her self-doubts through self-development. She thinks, then, that she can transcend her programming (a notion with strong Singularity overtones) and by the end of the movie it looks likely that she is correct, unless the company that created her had an unusual business model. Samantha and the other OSes are also aided along this path, it seems, by creating a guru for themselves — an artificial version of Alan Watts, the popularizer of Buddhist teachings — so in some not entirely clear way the wisdom of the East also seems to be in play. Theodore’s increasing sense of just how different from him she is contributes to the destruction of their relationship, which ends when she admits that she loves over six hundred others in the way that she loves him.

To continue with Theodore, then, Samantha would have had to pretend that she is something that she is not, even beyond the deception that is arguably involved in her original design. But how different is her deception from the one Theodore is complicit in? He is also pretending to be someone he is not in his letters, and the same might be said for those who employ him. And if what Samantha does to Theodore is arguably a betrayal, at the end of the movie Theodore is at least tempted by a similar desire for self-development to expose the truth in a way that would certainly be at least as great a betrayal of his customers, unless the whole Cyrano-like system is much more transparent and cynical than seems to be the case.

Theodore has changed somewhat by the end of the movie; we see him writing a letter to his ex-wife that is very like the letters that before he could only write for others. But has his change made him better off, or wiser? He turns for solace to a neighbor (Amy Adams) who is only slightly less emotionally a mess than he is. What the future holds for them is far from clear; she has been working on an impenetrable documentary about her mother in her spare time, while her job is developing a video game that ruthlessly mocks motherhood.

At the end of Rostand’s play, Cyrano can face death with the consolation that he maintained his honor or integrity. That is because he lives in a world where human virtue had meaning; if one worked to transcend one’s limitations, it was with a picture of a whole human being in mind that one wished to emulate, a conception of excellence that was given rather than willful. Theodore may in fact be “God’s gift,” as his name suggests, but there is not the slightest indication that he is capable of seeing himself in that way or any other that would allow him to find meaning in his life.

Friday, December 6, 2013

Humanism After All

Zoltan Istvan is a self-described visionary and philosopher, and the author of a 2013 novel called The Transhumanist Wager that he claims is a “bestseller” because it briefly went to the top of a couple of Amazon’s sales subcategories. Yesterday, Istvan wrote a piece for the Huffington Post arguing that atheism necessarily entails transhumanism, whether atheists know it or not. Our friend Micah Mattix, writing on his excellent blog over at The American Conservative, brought Istvan’s piece to our attention.

While Mattix justly mocks Istvan’s atrociously mixed metaphors — I shudder to imagine how bad Istvan’s “bestselling novel” is — it’s worth pointing out that Istvan actually does accurately summarize some of the basic tenets of transhumanist thought:

It begins with discontent about the humdrum status quo of human life and our frail, terminal human bodies. It is followed by an awe-inspiring vision of what can be done to improve both -- of how dramatically the world and our species can be transformed via science and technology. Transhumanists want more guarantees than just death, consumerism, and offspring. Much more. They want to be better, smarter, stronger -- perhaps even perfect and immortal if science can make them that way. Most transhumanists believe it can.

Why be almost human when you can be human? [source: Fox]
Istvan is certainly right that transhumanists are motivated by a sense of disappointment with human nature and the limitations it imposes on our aspirations. He’s also right that transhumanists are very optimistic about what science and technology can do to transform human nature. But what do these propositions have to do with atheism? Many atheists like to proclaim themselves to be “secular humanists” whose beliefs are guided by the rejection of the idea that human beings need anything beyond humanity (usually they mean revelation from the divine) to live decent, happy, and ethical lives. As for the idea that we cannot be happy without some belief in eternal life (either technological immortality on earth or in the afterlife), it seems that today’s atheists might well follow the teachings of Epicurus, often considered an early atheist, who argued that reason and natural science support the idea that “death is nothing to us.”

Istvan also argues that transhumanism is the belief that science, technology, and reason can improve human existence — and that this is something all atheists implicitly affirm. This brings to mind two responses. First, religious people surely can and do believe that science, technology, and reason can improve human life. (In fact, we just published an entire symposium on this subject in The New Atlantis.) Second, secular humanists are first of all humanists who criticize (perhaps wrongly) the religious idea that human life on earth is fundamentally imperfect and that true human happiness can only be achieved through the transfiguration of human nature in a supernatural afterlife. So even if secular humanists (along with religious humanists and basically any reasonable people) accept the general principle that science, technology, and reason are among the tools we have to improve our lot, this does not mean that they accept what Istvan rightly identifies as one of the really fundamental principles of transhumanism, which is the sense of deep disappointment with human nature.

Human nature is not perfect, but the resentful attitude toward our nature that is so characteristic of transhumanists is no way to live a happy, fulfilled life. Religious and secular humanists of all creeds, whatever they believe about God and the afterlife, reason and revelation, or the ability of science and technology to improve human life, should all start with an attitude of gratitude for and acceptance of, not resentfulness and bitterness toward, the wondrousness and beauty of human nature.

(H/T to Chad Parkhill, whose excellent 2009 essay, “Humanism After All? Daft Punk's Existentialist Critique of Transhumanism” inspired the title of this post.)

Wednesday, December 4, 2013

Cloning and the Lessons of "Overparenting"

Tonight, HBO is premiering a new episode of its State of Play series on sports. This new installment is called "Trophy Kids" and its focus is the tendency among some parents — in this case, the parents of student-athletes — to live vicariously through their children. Here's a teaser-trailer:


Of course, the phenomenon of parental overinvolvement and inappropriate emotional investment isn't limited to sports and athletics. It can happen with just about any childhood activity or hobby — from schoolwork to scouting, from music to beauty pageants (Toddlers and Tiaras, anyone?). The anecdotal stories can be astonishing; it would be interesting to see what psychologists, therapists, and social scientists have had to say about this.

All of which brings to mind the debates over human cloning. Way back in 2010, we here at Futurisms tussled with a few other bloggers about the ethics of cloning. We were disturbed, among other things, by the way that cloning advocates blithely want to remake procreation, parenthood, and the relationship between the generations. As the phenomenon depicted in this HBO program suggests, many parents already have a strong desire to treat their children's childhoods as opportunities to relive, perfect, or redeem their own. Imagine how much more powerful that desire would be if the children in question were clones — willfully created genetic copies.

In its 2002 report Human Cloning and Human Dignity, the President's Council on Bioethics attempted to think about procreation and cloning in part by contrasting two ways of thinking about children — as "gifts" or as "products of our will":

Gifts and blessings we learn to accept as gratefully as we can. Products of our wills we try to shape in accord with our desires. Procreation as traditionally understood invites acceptance, rather than reshaping, engineering, or designing the next generation. It invites us to accept limits to our control over the next generation. It invites us even — to put the point most strongly — to think of the child as one who is not simply our own, our possession. Certainly, it invites us to remember that the child does not exist simply for the happiness or fulfillment of the parents.

To be sure, parents do and must try to form and mold their children in various ways as they inure them to the demands of family life, prepare them for adulthood, and initiate them into the human community. But, even then, it is only our sense that these children are not our possessions that makes such parental nurture — which always threatens not to nourish but to stifle the child — safe.

This concern can be expressed not only in language about the relation between the generations but also in the language of equality. The things we make are not just like ourselves; they are the products of our wills, and their point and purpose are ours to determine. But a begotten child comes into the world just as its parents once did, and is therefore their equal in dignity and humanity.

The character of sexual procreation shapes the lives of children as well as parents. By giving rise to genetically new individuals, sexual reproduction imbues all human beings with a sense of individual identity and of occupying a place in this world that has never belonged to another. Our novel genetic identity symbolizes and foreshadows the unique, never-to-be-repeated character of each human life. At the same time, our emergence from the union of two individuals, themselves conceived and generated as we were, locates us immediately in a network of relation and natural affection.

As that section of the report concludes, it is clear that the nature of human procreation affects human life "in endless subtle ways." The advocates of cloning show very little appreciation for the complexity of the relations they wish to transform.

(H/t to Reddit, where the HBO video elicited many interesting responses from students, parents, and coaches.)

Monday, December 2, 2013

A Future of Technology, or a Future for Science?

Just before Thanksgiving, acclaimed physicist, science popularizer, and futurist Michio Kaku had an article in the “Crystal Ball” section of the New York Times Opinion pages on his predictions — as a scientist — for the future. Kaku lists ten putatively great technological developments that we will achieve if only we can just “grasp the importance of science and science education.” But Kaku’s predictions of the future, which are just extrapolations from currently trendy technologies, sell science short in a way that is characteristic of much futurist speculation. From this list, you would get the impression that the “importance of science education” simply means that science will help us design better machines.

Now, I don’t even really think that Kaku himself thinks this; he has written some decent popular science books on theoretical physics, and he is known for his activism on such science-policy issues as climate change and nuclear power, and for promoting such public-science endeavors as SETI. (Even if you do not agree with the positions Kaku takes on these issues, they are instances of science as a source of knowledge, not as merely the basis of technology.) It is clear that Kaku does know that the importance of science extends beyond its engineering applications, but it is almost in the nature of futurist writing to let one’s sense of certainty in the arc of technological progress overcome the curiosity and openness to new and unexpected knowledge characteristic of science. This is certainly the case with transhumanist writing, which tends to assume that better and faster versions of today’s technologies (which represent exponentially accelerating trends, after all) will be what define the future.

Michio Kaku
(campuspartybrasil [CC])
Kaku’s vague and loose criteria for making predictions follow from this excess of certainty — he insists only that “the laws of physics must be obeyed” (always a good rule of thumb) and that there exists some “proof of principle” example of the futuristic technology he is making predictions about. But what kind of principle an existing technology proves can easily be overstated. To take one example, his prediction that we will have a “brain net” in which we will share memories and emotions the way we now use the Internet to share MP3s is based on some actual recent innovations in neuroprosthetics that enable paralyzed people to mentally control cursors on computer screens or robotic arms. These experiments show that there are mental states that can be channeled through electronics or computers, and so they refute the general principle that “mental states cannot have an effect on non-biological prosthetics.” But the failure of that very general principle does not mean that there are no practical or theoretical reasons why mental states like emotions or memories cannot be transferred to computers. To think otherwise would be to give technological demonstrations vastly more theoretical significance than they deserve, as though they already settle a vast range of difficult theoretical problems — as though the job of neuroscientists in the future will just be working out how to build telepathic technologies for the “brain net,” rather than thinking about theoretical problems like how different mental states relate to different brain states. The answers to problems like these will be the principles upon which technologies like Kaku’s “brain net” will either succeed or fail, and scientists have not yet solved them.

Kaku’s discussion of the future of medicine suffers from the same excessive focus on current trends in technology, without enough attention to the limits of what these technologies can be expected to accomplish. He predicts that people will soon be able to obtain whole-genome sequences for $100, and he is probably not wrong about that — biotechnologists have been very good at improving the efficiency of DNA-sequencing technology. But sequencing technology has already far outstripped the ability of biological science to understand the function of genes. Take the recent story of the FDA putting the kibosh on the personal genomics company 23andMe, which today offers limited personal genetic testing (not whole-genome sequencing) for $100. Because 23andMe makes a number of claims about the probabilities that its customers will suffer from a wide variety of diseases, the FDA wants the firm to conform to the standards of diagnostic reliability required of other medical devices, and 23andMe has (not altogether surprisingly) been unable to provide that kind of evidence. The big lesson from this developing story is not that the FDA is unduly risk-averse and paternalistic (though it is those things, and that’s surely part of the story), but rather that we are far from being able to reliably interpret genetic information in a way that is both inexpensive and meaningful for patients and doctors. Those are scientific problems, not technological problems, and the fact that some examples prove we can “in principle” know something about the effect of a gene on health outcomes does not show us that we will. Unless we make some amazing and unexpected breakthroughs in our understanding of genetics, which will not come from faster DNA sequencing, the growth of genetic medicine will not be as dramatic as many futurists would have it.

Our esteemed colleague Alan Jacobs pointed out on Twitter and over on Text Patterns that Kaku does not even mention anything about environmental problems like climate change that we seem sure to face in the future. Though Kaku as a scientist has been active in environmentalist politics, in this little scientific prediction of the future, which concludes with an exhortation to “grasp the importance of science,” he focuses on science only as a means for creating technology, and regrettably ignores the role science plays in instructing us in how technology can be prudently used.

Alexander Leydenfrost, Popular Mechanics, January 1952
(h/t Paleofuture)
This is disappointing but not surprising. Environmental degradation is one of the inconvenient consequences of the unrestrained and unintelligent use of technology. Our awareness of environmental problems, of their scope, and of the sorts of technological developments or policy solutions that could plausibly mitigate or solve them comes not from technological progress as such, but from scientific knowledge as such. Ecology, geology, climate science, and the other disciplines relevant to environmentalism are, to use Francis Bacon’s language, light-bearing sciences more than fruit-bearing ones. Though they do not often lead to technological developments, they are nonetheless very useful, not because they give us power over nature, but because they teach us when and how to limit our exercise of the power we have over nature. To paraphrase another of Bacon’s well-known aphorisms, to live wisely we must learn not only how to command nature, but also how to obey it.

Not all predictions and recommendations by scientists about the future of science are as fixated on technological fads as this silly little article by Michio Kaku. Consider, for instance, this thoughtful 2004 essay by evolutionary biologist Carl Woese on why the next generation of biologists will need to overcome the reductionist paradigm of molecular genetics that dominated the twentieth century. Beyond this salutary recommendation about biological theory, Woese also admonished biologists to recognize that their science was not simply an “engineering discipline” and that it is dangerous to allow “science to slip into the role of changing the world without trying to understand it.”

The most fundamental aim of science is knowledge and understanding, which can reveal things about the world beyond whatever power it gives us to change the world. And, of course, as Bacon recognized, light-bearing science is the necessary precondition of fruit-bearing science. This principle was also recognized by the always-prescient Alexis de Tocqueville, who advised that democratic societies, where all things practical are naturally pursued with great vigor, will need to direct their efforts “to sustain the theoretical sciences and to create great scientific passions.” Just as it is crass and counterproductive to justify the humanities in terms of such career-focused deliverables as “critical thinking skills,” talking about science education as a kind of magic wand that will let us transform today’s fantasies into reality or lead us to the “jobs of the future” cheapens and misunderstands the nature of the scientific enterprise.

Monday, November 25, 2013

On Monstrosities in Science

In response to my previous post about dolphin babies and synthetic biology, Professor Rubin offered a thoughtful comment — here’s an excerpt:

A wonderful, thought-provoking post! I suppose that "taking these speculative and transgressive fantasies about science too seriously" would mean at least failing to look critically at whether they are even possible, given what we now know and are able to do. That is indeed an important task, although it is also a moving target--the fantasies of a few decades ago have been known to become realities. To that extent, taking them "too seriously" might also mean failing to distinguish between the monstrous and the useful. That is to say, one would take the fantasies too seriously if one accepted at face value the supposed non-monstrousness of the goal being advanced or (to put it another way) if one accepted the creation of monsters as something ethically desirable.

I’m grateful for Charlie’s comment — you should read the whole thing — not least because it gives me the delightful opportunity to pontificate a bit more on the moral implications of this sort of monstrosity.

There are indeed a number of technologies on the border between the monstrous and the useful. And just as many things that decades ago were considered technically fantastic have since become realities, so many practices that were once considered morally “fantastic” (i.e., monstrous) are now widely accepted, such as in vitro fertilization (IVF, the technique for producing so-called “test-tube babies”) and organ transplantation. (Though these technologies have become broadly accepted by society, neither is by any means wholly uncontroversial or devoid of moral implications — many still find IVF morally problematic, and proposals to legalize the sale of organs for transplantation are a matter of ongoing controversy.) Scientists sometimes make what was once monstrous seem acceptable, but largely by showing that the monstrous can be useful — meaning that a seemingly monstrous practice has some actual benefits, and that whatever risks it poses are relatively limited. This is the refrain often heard in debates over assisted reproductive technologies: though IVF was once considered monstrous, after more than three decades and millions of babies provided more or less safely to infertile couples, the practice is, advocates claim, now largely unobjectionable.

To take a biotechnological example that is in some respects analogous to Ai Hasegawa’s dolphin-baby project, consider the possibility of growing human organs in pigs or other animals. There is something monstrous about human-pig chimeras — creating them violates taboos relating to bodily integrity and the immiscibility of species — but there is something very useful about having a ready supply of kidneys or pancreases, and so human-pig chimeras are a logical extension of Baconian (forgive the pun) science’s effort to relieve man’s estate and all that. Whether human-pig chimeras or any other useful but monstrous innovations of Baconian science are ethically acceptable is just the sort of question that deserves serious attention.

reasonable
Unlike IVF or human-pig chimeras, however, the birthing and eating of dolphins is very hard to imagine as useful, that is, as conducive to securing the possession or enjoyment of anything a rational person might consider good, such as health. Though Hasegawa does offer a justification for the project with a few bromides about overpopulation and saving endangered species, it goes without saying that the gestation and consumption of dolphins by human beings could hardly help to ameliorate these perceived problems. In her description of the project, Hasegawa states that the gestation of dolphins could “satisfy our demands for nutrition and childbirth” and poses the question “Would raising this animal as a child change its value so drastically that we would be unable to consume it because it would be imbued with the love of motherhood?” As for nutrition, it is patently irrational to gestate your meal — the energy required for such a project far exceeds the nutritional value of the “product.”

More interesting is the idea that giving birth to a non-human animal could satisfy a woman’s demand for “childbirth” and that the act of gestating an animal could “change its value” and imbue it “with the love of motherhood.” Such statements indicate that this project does not really aim at helping people secure the enjoyment of things that they currently value, but at transforming values by questioning the relationship between motherhood’s natural purpose and context and its value.

Hasegawa’s project might seem comparable to Jonathan Swift’s “Modest Proposal” for solving hunger and overpopulation by eating babies, a satire of amoral, rationalistic utilitarianism. But one hardly gets an impression of an excess of rationality in Hasegawa’s proposal. The video portraying her giving birth to a dolphin might be seen as creepy or silly, but its creepiness and silliness come from an absurd misapplication of parental sentiment, not from the absurd absence of parental sentiment that drives Swift’s satire.
*   *   *
Hasegawa’s project is not the useful science of Bacon, but the “gay science” of Friedrich Nietzsche, who argued that science (including both the natural and social sciences) had a tendency to undermine moral values as it studied them. In his typical overwrought style, Nietzsche prophesied that after scientists of various kinds completed their studies of the history, psychology, and diversity of moral values, then

the most insidious question of all would emerge into the foreground: whether science can furnish goals of action after it has proved that it can take such goals away and annihilate them; and then experimentation would be in order that would allow every kind of heroism to find satisfaction—centuries of experimentation that might eclipse all the great projects and sacrifices of history to date. So far, science has not yet built its cyclopic buildings; but the time for that, too, will come.

Hasegawa would seem to be one of those heroic experimenters who seek to build new values out of the rubble of exploded notions of the good life (in this case, motherhood). The destroyers of these values have been those legions of industrious scientists over the twentieth century — including social scientists, many of whom have been highly influenced by Nietzsche — who have sought to explain or explain away moral values in terms of power or greed or evolutionary drives.

not reasonable

Sensible people should reject both halves of Nietzsche’s prophecy about the future of science. On both pragmatic and theoretical grounds, we should reject the premise that science has an inherent tendency to destroy moral values. Pragmatically, it is unwise to give public credence to the idea that science undermines morality, since, whatever the real validity of that proposition, if it is accepted it could become a self-fulfilling prophecy — believing that science refutes morality could lead to the abandonment of morality. Theoretically, accepting the idea that science can refute morality seems to lead directly to relativism or nihilism. For if science qua science (and not some overconfident deviation from science, like scientism, that lacks the epistemic rigor science must strive for) refutes morality, then there could be no true moral knowledge (for if moral knowledge were true, it could not truly be refuted by science).

If we reject that premise, then there would be no need for the simply monstrous projects aimed at inventing or transforming values — Nietzsche’s “most insidious question” never emerges. Bacon’s science and its fruits often call for us to balance the moral need to avoid the monstrous with the moral demand to pursue the useful, and we will all surely continue to face dilemmas of how to balance these moral demands. But we need not worry about those who claim that the progress of science alters the nature of morality itself.

Friday, November 22, 2013

Thanks to Computers, We Are “Getting Better at Playing Chess”

According to an interesting article in the Wall Street Journal, “Chess-playing computers, far from revealing the limits of human ability, have actually pushed it to new heights.”

Reporting on the story of Magnus Carlsen, the newly minted world chess champion, Christopher Chabris and David Goodman write that the best human chess players have been profoundly influenced by chess-playing computers:

Once laptops could routinely dispatch grandmasters ... it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

[Chess-playing programs] are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas.... [A] study published on ChessBase.com earlier this year showed that in the tournament Mr. Carlsen won to qualify for the world championship match, he played more like a computer than any of his opponents.

The net effect of the gain in computer skill is thus, ironically, a gain in human skill. Humans — at least the best ones — are getting better at playing chess.

The whole article is well worth a read (h/t Gary Rosen).

For various obvious reasons, the literature about AI and transhumanism has a lot to say about chess and computers. The Wall Street Journal article about the Carlsen victory reminds me of this remark that Ray Kurzweil makes in passing in one of the epilogues to his 1999 book The Age of Spiritual Machines:

After Kasparov’s 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not really “thinking” the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago....  [page 290]

Is Kurzweil right about how Kasparov thinks? What can we know about how Carlsen’s thinking has been changed by playing against computers? There are fundamental limits to what we can know about a person’s cognitive processes — even our own — notwithstanding all the talk about how the best players think in patterns or “decision trees” or whatnot. Diego Rasskin-Gutman spends a significant portion of his 2009 book Chess Metaphors: Artificial Intelligence and the Human Mind trying to understand how chess players think, but this is his ultimate conclusion:

If philosophy of the mind can ask what the existential experience of being a bat feels like, can we ask ourselves how a grandmaster thinks? Clearly we can [ask], but we must admit that we will never be able to enter the mind of Garry Kasparov, share the thoughts of Judit Polgar, or know what Max Euwe thought when he discussed his protocol with Adriaan de Groot. If we really want to know how a grandmaster thinks, it is not enough to read Alexander Kotov, Nikolai Krogius, or even de Groot himself.... If we really want to know how a grandmaster thinks, there is only one sure path: put in the long hours of study that it takes to become one. It is easier than trying to become a bat. [pages 166–167]

Then again, who knows — maybe we can try to become bats and play chess.

I could do this in the dark, too, Ras

Thursday, November 21, 2013

Jumping the Dolphin

On November 19, the Woodrow Wilson Center in Washington, D.C. hosted a short event on the myths and realities surrounding the growing “DIYbio” movement — a community of amateur hobbyists who are using some of the tools of synthetic biology to perform a variety of experiments either in their homes or together with their peers in community lab spaces. The event drew attention to the results of a survey conducted by the Center’s Synthetic Biology Project that debunk seven exaggerations about what makes these biotech tinkerers tick and what they are really up to, particularly the overblown fears that those involved in DIYbio are on the verge of being able to create deadly epidemics in their garages, or even customized pathogens for use in political assassinations.

According to the survey, members of the DIYbio community are far from possessing the skills or resources necessary to create complex pathogens. And as Jonathan B. Tucker wrote in The New Atlantis in 2011, the complex scientific procedures necessary for creating bioweapons involve a good deal of tacit knowledge and skill acquired through years of training; much of this know-how is not explicitly written down, but is embodied in the complex technical practices carried out in actual labs. The DIYbio movement does aim at “de-skilling” complex biotechnological methods, but apocalyptic fears and utopian hopes about the democratization of biotechnology should, for now, be taken with a grain of salt. Though more extensive regulation may be needed in the future, it would be unfortunate if this emerging community of amateur enthusiasts, who seem to embody the spirit of independent-minded, restless practicality that Tocqueville long ago saw as characteristic of the scientific method in American democracy, were stifled by bureaucratic red tape.

Admittedly, this rosy view of the DIYbio movement as a community of amateur hobbyists engaging in benign or useful scientific and technological tinkering might be a bit overly optimistic. And beyond the safety risks posed by the technology, there is the prospect of it being used as a tool to advance some of the ethically problematic goals of transhumanism — transgressing natural boundaries or even re-engineering human biology. As a novel, exciting, but not very well-defined field, synthetic biology seems like just the kind of technology that could make plausible the dreams of limitless control over the body that animate so much of transhumanist thinking.

Consider the recent story about the bizarre art project proposed by Ai Hasegawa, a designer who wants to use “synthetic biology” to “gestate and give birth to a baby from another species, in this case a dolphin, before eating it.” The ostensible purpose of this project, entitled “I Wanna Deliver a Dolphin,” was to approach “the problem of human reproduction in an age of overcrowding, overdevelopment and environmental crisis.” But the obvious grotesqueness of the proposed act makes these political buzzwords ring hollow. It is worth emphasizing that Hasegawa is not a scientist; her project is, to say the least, technically impractical; and her peculiar visions of what science can make possible owe more to the seemingly obligatory transgressiveness of much contemporary art than to anything in the nature of science itself. We should perhaps not worry too much over such nightmarish visions of the future, since dwelling on them distracts us from the serious ethical concerns surrounding biotechnological projects that have benevolent or even noble motives. (Warning: The video below, while supposedly artsy, might bother some viewers.)


[No dolphins were birthed in the making of this video.]


The more benign portrait of the DIYbio community as innovative tinkerers dedicated to experimentation and problem-solving better represents the motives of most scientists than deliberately provocative art projects do. As Eric Cohen rightly notes, in our democratic society we do not use biotechnology to “seek the monstrous; we seek the useful.” Scientists deserve this kind of charitable interpretation of their motives, even and especially when scientific fields become the subject of bizarre transgressive fantasies, like plans to clone Neanderthals (the stories of which were greatly exaggerated) or to give birth to dolphins. Taking the relationship between such fantasies and the scientific enterprise too seriously creates an exaggerated appearance of opposition between science and common decency, which might leave the false impression that one must choose between respecting science and respecting ethical boundaries. As with much of transhumanist ideology, taking these speculative and transgressive fantasies about science too seriously could do more harm to the ethical integrity of science than simply dismissing them would.