Futurisms: Critiquing the project to reengineer humanity

Friday, December 6, 2013

Humanism After All

Zoltan Istvan is a self-described visionary and philosopher, and the author of a 2013 novel called The Transhumanist Wager that he claims is a “bestseller” because it briefly went to the top of a couple of Amazon’s sales subcategories. Yesterday, Istvan wrote a piece for the Huffington Post arguing that atheism necessarily entails transhumanism, whether atheists know it or not. Our friend Micah Mattix, writing on his excellent blog over at The American Conservative, brought Istvan’s piece to our attention.

While Mattix justly mocks Istvan’s atrociously mixed metaphors — I shudder to imagine how bad Istvan’s “bestselling novel” is — it’s worth pointing out that Istvan actually does accurately summarize some of the basic tenets of transhumanist thought:

It begins with discontent about the humdrum status quo of human life and our frail, terminal human bodies. It is followed by an awe-inspiring vision of what can be done to improve both — of how dramatically the world and our species can be transformed via science and technology. Transhumanists want more guarantees than just death, consumerism, and offspring. Much more. They want to be better, smarter, stronger — perhaps even perfect and immortal if science can make them that way. Most transhumanists believe it can.

[Image: “Why be almost human when you can be human?” (source: Fox)]
Istvan is certainly right that transhumanists are motivated by a sense of disappointment with human nature and the limitations it imposes on our aspirations. He’s also right that transhumanists are very optimistic about what science and technology can do to transform human nature. But what do these propositions have to do with atheism? Many atheists like to proclaim themselves “secular humanists” whose beliefs are guided by the rejection of the idea that human beings need anything beyond humanity (usually they mean revelation from the divine) to live decent, happy, and ethical lives. As for the idea that we cannot be happy without some belief in eternal life (either technological immortality on earth or life in a hereafter), it seems that today’s atheists might well follow the teachings of Epicurus, often considered an early atheist, who argued that reason and natural science support the idea that “death is nothing to us.”

Istvan also argues that transhumanism is the belief that science, technology, and reason can improve human existence — and that this is something all atheists implicitly affirm. This brings to mind two responses. First, religious people surely can and do believe that science, technology, and reason can improve human life. (In fact, we just published an entire symposium on this subject in The New Atlantis.) Second, secular humanists are first of all humanists who criticize (perhaps wrongly) the religious idea that human life on earth is fundamentally imperfect and that true human happiness can only be achieved through the transfiguration of human nature in a supernatural afterlife. So even if secular humanists (along with religious humanists and basically any reasonable people) accept the general principle that science, technology, and reason are among the tools we have to improve our lot, this does not mean that they accept what Istvan rightly identifies as one of the really fundamental principles of transhumanism, which is the sense of deep disappointment with human nature.

Human nature is not perfect, but the resentful attitude toward our nature that is so characteristic of transhumanists is no way to live a happy, fulfilled life. Religious and secular humanists of all creeds, whatever they believe about God and the afterlife, reason and revelation, or the ability of science and technology to improve human life, should all start with an attitude of gratitude for and acceptance of, not resentfulness and bitterness toward, the wondrousness and beauty of human nature.

(H/T to Chad Parkhill, whose excellent 2009 essay “Humanism After All? Daft Punk’s Existentialist Critique of Transhumanism” inspired the title of this post.)

Wednesday, December 4, 2013

Cloning and the Lessons of "Overparenting"

Tonight, HBO is premiering a new episode of its State of Play series on sports. This new installment is called "Trophy Kids" and its focus is the tendency among some parents — in this case, the parents of student-athletes — to live vicariously through their children. Here's a teaser-trailer:

Of course, the phenomenon of parental overinvolvement and inappropriate emotional investment isn't limited to sports and athletics. It can happen with just about any childhood activity or hobby — from schoolwork to scouting, from music to beauty pageants (Toddlers and Tiaras, anyone?). The anecdotal stories can be astonishing; it would be interesting to see what psychologists, therapists, and social scientists have had to say about this.

All of which brings to mind the debates over human cloning. Way back in 2010, we here at Futurisms tussled with a few other bloggers about the ethics of cloning. We were disturbed, among other things, by the way that cloning advocates blithely want to remake procreation, parenthood, and the relationship between the generations. As the phenomenon depicted in this HBO program suggests, many parents already have a strong desire to treat their children's childhoods as opportunities to relive, perfect, or redeem their own. Imagine how much more powerful that desire would be if the children in question were clones — willfully created genetic copies.

In its 2002 report Human Cloning and Human Dignity, the President's Council on Bioethics attempted to think about procreation and cloning in part by contrasting two ways of thinking about children — as "gifts" or as "products of our will":

Gifts and blessings we learn to accept as gratefully as we can. Products of our wills we try to shape in accord with our desires. Procreation as traditionally understood invites acceptance, rather than reshaping, engineering, or designing the next generation. It invites us to accept limits to our control over the next generation. It invites us even — to put the point most strongly — to think of the child as one who is not simply our own, our possession. Certainly, it invites us to remember that the child does not exist simply for the happiness or fulfillment of the parents.

To be sure, parents do and must try to form and mold their children in various ways as they inure them to the demands of family life, prepare them for adulthood, and initiate them into the human community. But, even then, it is only our sense that these children are not our possessions that makes such parental nurture — which always threatens not to nourish but to stifle the child — safe.

This concern can be expressed not only in language about the relation between the generations but also in the language of equality. The things we make are not just like ourselves; they are the products of our wills, and their point and purpose are ours to determine. But a begotten child comes into the world just as its parents once did, and is therefore their equal in dignity and humanity.

The character of sexual procreation shapes the lives of children as well as parents. By giving rise to genetically new individuals, sexual reproduction imbues all human beings with a sense of individual identity and of occupying a place in this world that has never belonged to another. Our novel genetic identity symbolizes and foreshadows the unique, never-to-be-repeated character of each human life. At the same time, our emergence from the union of two individuals, themselves conceived and generated as we were, locates us immediately in a network of relation and natural affection.

As that section of the report concludes, it is clear that the nature of human procreation affects human life "in endless subtle ways." The advocates of cloning show very little appreciation for the complexity of the relations they wish to transform.

(H/t to Reddit, where the HBO video elicited many interesting responses from students, parents, and coaches.)

Monday, December 2, 2013

A Future of Technology, or a Future for Science?

Just before Thanksgiving, acclaimed physicist, science popularizer, and futurist Michio Kaku had an article in the “Crystal Ball” section of the New York Times Opinion pages on his predictions — as a scientist — for the future. Kaku lists ten putatively great technological developments that we will achieve if only we “grasp the importance of science and science education.” But Kaku’s predictions of the future, which are just extrapolations from currently trendy technologies, sell science short in a way that is characteristic of much futurist speculation. From this list, you would get the impression that the “importance of science education” simply means that science will help us design better machines.

Now, I don’t really think that Kaku himself believes this; he has written some decent popular science books on theoretical physics, and he is known for his activism on such science-policy issues as climate change and nuclear power, and for promoting such public-science endeavors as SETI. (Even if you do not agree with the positions Kaku takes on these issues, they are instances of science as a source of knowledge, not merely as the basis of technology.) It is clear that Kaku knows that the importance of science extends beyond its engineering applications, but it is almost in the nature of futurist writing to let one’s sense of certainty about the arc of technological progress overcome the curiosity and openness to new and unexpected knowledge characteristic of science. This is certainly the case with transhumanist writing, which tends to assume that better and faster versions of today’s technologies (which represent exponentially accelerating trends, after all) will be what define the future.

[Image: Michio Kaku (photo: campuspartybrasil [CC])]
Kaku’s vague and loose criteria for making predictions follow from having too much certainty — he insists only that “the laws of physics must be obeyed” (always a good rule of thumb) and that there exists some “proof of principle” example of the futuristic technology he is making predictions about. But what kind of principle an existing technology proves can easily be overstated. To take one example, his prediction that we will have a “brain net” in which we will share memories and emotions the way we now use the Internet to share MP3s is based on some actual recent innovations in neuroprosthetics that enable paralyzed people to mentally control cursors on computer screens or robotic arms. These experiments show that there are mental states that can be channeled through electronics or computers, and so they refute the general principle that “mental states cannot have an effect on non-biological prosthetics.” But just because that very general principle fails does not mean that there are no practical or theoretical reasons why mental states like emotions or memories cannot be transferred to computers. To think otherwise would be to give technological demonstrations vastly more theoretical significance than they deserve — as though they already settle a vast range of difficult theoretical problems, and as though the job of neuroscientists in the future will just be working out how to build telepathic technologies for the “brain net,” rather than thinking about theoretical problems like how different mental states relate to different brain states. The answers to problems like these will be the principles on which technologies like Kaku’s “brain net” will either succeed or fail, and scientists have not yet solved them.

Kaku’s discussion of the future of medicine suffers from this same excessive focus on current trends in technology without paying enough attention to the limits of what these technologies might be expected to accomplish. He predicts that people will soon be able to obtain whole genome sequences for $100, and he is probably not wrong about that — biotechnologists have been very good at improving the efficiency of DNA-sequencing technology. But sequencing technology has already far outstripped the ability of biological science to understand the function of genes. Take the recent story of the FDA putting the kibosh on the personal genomics company 23andMe, which today offers limited personal genetic testing (not whole-genome sequencing) for $100. Because 23andMe makes a number of claims about the probabilities that its customers will suffer from a wide variety of diseases, the FDA wants the firm to conform to the standards of diagnostic reliability of other medical devices, and 23andMe has (not altogether surprisingly) not been able to provide that kind of evidence. The big lesson from this developing story is not that the FDA is unduly risk-averse and paternalistic (though it is those things, and that’s surely part of the story), but rather that we are far from being able to reliably interpret genetic information in a way that is both inexpensive and meaningful for patients and doctors. Those are scientific problems, not technological problems, and the fact that there are some examples that prove we can “in principle” know something about the effect of a gene on health outcomes does not show us that we will. Unless we make some amazing and unexpected breakthroughs in our understanding of genetics, which will not come from faster DNA sequencing, the growth of genetic medicine will not be as dramatic as many futurists would have it.

Our esteemed colleague Alan Jacobs pointed out on Twitter and over on Text Patterns that Kaku does not even mention anything about environmental problems like climate change that we seem sure to face in the future. Though Kaku as a scientist has been active in environmentalist politics, in this little scientific prediction of the future, which concludes with an exhortation to “grasp the importance of science,” he focuses on science only as a means for creating technology, and regrettably ignores the role science plays in instructing us in how technology can be prudently used.

[Image: Alexander Leydenfrost, Popular Mechanics, January 1952 (h/t Paleofuture)]
This is disappointing but not surprising. Environmental degradation is one of the inconvenient consequences of the unrestrained and unintelligent use of technology. Our awareness of environmental problems, of their scope, and of the sorts of technological developments or policy solutions that could plausibly mitigate or solve them comes not from technological progress as such, but from scientific knowledge as such. Ecology, geology, climate science, and the other disciplines relevant to environmentalism are, to use Francis Bacon’s language, light-bearing sciences more than fruit-bearing. Though they do not often lead to technological developments, they are nonetheless very useful — not because they give us power over nature, but because they teach us when and how to limit our exercise of the power we have over nature. To paraphrase another of Bacon’s well-known aphorisms, to live wisely we must learn not only how to command nature, but also how to obey her.

Not all predictions and recommendations by scientists about the future of science are as fixated on technological fads as this silly little article by Michio Kaku. Consider, for instance, this thoughtful 2004 essay by evolutionary biologist Carl Woese on why the next generation of biologists will need to overcome the reductionist paradigm of molecular genetics that dominated the twentieth century. Beyond this salutary recommendation about biological theory, Woese also admonished biologists to recognize that their science was not simply an “engineering discipline” and that it is dangerous to allow “science to slip into the role of changing the world without trying to understand it.”

The most fundamental aim of science is knowledge and understanding, which reveal things about the world quite apart from the power they give us to change it. And then, of course, as Bacon recognized, light-bearing science is the necessary precondition of fruit-bearing science. This principle was also recognized by the always-prescient Alexis de Tocqueville, who advised that democratic societies, where all things practical are naturally pursued with great vigor, will need to direct their efforts “to sustain the theoretical sciences and to create great scientific passions.” Just as it is crass and counterproductive to justify the humanities in terms of such career-focused deliverables as “critical thinking skills,” talking about science education as a kind of magic wand that will transform today’s fantasies into reality or lead us to the “jobs of the future” cheapens and misunderstands the nature of the scientific enterprise.

Monday, November 25, 2013

On Monstrosities in Science

In response to my previous post about dolphin babies and synthetic biology, Professor Rubin offered a thoughtful comment — here’s an excerpt:

A wonderful, thought-provoking post! I suppose that "taking these speculative and transgressive fantasies about science too seriously" would mean at least failing to look critically at whether they are even possible, given what we now know and are able to do. That is indeed an important task, although it is also a moving target — the fantasies of a few decades ago have been known to become realities. To that extent, taking them "too seriously" might also mean failing to distinguish between the monstrous and the useful. That is to say, one would take the fantasies too seriously if one accepted at face value the supposed non-monstrousness of the goal being advanced or (to put it another way) if one accepted the creation of monsters as something ethically desirable.

I’m grateful for Charlie’s comment — you should read the whole thing — not least because it gives me the delightful opportunity to pontificate a bit more on the moral implications of this sort of monstrosity.

There are indeed a number of technologies on the border of the monstrous and the useful. And just as many things that decades ago were considered technically fantastic have since become realities, so many practices once considered morally “fantastic” (i.e., monstrous) are now widely accepted, such as in vitro fertilization (IVF, the technique for producing so-called “test-tube babies”) and organ transplantation. (Though these technologies have become broadly accepted by society, neither is by any means wholly uncontroversial or devoid of moral implications — many still find IVF morally problematic, and proposals to legalize the sale of organs for transplantation are a matter of ongoing controversy.) Scientists sometimes make what was once monstrous seem acceptable, but largely by showing that the monstrous can be useful — meaning that a seemingly monstrous practice has some actual benefits, and that whatever risks it poses are relatively limited. This is the refrain often heard in debates over assisted reproductive technologies: though IVF was once considered monstrous, after some thirty-five years and millions of babies provided more or less safely to infertile couples, the practice is, advocates claim, now largely unobjectionable.

To take a biotechnological example that is in some respects analogous to Ai Hasegawa’s dolphin-baby project, consider the possibility of growing human organs in pigs or other animals. There is something monstrous about human-pig chimeras — creating them violates taboos relating to bodily integrity and the immiscibility of species — but there is something very useful about having a ready supply of kidneys or pancreases, and so human-pig chimeras are a logical extension of Baconian (forgive the pun) science’s effort to relieve man’s estate and all that. Whether human-pig chimeras or any other useful but monstrous innovations of Baconian science are ethically acceptable is just the sort of question that deserves serious attention.

Unlike IVF or human-pig chimeras, it seems very difficult to imagine a situation in which ordinary people could see the birthing and eating of dolphins as useful — that is, as conducive to securing the possession or enjoyment of anything a rational person might consider good, such as health. Though Hasegawa does offer a justification for the project with a few bromides about overpopulation and saving endangered species, it goes without saying that the gestation and consumption of dolphins by human beings could hardly ameliorate these perceived problems. In her description of the project, Hasegawa states that the gestation of dolphins could “satisfy our demands for nutrition and childbirth” and poses the question “Would raising this animal as a child change its value so drastically that we would be unable to consume it because it would be imbued with the love of motherhood?” As for nutrition, it is patently irrational to gestate your meal — the energy required for such a project far exceeds the nutritional value of the “product.”

More interesting is the idea that giving birth to a non-human animal could satisfy a woman’s demand for “childbirth” and that the act of gestating an animal could “change its value” and imbue it “with the love of motherhood.” Such statements indicate that this project does not really aim at helping people secure the enjoyment of things that they currently value, but at transforming values by questioning the relationship between motherhood’s natural purpose and context and its value.

Hasegawa’s project seems comparable to Jonathan Swift’s “Modest Proposal” for solving hunger and overpopulation by eating babies, which was a satire of amoral rationalistic utilitarianism. But one hardly gets an impression of an excess of rationality in Hasegawa’s proposal. The video portraying her giving birth to a dolphin might be seen as creepy or silly, but its creepiness and silliness come from an absurd misapplication of parental sentiment, not from the absurd absence of parental sentiment that animates Swift’s satire.
*   *   *
Hasegawa’s project is not the useful science of Bacon, but the “gay science” of Friedrich Nietzsche, who argued that science (including both the natural and social sciences) had a tendency to undermine moral values as it studied them. In his typical overwrought style, Nietzsche prophesied that after scientists of various kinds completed their studies of the history, psychology, and diversity of moral values, then

the most insidious question of all would emerge into the foreground: whether science can furnish goals of action after it has proved that it can take such goals away and annihilate them; and then experimentation would be in order that would allow every kind of heroism to find satisfaction—centuries of experimentation that might eclipse all the great projects and sacrifices of history to date. So far, science has not yet built its cyclopic buildings; but the time for that, too, will come.

Hasegawa would seem to be one of those heroic experimenters who seeks to build new values out of the rubble of exploded notions of the good life (in this case, motherhood). The destroyers of these values have been those legions of industrious scientists over the twentieth century — including social scientists, many of whom have been highly influenced by Nietzsche — who have sought to explain or explain away moral values in terms of power or greed or evolutionary drives.


Sensible people should reject both halves of Nietzsche’s prophecy about the future of science. We should reject the premise that science has an inherent tendency to destroy moral values, on both pragmatic and theoretical grounds. Pragmatically, it is unwise to give public credence to the idea that science undermines morality, since, whatever the real validity of that proposition, it could become self-fulfilling if accepted — believing that science refutes morality could lead to the abandonment of morality. Theoretically, accepting the idea that science can refute morality seems to lead directly to relativism or nihilism. For if science qua science (and not some overconfident deviation from science, like scientism, that lacks the epistemic rigor science must strive for) refuted morality, then there could be no true moral knowledge — for if there were true moral knowledge, science could not genuinely refute it.

If we reject that premise, then there would be no need for the simply monstrous projects aimed at inventing or transforming values — Nietzsche’s “most insidious question” never emerges. Bacon’s science and its fruits often call for us to balance the moral need to avoid the monstrous with the moral demand to pursue the useful, and we will all surely continue to face dilemmas of how to balance these moral demands. But we need not worry about those who claim that the progress of science alters the nature of morality itself.

Friday, November 22, 2013

Thanks to Computers, We Are “Getting Better at Playing Chess”

According to an interesting article in the Wall Street Journal, “Chess-playing computers, far from revealing the limits of human ability, have actually pushed it to new heights.”

Reporting on the story of Magnus Carlsen, the newly minted world chess champion, Christopher Chabris and David Goodman write that the best human chess players have been profoundly influenced by chess-playing computers:

Once laptops could routinely dispatch grandmasters ... it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

[Chess-playing programs] are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas.... [A] study published on ChessBase.com earlier this year showed that in the tournament Mr. Carlsen won to qualify for the world championship match, he played more like a computer than any of his opponents.

The net effect of the gain in computer skill is thus, ironically, a gain in human skill. Humans — at least the best ones — are getting better at playing chess.

The whole article is well worth a read (h/t Gary Rosen).

For various obvious reasons, the literature about AI and transhumanism has a lot to say about chess and computers. The Wall Street Journal article about the Carlsen victory reminds me of this remark that Ray Kurzweil makes in passing in one of the epilogues to his 1999 book The Age of Spiritual Machines:

After Kasparov’s 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not really “thinking” the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago....  [page 290]

Is Kurzweil right about how Kasparov thinks? What can we know about how Carlsen’s thinking has been changed by playing against computers? There are fundamental limits to what we can know about a person’s cognitive processes — even our own — notwithstanding all the talk about how the best players think in patterns or “decision trees” or whatnot. Diego Rasskin-Gutman spends a significant portion of his 2009 book Chess Metaphors: Artificial Intelligence and the Human Mind trying to understand how chess players think, but this is his ultimate conclusion:

If philosophy of the mind can ask what the existential experience of being a bat feels like, can we ask ourselves how a grandmaster thinks? Clearly we can [ask], but we must admit that we will never be able to enter the mind of Garry Kasparov, share the thoughts of Judit Polgar, or know what Max Euwe thought when he discussed his protocol with Adriaan de Groot. If we really want to know how a grandmaster thinks, it is not enough to read Alexander Kotov, Nikolai Krogius, or even de Groot himself.... If we really want to know how a grandmaster thinks, there is only one sure path: put in the long hours of study that it takes to become one. It is easier than trying to become a bat. [pages 166–167]

Then again, who knows — maybe we can try to become bats and play chess.


Thursday, November 21, 2013

Jumping the Dolphin

On November 19, the Woodrow Wilson Center in Washington, D.C. hosted a short event on the myths and realities surrounding the growing “DIYbio” movement — a community of amateur hobbyists who are using some of the tools of synthetic biology to perform a variety of experiments either in their homes or together with their peers in community lab spaces. The event drew attention to the results of a survey conducted by the Center’s Synthetic Biology Project that debunk seven exaggerations about what makes these biotech tinkerers tick and what they are really up to, particularly the overblown fears that those involved in DIYbio are on the verge of being able to create deadly epidemics in their garages, or even customized pathogens for use in political assassinations.

According to the survey, members of the DIYbio community are far from possessing the skills or resources necessary to create complex pathogens. And as Jonathan B. Tucker wrote in The New Atlantis in 2011, the complex scientific procedures necessary for creating bioweapons involve a good deal of tacit knowledge and skill acquired through years of training; such knowledge is rarely written down explicitly, but is embodied in the complex technical practices carried out in actual labs. The DIYbio movement does aim at “de-skilling” complex biotechnological methods, but apocalyptic fears and utopian hopes about the democratization of biotechnology should, for now, be taken with a grain of salt. Though more extensive regulation may be needed in the future, it would be unfortunate if this emerging community of amateur enthusiasts — who seem to embody that spirit of independent-minded, restless practicality that Tocqueville long ago saw was characteristic of the scientific method in American democracy — were stopped by bureaucratic red tape.

Admittedly, this rosy view of the DIYbio movement as a community of amateur hobbyists engaging in benign or useful scientific and technological tinkering might be a bit overly optimistic. And beyond the safety risks posed by the technology, there is the prospect of it being used as a tool to advance some of the ethically problematic goals of transhumanism — transgressing natural boundaries or even re-engineering human biology. As a novel, exciting, but not very well-defined field, synthetic biology seems like just the kind of technology that could make plausible the dreams of limitless control over the body that animate so much of transhumanist thinking.

Consider the recent story about the bizarre art project proposed by Ai Hasegawa, a designer who wants to use “synthetic biology” to “gestate and give birth to a baby from another species, in this case a dolphin, before eating it.” The ostensible purpose of this project, entitled “I Wanna Deliver a Dolphin,” was to approach “the problem of human reproduction in an age of overcrowding, overdevelopment and environmental crisis.” But the obvious grotesqueness of the proposed act makes these political buzzwords ring hollow. It is worth emphasizing that Hasegawa is not a scientist; her project is, to say the least, technically impractical; and her peculiar visions of what science can make possible owe more to the seemingly obligatory transgressiveness of much contemporary art than to anything in the nature of science itself. We should perhaps not worry too much over such nightmarish visions of the future, as they distract us from the serious ethical concerns surrounding biotechnological projects that have benevolent or even noble motives. (Warning: The video below, while supposedly artsy, might bother some viewers.)

[No dolphins were birthed in the making of this video.]

The more benign portrait of the DIYbio community as innovative tinkerers dedicated to experimentation and problem-solving better represents the motives of most scientists than do deliberately provocative art projects. As Eric Cohen rightly notes, in our democratic society we do not use biotechnology to “seek the monstrous; we seek the useful.” Scientists deserve this kind of charitable interpretation of their motives, even and especially when scientific fields become the subject of bizarre transgressive fantasies like plans to clone Neanderthals (the stories of which were greatly exaggerated) or to give birth to dolphins. Taking the relationship between such fantasies and the scientific enterprise too seriously creates an exaggerated appearance of opposition between science and common decency, which might leave the false impression that one must choose between respecting science and respecting ethical boundaries. As with much of transhumanist ideology, these speculative and transgressive fantasies could do more harm to the ethical integrity of science if taken seriously than if simply dismissed.

Thursday, September 26, 2013

Does the U.S. Really “Lag” on Military Robots?

In response to our post “U.S. Policy on Robots in Warfare,” Mark Gubrud has passed along to us a comment:

It was odd that on the Monday morning after the Friday afternoon when my Bulletin article appeared, John Markoff of the New York Times posted an article whose message many took as contradictory to mine. Where I had characterized U.S. policy as “full speed ahead,” Markoff reported that the military “lags” in development of unmanned ground vehicles, which, as you know, go by the great acronym of UGVs.

There isn't really any contradiction between the facts as reported by Markoff and the history and analysis I gave, as I explained on my personal blog, but anybody who read the two casually, or only looked at the headlines, could be forgiven for thinking that Markoff had rebutted me, perhaps upholding the myth that there is some kind of a moratorium in effect.

In that blog post he mentions, Gubrud expands on the strangeness of the NYT article, or at least its headline. The headline in both the print and the online edition of Markoff's article says that

the U.S. military “lags” in its pursuit of robotic ground vehicles. Lags... behind whom? China? North Korea? No, Markoff warns that the Pentagon is falling behind another aspiring superpower: Google.

Well worth reading the whole thing.

Saturday, September 21, 2013

U.S. Policy on Robots in Warfare

"Atlas," a humanoid robot built by Boston Dynamics and unveiled in 2013 as part of the "Robotics Challenge" sponsored by the U.S. military-research agency DARPA. [Source: DARPA on YouTube]
Our friend Mark Gubrud has a new article in the Bulletin of the Atomic Scientists examining the U.S. Department of Defense’s policy regarding “autonomous or semiautonomous weapon systems.” Gubrud, who wrote our most controversial Futurisms post a few years ago, brings together a wealth of links and resources that will be of interest to anyone who wants to start learning about the U.S. military’s real-life plans for killer robots.

Gubrud argues that a DOD directive put in place last year sends a signal to military vendors that the Pentagon is interested in and supports the development of autonomous weapons. He writes that, while the directive is vague in some important respects, it pushes us further down the road to autonomous killer robots. But, he says, it isn’t exactly clear why we should be on that road at all: the arguments in favor of autonomous weapons are weak, and both professional soldiers and the public at large object to them.

Gubrud is now a postdoctoral research associate at Princeton, as well as a member of something called the International Committee for Robot Arms Control, an organization that has Noel Sharkey, a prominent AI and robotics researcher and commentator, as its chairman.

Wednesday, May 1, 2013

Speculations on the Future of AI

Thanks for the shoutout and the kind words, Adam, about my review of Kurzweil’s latest book. I’ll take a stab at answering the question you posed:
I wonder how far Ari and [Edward] Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf.... Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?
Allow me to come at this question by looking instead at the big-picture view you explicitly asked me to avoid — and forgive me, readers, for approaching this rather informally. What follows is in some sense a brief update on my thinking on the questions I first explored in my long 2009 essay on AI.

The big question can be put this way: Can the mind be replicated, at least to a degree that will satisfy any reasonable person that we have mastered the principles that make it work and can control them? A comparison AI proponents often bring up is that we’ve recreated flying without replicating the bird — and in the process figured out how to do it much faster than birds. This point is useful for focusing AI discussions on the practical. But unlike many of those who make this comparison, I think most educated folk would recognize that the large majority of what makes the mind the mind has yet to be mastered and magnified in the way that flying has, even if many of its defining functions have been.

So, can all of the mind’s functions be recreated in a controllable way? I’ve long felt the answer must be yes, at least in theory. The reason is that, whatever the mind itself is — regardless of whether it is entirely physical — it seems certain at least to have entirely physical causes (even if those physical causes might in turn give rise to non-physical ones, like free will). Therefore, those original physical causes ought to be subject to physical understanding, manipulation, and recreation of a sort, just as with birds and flying.

The prospect of many mental tasks being automated on a computer should be unsurprising, and to an extent not even unsettling to a “folk psychological” view of free will and first-person awareness. I say this because one of the great powers of consciousness is to make habits of its own patterns of thought, to the point that they can be performed with little to no conscious awareness; not only tasks, skills, and knowledge, but even emotions, intuitive reasoning, and perception can be understood to some extent as products of habituated consciousness. So it shouldn’t be surprising that we can make explicit again some of those specific habits of mind, even ones like perception that seem prior to consciousness, in a way that’s amenable to proceduralization.

The question is how many of the things our mind does can be tackled in this way. In a sense, many of the feats of AI have continued the trend established by mechanization long before — of having machines take over human tasks, but in a machinelike way, without necessarily understanding or mastering the way humans do things. One could make a case, as Mark Halpern has in The New Atlantis, that the intelligence we seem to see in many of AI’s showiest successes — driverless cars, supercomputers winning chess and Jeopardy! — may be better understood as belonging to the human programmers than to the computers themselves. If that’s true, then artificial intelligence thus far would have to be considered more a matter of advances in (human) artifice than in (computer) intelligence.

It will be interesting to see how much further those methods can go without AI researchers having to return to attempting to understand human intelligence on its own terms. In that sense, perhaps the biggest, most elusive question for AI is whether it can create (whether by replicating consciousness or not) a generalized artificial intelligence — not the big accretion of specifically tailored programs we have now, but a program that, like our mind, is able to tackle just about any problem put before it, only far better than we can. (That’s setting aside the question of how we could control such a powerful entity to suit our preferred ends — which, despite what the Friendly AI folks say, sounds like a contradiction in terms.)

So, to Adam’s original question: “practically speaking ... how good will these machines get at mimicking consciousness, intelligence, humanness?” I just don’t know, and I don’t think anyone can intelligently claim to. I do know that almost all of the prominent AI predictions have turned out to be grossly optimistic in their time scale, but, as Kurzweil rightly points out, a large number of problems that once seemed impossible have been conquered. Who’s to say how much further that line will progress — how many functions of the mind will be recreated before some limit is reached, if one is reached at all? One can competently approach and criticize particular AI techniques; it’s much harder to engage responsibly in generalized speculation about what AI might or might not someday achieve.

So let me engage in some more of that speculation. My view is that the functions of the mind that require the most active intervention of consciousness to carry out — the ones that are the least amenable to habituation — will be among the last to fall to AI, if they do at all (although basic acts of perception remain famously difficult as well). The most obvious examples are highly creative acts and deeply engaged conversation. These have been imitated by AI, but poorly.

Many philosophers of mind have tried to put this the other way around by devising thought experiments about programs that completely imitate, say, natural language recognition, and then arguing that such a program could appear conscious without actually being so. Searle’s Chinese Room is the most famous among many such arguments. But Searle et al. seem to put an awful lot into that assumption: can we really imagine how it would be possible to replicate something like open-ended conversation (to pick a harder example) without also replicating consciousness? And if we could replicate much or all of the functionality of the mind without its first-person experience and free will, then wouldn’t that actually end up all but evacuating our view of consciousness? Whatever you make of the validity of Searle’s argument, contrary to the claims of Kurzweil and others of his critics, the Chinese Room is a remarkably tepid defense of consciousness.

This is the really big outstanding question about consciousness and AI, as I see it. The idea that our first-person experiences are illusory, or are real but play no causal role in our behavior, so deeply defies intuition that it seems to require an extreme burden of proof, one that has not yet been met. But the causal closure of the physical world seems to demand an equally high burden of proof to overturn.

If you accept compatibilism, this isn’t a problem — and many philosophers do these days, including our own Ray Tallis. But for the sake of not letting this post get any longer, I’ll just say that I have yet to see any satisfying case for compatibilism that doesn’t amount to saying that our actions are determined by physics but telling us, “Don’t worry, it’s what you wanted anyway.”

I remain of the position that one or the other of free will and the causal closure of the physical world will have to give; but I’m agnostic as to which it will be. If we do end up creating the AI-managed utopia that frees us from our present toiling material condition, that liberation may have to come at the mildly ironic expense of discovering that we are actually enslaved.

Images: Mr. Data from Star Trek, Dave and HAL from 2001, WALL-E from the eponymous film, Watson from real life

Wednesday, April 24, 2013

Reviewing Kurzweil’s Latest

Our own Ari Schulman recently reviewed Ray Kurzweil’s latest book How to Create a Mind for The American Conservative. Ari’s review challenges both Kurzweil’s ideas and his aspirations, which are, as is quite often the case in transhumanist fantasies, rather base — virtual sex and so on. Here Ari criticizes Kurzweil’s dismissal of human consciousness:

The fact that Kurzweil ignores or even denies the great mystery of consciousness may help explain why his theory has yet to create a mind. In truth, despite the revelatory suggestion of the book’s title, his theory is only a minor variation on ideas that date back decades, to when Kurzweil used them to build text-recognition systems. And while these techniques have produced many remarkable results in specialized artificial-intelligence tasks, they have yet to create generalized intelligence or creativity, much less sentience or first-person awareness.

Perhaps owing to this failure, Kurzweil spends much of the book suggesting that the features of consciousness he cannot explain — the qualities of the senses and the rest of our felt life and their role in deliberate thought and action — are mostly irrelevant to human cognition. Of course, Kurzweil is only the latest in a long line of theorists whose attempts to describe and replicate human cognition have sidelined the role of first-person awareness, subjective motivations, willful action, creativity, and other aspects of how we actually experience our lives and our decisions.

Read the whole thing here.

Another worthy take on Kurzweil’s book can be found in a review by Edward Feser, the fine philosophical duelist (and dualist) who recently caused a stir for his able defense of Thomas Nagel. Feser’s review of Kurzweil appears in the April 2013 issue of the magazine First Things, where it is, alas, behind a paywall for now. He focuses on Kurzweil’s ignorance of the distinction between “phantasms” (which are closely related to senses) and “concepts” (which are more abstract and universal) — a distinction found in Thomist and Aristotelian thinking about thinking. Here is just a very tiny snippet from Feser:

[Kurzweil’s] critics have pointed out that existing AI systems that implement ... pattern-recognition in fact succeed only within narrow boundaries. A deeper problem, though, is that nothing in these mechanisms goes beyond the formation of phantasms or images. And while a phantasm can have a certain degree of generality, as Kurzweil’s pattern-recognizers do, they lack the true universality and unambiguous content characteristic of concepts and definitive of genuine thought.

I wonder how Kurzweil’s admirers and defenders would respond to Feser’s critique. And I wonder how far Ari and Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf. Feser suggests that, instead of How to Create a Mind, Kurzweil’s book might more appropriately be titled “something like How to (Partially) Simulate a (Subhuman) Mind.” What does that mean, practically speaking? Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?

Tuesday, April 23, 2013

The Silent History

Readers with iDevices might be interested to know that the originally serialized novel/app The Silent History is available free today and tomorrow in complete form from the App Store. While some of its more avant-garde locational and social aspirations did not end up impressing me very much, the basic story itself, tracking over many decades a cohort of children mysteriously born without the ability to speak, is quite thought-provoking. The story is told through many different voices, with many different axes to grind, some of which will be particularly familiar to those with an interest in enhancement and human redesign. By turns satirical, amusing, shocking, and poignant, it has kept me engaged over the past months, and I look forward to a quick reread now that it is complete. From early on I felt it was more than worth whatever its original price was, but you can’t beat free.

Thursday, January 17, 2013

Fables of Posthumanity

If, as some transhumanists would have it, it is true that anyone with glasses, a hearing aid or a pacemaker should regard himself as a cyborg, then it is worth heeding this fable from Aesop, as translated in the Penguin edition by Olivia and Robert Temple:
Fable 139 - The Horse, the Ox, the Dog and the Man
When Zeus made man, he only gave him a short life-span. But man, making use of his intelligence, made a house and lived in it when winter came on. Then, one day, it became fiercely cold, it poured with rain and the horse could no longer endure it. So he galloped up to the man’s house and asked if he could take shelter with him. But the man said that he could only shelter there on one condition, and that was that the horse would give him a portion of the years of his life. The horse gave him some willingly.
A short time later, the ox also appeared. He too could not bear the bad weather any more. The man said the same thing to him, that he wouldn’t give him shelter unless the ox gave him a certain number of his own years. The ox gave him some and was allowed to go in.
Finally the dog, dying of cold, also appeared, and upon surrendering part of the time he had left to live, was given shelter.
Thus it resulted that for that portion of time originally allotted them by Zeus, men are pure and good; when they reach the years gained from the horse, they are glorious and proud; when they reach the years of the ox, they are willing to accept discipline; but when they reach the dog years, they become grumbling and irritable.

One could apply this fable to surly old men.

If it were not for the odd moral, here is a story that would surely do both Nick Bostrom and Natasha Vita-More proud, with its acknowledgement of the longstanding impulse to overcome the limitations of human givenness and the further spice of transgressive species-mixing.

But let us look at the fable again. In one sense it has a pretty literal truth: human beings have indeed lengthened our lifespans by the use of our intelligence, and surely the domestication of animals, by which they give us a portion of their years, is part of that long-term process. Aesop plainly understands the potential of the power we have over nature.

But Aesop adds to C.S. Lewis’s insight in The Abolition of Man that to speak of man’s conquest of nature is misleading; what really happens is an increase in the power some men hold over others. In the fable we see further that when we change, we echo the characteristics of what changes us. So if the transition to the transhuman requires — as it will — the combined forces of the medical-technological complex, then we should expect that transhumans will reflect their origins. Indeed, when people express an aspiration to have their minds uploaded into computers, and computer-based metaphors are routinely used to describe minds, noting the likelihood of such a transformation of human character would hardly seem to be cause for controversy. It is the very point.

But we can still ask what traits we might actually expect to pick up from the medical-technological complex. Looking at Aesop’s fables generally, it has to be said that dogs do not come off very well — although they are occasionally portrayed as loyal and intelligent. So why do humans get their grumbling and irritable years instead? Who knows?

In the same way, we know the sorts of creative, free-spirited characteristics transhumanists aspire to — but is that all they will get? In computer-like minds and mind-like computers, will there be no admixture of the bureaucracy, the humorlessness, the impersonality, and the routinization that commonly characterize the kinds of large businesses on which the burdens of actual human reconstruction will likely fall?

Case in point: when IBM discovered that the upshot of “teaching” Watson colloquial language was that it started to curse, our modern Prospero quickly deleted the lesson from its Caliban.