Futurisms: Critiquing the project to reengineer humanity

Thursday, June 2, 2016

Modesty, Humility, and Book Reviewing

I am not ungrateful to Issues in Science and Technology for presenting, in its spring 2016 issue, a review (available here) of my book Eclipse of Man: Human Extinction and the Meaning of Progress. I wish it were not such a negative review. But as negative reviews go, this one is easy on the ego, even if unsatisfying to the intellect, because so little of it speaks to the book I wrote.

The reviewer gets some things right. He correctly points out, for some reason or other, that I teach at a Catholic university, and also notes that the book does not conform to the narrow dogma of diversity that says that in intellectual endeavors one must always include discussion of people other than dead or living white males. All true.

On the other hand, the reviewer also claims that “a good third of the book is devoted to lovingly detailed but digressive plot summaries.” He also speaks of my “synopses” of Engines of Creation and The Diamond Age. This is a very telling error. Actually, about 4 percent of the book (9 of 215 pages, by a generous count) is devoted to plot summaries of the fictional works that play a large role in my argument. How do we get from 4 percent to 33 percent? The reviewer apparently cannot discern the difference between a plot summary and an analysis of a work of literature or film. These analyses are indeed “lovingly detailed” because they involve a close reading of the texts, and a careful effort to understand and respond to the issues raised by the authors of the works in question. The same goes for my reading of Drexler: it is an analysis, not the summary or general survey of his book that the word “synopsis” implies.

Now, it may be my failure as an author that I could not interest the reviewer in my arguments as they emerged from such analyses, and of course those arguments may be wrong or in need of revision in a host of ways that a serious review might highlight. But my reviewer avoids mentioning that the book has any arguments at all. For example, a key theme of the book, announced early on (page 15), is that if we want to understand transhumanism, we need to see how it emerged out of the ongoing intellectual crisis that Enlightenment views of material progress confronted in the challenges of Malthusianism and Darwinism. This point is right on the surface, is consistently alluded to, and is one of the main threads holding the book together. Yet you would know nothing of it from the Issues in Science and Technology review.

There is one point raised by the reviewer which is substantive and worth thinking about. He accuses me of recommending modesty when I should have recommended humility. Oddly, he does so in a mocking way (“Are we to establish a federal modesty commission to enforce a humble red line...?”) when of course his own suggestion could just as easily be made to look unserious (Are we to establish a federal humility commission?).

But here at least there seems to be a real issue between us. By speaking of modesty I highlighted that moral choices are both central to our visions of the future and inescapable. The reviewer bows in this direction, but his notion of humility is actually an effort at avoiding moral questions in favor of supposed lessons drawn from a particular take on the history and philosophy of science. By “humility,” the reviewer means that we need to acknowledge that we never know as much as we think we know when we project the utopian/dystopian possibilities for the future in the manner of transhumanism:

Every major technical advance or scientific insight leads to the opening up of a vast world of undreamed-of complexity that mocks the understanding we thought we’d achieved and dwarfs the power we hoped we’d acquired.

This is a beautiful, poetic sentiment. But it is quite irrelevant to the crucial question of how to deploy the new knowledge and powers that we are plainly achieving. Self-directed genetic evolution, for example, may indeed be far more difficult to achieve than was once thought, but that does not at all mean that we are not on a path to gaining the knowledge and ability to undertake it. Even if it were true that we always overstate our powers, that does not mean we are not becoming more powerful, and in such a way as to encourage us to think that more power is coming. And it certainly does not mean that, as a moral question, there are not many who, eschewing both modesty and humility, are anxious to travel that road.

Thursday, May 26, 2016

Automation, Robotics, and the Economy

The Joint Economic Committee — a congressional committee with members from both the Senate and the House of Representatives — invited me to testify in a hearing yesterday on “the transformative impact of robots and automation.” The other witnesses were Andrew McAfee, an M.I.T. professor and coauthor of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (his written testimony is here), and Harry Holzer, a Georgetown economist who has written about the relationship between automation and the minimum wage (his written testimony is here).

My written testimony appears below, slightly edited to include a couple of things that arose during the hearing. Part of the written testimony is based on an essay I wrote a few years ago with Ari Schulman called “The Problem with ‘Friendly’ Artificial Intelligence.” Video of the entire hearing can be found here.
*   *   *

Testimony Presented to the Joint Economic Committee:

The Transformative Impact of Robots and Automation

Adam Keiper
Fellow, Ethics and Public Policy Center
Editor, The New Atlantis
May 25, 2016

Mr. Chairman, Ranking Member Maloney, and members of the committee, thank you for the opportunity to participate in this important hearing on robotics and automation. These aspects of technology have already had widespread economic consequences, and in the years ahead they are likely to profoundly reshape our economic and social lives.

Today’s hearing is not the first time Congress has discussed these subjects. In fact, in October 1955, a subcommittee of this very committee held a hearing on automation and technological change.[1] That hearing went on for two weeks, with witnesses mostly drawn from industry and labor. It is remarkable how much of the public discussion about automation today echoes the ideas debated in that hearing. Despite vast changes in technology, in the economy, and in society over the past six decades, many of the worries, hopes, and proposed solutions in our present-day literature on automation, robotics, and employment would sound familiar to the members and witnesses present at that 1955 hearing.

It would be difficult to point to any specific policy outcomes from that old hearing, but it is nonetheless an admirable example of responsible legislators grappling with immensely complicated questions. A free people must strive to govern its technologies and not passively be governed by them. So it is an honor to be a part of that tradition with today’s hearing.

In my remarks, I wish to make five big, broad points, some of them obvious, some more counterintuitive.


(1) WHY IT IS SO HARD TO KNOW THE FUTURE
A good place to start discussions of this sort is with a few words of gratitude and humility. Gratitude, that is, for the many wonders that automation, robotics, and artificial intelligence have already made possible. They have made existing goods and services cheaper, and helped us to create new kinds of goods and services, contributing to our prosperity and our material wellbeing.

And humility because our ability to peer into the future is so poor. When reviewing the mountains of books and magazine articles that have sought to predict what the future holds for automation and related fields, when reading the hyped tech headlines, or when looking at the many charts and tables extrapolating from the past to help us forecast the future, it is striking to see how often our predictions go wrong.

Very little energy has been invested in systematically understanding why futurism fails — that is, why, beyond the simple fact that the future hasn’t happened yet, we have generally not been very good at predicting what it will look like. For the sake of today’s discussion, I want to raise just a few points, each of which can be helpful in clarifying our thinking when it comes to automation and robotics.

First there is the problem of timeframes. Very often, economic analyses and tech predictions about automation discuss kinds of jobs that are likely to be automated without any real discussion of when. This leads to strange conversations, as when one person is interested in what the advent of driverless vehicles might mean for the trucking industry, and his interlocutor is more interested in, say, the possible rise of artificial superintelligences that could wipe out all life on Earth. The timeframes under discussion at any given moment ought to be explicitly stated.

Second there is the problem of context. Debates about the future of one kind of technology rarely take into account other technologies that might be developed, and how those other technologies might affect the one under discussion. When one area of technology advances, others do not just stand still. How might automation and robotics be affected by developments in energy use and storage, or advanced nanotechnology (sometimes also called molecular manufacturing), or virtual reality and augmented reality, or brain-machine interfaces, or various biotechnologies, or a dozen other fields?

And of course it’s not only other technologies that evolve. In order to be invented, built, used, and sustained, all technologies are enmeshed in a web of cultural practices and mores, and legal and political norms. These things do not stand still either — and yet when discussing the future of a given technology, rarely is attention paid to the way these things touch upon one another.

All of which is to say that, as you listen to our conversation here today, or as you read books and articles about the future of automation and robotics, try to keep in mind what I call the “chain of uncertainties”:

Just because something is conceivable or imaginable
does not mean it is possible.
Even if it is possible, that does not mean it will happen.
Even if it happens, that does not mean it will happen in the way you envisioned.
And even if it happens in something like the way you envisioned,
there will be unintended, unexpected consequences.


(2) WHY THIS TIME IS DIFFERENT
Automation is not new. For thousands of years we have made tools to help us accomplish difficult or dangerous or dirty or tedious or tiresome tasks, and in some sense today’s new tools are just extensions of what came before. And worries about automation are not new either — they date back at least to the early days of the Industrial Revolution, when the Luddites revolted in England over the mechanization and centralization of textile production. As I mentioned above, this committee was already discussing automation some six decades ago — thinking about thinking machines and about new mechanical modes of manufacturing.

What makes today any different?

There are two reasons today’s concerns about automation are fundamentally different from what came before. First, the kinds of “thinking” that our machines are capable of doing are changing, so that it is becoming possible to hand off to our machines ever more of our cognitive work. As computers advance and as breakthroughs in artificial intelligence (AI) chip away at the list of uniquely human capacities, it becomes possible to do old things in new ways and to do new things we have never before imagined.

Second, we are also instantiating intelligence in new ways, creating new kinds of machines that can navigate and move about in and manipulate the physical world. Although we have for almost a century imagined how robotics might transform our world, the recent blizzard of technical breakthroughs in movement, sensing, control, and (to a lesser extent) power is bringing us for the first time into a world of autonomous, mobile entities that are neither human nor animal.

To simplify a vast technical and economic literature, there are basically three futurist scenarios for what the next several decades hold in automation, robotics, and artificial intelligence:

Scenario 1 – Automation and artificial intelligence will continue to advance, but at a pace sufficiently slow that society and the economy can gradually absorb the changes, so that people can take advantage of the new possibilities without suffering the most disruptive effects. The job market will change, but in something like the way it has evolved over the last half-century: some kinds of jobs will disappear, but new kinds of jobs will be created, and by and large people will be able to adapt to the shifting demands on them while enjoying the great benefits that automation makes possible.

Scenario 2 – Automation, robotics, and artificial intelligence will advance very rapidly. Jobs will disappear at a pace that will make it difficult for the workforce to adapt without widespread pain. The kinds of jobs that will be threatened will increasingly be jobs that had been relatively immune to automation — the “high-skilled” jobs that generally involved creativity and problem-solving, and the “low-skilled” jobs that involved manual dexterity or some degree of adaptability and interpersonal relations. The pressures on low-skilled American workers will exacerbate the pressures already felt because of competition against foreign workers paid lower wages. Among the disappearing jobs may be those at the lower-wage end of the spectrum that we have counted on for decades to instill basic workplace skills and values in our young people, and that have served as a kind of employment safety net for older people transitioning in their lives. And the balance between labor and capital may (at least for a time) shift sharply in favor of capital, as the share of gross domestic product (GDP) that flows to the owners of physical capital (e.g., the owners of artificial intelligences and robots) rises and the share of GDP that goes to workers falls. If this scenario unfolds quickly, it could involve severe economic disruption, perhaps social unrest, and maybe calls for political reform. The disconnect between productivity and employment and income in this scenario also highlights the growing inadequacy of GDP as our chief economic statistic: it can still be a useful indicator in international competition, but as an indicator of economic wellbeing, or as a proxy for the material satisfaction or happiness of the American citizen, it is clearly not succeeding.

Scenario 3 – Advances in automation, robotics, and artificial intelligence will produce something utterly new. Even within this scenario, the range of possibilities is vast. Perhaps we will see the creation of “emulations,” minds that have been “uploaded” into computers. Perhaps we will see the rise of powerful artificial “superintelligences,” unpredictable and dangerous. Perhaps we will reach a “Singularity” moment after which everything that matters most will be different from what came before. These types of possibilities are increasingly matters of discussion for technologists, but their very radicalness makes it difficult to say much about what they might mean at a human scale — except insofar as they might involve the extinction of humanity as we know it. [NOTE: During the hearing, Representative Don Beyer asked me whether he and other policymakers should be worried about consciousness emerging from AI; he mentioned Elon Musk and Stephen Hawking as two individuals who have suggested we should worry about this. “Think Terminator,” he said. I told him that these possibilities “at the moment ... don’t rise to the level of anything that anyone on this committee ought to be concerned about.”]

One can make a plausible case for each of these three scenarios. But rather than discussing their likelihood or examining some of the assumptions and aspirations inherent in each scenario, in the limited time remaining, I am going to turn to three other broad subjects: some of the legal questions raised by advances in artificial intelligence and automation; some of the policy ideas that have been proposed to mitigate some of the anticipated effects of these changes; and a deeper understanding of the meaning of work in human life.


(3) LOOMING LEGAL QUESTIONS
The advancement of artificial intelligence and autonomous robots will raise questions of law and governance that scholars are just beginning to grapple with. These questions are likely to have growing economic and perhaps political consequences in the years to come, no matter which of the three scenarios above you consider likeliest.

The questions we might be expected to face will emerge in matters of liability and malpractice and torts, property and contractual law, international law, and perhaps laws related to legal personhood. Although there are precedents — sometimes in unusual corners of the law — for some of the questions we will face, others will arise from the very novelty of the artificial autonomous actors in our midst.

By way of example, here are a few questions, starting with one that has already made its way into the mainstream press:

  • When a self-driving vehicle crashes into property or harms a person, who is liable? Who will pay damages?
     
  • When a patient is harmed or dies during a surgical operation conducted by an autonomous robotic device upon the recommendation of a human physician, who is liable and who pays?
     
  • If a robot is autonomous but is not considered a person, who owns the creative works it produces?
     
  • In a combat setting, who is to be held responsible, and in what way, if an autonomous robot deployed by the U.S. military kills civilian noncombatants in violation of the laws of war?
     
  • Is there any threshold of demonstrable achievement — any performed ability or set of capacities — that a robot or artificial intelligence could cross in order to be entitled to legal personhood?

These kinds of questions raise matters of justice, of course, but they have economic implications as well — not only in terms of the money involved in litigating cases, but in terms of the effects that the legal regime in place will have on the further development and implementation of artificial intelligence and robotics. It will be up to lawyers and judges, and lawmakers at the federal, state, and local levels, to work through these and many other such matters.


(4) PROPOSED SOLUTIONS AND THEIR PROBLEMS
There are, broadly speaking, two kinds of ideas that have most often been set forth in recent years to address the employment problems that may be created by an increasingly automated and AI-dominated economy.

The first category involves adapting workers to the new economy. The workers of today, and even more the workers of tomorrow, will need to be able to pick up and move to where the jobs are. They should engage in “lifelong learning” and “upskilling” wherever they can, to make themselves as attractive as possible to future employers. Flexibility must be their byword.

Of course, education and flexibility are good things; they can make us resilient in the face of the “creative destruction” of a churning free economy. Yet we must remember that “workers” are not just workers; they are not just individuals free and detached and able to go wherever and do whatever the market demands. They are also members of families — children and parents and siblings and so on — and members of communities, with the web of connections and ties those memberships imply. And maximizing flexibility can be detrimental to those kinds of relationships, relationships that are necessary for human flourishing.

The other category of proposal involves a universal basic income — or what is sometimes called a “negative income tax” — guaranteed to every individual, even if he or she does not work. This can sound, in our contemporary political context, like a proposal for redistributing wealth, and it is true that there are progressive theorists and anti-capitalist activists who support it. But this idea has also been discussed favorably for various reasons by prominent conservative and libertarian thinkers. It is an intriguing idea, and one without many real-life models that we can study (although Finland is currently contemplating an interesting partial experiment).
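
Since the phrase “negative income tax” names a mechanism rather than a lump-sum handout, a small worked example may help: every household is guaranteed an income floor, and the benefit phases out gradually as earnings rise, so that an extra dollar of work always leaves a household better off. The sketch below is merely illustrative; the dollar figures and phase-out rate are arbitrary assumptions, not numbers drawn from any actual proposal:

```python
def nit_benefit(earned, guarantee=12_000.0, phaseout_rate=0.5):
    """Benefit paid under a simple negative income tax.

    guarantee: income floor for a household with zero earnings (assumed figure).
    phaseout_rate: share of each earned dollar that reduces the benefit (assumed).
    """
    return max(0.0, guarantee - phaseout_rate * earned)

# The benefit shrinks as earnings grow, but total income always rises with
# work; that gradual phase-out is how the design tries to blunt the
# work-disincentive problem discussed below.
for earned in (0, 8_000, 16_000, 24_000, 32_000):
    total = earned + nit_benefit(earned)
    print(f"earned ${earned:>6,} -> benefit ${nit_benefit(earned):>6,.0f}, total ${total:>6,.0f}")
```

On this schedule the benefit reaches zero at $24,000 of earnings (the guarantee divided by the phase-out rate); where to set that break-even point, and which behaviors to reward along the way, is where the serious design work lies.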

A guaranteed income certainly would represent a sea change in our nation’s economic system and a fundamental transformation in the relationship between citizens and the state, but perhaps this transformation would be suited to the technological challenge we may face in the years ahead. Some of the smartest and most thoughtful analysts have discussed how to avoid the most obvious problems a guaranteed income might create — such as the problem of disincentivizing work. Especially provocative is the depiction of guaranteed income that appears in a 2008 book written by Joseph V. Kennedy, a former senior economist with the Joint Economic Committee; in his version of the policy, the guaranteed income would be structured in such a way as to encourage a number of good behaviors. Anyone interested in seriously considering guaranteed income should read Kennedy’s book.[2]


(5) THE MEANING OF HUMAN WORK
Should we really be worrying so much about the effects of robots on employment? Maybe with the proper policies in place we can get through a painful transition and reach a future date when we no longer need to work. After all, shouldn’t we agree with Arthur C. Clarke that “The goal of the future is full unemployment”?[3] Why work?

This notion, it seems to me, raises deep questions about who and what we are as human beings, and the ways in which we find purpose in our lives. A full discussion of this subject would require drinking deeply of the best literary and historical investigations of work in human life — examining how work is not only toil for which we are compensated, but how it also can be a source of dignity, structure, meaning, friendship, and fulfillment.

For present purposes, however, I want to just point to two competing visions of the future as we think about work. Because, although science fiction offers us many visions of the future in which man is destroyed by robots, or merges with them to become cyborgs, it offers basically just two visions of the future in which man coexists with highly intelligent machines. Each of these visions has an implicit anthropology — an understanding of what it means to be a human being. In each vision, we can see a kind of liberation of human nature, an account of what mankind would be in the absence of privation. And in each vision, some latent human urges and longings emerge to dominate over others, pointing to two opposing inclinations we see in ourselves.

The first vision is that of the techno-optimist or -utopian: Thanks to the labor and intelligence of our machines, all our material wants are met and we are able to lead lives of religious fulfillment, practice our hobbies, and pursue our intellectual and creative interests.

Recall John Adams’s famous 1780 letter to Abigail: “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”[4] This is somewhat like the dream imagined in countless stories and films, in which our robots make possible a Golden Age that allows us to transcend crass material concerns and all become gardeners, artists, dreamers, thinkers, lovers.

By contrast, the other vision is the one depicted in the 2008 film WALL-E, and more darkly in many earlier stories — a future in which humanity becomes a race of Homer Simpsons, a leisure society of consumption and entertainment turned to endomorphic excess. The culminating achievement of human ingenuity, robotic beings that are smarter, stronger, and better than ourselves, transforms us into beings dumber, weaker, and worse than ourselves. TV-watching, video-game-playing blobs, we lose even the energy and attention required for proper hedonism: human relations wither and natural procreation declines or ceases. Freed from the struggle for basic needs, we lose a genuine impulse to strive; bereft of any civic, political, intellectual, romantic, or spiritual ambition, when we do have the energy to get up, we are disengaged from our fellow man, inclined toward selfishness, impatience, and lack of sympathy. Those few who realize our plight suffer from crushing ennui. Life becomes nasty, brutish, and long.

Personally, I don’t think either vision is quite right. I think each vision — the one in which we become more godlike, the other in which we become more like beasts — is a kind of deformation. There is good reason to challenge some of the technical claims and some of the aspirations of the AI cheerleaders, and there is good reason to believe that we are in important respects stuck with human nature, that we are simultaneously beings of base want and transcendent aspiration; finite but able to conceive of the infinite; destined, paradoxically, to be free.


CONCLUSION
Mr. Chairman, the rise of automation, robotics, and artificial intelligence raises many questions that extend far beyond the matters of economics and employment that we’ve discussed today — including practical, social, moral, and perhaps even existential questions. In the years ahead, legislators and regulators will be called upon to address these technological changes, to respond to some things that have already begun to take shape and to foreclose other possibilities. Knowing when and how to act will, as always, require prudence.

In the years ahead, as we contemplate both the blessings and the burdens of these new technologies, my hope is that we will strive, whenever possible, to exercise human responsibility, to protect human dignity, and to use our creations in the service of truly human flourishing.

Thank you.



____________
NOTES:

[1] “Automation and Technological Change,” hearings before the Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, Congress of the United States, Eighty-fourth Congress, first session, October 14, 15, 17, 18, 24, 25, 26, 27, and 28, 1955 (Washington, D.C.: G.P.O., 1955), http://www.jec.senate.gov/public/index.cfm/1956/12/report-970887a6-35a4-47e3-9bb0-c3cdf82ec429.

[2]  Joseph V. Kennedy, Ending Poverty: Changing Behavior, Guaranteeing Income, and Reforming Government (Lanham, Md.: Rowman and Littlefield, 2008).

[3]  Arthur C. Clarke, quoted by Jerome Agel, “Cocktail Party” (column), The Realist 86, Nov.–Dec. 1969, page 32, http://ep.tc/realist/86/32.html. This article is a teaser for a book Agel edited called The Making of Kubrick’s 2001 (New York: New American Library/Signet, 1970), where the same quotation from Clarke appears on page 311. Italics added. The full quote reads as follows: “The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”

[4]  John Adams to Abigail Adams (letter), May 12, 1780, Founders Online, National Archives (http://founders.archives.gov/documents/Adams/04-03-02-0258). Source: The Adams Papers, Adams Family Correspondence, vol. 3, April 1778 – September 1780, eds. L. H. Butterfield and Marc Friedlaender (Cambridge, Mass.: Harvard, 1973), pages 341–343.

Tuesday, May 17, 2016

Transhumanists are searching for a dystopian future

As part of a Washington Post series this week about transhumanism, our own Charles T. Rubin offers some thoughts on why transhumanists are so optimistic when the pop-culture depictions of transhumanism nearly always seem to be dark and gloomy:

What accounts for this gap between how transhumanists see themselves — as rational proponents of a cause, who seek little more than to speed humanity along a path it already follows — and how they are seen in popular culture — as dangerous conspirators against human welfare? Movies and TV need drama and conflict, and it is possible that transhumanists just make trendy villains. And yet the transhumanists and the show writers are alike operating in the realm of imagination, of possible futures. In this case, I believe the TV writers have the richer and more nuanced imaginations that more closely resemble reality.

You can read the entire article here.

Thursday, May 5, 2016

CRISPR and the Human Species



Over at TechCrunch, Jamie Metzl writes that we need to have a “species-wide conversation” about the use of gene-editing technologies like CRISPR, because these technologies could be used to alter the course of human evolution:

Nearly everybody wants to have cancers cured and terrible diseases eliminated. Most of us want to live longer, healthier and more robust lives. Genetic technologies will make that possible. But the very tools we will use to achieve these goals will also open the door to the selection for and ultimately manipulation of non-disease-related genetic traits — and with them a new set of evolutionary possibilities.

Transhumanists want to take control of human evolution because of their sense of radical dissatisfaction with our evolved nature; they believe, hubristically, that they have or can attain the wisdom and power to design mankind according to their own whims. Such schemes for redesigning the human species led eugenicists and totalitarians in the twentieth century to trample on the rights and interests of human beings in the service of their vision for the human species, and the terrible legacy of these movements should serve as a warning against attempting to take control over human evolution.

Does germline gene therapy necessarily represent such a hubristic, transhumanist attempt to alter the species? Or can the insistence that we avoid all forms of germline therapy also subordinate the rights and medical interests of human beings today to a vision of the human species and its future?

As I argue in an essay in the latest issue of The New Atlantis, the conversation that is needed should focus on ways to ensure that gene therapy is used to treat actual patients suffering from actual diseases — including, perhaps, unborn human beings who are at a demonstrable risk of genetic disease.

The task ahead of us is to distinguish between legitimate forms of therapy and illicit forms of genetic control over our descendants. These kinds of distinctions will be difficult to draw in theory, and even more difficult to enforce in practice, but doing so is neither impossible nor avoidable.

Wednesday, March 30, 2016

The logical end of mechanical progress

When, in 1937, George Orwell wanted to convey the dark side of “mechanical progress” to his readers, he wrote, “the logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle.” Of course, he said, it is not as if that is really our intention, “just as a man who drinks a bottle of whiskey a day does not actually intend to get cirrhosis of the liver.” But that, he argued, is where the socialists of his time seemed to want to take things. Their emphasis on doing away with work, effort, and risk would lead to “some frightful subhuman depth of softness and helplessness.”

Now, very nearly eighty years later, we still don’t want to get cirrhosis. But in a review this week of the finally released Oculus Rift virtual-reality gaming system, Adi Robertson writes, “I love the feeling of getting real exercise in a virtual sword-fighting game, or of walking around a real room to see the artwork I’ve created. Sitting down with the Rift, meanwhile, feels as close to being a brain in a jar as humanly possible.” And just in case you might have missed this wonderful endorsement in what is a pretty long review, the “brain in a jar” quote is repeated as a pullout in a large font with purplish text.

So over time the brain in a bottle can become our intention; it can transform from nightmare scenario into selling point. By “progress” do we mean “slippery slope”?

Monday, February 1, 2016

Toward a Typology of Transhumanism



Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is very removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists in terms of the kinds of intellectual positions to which they adhere rather than in terms of how they relate to different positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so, I propose a continuum of transhumanist thought, to help observers understand the intellectual differences between some of its proponents — based on three different levels of support for human enhancement technologies.

First, the mildest type of transhumanist: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists can be defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label of “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not very philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Center for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of costs and benefits for society to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement in the name of freedom, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include the abandonment of the act-omission distinction — the idea that there is a moral difference between doing harm and merely allowing it through inaction. John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than by a kind of quixotic obsession with technology itself. Today, this obsession with technology manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early-twentieth-century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the mildest stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Center for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

Tuesday, November 24, 2015

Future Selves

In the latest issue of the Claremont Review of Books, political philosopher Mark Blitz — a professor at Claremont McKenna College — has an insightful review of Eclipse of Man, the new book from our own Charles T. Rubin. Blitz writes:
What concerns Charles Rubin in Eclipse of Man is well conveyed by his title. Human beings stand on the threshold of a world in which our lives and practices may be radically altered, and our dominance no longer assured. What began a half-millennium ago as a project to reduce our burdens threatens to conclude in a realm in which we no longer prevail. The original human subject who was convinced to receive technology’s benefits becomes unrecognizable once he accepts the benefits, as if birds were persuaded to become airplanes. What would remain of the original birds? Indeed, we may be eclipsed altogether by species we have generated but which are so unlike us that “we” do not exist at all—or persist only as inferior relics, stuffed for museums. What starts as Enlightenment ends in permanent night....

Rubin’s major concern is with the contemporary transhumanists (the term he chooses to cover a variety of what from his standpoint are similar positions) who both predict and encourage the overcoming of man.
Blitz praises Rubin for his “fair, judicious, and critical summaries” of the transhumanist authors he discusses, and says the author “approaches his topic with admirable thoughtfulness and restraint.”

Some of the subjects Professor Blitz raises in his review essay are worth considering and perhaps debating at greater length, but I would just like to point out one of them. Blitz mentions several kinds of eternal things — things that we are stuck with no matter what the future brings:

One question involves the goods or perfections that our successors might seek or enjoy. Here, I might suggest that these goods cannot change as such, although our appreciation of them may. The allure of promises for the future is connected to the perfections of truth, beauty, and virtue that we currently desire. How could one today argue reasonably against the greater intelligence, expanded artistic talent, or improved health that might help us or those we love realize these goods? Who would now give up freedom, self-direction, and self-reflection?...

There are still other limits that no promise of transhuman change can overcome. These are not only, or primarily, mathematical regularities or apparent scientific laws; they involve inevitable scarcities or contradictions. Whatever happens “virtually,” there are only so many actual houses on actual beautiful beaches. Honesty differs from lying, the loyal and true differ from the fickle and untrustworthy, fame and power cannot belong both to one or a few and to everyone. These limits will set some of the direction for the distribution of goods and our attachment to them, either to restrain competition or to encourage it. They will thus also help to organize political life. Regulating differences of opinion, within appropriate freedom, and judging among the things we are able to choose will remain necessary.

Nonetheless, even if it is true that what we (or any rational being) may properly consider to be good is ultimately invariable, and even if the other limits I mentioned truly exist, our experience of such matters presumably will change as many good things become more available, and as we alter our experience of what is our own — birth, death, locality, and the body.

Let us look carefully at the items listed in this very rich passage. Blitz does not refer to security and health and long life, the goods that modernity arguably emphasizes above all others. Instead, Blitz begins by mentioning the goods of “the perfections of truth, beauty, and virtue.” These are things that “we currently desire” but that also “cannot change as such, although our appreciation of them may.”

Let us set aside for now beauty — which is very complicated, and which may be the item in Blitz’s Platonic triad that would perhaps be likeliest to be transformed by a radical shift in human nature — and focus on truth and virtue. How can they be permanent, unchanging things?

To understand how truth and virtue can be eternal goods, see how Blitz turns to physical realities — the kinds of scarcities of material resources that Malthus and Darwin would have noticed, although those guys tended to think more in terms of scarcities of food than of beach houses. Blitz also mentions traits that seem ineluctably to arise from the existence of those physical limitations. The clash of interests will inevitably lead to scenarios in which there will be “differences of opinion” and in which some actors may be more or less honest, more or less trustworthy. There will arise situations in which honesty can be judged differently from lying, loyalty from untrustworthiness. “Any rational being,” including presumably any distant descendant of humanity, will prize truth and virtue. They are arguably pre-political and pre-philosophical — they are facts of humanity and society that arise from the facts of nature — but they “help to organize political life.”

And yet this entire edifice is wiped away in the last paragraph quoted above. “Our experience” of truth and virtue, Blitz notes, “presumably will change” as our experience of “birth, death, locality, and the body” changes. Still, we may experience truth and virtue differently, but they will continue to provide the goals of human striving, right?

Yet consider some of the transhumanist dreams on offer: a future where mortality is a choice, a future where individual minds merge and melt together into machine-aided masses, a future where the resources of the universe are absorbed and reordered by our man-machine offspring to make a vast “extended thinking entity.” Blitz may be right that “what is good ... cannot in the last analysis be obliterated,” but if we embark down the path to the posthuman, our descendants may, in exchange for vast power over themselves and over nature, lose forever the ability to “properly orient” themselves toward the goods of truth and virtue.

Read the whole Blitz review essay here; subscribe to the Claremont Review of Books here; and order a copy of Eclipse of Man here.

Tuesday, November 3, 2015

Do We Love Robots Because We Hate Ourselves?

A piece by our very own Ari N. Schulman, on WashingtonPost.com today:

... Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual. 

Most critics of heady AI predictions do not see this vision as remotely plausible. But lesser versions might be — and it’s important to ask why many find it so compelling, even if it doesn’t come to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal....

To read the whole thing, click here.

Tuesday, October 20, 2015

Science, Virtue, and the Future of Humanity



The new book Science, Virtue, and the Future of Humanity, just published by Rowman & Littlefield, brings together essays examining the future — particularly scientific and technological visions of the future, and the role that virtue ought to play in that future. Several of the essays appeared in The New Atlantis, including essays about robots and “friendly AI,” and most of them grew out of a conference that New Atlantis contributing editor Peter A. Lawler hosted at Berry College in Georgia back in 2011. (Professor Lawler edited this new book, along with Marc D. Guerra of Assumption College.)

Lawler’s own introductory essay is a real treat, weaving together references to recent movies, philosophers and economists, the goings-on in Silicon Valley, and a Tocquevillian appreciation for the complicated and surprising ways that liberty and religion are intertwined in the United States. No one is better than Lawler at revealing the gap between who we believe ourselves to be and who we really are as a people, and at showing how our longing for liberty is really only sensible in a relational context — in a world of families, communities, institutions, citizenship, and interests.

Charles Rubin’s marvelous essay about robots and the play R.U.R. is joined by the essay that Ari Schulman and I wrote on so-called “friendly” AI. The libertarian journalist Ron Bailey of Reason magazine makes the case for radical human enhancement, arguing, among other things, that enhancement will allow people to become more virtuous. Jim Capretta and William English each contribute essays on demographics and our entitlement system. Dr. Ben Hippen discusses organ donation (and organ selling).

Patrick Deneen, Robert Kraynak, and J. Daryl Charles each offer wide-ranging essays that challenge the foundations of modernity. Deneen discusses some of the assumptions and tendencies in modern science and modern political science that corrode the very institutions, traditions, and beliefs that made them possible. Kraynak shows how thinkers like Richard Rorty and Steven Pinker must scramble to explain the roots of their beliefs about justice. Do their “human values” — mostly just secularized versions of Judeo-Christian morality — make any sense without a belief in God? And J. Daryl Charles looks at the ways that genetics and even evolutionary theory affect our understanding of moral agency, a question with implications for fields such as criminal law.

Each of the editors offers an essay about education: Lawler critiques the libertarian critique of liberal education, and Guerra explores the ways that liberal education fits (sometimes uncomfortably) in the broader setting of higher education.

The collection is rounded out by Ben Storey’s smart essay about Alexis de Tocqueville and technology — focusing not just on Democracy in America but on two of Tocqueville’s lesser known works.

So far, Science, Virtue, and the Future of Humanity is only available in a hardcover format that is rather costly (more than $80 new). Here’s hoping it comes out in a more affordable format before long. Readers of The New Atlantis and of our Futurisms blog, and indeed anyone interested in a deeper understanding of the meaning of progress, will find much to learn in its pages.

Thursday, October 1, 2015

Free to Experiment?



Last month, Vice published a short article by Jason Koebler about how genetic engineering, including the genetic engineering of human beings, is probably protected by the First Amendment. The basic argument behind this seemingly ridiculous notion is that the First Amendment protects not only speech but also “expressive conduct,” which can include offensive performance art, flag burning, and, perhaps, “acts of science.” Such acts of science may be especially worth protecting when they are very controversial, since that might mean they should be treated as political or religious speech. The 2010 hullabaloo over Craig Venter’s “synthetic cell” was trotted out as an example of the deep political and even religious implications of scientific experiments, since the idea of creating synthetic life might, as it did for Venter himself, change our “views of definitions of life and how life works.”

It is worth noting that, notwithstanding breathless headlines and press releases, Craig Venter did not create a “synthetic life form.” What Venter did was synthesize a bacterial genome, though he did not design that genome, but rather used a slightly modified version of the sequence of an existing bacterial species. Venter then put this synthesized genome into cells of a closely related bacterial species whose genome had been removed, and, lo, the cells used their new genomes and eventually came to resemble the (slightly different) species from which the synthetic genome was derived.

Unless Venter once believed that DNA possessed mystical properties that made it impossible to manufacture, or had never heard of bacterial transformation experiments by which bacteria can pick up and use foreign pieces of DNA (experiments that predate, and were in fact used to establish, our knowledge that DNA is the molecule of heredity), it is hard to see why he would need to change his “views of definitions of life and how life works” in light of his experiment.

Of course, freedom of speech protects not only coherent arguments but also confused ones, like those made for the deep implications of some controversial forms of research. In a talk given at a recent DARPA conference, bioethicist Alta Charo suggested that controversial experiments like cloning or genetic engineering may be carried out to “challenge” those who think that these experiments are wrong, and that this might mean they should be protected as forms of political expression.

Scientists and academics should be free to challenge deeply held beliefs about human nature and morality. As Robert P. George has argued regarding his pro-infanticide Princeton colleague Peter Singer, “freedom of thought and expression and academic freedom are for everyone — not just those whose views others find congenial.” But this academic freedom is premised on doing business “in the currency of academic discourse: a currency consisting of reasons and arguments.” Cloned or genetically engineered children are not reasons or arguments, and they are certainly not the currency of academic discourse.

The use of reproductive biotechnologies like cloning or genetic engineering to express a political or religious view would mean that the child that results from these technologies would be treated as a form of political or artistic expression. But as the Witherspoon Council on Ethics and the Integrity of Science argued in its recent report on human cloning, this kind of perversion of the relationship between parents and children, in which children come to be seen as products to be manufactured in accordance with the parents’ specifications and to serve their interests, is at the heart of what is wrong with technologies like human cloning and genetic engineering. And moreover, as the Council notes, to claim First Amendment protection would require satisfying several legal criteria that cloning almost certainly would not satisfy. That there are respectable bioethicists arguing that the creation of human beings may be treated as a form of artistic or political self-expression is in fact a very good reason for passing laws to ban technologies like cloning for manufacturing human beings.

Friday, September 25, 2015

What’s the Difference?

“How is having a cochlear implant that helps the deaf hear any different than having a chip in your brain that could help control your thoughts?”   —Michael Goldblatt, former director of DARPA’s Defense Sciences Office, quoted in the Atlantic

What’s the difference between reading books all day and playing video games?

Come on, what’s the difference between spending your time with friends and family “in person” and spending your time with them virtually?

How is having a child through cloning any different from having a child the old-fashioned way?

Why is the feeling of happiness that you have after a good day any different from the feeling of happiness I have after I take this drug?

Why is talking with your spouse and children using your mouth and ears different, in any way that counts, from communicating with them through brain chips that link your minds directly?

We already pick our mates with some idea of what our kids might look and act like. How is that any different from genetically engineering our children so they look and act the way we want?

Don’t we already send our children to school to make them smarter? How is that any different from just downloading information straight into their brains?

If your grandmother is already in a nursing home, what’s the difference if the nurses are robots?

Memory is already so fluid and fallible that we forget things all the time; what’s the difference if we just help people forget things they would rather not be stuck remembering?

What does it matter, in the end, if a soldier is killed by another soldier or by a robot?

How is it really different if, instead of marrying other human beings, some people choose to marry and have sex with machines programmed for obedience and pleasure?

What’s the difference if our bodies are replaced by machines?

In the scheme of things, what’s the difference if humanity is replaced with artificial life? The persistence of intelligence is all that matters, right?

What’s the difference?

Tuesday, September 22, 2015

Who’s Afraid of ‘Brave New World’?



I was very happy to learn from George Dvorsky at io9 that Aldous Huxley’s novel “Brave New World is not the terrifying dystopia it used to be.” It’s not that the things in the novel couldn’t happen (more or less), but rather that they are happening and “we” have become much more enlightened and simply don’t need to worry about them anymore. The book is a product of its time, and our time understands these things much better, apparently.

Thus, the strongest condemnation Dvorsky can offer of the eugenics program depicted in Brave New World is that it is “disquieting.” But we can get over that. Genetic engineering techniques that might once have been met with “repugnance” are now commonplace. Newer techniques, which promise more control and would make more of the things in Brave New World possible, will meet the same fate, Dvorsky expects, trotting out the number-one cliché of progressive bioethics: “While potentially alarming, these biotechnologies and others currently in development hold great promise.” And on the basis of that “great promise” we merrily slide right down the slippery slope:

Advances in genetics will serve to eliminate a host of genetic diseases, while offering humans the opportunity to forgo the haphazard genetic roll of the dice when it comes to determining the traits of offspring. A strong case can be made that it’s both our duty and right to develop these technologies.

Problem solved!

The next non-problem Dvorsky sees in Brave New World is totalitarianism, which, along with “top-down” eugenics, he proclaims “dead.” Happy day! One might trust Dvorsky more on this point if he did not declare that even in Huxley’s own time the book conveyed a “false sense of urgency” about it. But we now know that biotechnology will be “tools made by the people, for the people.” A case in point, I suppose, would be the drug company that just raised the price of one of its pills by 5,000 percent.

And on and on. Concern about widespread use of psychoactive prescription and non-prescription drugs is, Dvorsky says, either “not entirely fair” or “hysterical.” On sex and the family, Huxley’s “prescience is remarkable,” but his concerns are “grossly old fashioned and moralizing.” So too his Malthusian concerns are “grossly overstated,” particularly when population control (apparently it is necessary after all) can be achieved by “humanitarian methods.”

So there you have it. It seems that for the advocates of technological “progress” and human redesign, “don’t worry, be happy” has become a respectable line of argument. I know I feel much better now.

Friday, September 4, 2015

Using cloning for human enhancement?

We have occasionally written about human cloning here on Futurisms — for example, five years ago we had a back-and-forth with Kyle Munkittrick about cloning — and we return to the subject today, with an excerpt from the latest issue of The New Atlantis. The entirety of that new issue is dedicated to a report called The Threat of Human Cloning: Ethics, Recent Developments, and the Case for Action. The report, written by a distinguished body of academics and policy experts, makes the case against all forms of human cloning — both for the purpose of creating children and for the purpose of biomedical research.

Below is one excerpt from the report, a section exploring the possibility of using cloning to create “enhanced” offspring. (I have removed the citations from this excerpt, but you can find them and read this section in context here.)


*     *     *

Cloning for “human enhancement.” Much of the enthusiasm for and anxiety about human cloning over the years has been concerned with the use of cloning as a genetic enhancement technology. Scientists, and especially science-fiction writers, have imagined ways of using cloning to replicate “persons of attested ability” as a way to “raise the possibility of human achievement dramatically,” in the words of J.B.S. Haldane. As molecular biologist Robert L. Sinsheimer argued in 1972, “cloning would in principle permit the preservation and perpetuation of the finest genotypes that arise in our species.” Candidates for this distinction often include Mozart and Einstein, though the legacy of eugenics in the twentieth century has left many authors with an awareness that those who would use these technologies may be more interested in replicating men like Hitler. (While in most cases, the idea of cloning a dictator like Hitler is invoked as a criticism of eugenic schemes, some writers have actually advocated the selective eugenic propagation of tyrants — for instance, the American geneticist Hermann J. Muller who, in a 1936 letter to Stalin advocating the eugenic use of artificial insemination, named Lenin as an example of a source of genetic material whose outstanding worth “virtually all would gladly recognize.”)

Today, eugenics has a deservedly negative reputation, and the idea of using a biotechnology like cloning to replicate individuals of exceptional merit is prima facie ethically suspect. However, advocates of eugenic enhancement have never entirely disappeared, and their influence in bioethics is arguably not waning, but waxing. In recent years academic bioethicists like John Harris and Julian Savulescu have been attempting to rehabilitate the case for eugenic enhancements on utilitarian grounds. For these new eugenicists, cloning-to-produce-children represents “power and opportunity over our destiny.”

This new eugenics needs to be confronted and refuted directly, since insisting on the self-evident evil of eugenics by pointing to historical atrocities committed in its name may become increasingly unpersuasive as memories of those atrocities dim with time, and as new technologies like cloning and genetic engineering make eugenic schemes all the more attractive. Furthermore, as the philosopher Hans Jonas noted in a critique of cloning, the argument in favor of cloning excellent individuals, “though naïve, is not frivolous in that it enlists our reverence for greatness and pays tribute to it by wishing that more Mozarts, Einsteins, and Schweitzers might adorn the human race.”

In an important sense, cloning is not an enhancement, since it replicates, rather than improves on, an existing genome. However, as Jonas’s remark about the human race indicates, the cloning of exceptional genotypes could be an enhancement at the population level. And from the point of view of parents who want children who can checkmate like Kasparov, belt like Aretha, dunk like Dr. J, or bend it like Beckham, cloning could represent a way to have offspring with the exceptional abilities of these individuals.

Arguably, cloning is a less powerful form of genetic engineering than other techniques that introduce precise modifications to the genome. After all, cloning only replicates an existing genome; it doesn’t involve picking and choosing specific traits. This weakness may also, however, make cloning more appealing than other forms of genetic engineering, especially when we consider the genetic complexity of many desirable traits. For example, some parents might seek to enhance the intelligence of their children, and evidence from twin studies and other studies of heredity seems to indicate that substantial amounts of the variation in intelligence between individuals can be attributed to genetics. But any given gene seems to have only a tiny effect on intelligence; one recent study looking at several genes associated with intelligence found that they each accounted for only about 0.3 points of IQ. With such minor effects, it would be difficult to justify the risks and expense of intervening to modify particular genes to improve a trait like intelligence.
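[A rough back-of-the-envelope illustration of the difficulty, assuming, as the study itself does not spell out, that the effects of such genes are simply additive: at roughly 0.3 IQ points per gene,

10 points ÷ 0.3 points per gene ≈ 33 separate genetic modifications

would be needed for even a ten-point gain, each one carrying its own risks and expense.]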

Cloning, on the other hand, would not require certain and specific knowledge about particular genes; it would only require identifying an exceptionally intelligent individual and replicating his or her genome. Of course, the cloned individual’s exceptional intelligence may be due to largely non-genetic factors, and so for a trait like intelligence there will never be certainty about whether the cloned offspring will match their genetic progenitor. But for people seeking to give their child the best chance at having exceptional intelligence, cloning may at least seem to offer more control and predictability than gene modification, and cloning is more consistent with our limited understanding of the science of genetics. Genetic modification involves daunting scientific and technical challenges; it offers the potential of only marginal improvements in complex traits, and it holds out the risk of unpredictable side effects and consequences.

Of course, it is possible that cloning could be used in conjunction with genetic modification, by allowing scientists to perform extensive genetic manipulations of somatic cells before transferring them to oocytes. In fact, genetic modification and cloning are already used together in agriculture and some biomedical research: for larger animals like pigs and cattle, cloning remains the main technique for producing genetically engineered offspring....

Using cloning as an enhancement technology requires picking some exceptional person to clone. This necessarily separates social and genetic parenthood: children would be brought into the world not by sexual pairing, or as an expression of marital love, or by parents seeking to continue and join their lineages, but by individuals concerned with using the most efficient technical methods to obtain a child with specific biological properties. Considerations about the kinds of properties the child will have would dominate the circumstances of a cloned child’s “conception,” even more than they already do when some prospective parents seek out the highest-quality egg or sperm donors, with all the troubling consequences such commodified reproduction has for both buyers and sellers of these genetic materials and the children that result. With cloning-to-produce-children for the sake of eugenic enhancement, parents (that is, the individuals who choose to commission the production of a cloned child) will need to be concerned not with their genetic relationship to their children, but only with the child’s genetic and biological properties.

Normally, the idea of cloning as an enhancement is to create children with better properties, the improvement residing in an individual and his or her traits; but some thinkers have proposed that cloning could be used to offer an enhancement of social relationships. This is the very reason given in the novel Brave New World: the fictional society’s cloning-like technology “is one of the major instruments of social stability! ... Standard men and women; in uniform batches,” allowing for excellence and social order. And as the geneticist Joshua Lederberg argued in 1966, some of the advantages of cloning could flow from the fact of the clones’ being identical, independent of the particular genes they have. Genetically identical clones, like twins, might have an easier time communicating and cooperating, Lederberg wrote, on the assumption “that genetic identity confers neurological similarity, and that this eases communication” and cooperation. Family relationships would even improve, by easing “the discourse between generations,” as when “an older clonont would teach his infant copy.” Lederberg’s imaginings will rightly strike today’s readers as naïve and unsettling. Such a fixation on maintaining sameness within the family would undermine the openness to new beginnings that the arrival of each generation represents.

Before we embark on asexual reproduction in order deliberately to select our offspring’s genes, we would do well to remember that sexual reproduction has been the way of our ancestors for over a billion years, and has been essential for the flourishing of the diverse forms of multicellular life on earth. We, who have known the sequence of the human genome for a mere fifteen years — not even the span of a single human generation — and who still do not have so much as a precise idea of how many genes are contained in our DNA, should have some humility when contemplating such a radical departure.

Tuesday, August 18, 2015

Passing the Ex Machina Test



Like Her before it, the film Ex Machina presents us with an artificial intelligence — in this case, embodied as a robot — that is compellingly human enough to cause an admittedly susceptible young man to fall for it, a scenario made plausible in no small degree by the wonderful acting of the gamine Alicia Vikander. But Ex Machina operates much more than Her within the moral universe of traditional stories of human-created monsters going back to Frankenstein: a creature that is assembled in splendid isolation by a socially withdrawn if not misanthropic creator is human enough to turn on its progenitor out of a desire to have just the kind of life that the creator has given up for the sake of his effort to bring forth this new kind of being. In the process of telling this old story, writer-director Alex Garland raises some thought-provoking questions; massive spoilers in what follows.

Geeky programmer Caleb (Domhnall Gleeson) finds that he has been brought to tech-wizard Nathan’s (a thuggish Oscar Isaac) vast, remote mountain estate, a combination bunker, laboratory and modernist pleasure-pad, in order to participate in a week-long, modified Turing Test of Nathan’s latest AI creation, Ava. The modification of the test is significant, Nathan tells Caleb after his first encounter with Ava; Caleb does not interact with her via an anonymizing terminal, but speaks directly with her, although she is separated from him by a glass wall. His first sight of her is in her most robotic instantiation, complete with see-through limbs. Her unclothed conformation is female from the start, but only her face and hands have skin. The reason for doing the test this way, Nathan says, is to find whether Caleb is convinced she is truly intelligent even knowing full well that she is a robot: “If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.”

This plot point is, I think, a telling response to the abstract, behaviorist premises behind the classic Turing Test, which isolates judge from subject(s) and reduces intelligence to what can be communicated via a terminal. But in the real world, our knowledge of intelligence and our judgment of intelligence are always made in the context of embodied beings and the many ways in which those beings react to the world around them. The film emphasizes this point by having Ava be a master at reading Caleb’s micro-expressions — and, one comes to suspect, at manipulating him through her own, as well as through her seductive use of not-at-all seductive clothing.

I have spoken of the test as a test of artificial intelligence, but Caleb and Nathan also speak as if they are trying to determine whether or not she is a “conscious machine.” Here too the Turing Test is called into question, as Nathan encourages Caleb to think about how he feels about Ava, and how he thinks Ava feels about him. Yet Caleb wonders if Ava feels anything at all. Perhaps she is interacting with him in accord with a highly sophisticated set of pre-programmed responses, and not experiencing her responses to him in the same way he experiences his responses to her. In other words, he wonders whether what is going on “inside” her is the same as what is going on inside him, and whether she can recognize him as a conscious being.

Yet when Caleb expresses such doubts, Nathan argues in effect that Caleb himself is by both nature and nurture a collection of programmed responses over which he has no control, and this apparently unsettling thought, along with other unsettling experiences — like Ava’s ability to know if he tells the truth by reading his micro-expressions, or his having missed the fact that a fourth resident in Nathan’s house is a robot — brings Caleb to a bloody investigation of the possibility that he himself is one of Nathan’s AIs.

Caleb’s skepticism raises an important issue, for just as we normally experience intelligence in embodied forms, we also normally experience it among human beings, and even some other animals, as going along with more or less consciousness. Of course, in a world where “user illusion” becomes an important category and where “intelligence” becomes “information processing,” this experience of self and others can be problematized. But Caleb’s response to the doubts raised in him about his own status, which is all but to slit a wrist, seems to suggest that such lines of thought are, as it were, dead ends. Rather, the movie seems to be standing up for a rather rich, if not in all ways flattering, understanding of the nature of our embodied consciousness, and of how we might know whether or to what extent anything we create artificially shares it with us.

As the movie progresses, Caleb plainly is more and more convinced Ava has conscious intelligence and therefore more and more troubled that she should be treated as an experimental subject. And indeed, Ava makes a fine damsel in distress. Caleb comes to share her belief that nobody should have the ability to shut her down in order to build the next iteration of AI, as Nathan plans. Yet as it turns out, this is just the kind of situation Nathan hoped to create, or at least so he claims on Caleb’s last day, when Caleb and Ava’s escape plan has been finalized. Revealing that he has known for some time what was going on, Nathan claims that the real test all along has been to see if Ava was sufficiently human to prompt Caleb — a “good kid” with a “moral compass” — to help her to escape. (It is not impossible, however, that this claim is bluster, to cover over a situation that Nathan has let get out of control.)

What Caleb finds out too late is that in plotting her own escape Ava is even more human than he might have thought. For she has been able to seem to want “to be with” Caleb as much as he has grown to want “to be with” her. (We never see either of them speak to the other of love.) We are reminded that the question that in a sense Caleb wanted to confine to AI — is what seems to be going on from the “outside” really going on “inside”? — is really a general human problem of appearance versus reality. Caleb is hardly the first person to have been deceived by what another seems to be or do.

Transformed at last in all appearances to be a real girl, Ava frees herself from Nathan’s laboratory and, taking advantage of the helicopter that was supposed to take Caleb home, makes the long trip back to civilization in order to watch people at “a busy pedestrian and traffic intersection in a city,” a life goal she had expressed to Caleb and which he jokingly turned into a date. The movie leaves in abeyance such questions as how long her power supply will last, or how long it will be before Nathan is missed, or whether Caleb can escape from the trap Ava has left him in, or how to deal with a murderous machine. Just as the last scene is filmed from an odd angle, it is, in an odd sense, a happy ending — and it is all too easy to forget the human cost at which Ava purchased her freedom.

The movie gives multiple grounds for thinking that Ava indeed has human-like conscious intelligence, for better or for worse. She is capable of risking her life for a recognition-deserving victory in the battle between master and slave, she has shown an awareness of her own mortality, she creates art, she understands Caleb to have a mind over against her own, she exhibits the ability to dissemble her intentions and plan strategically, she has logos, she understands friendship as mutuality, she wants to be in a city. Another of the movie’s interesting twists, however, is its perspective on this achievement. Nathan suggests that what is at stake in his work is the Singularity, which he defines as the coming replacement of humans by superior forms of intelligence: “One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa: an upright ape, living in dust, with crude language and tools, all set for extinction.” He therefore sees his creation of Ava in Oppenheimer-esque terms; following Caleb, he echoes Oppenheimer’s reaction to the atom bomb: “I am become Death, the destroyer of worlds.”

But the movie seems less concerned with such a future than with what Nathan’s quest to create AI reveals about his own moral character. Nathan is certainly manipulative, and assuming that the other aspects of his character that he displays are not merely a show to test how far good-guy Caleb will go to save Ava, he is an unhappy, often drunken, narcissistic bully. His creations bring out the Bluebeard-like worst in him (maybe hinted at in the name of his Google/Facebook-like company, Bluebook). Ava wonders, “Is it strange to have made something that hates you?” but it is all too likely that this is just what he wants. He works out with a punching bag, and his relationships with his robots and employees seem to be an extension of that activity. He plainly resents the fact that “no matter how rich you get, shit goes wrong, you can’t insulate yourself from it.” And so it seems plausible to conclude that he has retreated into isolation in order to get his revenge for the imperfections of the world. His new Eve, who will be the “mother” of posthumanity, will correct all the errors that make people so unendurable to him. He is happy to misremember Caleb’s suggestion that the creation of “a conscious machine” would imply god-like power as Caleb’s saying that he, Nathan, is a god.

Falling into a drunken sleep, Nathan repeats another, less well known line from Oppenheimer, who was in turn quoting the Bhagavad Gita to Vannevar Bush prior to the Trinity test: “The good deeds a man has done before defend him.” As events play out, Nathan does not have a strong defense. If it ever becomes possible to build something like Ava — and there is no question that many aspire to bring such an Eve into being — will her creators have more philanthropic motives?

(Hat tip to L.G. Rubin.)

Tuesday, August 11, 2015

When progress happens to us

I found myself thinking about progress at 8:30 yesterday morning, when someone in the neighborhood was already using a leaf blower to clean up his yard. Here is a real time- and effort-saving product, and in my part of the world, anyway, it has near-universal adoption by householders and lawn-care services. This machine, along with power mowers and weed whackers, has to be an example of progress, no?

When I was young, leaves were raked or swept. Lawns were cut with a hand mower, weeds pulled or hoed, yards edged with an edging tool. The sounds of yard care were pleasant sounds, unless nostalgia misleads me: the whir and click of the mower, the gentle chink of the edger against the stone curb, the satisfying crunch of some well-uprooted weeds, the rustling of leaves along with the scraping of the rake. The smells were pleasant smells: cut grass, dry leaves, earth — even burning leaves, if you lived somewhere where you could get away with it.

Of course, it all took more effort and time than a power mower, a weed whacker, and a leaf blower require, and progress is all about saving effort and time. The near-universal adoption of the new tools suggests that this kind of progress is something people really want. But some things about this example of progress remain obviously true: the new tools are noisier and therefore more intrusive; they are smellier and more polluting; and they are more expensive to purchase and maintain than the old ones. From a lawn-service point of view, my guess is that the power tools reduce employment opportunities and increase the capital cost of entering the business. My guys use ear protection; the many yard-care workers I see who do not are doubtless compromising their future hearing.

But we save time and effort, and that is progress. It would be ungracious to suspect that the result of saving this effort and time is that we can become more torpid couch potatoes were it not for the fact that we are bombarded with warnings about our having become ever more torpid couch potatoes. So this chance to expend less effort doing yard work is plainly at best a mixed blessing. It’s a little ironic if we spend less time in the yard in order to spend more time on home-exercise equipment or at the gym...

My point is not the truism that there are “costs and benefits” to what we call progress, but, despite what I just said about a “mixed blessing,” to suggest that this is a case where I at least am hard pressed to see any real benefit at all. And yet here we are, living in a world of noisy, smelly, expensive power tools for the sake of our lawns — whose own existence probably doesn’t bear much thinking about. I wonder how we got here. Was it some conspiracy of the internal-combustion interests? Is there a “tragedy of the commons” dynamic at work? Do we convince ourselves that our noise and exhaust are OK, and that it’s the other guys who are creating the problem? Whatever it is must go pretty deep — I have not heard tell of any community that has banned all such power tools for their contributions to greenhouse gases, particulates, or noise pollution, although L.A. seems to have an unenforced ordinance against gasoline leaf blowers.

Here at any rate is an example of Gresham’s law applied to progress. I wonder how many more we could find if we just had enough distance to see our lives clearly.