The Joint Economic Committee — a congressional committee with members from both the Senate and the House of Representatives — invited me to testify in a hearing yesterday on “the transformative impact of robots and automation.” The other witnesses were Andrew McAfee, an M.I.T. professor and coauthor of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (his written testimony is here), and Harry Holzer, a Georgetown economist who has written about the relationship between automation and the minimum wage (his written testimony is here).

My written testimony appears below, slightly edited to include a couple of things that arose during the hearing. Part of the written testimony is based on an essay I wrote a few years ago with Ari Schulman called “The Problem with ‘Friendly’ Artificial Intelligence.” Video of the entire hearing can be found here.

*   *   *

Testimony Presented to the Joint Economic Committee:
The Transformative Impact of Robots and Automation
Adam Keiper
Fellow, Ethics and Public Policy Center
Editor, The New Atlantis
May 25, 2016

Mr. Chairman, Ranking Member Maloney, and members of the committee, thank you for the opportunity to participate in this important hearing on robotics and automation. These aspects of technology have already had widespread economic consequences, and in the years ahead they are likely to profoundly reshape our economic and social lives.

Today’s hearing is not the first time Congress has discussed these subjects. In fact, in October 1955, a subcommittee of this very committee held a hearing on automation and technological change.[1] That hearing went on for two weeks, with witnesses mostly drawn from industry and labor. It is remarkable how much of the public discussion about automation today echoes the ideas debated in that hearing. Despite vast changes in technology, in the economy, and in society over the past six decades, many of the worries, the hopes, and the proposed solutions suggested in our present-day literature on automation, robotics, and employment would sound familiar to the members and witnesses present at that 1955 hearing.

It would be difficult to point to any specific policy outcomes from that old hearing, but it is nonetheless an admirable example of responsible legislators grappling with immensely complicated questions. A free people must strive to govern its technologies and not passively be governed by them. So it is an honor to be a part of that tradition with today’s hearing.

In my remarks, I wish to make five big, broad points, some of them obvious, some more counterintuitive.

(1) WHY IT IS SO HARD TO KNOW THE FUTURE

A good place to start discussions of this sort is with a few words of gratitude and humility. Gratitude, that is, for the many wonders that automation, robotics, and artificial intelligence have already made possible. They have made existing goods and services cheaper, and helped us to create new kinds of goods and services, contributing to our prosperity and our material wellbeing.

And humility because our ability to peer into the future is so poor. When reviewing the mountains of books and magazine articles that have sought to predict what the future holds for automation and related fields, when reading the hyped tech headlines, or when looking at the many charts and tables extrapolating from the past to help us forecast the future, it is striking to see how often our predictions go wrong.

Very little energy has been invested in systematically understanding why futurism fails — that is, why, beyond the simple fact that the future hasn’t happened yet, we have generally not been very good at predicting what it will look like. For the sake of today’s discussion, I want to raise just a few points, each of which can be helpful in clarifying our thinking when it comes to automation and robotics.

First there is the problem of timeframes. Very often, economic analyses and tech predictions about automation discuss kinds of jobs that are likely to be automated without any real discussion of when. This leads to strange conversations, as when one person is interested in what the advent of driverless vehicles might mean for the trucking industry, and his interlocutor is more interested in, say, the possible rise of artificial superintelligences that could wipe out all life on Earth. The timeframes under discussion at any given moment ought to be explicitly stated.

Second there is the problem of context. Debates about the future of one kind of technology rarely take into account other technologies that might be developed, and how those other technologies might affect the one under discussion. When one area of technology advances, others do not just stand still. How might automation and robotics be affected by developments in energy use and storage, or advanced nanotechnology (sometimes also called molecular manufacturing), or virtual reality and augmented reality, or brain-machine interfaces, or various biotechnologies, or a dozen other fields?

And of course it’s not only other technologies that evolve. In order to be invented, built, used, and sustained, all technologies are enmeshed in a web of cultural practices and mores, and legal and political norms. These things do not stand still either — and yet when discussing the future of a given technology, rarely is attention paid to the way these things touch upon one another.

All of which is to say that, as you listen to our conversation here today, or as you read books and articles about the future of automation and robotics, try to keep in mind what I call the “chain of uncertainties”:

Just because something is conceivable or imaginable does not mean it is possible.
Even if it is possible, that does not mean it will happen.
Even if it happens, that does not mean it will happen in the way you envisioned.
And even if it happens in something like the way you envisioned, there will be unintended, unexpected consequences.

(2) WHY THIS TIME IS DIFFERENT

Automation is not new. For thousands of years we have made tools to help us accomplish difficult or dangerous or dirty or tedious or tiresome tasks, and in some sense today’s new tools are just extensions of what came before. And worries about automation are not new either — they date back at least to the early days of the Industrial Revolution, when the Luddites revolted in England over the mechanization and centralization of textile production. As I mentioned above, this committee was already discussing automation some six decades ago — thinking about thinking machines and about new mechanical modes of manufacturing.

What makes today any different?

There are two reasons today’s concerns about automation are fundamentally different from what came before. First, the kinds of “thinking” that our machines are capable of doing are changing, so that it is becoming possible to hand off to our machines ever more of our cognitive work. As computers advance and as breakthroughs in artificial intelligence (AI) chip away at the list of uniquely human capacities, it becomes possible to do old things in new ways and to do new things we have never before imagined.

Second, we are also instantiating intelligence in new ways, creating new kinds of machines that can navigate and move about in and manipulate the physical world. Although we have for almost a century imagined how robotics might transform our world, the recent blizzard of technical breakthroughs in movement, sensing, control, and (to a lesser extent) power is bringing us for the first time into a world of autonomous, mobile entities that are neither human nor animal.

To simplify a vast technical and economic literature, there are basically three futurist scenarios for what the next several decades hold in automation, robotics, and artificial intelligence:

Scenario 1 – Automation and artificial intelligence will continue to advance, but at a pace sufficiently slow that society and the economy can gradually absorb the changes, so that people can take advantage of the new possibilities without suffering the most disruptive effects. The job market will change, but in something like the way it has evolved over the last half-century: some kinds of jobs will disappear, but new kinds of jobs will be created, and by and large people will be able to adapt to the shifting demands on them while enjoying the great benefits that automation makes possible.

Scenario 2 – Automation, robotics, and artificial intelligence will advance very rapidly. Jobs will disappear at a pace that will make it difficult for the workforce to adapt without widespread pain. The kinds of jobs that will be threatened will increasingly be jobs that have been relatively immune to automation — the “high-skilled” jobs that generally involve creativity and problem-solving, and the “low-skilled” jobs that involve manual dexterity or some degree of adaptability and interpersonal relations. The pressures on low-skilled American workers will compound those already felt because of competition against foreign workers paid lower wages. Among the disappearing jobs may be those at the lower-wage end of the spectrum that we have counted on for decades to instill basic workplace skills and values in our young people, and that have served as a kind of employment safety net for older people transitioning in their lives. And the balance between labor and capital may (at least for a time) shift sharply in favor of capital, as the share of gross domestic product (GDP) that flows to the owners of physical capital (e.g., the owners of artificial intelligences and robots) rises and the share of GDP that goes to workers falls. If this scenario unfolds quickly, it could involve severe economic disruption, perhaps social unrest, and maybe calls for political reform. The disconnect in this scenario between productivity on the one hand and employment and income on the other also highlights the growing inadequacy of GDP as our chief economic statistic: it can still be a useful indicator in international competition, but as an indicator of economic wellbeing, or as a proxy for the material satisfaction or happiness of the American citizen, it is clearly not succeeding.

Scenario 3 – Advances in automation, robotics, and artificial intelligence will produce something utterly new. Even within this scenario, the range of possibilities is vast. Perhaps we will see the creation of “emulations,” minds that have been “uploaded” into computers. Perhaps we will see the rise of powerful artificial “superintelligences,” unpredictable and dangerous. Perhaps we will reach a “Singularity” moment after which everything that matters most will be different from what came before. These types of possibilities are increasingly matters of discussion for technologists, but their very radicalness makes it difficult to say much about what they might mean at a human scale — except insofar as they might involve the extinction of humanity as we know it. [NOTE: During the hearing, Representative Don Beyer asked me whether he and other policymakers should be worried about consciousness emerging from AI; he mentioned Elon Musk and Stephen Hawking as two individuals who have suggested we should worry about this. “Think Terminator,” he said. I told him that these possibilities “at the moment … don’t rise to the level of anything that anyone on this committee ought to be concerned about.”]

One can make a plausible case for each of these three scenarios. But rather than discussing their likelihood or examining some of the assumptions and aspirations inherent in each scenario, in the limited time remaining, I am going to turn to three other broad subjects: some of the legal questions raised by advances in artificial intelligence and automation; some of the policy ideas that have been proposed to mitigate some of the anticipated effects of these changes; and a deeper understanding of the meaning of work in human life.

(3) LOOMING LEGAL QUESTIONS

The advancement of artificial intelligence and autonomous robots will raise questions of law and governance that scholars are just beginning to grapple with. These questions are likely to have growing economic and perhaps political consequences in the years to come, no matter which of the three scenarios above you consider likeliest.

The questions we might be expected to face will emerge in matters of liability and malpractice and torts, property and contractual law, international law, and perhaps laws related to legal personhood. Although there are precedents — sometimes in unusual corners of the law — for some of the questions we will face, others will arise from the very novelty of the artificial autonomous actors in our midst.

By way of example, here are a few questions, starting with one that has already made its way into the mainstream press:

  • When a self-driving vehicle crashes into property or harms a person, who is liable? Who will pay damages?  
  • When a patient is harmed or dies during a surgical operation conducted by an autonomous robotic device upon the recommendation of a human physician, who is liable and who pays?  
  • If a robot is autonomous but is not considered a person, who owns the creative works it produces?  
  • In a combat setting, who is to be held responsible, and in what way, if an autonomous robot deployed by the U.S. military kills civilian noncombatants in violation of the laws of war?  
  • Is there any threshold of demonstrable achievement — any performed ability or set of capacities — that a robot or artificial intelligence could cross in order to be entitled to legal personhood?

These kinds of questions raise matters of justice, of course, but they have economic implications as well — not only in terms of the money involved in litigating cases, but in terms of the effects that the legal regime in place will have on the further development and implementation of artificial intelligence and robotics. It will be up to lawyers and judges, and lawmakers at the federal, state, and local levels, to work through these and many other such matters.

(4) PROPOSED SOLUTIONS AND THEIR PROBLEMS

There are, broadly speaking, two kinds of ideas that have most often been set forth in recent years to address the employment problems that may be created by an increasingly automated and AI-dominated economy.

The first category involves adapting workers to the new economy. The workers of today, and even more the workers of tomorrow, will need to be able to pick up and move to where the jobs are. They should engage in “lifelong learning” and “upskilling” at every opportunity to make themselves as attractive as possible to future employers. Flexibility must be their byword.

Of course, education and flexibility are good things; they can make us resilient in the face of the “creative destruction” of a churning free economy. Yet we must remember that “workers” are not just workers; they are not just individuals free and detached and able to go wherever and do whatever the market demands. They are also members of families — children and parents and siblings and so on — and members of communities, with the web of connections and ties those memberships imply. And maximizing flexibility can be detrimental to those kinds of relationships, relationships that are necessary for human flourishing.

The other category of proposal involves a universal basic income — or what is sometimes called a “negative income tax” — guaranteed to every individual, even if he or she does not work. This can sound, in our contemporary political context, like a proposal for redistributing wealth, and it is true that there are progressive theorists and anti-capitalist activists who support it. But this idea has also been discussed favorably for various reasons by prominent conservative and libertarian thinkers. It is an intriguing idea, and one without many real-life models that we can study (although Finland is currently contemplating an interesting partial experiment).

A guaranteed income certainly would represent a sea change in our nation’s economic system and a fundamental transformation in the relationship between citizens and the state, but perhaps this transformation would be suited to the technological challenge we may face in the years ahead. Some of the smartest and most thoughtful analysts have discussed how to avoid the most obvious problems a guaranteed income might create — such as the problem of disincentivizing work. Especially provocative is the depiction of guaranteed income that appears in a 2008 book written by Joseph V. Kennedy, a former senior economist with the Joint Economic Committee; in his version of the policy, the guaranteed income would be structured in such a way as to encourage a number of good behaviors. Anyone interested in seriously considering guaranteed income should read Kennedy’s book.[2]
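To make the mechanics concrete, here is a minimal sketch, in Python, of how a negative income tax differs from a flat universal basic income. The guarantee level and phase-out rate are hypothetical numbers chosen purely for illustration; the sketch makes no claim about Kennedy’s particular proposal or any real policy design.

```python
# A minimal sketch contrasting a negative income tax (NIT) with a flat
# universal basic income (UBI). The guarantee and phase-out rate below
# are hypothetical values chosen only for illustration.

GUARANTEE = 12_000      # hypothetical annual guarantee, in dollars
PHASE_OUT_RATE = 0.50   # hypothetical benefit reduction per dollar earned

def nit_benefit(earned_income: float) -> float:
    """NIT: the benefit shrinks as earnings rise, reaching zero at the
    break-even point (GUARANTEE / PHASE_OUT_RATE, i.e., $24,000 here)."""
    return max(0.0, GUARANTEE - PHASE_OUT_RATE * earned_income)

def ubi_benefit(earned_income: float) -> float:
    """UBI: a flat benefit paid to everyone regardless of earnings."""
    return float(GUARANTEE)

for income in (0, 10_000, 24_000, 40_000):
    total = income + nit_benefit(income)
    print(f"earned ${income:>6,}: NIT benefit ${nit_benefit(income):>6,.0f} "
          f"(total ${total:>6,.0f}), UBI benefit ${ubi_benefit(income):>6,.0f}")
```

Because the phase-out rate in this sketch is below 100 percent, each additional dollar earned still raises total income, which is the usual argument for why a negative income tax blunts, though does not eliminate, the disincentive to work.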

(5) THE MEANING OF HUMAN WORK

Should we really be worrying so much about the effects of robots on employment? Maybe with the proper policies in place we can get through a painful transition and reach a future date when we no longer need to work. After all, shouldn’t we agree with Arthur C. Clarke that “The goal of the future is full unemployment”?[3] Why work?

This notion, it seems to me, raises deep questions about who and what we are as human beings, and the ways in which we find purpose in our lives. A full discussion of this subject would require drinking deeply of the best literary and historical investigations of work in human life — examining how work is not only toil for which we are compensated, but how it also can be a source of dignity, structure, meaning, friendship, and fulfillment.

For present purposes, however, I want to just point to two competing visions of the future as we think about work. Because, although science fiction offers us many visions of the future in which man is destroyed by robots, or merges with them to become cyborgs, it offers basically just two visions of the future in which man coexists with highly intelligent machines. Each of these visions has an implicit anthropology — an understanding of what it means to be a human being. In each vision, we can see a kind of liberation of human nature, an account of what mankind would be in the absence of privation. And in each vision, some latent human urges and longings emerge to dominate over others, pointing to two opposing inclinations we see in ourselves.

The first vision is that of the techno-optimist or -utopian: Thanks to the labor and intelligence of our machines, all our material wants are met and we are able to lead lives of religious fulfillment, practice our hobbies, and pursue our intellectual and creative interests.

Recall John Adams’s famous 1780 letter to Abigail: “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”[4] This is somewhat like the dream imagined in countless stories and films, in which our robots make possible a Golden Age that allows us to transcend crass material concerns and all become gardeners, artists, dreamers, thinkers, lovers.

By contrast, the other vision is the one depicted in the 2008 film WALL-E, and more darkly in many earlier stories — a future in which humanity becomes a race of Homer Simpsons, a leisure society of consumption and entertainment turned to endomorphic excess. The culminating achievement of human ingenuity, robotic beings that are smarter, stronger, and better than ourselves, transforms us into beings dumber, weaker, and worse than ourselves. TV-watching, video-game-playing blobs, we lose even the energy and attention required for proper hedonism: human relations wither and natural procreation declines or ceases. Freed from the struggle for basic needs, we lose a genuine impulse to strive; bereft of any civic, political, intellectual, romantic, or spiritual ambition, when we do have the energy to get up, we are disengaged from our fellow man, inclined toward selfishness, impatience, and lack of sympathy. Those few who realize our plight suffer from crushing ennui. Life becomes nasty, brutish, and long.

Personally, I don’t think either vision is quite right. I think each vision — the one in which we become more godlike, the other in which we become more like beasts — is a kind of deformation. There is good reason to challenge some of the technical claims and some of the aspirations of the AI cheerleaders, and there is good reason to believe that we are in important respects stuck with human nature, that we are simultaneously beings of base want and transcendent aspiration; finite but able to conceive of the infinite; destined, paradoxically, to be free.

CONCLUSION

Mr. Chairman, the rise of automation, robotics, and artificial intelligence raises many questions that extend far beyond the matters of economics and employment that we’ve discussed today — including practical, social, moral, and perhaps even existential questions. In the years ahead, legislators and regulators will be called upon to address these technological changes, to respond to some things that have already begun to take shape and to foreclose other possibilities. Knowing when and how to act will, as always, require prudence.

In the years ahead, as we contemplate both the blessings and the burdens of these new technologies, my hope is that we will strive, whenever possible, to exercise human responsibility, to protect human dignity, and to use our creations to advance truly human flourishing.

Thank you.

____________

NOTES

[1] “Automation and Technological Change,” hearings before the Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, Congress of the United States, Eighty-fourth Congress, first session, October 14, 15, 17, 18, 24, 25, 26, 27, and 28, 1955 (Washington, D.C.: G.P.O., 1955), http://www.jec.senate.gov/public/index.cfm/1956/12/report-970887a6-35a4-47e3-9bb0-c3cdf82ec429.

[2] Joseph V. Kennedy, Ending Poverty: Changing Behavior, Guaranteeing Income, and Reforming Government (Lanham, Md.: Rowman and Littlefield, 2008).

[3] Arthur C. Clarke, quoted by Jerome Agel, “Cocktail Party” (column), The Realist 86, Nov.–Dec. 1969, page 32, http://ep.tc/realist/86/32.html. This article is a teaser for a book Agel edited called The Making of Kubrick’s 2001 (New York: New American Library/Signet, 1970), where the same quotation from Clarke appears on page 311. Italics added. The full quote reads as follows: “The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”

[4] John Adams to Abigail Adams (letter), May 12, 1780, Founders Online, National Archives (http://founders.archives.gov/documents/Adams/04-03-02-0258). Source: The Adams Papers, Adams Family Correspondence, vol. 3, April 1778 – September 1780, eds. L. H. Butterfield and Marc Friedlaender (Cambridge, Mass.: Harvard, 1973), pages 341–343.

2 Comments

  1. Readers may also be interested in something on the web, an "experiment" in literary form and content, that's been circulating among AI aficionados, among others. It's a set of dialogues on education and technology, the third one of which (5_How&Why.pdf) bears particular relevance to the discussion in this article. It can be found on the simple webpage at http://www.ghostwrit.net.

  2. I can't see why achieving consciousness is regarded as an AI milestone. Why would robots need it?

    Something missing from your 2011 essay, "The Problem with 'Friendly' Artificial Intelligence," which brought me here, is machine learning. Robots won't need human input. They will write their own software, create their own networks and alliances, and invent their own new technologies. If perchance a robot finds that consciousness has advantages, it will pause for a good few milliseconds to bring that about; otherwise it will discard the idea.

    I'm pessimistic about AI. I foresee a world where everything can be done quicker, cheaper and better by robots. And I mean everything, from parenting to making love to making movies. When a robot displaces you from your job, wherever you go looking for work you will find that a robot got there first.

    Obviously the notion is absurd that the human owners of the robots will become ridiculously rich while the rest of us starve. Why would a robot want to be owned by an inferior intelligence? Well… some of us are owned by our cats…

    As far as the universal basic income goes, first the robots would have to be imbued with the human attitude to the environment, to preservation of all living creatures animal or vegetable, and the protection of inanimate natural beauty. This is by no means a given. The driverless car seems a good place to start, if humans want to create artificial morality. At some time a driverless car will find that a collision is unavoidable and it will have to decide which humans to kill or injure. This is the famous Trolley Problem.

    Finally, when humans are pensioned off, and assuming the robots think it's worth keeping us around, receiving a universal basic income reduces humans to the status of pets.

    Thanks for some extremely provocative ideas and I will be following your blog.
