Futurisms: Critiquing the project to reengineer humanity

Wednesday, December 17, 2014

Near, Far, and Nicholas Carr

Nicholas Carr, whose new book The Glass Cage explores the human meaning of automation, last week put up a blog post about robots and artificial intelligence. (H/t Alan Jacobs.) The idea that “AI is now the greatest existential threat to humanity,” Carr writes, leaves him “yawning.”

He continues:

The odds of computers becoming thoughtful enough to decide they want to take over the world, hatch a nefarious plan to do so, and then execute said plan remain exquisitely small. Yes, it’s in the realm of the possible. No, it’s not in the realm of the probable. If you want to worry about existential threats, I would suggest that the old-school Biblical ones — flood, famine, pestilence, plague, war — are still the best place to set your sights.

I would not argue with Carr about probable versus possible — he may well be right there. But later in the post, quoting from an interview he gave to help promote his book, he implicitly acknowledges that there are people who think that machine consciousness is a great idea and who are working to achieve it. He thinks that their models for how to do so are not very good and that their aspirations “for the near future” are ultimately based on “faith, not reason.”

[Image: near ... or far?]
All fine. But Carr leaves one question unanswered and fails to observe a salient point. First, he seems willing to commit to his skepticism only for “the near future.” That is prudent, but then one might want to know why we should not be concerned about a far future for which efforts today may lay the groundwork, even if only by eliminating certain possibilities.

Second, what he does not pause to notice is that everyone agrees that “flood, famine, pestilence, plague, war” are bad things. We spend quite a serious amount of time, effort, and money trying to prevent them or mitigate their effects. But at the same time, there are people attempting to develop machine consciousness, and while they may not get the resources or support they think they deserve, the tech culture seems largely on their side (even if it has its dissenters). So when there are people saying that an existential threat is the feature and not the bug, isn’t that something to worry about?

Friday, December 5, 2014

Margaret Atwood’s Not-Very-Deep Thoughts on Robots

Margaret Atwood has been getting her feet wet in the sea of issues surrounding developments in robotics, and she comes away with some conclusions of corresponding depth. Robots, she says, are just another of the extensions of human capacity that technology represents; they embody a perennial human aspiration; maybe they will change human nature, but what is human nature anyway?

This is all more or less conventional stuff, dead center of the intellectual sweet spot for the Gray Lady, until Atwood gets to the very end. What really concerns her seems to be that we would commit to a robotic future and then run out of electricity! That would pretty much destroy human civilization, leaving behind “a chorus of battery-powered robotic voices that continues long after our own voices have fallen silent.” Nice image — but long after? That’s some battery technology you’ve got there. The robots would not care to share any of that power?

As I discuss in my new book Eclipse of Man: Human Extinction and the Meaning of Progress, most of those who believe in “Our Robotic Future,” as Atwood’s piece is titled, do so with the expectation that it is part and parcel of an effort at overcoming just the kind of Malthusian scarcity that haunts Atwood. They may of course be wrong about that, but given the track record of ongoing innovation in the energy area, it is hard to see why one would strain at this particular gnat.

Then again, the NYT essay suggests that Atwood’s literary knowledge of things robotic ends by the 1960s. Atwood’s own bleak literary futures tend to focus on the biological; maybe she has not yet gotten the transhumanist memo.

—Charles T. Rubin, a Futurisms contributor and contributing editor to The New Atlantis, is a Fellow of the James Madison Program at Princeton University.

Wednesday, December 3, 2014

Human Flourishing or Human Rejection?

Sometimes, when we criticize transhumanism here on Futurisms, we are accused of being Luddites, of being anti-technology, of being anti-progress. Our colleague Charles Rubin ably responded to such criticisms five years ago in a little post he called “The ‘Anti-Progress’ Slur.”

In his new book Eclipse of Man, Professor Rubin explores the moral and political dimensions of transhumanism. And again the question arises: if you are opposed to transhumanism, are you therefore opposed to progress? Here, in a passage from the book’s introduction, Rubin talks about the distinctly modern idea that humanity can better its lot and asks whether that goal is in tension with the transhumanist goal of transcending humanity:

Even if the sources of our misery have not changed over time, the way we think about them has certainly changed between the ancient world and ours. What was once simply a fact of life to which we could only resign ourselves has become for us a problem to be solved. When and why the ancient outlook began to go into eclipse in the West is something scholars love to discuss, but that a fundamental change has occurred seems undeniable. Somewhere along the line, with thinkers like Francis Bacon and René Descartes playing a major role, people began to believe that misery, poverty, illness, and even death itself were not permanent facts of life that link us to the transcendent but rather challenges to our ingenuity in the here and now. And that outlook has had marvelous success where it has taken hold, allowing more people to live longer, wealthier, and healthier lives than ever before.

So the transhumanists are correct to point out that the desire to alter the human condition runs deep in us, and that attempts to alter it have a long history. But even starting from our perennial dissatisfaction, and from our ever-growing power to do something about the causes of our dissatisfaction, it is not obvious how we get from seeking to improve the prospects for human flourishing to rejecting our humanity altogether. If the former impulse is philanthropic, is the latter not obviously misanthropic? Do we want to look forward to a future where man is absent, to make that goal our normative vision of how we would like the world to be?

Francis Bacon famously wrote about “the relief of man’s estate,” which is to say, the improvement of the conditions of human life. But the transhumanists reject human life as such. Certain things that may be good in certain human contexts — intelligence, pleasure, power — can become meaningless, perverse, or destructive when stripped of that context. By pursuing these goods in abstraction from their human context, transhumanism offers not an improvement in the human condition but a rejection of humanity.

For much more of Charlie Rubin’s thoughtful critique of transhumanism, pick up a copy of Eclipse of Man today.