Futurisms: Critiquing the project to reengineer humanity

Tuesday, May 17, 2011

Can we control AI? Will we walk away?

While the Singularity Hub normally sticks to reporting on emerging technologies, its primary writer, Aaron Saenz, recently posted a more philosophical venture that ties nicely into the faux-caution trope of transhumanist discourse raised in our last post on Futurisms.

Mr. Saenz is (understandably) skeptical about efforts to ensure that advanced AI will be “friendly” to human beings. He argues that the belief that such a thing is possible is a holdover from the robot stories of Isaac Asimov. He joins a fairly large chorus of critics of Asimov’s famous “Three Laws of Robotics,” although unlike many of those critics he also seems to understand that in the robot stories Asimov himself was exploring the consequences and adequacy of the laws he had created. In any case, Mr. Saenz notes that we already make robots that, by design, violate these laws (such as military drones), and he is very dubious that an intelligence advanced enough to learn and to modify its own programming could be genuinely restrained by mere human intelligence.

That’s a powerful combination of arguments, playing one anticipated characteristic of advanced AI (self-modification) off against another (ensuring human safety), and showing that the reality of how we use robots already does, and will continue to, trump idealistic plans for how we should use them. So why isn’t Mr. Saenz blogging for us? A couple of intriguing paragraphs tell the story.

As he is warming to his topic, Mr. Saenz provides an extended account of why he is “not worried about a robot apocalypse.” Purposefully rejecting one of the best-known sci-fi tropes, he makes clear that in his view The Terminator, Battlestar Galactica, 2001, and The Matrix all got it wrong. How does he know they all got it wrong? Because these stories were not really about robots at all, but about the social anxieties of their times: “all these other villains were just modern human worries wrapped up in a shiny metal shell.”

There are a couple of problems here. First, what’s sauce for the goose is sauce for the gander: if all of these films are merely interesting as sociological artifacts, then it would only seem fair to notice that Asimov’s robot stories are “really” about race relations in the United States. But let’s let that go for now.

More interesting is the piece’s vanishing memory of itself. At least initially, advanced AI will exist in a human world, and will play whatever role it plays in relation to human purposes, hopes, and fears. But when Mr. Saenz dismisses the significance of human worries about destructive robots, he is forgetting his own observation that human worries are already driving us to create robots that are deliberately not bound by anything that would prevent them from killing a human being. Every generation of robots that human beings make will, of necessity, be human worries and aspirations trapped in a shiny metal shell. So it is not foolish to try to understand how the powers of robots and advanced AI might come to play an increasingly large role in the realm of human concerns, since human beings have a serious capacity for doing very dangerous things.

Mr. Saenz is perfectly aware of this capacity, as he indicates in his remarkable concluding thoughts:
We cannot control intelligence — it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control, doesn’t mean that I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.
Here, unfortunately, is the transhumanist magic wand in action, sprinkling optimism dust and waving away all problems. Yes, humans are capable of horrible things, but no real worry there. Why not? Because Mr. Saenz never bets against intelligence — examples of which would presumably include the intelligence that allows humans to do horrible things, and to, say, use AI to do them more effectively. And when worse comes to worst, we will “walk away” from Armageddon. Kind of like in Cormac McCarthy’s The Road, I suppose. That is not just whistling in the dark — it is whistling in the dark while wandering about with one’s eyes closed, pretending there is plenty of light.

Tuesday, May 10, 2011

The Disinformation Campaign of Transhumanist “Caution”

In my last post on ironic transhumanist tech failures, there was one great example I forgot to mention. If you subscribe to the RSS feed for the IEET blog, you may have noticed that most of their posts go up on the feed multiple times: my best guess is that, due to careless coding in their system (or a bad design idea that was never corrected), a post goes up as new on the feed every time it’s even modified. For example, here’s what the feed’s list of posts from early March looks like:

[Screenshot: the feed’s list of posts from early March, with the same IEET entries repeated several times.]

Ouch — kind of embarrassing. Every project has technical difficulties, of course, but — well, here’s another example:

[Screenshot: another run of duplicated IEET posts in the feed.]

Question: can we develop and test machine minds and uploads ethically? Well, one way to get at that question is to ask what it might say about technical fallibility when such a prominent transhumanist advocacy organization has not yet figured out how to eliminate inadvertent duplicates on its RSS feed, and how such an error might play out when, say, uploading a mind, where the technical challenges are a bit more substantial, and the consequences of accidentally creating copies a bit more tricky.
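
To make the suspicion concrete, here is a minimal sketch (in Python) of the kind of feed-generation bug that produces exactly this symptom. It is purely hypothetical: we cannot see the IEET’s actual code, only its feed, and the function names and example URL below are invented for illustration. The point is simply that if an item’s GUID is derived from its last-modified timestamp, every edit makes the post look brand new to a feed reader, whereas a GUID keyed to the permalink alone does not.

    def buggy_guid(post):
        # Hypothetical bug: the GUID bakes in the last-modified time,
        # so every edit to the post yields a "new" GUID.
        return f"{post['link']}#{post['modified']}"

    def stable_guid(post):
        # The fix: key the item on its permalink alone, so an edited
        # post is recognized as one the reader has already seen.
        return post["link"]

    # Simulate a feed reader polling across three edits of one post.
    post = {"link": "http://ieet.org/example-post", "modified": ""}
    seen = set()
    for edit_time in ("2011-03-01T09:00", "2011-03-01T14:30", "2011-03-02T08:15"):
        post["modified"] = edit_time
        guid = buggy_guid(post)       # swap in stable_guid(post) to compare
        if guid not in seen:          # the reader's duplicate check
            seen.add(guid)
            print("new item:", guid)  # fires three times: three "copies"

With buggy_guid, the reader announces the same post three times; with stable_guid, just once. A one-line difference in how items are keyed is enough to make a feed quietly multiply its posts.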

Don’t get me wrong — we all know that the IEET is all about being Very Serious and Handling the Future Responsibly. I mean, look, they’re taking the proper precaution of thinking through the ethics of mind uploading long before that’s even possible! Let’s have a look at that post:

Sometimes people complain that they “did not ask to be born.” Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical. The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee.

Probably the most famous example of the “complaint” Ms. Rothblatt alludes to comes from Kurt Vonnegut’s final novel, Timequake, in which he depicts Hitler uttering the words, “I never asked to be born in the first place,” before shooting himself in the head. It doesn’t seem that either fictional-Hitler’s or real-Vonnegut’s complaint was answered satisfactorily by their parents’ “implicit biological consent” to their existence. And somehow it’s hard to imagine that either man would have been satisfied if an ethics committee had rendered the judgment instead.

Could Vonnegut (through Hitler) be showing us something too dark to see by looking directly in its face? Might these be questions for which we are rightly unable to offer easy answers? Is it possible that those crutches of liberal bioethics, autonomy and consent, are woefully inadequate to bear the weight of such fundamental questions? (Might it be absurd, for example, to think that one can write a loophole to the “temporal illogicality” of consenting to one’s own existence by forming a committee?) Apparently not: Rothblatt concludes that “I think practically speaking the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution” and “Ultimately ... the seeming catch-22 of how does a consciousness consent to its own creation can be solved.” Problem solved!

----

In a similar vein, in response to my last post’s shameless opportunism in pointing out the pesky ways that technical reality undermines technological fantasy, Michael Anissimov commented:

In my writings, I always stress that technology fails, and that there are great risks ahead as a result of that. Only transhumanism calls attention to the riskiest technologies whose failure could even mean our extinction.

True enough. Of course, only transhumanism so gleefully advocates the technologies that could mean our extinction in the first place... but it’s cool: after his site got infested by malware for a few days, Anissimov got Very Serious, decided to Think About the Future Responsibly, and, in a post called “Security is Paramount,” figured things out:

For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex.... This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation.... Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn’t go too well for the weak — just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.

Hey, did you know nature is bad and people can be pretty bad too? Getting your blog knocked offline for a few days can inspire some pretty cosmic navel-gazing. (As for the last part, though, it shouldn’t be a worry, as Hitler+ and Stalin+ will have had ethics committees that consented to their existences, thereby resolving all their existential issues.)

----

The funny thing about apocalyptic warnings like Anissimov’s is that they don’t seem to do a thing to slow transhumanists’ enthusiasm for new technologies. Notably, despite his Serious warnings, Anissimov doesn’t even consider the possibility that the whole project might be ill-conceived. In fact, despite implicitly setting himself outside and above them, Anissimov is really one of the transhumanists he describes in the same post, who “see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly.” This is because, for all his lofty rhetoric of caution, he remains fundamentally credulous about the promise of transformative new technologies.

Take geoengineering: in Anissimov’s first post on the subject, he cheered the idea of intentionally warming the globe for certain ostensible benefits. Shortly thereafter, he deleted the post “because of substantial uncertainty on the transaction costs and the possibility of catastrophic global warming through methane clathrate release.” It took someone pointing to a specific, known vector of possible disaster for him to reconsider; until then, a few minutes’ thought about what would be the most massive engineering project in human history was sufficient to declare it just dandy.

Of course, in real life, unlike in blogging, you can’t just delete your mistakes — say, releasing huge amounts of chemicals into the atmosphere that turn out to be harmful (as we’re learning today when it comes to carbon emissions). Nor did it occur to Anissimov that the one area on which he will readily admit concern about the potential downsides of future technologies — security — might also be an issue when it comes to granting the power to intentionally alter the earth’s climate to whoever has the means (whether they’re “friendly” or not).

----

One could go on at great length about the unanticipated consequences of transhumanism-friendly technologies, or the unseriousness of most pro-transhumanist ethical inquiries into those technologies. These points are obvious enough.

What is more difficult to see is that Michael Anissimov, Martine Rothblatt, and all of the other writers who proclaim themselves the “serious,” “responsible,” and “precautious” wing of the transhumanist party — including Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil, among others — in fact function as a sort of disinformation campaign on behalf of transhumanists. They toss out facile work that calls itself serious and responsible, capable of grasping and dealing with the challenges ahead, when it could hardly be any less so — but all that matters is that someone says they’re doing it.

Point out to a transhumanist that they are as a rule uninterested in deeply and seriously engaging with the ramifications of the technologies they propose, or suggest that the whole project is more unfathomably reckless than any ever conceived, and they can say, “but look, we are thinking about it, we’re paying our dues to caution — don’t worry, we’ve got people on it!” And with their consciences salved, they can go comfortably back to salivating over the future.

Monday, May 9, 2011

They wuz robbed

Despite some promising early results, and a 30th-place finish in the online public poll, it looks like Ray Kurzweil did not, after all, make the Time 100 list of the most influential people in the world. The list was ultimately selected by the magazine’s editors to highlight the most influential “artists and activists, reformers and researchers, heads of state and captains of industry. Their ideas spark dialogue and dissent and sometimes even revolution.”

While I can contain my outrage, I have to admit that the result is bizarre given the stated criteria. Kim Jong Un, the done-nothing son of the tyrant of North Korea, makes the list, but not Ray Kurzweil? Prince William and Kate Middleton (notably counted as one person on the list) are a couple of cute kids, and I enjoyed watching their wedding, but what original or influential ideas have they had? Patti Smith but not Ray Kurzweil? Amy Poehler but not Ray Kurzweil? Lionel Messi but not Ray Kurzweil?

I’m hard-pressed to explain the result. Is it that transhumanism is not after all winning, let alone won? That those of us interested in it (for or against) are in fact merely patrons of a small and not yet fashionable intellectual boutique? Or is it that transhumanist goals are so mainstream (longer! better! faster!) that the team at Time can’t see them as anything but self-evident truths? Does the truth lie somewhere in between? Or is the list just another example of the sorry results you get when you try to repackage and extend the lifetime of mortal things like once-influential news magazines?

[Royal wedding image via Mashable.]