[Continuing coverage of the 2009 Singularity Summit in New York City.]
And now, Ray Kurzweil’s talk responding to critics of the Singularity. (Abstract here. My coverage of his previous talk is here.)
“If everything is going to hell in a handbasket,” says Kurzweil, starting off, “it’s the exponentially growing technologies of GNR [Genetics, Nanotechnology, Robotics] that will save us.”
Kurzweil begins by responding to people who say, “Yeah, but computers can’t do this-and-that.” He lists several things that critics once insisted computers could never do but that computers now can do. Presumably there are many more things they can’t yet do but eventually will be able to. This is an important point. Now he’s responding to the “argument from incredulity,” which he rightly notes is weak and has been proven wrong many times before.
Kurzweil is rehashing a lot of old ground. Most of this, like his talk yesterday, is straight out of his book, down to the details of the points he’s making and the graphics he’s using. Whether you agree or disagree with him, it’s clear that Kurzweil is picking really low-hanging fruit. He doesn’t engage the smartest critics at the highest level. He just responds to a lot of common misconceptions, which is an easy thing for a smart person in any debate to do. The audience eats it up, though. As I noted in the last post, sticking it to people (though to whom, I’m still not sure) seems to be a common theme of the conference’s speakers.
Kurzweil now points out that whenever we make new innovations or discoveries, they are easily dismissed because the mystery is gone. (For example, genome sequencing is already not a big deal, or autonomous vehicles are nothing to write home about, because we understand how they work.) This is a fair point, and an important rejoinder to “arguments from incredulity.” But Kurzweil doesn’t acknowledge the other side of the coin: As we learn more and conquer more through science and technology, our sense of wonder at the world — and even at our own achievements — actually diminishes. And what about the sense of wonder and awe surrounding the Singularity itself: Won’t it, too, seem mundane and disappointing once it’s actually achieved? And where will we go from there?
Kurzweil addresses the problems with relinquishment of potentially problematic technologies. (Relinquishment, remember, was most famously proposed nearly a decade ago by Bill Joy in his essay “Why the Future Doesn’t Need Us,” which he wrote after meeting Kurzweil.) First, Kurzweil says, relinquishment would deprive us of the great benefits of these technologies. Second, relinquishment could only be enforced by totalitarian governments. And third, relinquishment would just drive the new technologies underground. The latter two points, he says, were the message of Huxley’s Brave New World. (I think he needs to read the book again.)
Kurzweil says that the best way to ensure that future A.I.s follow the values we want them to follow is to make sure we hold those values now. We need to be talking more about our values now, and not just among engineers; that, he says, is why we’re trying to foster discussion at this conference. He then goes on to talk about the Luddite intellectual movement and its hostility to technology.
The rest of the talk is a continued rehash of his book — and even a rehash of his talk from yesterday. One Twitterer notes, “Starting to think that I could repeat Kurzweil’s stump speech word for word.” But hey, the people are here for the man, not for the ideas. (Will there still be celebrity worship after the Singularity?)
