[Continuing coverage of the 2009 Singularity Summit in New York City.]
Peter Thiel is a billionaire, known for cofounding PayPal and for his early involvement in Facebook. He may also be the largest benefactor of the Singularity Summit and of longevity-related research. His talk today is on "Macroeconomics and Singularity." (Abstract and bio.)
Thiel begins by outlining common concerns about the Singularity, and then asks the members of this friendly audience to raise their hands to indicate which they are worried about:
1. Robots kill humans (Skynet scenario). Maybe 10% raise their hands.
2. Runaway biotech scenario. 30% raise hands.
3. The "gray goo scenario." 5% raise hands.
4. War in the Middle East, augmented by new technology. 20% raise hands.
5. Totalitarian state using technology to oppress people. 15% raise hands.
6. Global warming. 10% raise hands. (Interesting divergence again between transhumanism and environmentalism.)
7. Singularity takes too long to happen. 30% raise hands — and there is much laughter and applause.
Thiel says that, although it is rarely talked about, perhaps the most dangerous scenario is that the Singularity takes too long to happen. He notes that several decades ago, people expected American real wages to skyrocket and working hours to shrink. Americans were supposed to be rich and bored. (Indeed, Thiel doesn't mention it, but the very first issue of The Public Interest, back in 1965, included essays worrying about precisely this, under the heading "The Great Automation Question.") But it didn't happen: real wages have stagnated since 1973, and Americans work many more hours per year than they used to.
Thiel says we should understand the recent economic problems not as a housing crisis or a credit crisis but as a technology crisis. All forms of credit involve claims on the future. Credit works, he says, against a background of growth: if the economy grows every year, you won't have a credit crisis. A credit crisis means that claims on the future can't be met.
He says that if we want to keep society stable, we have to keep growing, or else we can't make good on all of the claims we've already leveraged against projected growth. Global stability, he says, depends on a "Good Singularity."
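Thiel's growth-and-credit argument can be made concrete with a toy compounding model (my illustration, not anything Thiel presented): creditors' claims compound at an interest rate, the real economy grows at some other rate, and the claims are payable only if growth keeps pace.

```python
# Toy sketch of "credit as a claim on future growth" (my own illustration,
# with made-up rates). Debt compounds at interest rate r; income grows at
# rate g. If g keeps pace with r, the debt burden is stable; if growth
# stalls, the same claims become unpayable and the ratio diverges.

def debt_to_income(r, g, years, debt=1.0, income=1.0):
    """Return the debt-to-income ratio after `years` of compounding."""
    for _ in range(years):
        debt *= 1 + r      # claims on the future compound at the interest rate
        income *= 1 + g    # the real economy grows (or doesn't)
    return debt / income

# Growth matching the interest rate: the burden never worsens.
print(round(debt_to_income(r=0.05, g=0.05, years=30), 2))  # → 1.0

# Stagnant growth: the identical claims quadruple relative to income.
print(round(debt_to_income(r=0.05, g=0.00, years=30), 2))  # → 4.32
```

The point of the sketch is only the asymmetry: the claims are fixed when the credit is extended, so a shortfall in growth shows up as a crisis on the credit side.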
In essence, we have to keep growing because we've already bet on the promise that we'll grow. (I tried this argument in a poker game once for why a pair of threes should trump a flush — I already allocated my winnings for this game to pay next month's rent! — but it didn't take.)
Thiel's talk is over halfway into his forty-minute slot. He is an engaging speaker with a fascinating thesis. The questioners are lining up quickly — far more lined up than for any other speaker so far, including Kurzweil.
In response to the first question, about the current recession, Thiel predicts there will be no more bubbles in the next twenty years: the economy will either boom continuously or stay bust, because people are too aware now and the cycle pattern has been broken. The next questioner asks about regulation and government involvement — should all this innovation happen in the private sector, or should the government fund it? Thiel says that the government isn't anywhere near focused enough on science and technology right now, and that he doesn't think it has any role to play in innovation.
Another questioner asks about Francis Fukuyama's book, Our Posthuman Future, in which he argues that once we create superhumans, there will be a superhuman/human divide. (Fukuyama has also called transhumanism one of the greatest threats to the welfare of humanity.) Thiel says it's implausible — technology filters down, just like cell phones. He says that it's a non-argument and that Fukuyama is hysterical, to rapturous applause from the audience.
After standing in line, holding my laptop in one hand and blogging with the other, I take the stand and ask Thiel about the limits of his projection: if we're constantly leveraging against the future, what happens when growth reaches its limits? Will we hit some sort of catastrophic collapse? He says that we may reach some point in the future where we have, basically, a repeat of the last two years: growth falls short and there is another collapse. So are there no limits to growth, I ask? He says that if we hit other road bumps we'll just have to deal with them then. I try again, but the audience becomes restless and Thiel essentially repeats his point, so I sit down.
What I should have asked was: Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable, whether it happens now or later?