Salamon starts off by highlighting the apparently stupid reasons people do what they do — habit, culture, and so on — and says they could achieve their goals much more efficiently with a little strategic thinking. Humans tend to act from roles, she says, not from goals. For example, people spend four years in medical school because they find the role of doctor important, rather than doing basic comparative research on salaries. Apparently roles cannot be goals. (Hmm, I wonder why Salamon does things like speak at conferences? Purely because it was the course of action that maximized her finances?)
Salamon continues to lament the way people fail to think strategically when making decisions, extolling the virtues of writing down estimates and using them to set goals. It's a strangely long wind-up, going on and on about why back-of-the-envelope calculations are good. (Does Salamon think she invented utilitarianism?)
Okay, now she's finally going for it: her back-of-the-envelope calculation of the aggregate value and risk of A.I. research. The risk from A.I., she says, is 7 percent. I guess she means a 7 percent chance of the world ending. The number of lives affected: about 7 billion. She breezes through more calculations and manages to come up with some dollar figure for the increased value to life. (Such estimates always have a touch of the absurd about them, no matter the context; here they seem especially silly.)
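To give a sense of the kind of arithmetic at work, here is a minimal sketch of an expected-value calculation in that spirit. Only the 7 percent risk figure and the roughly 7 billion lives come from the talk; the dollar value per life is a purely hypothetical placeholder, since she didn't share her exact figure.

```python
# Back-of-the-envelope expected-value sketch (illustrative only).
P_RISK = 0.07            # stated chance of A.I.-driven catastrophe (from the talk)
LIVES_AFFECTED = 7e9     # roughly everyone alive (from the talk)
VALUE_PER_LIFE = 2e6     # hypothetical dollars per life -- NOT from the talk

# Expected lives at stake: probability times population.
expected_lives = P_RISK * LIVES_AFFECTED

# Convert to a dollar figure using the (hypothetical) value per life.
expected_dollars = expected_lives * VALUE_PER_LIFE

print(f"Expected lives at stake: {expected_lives:.2e}")
print(f"Expected dollar stakes:  ${expected_dollars:.2e}")
```

Multiplying a guessed probability by the entire world population is exactly why such estimates can feel absurd: every output digit inherits the softness of the inputs.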
She breezes through the rest of the talk, too. Her conclusion is that we should think "damn hard" about the benefits and risks of the Singularity. And we should fund A.I. research and the Singularity Institute. A very underwhelming end to the summit, and quite an anticlimax after the previous panel.
And that's it for the conference. I'll have a final wrap-up later tonight (or possibly tomorrow), and will be going back and inserting a few more pictures into some of the earlier posts. Check back soon, and stay tuned, as this coverage marks just the beginning of our discussions here on "Futurisms."