The 2010 H+ Summit is back underway here at Harvard, running even later than yesterday. After the first couple of talks, the conference launches into a more philosophical block, which promises a break from the doldrums of most of the talks so far. First up in this block is George Dvorsky (bio, slides, on-the-fly transcript), who rightly notes that ethical considerations have largely gone unmentioned so far at this conference. And how. He also notes in a tweet that “The notion that ethicists are not needed at a conference on human enhancement is laughable.” Hear, hear.
Dvorsky’s presentation is primarily concerned with machine consciousness, and with ensuring the rights of new sentient computational lifeforms. He is not talking, he says, about robots like the ones we have today, which are not sentient but are anthropomorphized to evoke our responses as if they were. (Again, see Caitrin Nicol in The New Atlantis on this subject.) Dvorsky posits that these robots have no moral worth. For example, he says, you may have seen this video before, in which a robot that looks a bit like a dog is subjected to some abuse:
Even though many people want to feel sorry for the robot when it gets kicked, Dvorsky says, they shouldn’t, because it has no moral worth. Only things with subjective awareness have moral worth. I’d agree that moral worth doesn’t inhere in such a robot. But if subjective awareness is the benchmark, what about babies and the comatose, even the temporarily comatose? Do they have no moral worth? Also, it is not so simple a matter to say that we shouldn’t feel sorry for the robot even if it has no moral worth. Isn’t it worth considering the effects on ourselves when we override our instincts and intuitions of empathy toward what seem to be other beings, however aptly directed those feelings may be? Is protecting the rights of others entirely a matter of our rational faculties?
Dvorsky continues by describing problems raised by advancing the moral rights of machines. One, he says, is human exceptionalism. (And here the notion of human dignity gets its first brief mention at the conference.) Dvorsky derides human exceptionalism as mere “substrate chauvinism” — the idea that you must be made of biological matter to have rights.
He proposes that conscious machines be granted the same rights as human beings. Among these rights, he says, should be the right not to be shut down, and the right to own and control their own source code. But how does this fit with the idea of “substrate chauvinism”? I thought the idea was that substrate doesn’t matter. If it does matter, to the extent that these beings would have special sorts of rights, like owning their own source code, that not only don’t apply to humans but have no meaning for them, doesn’t this mean that there is some moral difference for conscious machines that must be accounted for rather than dismissed with the label “substrate chauvinism”?
George Dvorsky has a lot of work to do in resolving the incoherences in his approach to these questions. But he deserves credit for trying, and for offering the first serious, thoughtful talk at this conference. The organizers should have given far more emphasis and time to presenters like him. Who knows how many of the gaps in Dvorsky’s argument might have been filled had he been given more than the ten-minute slot allotted to everybody else here with a project to plug.
