David Gelernter has written a characteristically thought-provoking essay about what guidance might be gleaned from Judaism for how human beings ought to treat “sophisticated anthropoid robots” with artificial intelligence powerful enough to allow them to respond to the world in a manner that makes them seem exactly like us. Taking his cue from Biblical and rabbinic strictures concerning cruelty to animals, he argues that because these robots “will seem human,” we should avoid treating them badly lest we become “more oblivious of cruelty to human beings.”

This conclusion, which one might draw on Aristotelian as well as Biblical grounds, is a powerful one — and in a world of demolition derbies and “Will It Blend?,” where even a video of a washing machine being destroyed can go viral, it is hard to deny that Gelernter has identified a potentially serious issue. It was raised with great force in the “Flesh Fair” scenes of the 2001 movie A.I., where we see robots being hunted down, herded together, and subjected to various kinds of creative destruction in front of howling fans. Meanwhile, the robots look on quietly with what I have always found to be heartbreaking incomprehension.

And yet it also seems to me that the ringleader at the Flesh Fair, vicious though he is, is not entirely wrong when he harangues the crowd about the need to find a way to assert the difference between humans and robots in a world where it is becoming increasingly easy to confuse the two. And it is in this connection that I wonder whether Gelernter’s argument has sufficiently acknowledged the challenge to Jewish thought being posed by at least some of the advocates of the advanced artificial intelligence he describes.

Gelernter knows full well the “sanctity and ineffable value” that Judaism places on human life, which is to say he knows that in Jewish thinking human beings are unique within creation. In such a framework, it is understandable why the main concern with animal (or robot) cruelty should be the harm it might do to “our own moral standing” or “the moral stature and dignity of human beings.” But the moral dignity of human beings and our uniqueness in creation are precisely what is coming under attack from transhumanists, as well as from the less potent but more widespread forms of scientism and technophilia in our culture. Gelernter is certain that the robot will feel no pain; but what of those who would reply that robots will “process” an electrical signal from some part of their bodies that triggers certain kinds of functions — which is, after all, what pain “really” is? Gelernter is certain that these anthropoid robots will have no inner life; but what of those, such as Tor Nørretranders and Daniel Dennett, who are busy arguing that what we call consciousness is just a “user illusion”?

I don’t doubt that Gelernter could answer these questions. But I do doubt that his answers would put an end to all the efforts to convince us that we are, after all, simply “meat machines.” And if more and more we think of ourselves as “meat machines,” then the “pernicious incrementalism” of cruelty to robots about which Gelernter is reasonably concerned points in another direction as well: not that we start treating “thous” as “its,” but that in transforming “its” into “thous” we take all the moral meaning out of “human.”

It probably should not surprise us that there are dangers in kindness to robots as well as in cruelty, but the fact that there are might prompt us to wonder about the reasons that seem to make going down this road so compelling. Speaking Jewishly, Gelernter might recall the lesson of the pre-twentieth-century accounts of the golem, the legends of pious men creating an artificial anthropoid that go back to the Talmud. Nearly from the start two things are clear about the golem: only the wisest and most pious could ever hope to make one, but the greatest wisdom would be to know how and not to do so.

5 Comments

  1. The Golem story (and its thematic substance in particular) has always been pertinent to the debate around artificial intelligence. No doubt about it. But I think golems have something else to say.

    The first recorded usage of the term occurs in Psalm 139 (NKJV):

    14 I will praise You, for I am fearfully and wonderfully made;
    Marvelous are Your works,
    And that my soul knows very well.
    15 My frame was not hidden from You,
    When I was made in secret,
    And skillfully wrought in the lowest parts of the earth.
    16 Your eyes saw my substance [Goylem], being yet unformed.
    And in Your book they all were written,
    The days fashioned for me,
    When as yet there were none of them.

    The story of the Golem is not built around the idea of a mere robot. It is goylem, the form that has yet to take shape. In other words, the Golem is anticipatory, a proto-man.

    A parable about arrogance, certainly. But it's about the arrogance of neglecting our creations, particularly when they become difficult to distinguish from our children.

  2. The relationship between the rare Biblical usage of the term golem and the subsequent mystical use is, to say the least, contested ground. What is clearer is that in the vast majority of discussions of the golem prior to the twentieth century, issues of neglect and arrogance simply do not arise. They are absent even in the ur-texts of twentieth-century golem legends by Yudl Rosenberg and Chaim Bloch where, contrary to most of the tradition, the golem is at least conceived with a useful purpose in mind. In short, the golem legend is useful for us in part because it is so little about what we might expect it to be about.

  3. It's funny… I too found the movie "A.I." to be quite moving, and the "Flesh Fair" to be disturbing… but for different reasons.

    From my point of view, humans are "meat machines"… but I don't attach the negative connotations to that phrase that you seem to be attaching.

    We are machines… conscious machines, and that fact is more than sufficient justification for placing value upon us. And, likewise, entities which are conscious and not made of squishy carbon compounds – say, like artificial intelligences? – are no less special than we are. The fact that we are machines does not inspire me to be cruel to other sentiences, but quite the opposite: It inspires me towards kindness and compassion, and the recognition that what matters is sentience, and not the means by which it happens to be instantiated.

    I don't need to view reality as a hierarchy with humans at the top of the pyramid in order to endow humans with value. Sentience has value, consciousness has value, and it doesn't particularly matter what physical substrate happens to support that consciousness. The Transhumanist movement doesn't endanger our ability to value consciousness or sentience… it just provides a larger potential palette and canvas upon which consciousness may be instantiated. Transhumanism doesn't endanger our humanity… only our illusions as to what that humanity consists of, and what features make it important. The only people who need fear Transhumanism are those who feel they have no choice but to cling to ancient myths as to the origin and nature of humanity, myths which they feel are the only possible foundation for asserting humanity's value.

    I also don't think you're characterizing the work of Tor Nørretranders and Daniel Dennett quite accurately. What Mr. Nørretranders explains in his book, "The User Illusion," is not that consciousness doesn't exist, but that it is not the homogeneous, indivisible whole that a more naive analysis might indicate.

  4. "Nearly from the start two things are clear about the golem: only the wisest and most pious could ever hope to make one, but the greatest wisdom would be to know how and not to do so."

    Prof. Rubin, you have once again hit the ball out of the park. I have said it myself many times, but never so eloquently.

    Frankly, like so many of its active pioneers (which I am not), I am fascinated by the concept and the technology of humanoid artificial intelligence, or artificial general intelligence, as I first called it. Fascinated to understand how it might be created, and what it would mean. And equally convinced that to create it is something we should not do.

    Even if we think the technology is one that could be immensely useful, we should know that to create a machine in the image of Man is to create a machine that by definition cannot be controlled — as the Golem myth, and its many retellings in science fiction, would tell us. And one which, by definition, should not be controlled, should not be enslaved or made to suffer. And one which, by definition, would be a threat to all humanity.

    Those who think they can define a way to create a trustworthy "friendly AI" are missing the point: any machine that is self-willed is by definition nobody's tool, is by definition out of control.

    We already have information systems that allow us to instantly retrieve more knowledge than any of us could possibly absorb in a lifetime. We have machines that can calculate trillions of times faster than any of us. We will soon have machines that see, hear, speak and understand. That can walk among us and do work for us, making and executing immediate plans in pursuit of high-level goals.

    Why is it hard to see that all this and more can be done, and must only be done, without creating machines that are in any way willful, or that could be mistaken for human? That implies they are not to be created in the image of Man: first because, as any parent should know, will is the kernel of the human mind from the very beginning; and second because, in deliberately creating machines that look like us, we have already begun to confuse ourselves about the difference between "other" and "tool."

  5. We have lived in a world where it was possible to distinguish a person from an it by fairly casual inspection. What AI threatens to do is take away that easy distinction.

    I wouldn't attack "the moral dignity of human beings" or "our uniqueness in creation." But the latter is not the basis of the former. Humanity is currently in a class by itself; if it's otherwise tomorrow, we'll have the same moral standing as ever, just not as much smugness.

    We are already confused if we think the present ease of distinguishing human from non-human comes from clarity on our part, or if we think the distinction is an inherently simple one. The question is which particular kinds of machines (ourselves included) have moral worth, and why.

    If the test threatens to become as hard as the distinction is important, we have the choice to throw up our hands or blur distinctions; to find out what we care about and how to make decisions about it; or to try to prevent or delay our having to face those questions.

    Every human being is a threat to the human race, but a threat we have some statistics about. To underestimate the threat of AI would be a practical mistake; to misjudge the moral standing of AIs in either direction would be a moral mistake. Facing those questions squarely will require some courage.
