Futurisms: Critiquing the project to reengineer humanity

Monday, October 12, 2009

Moral relativism and the future of technology

There are aspects of the arguments of advocates of human re-engineering that, for what it’s worth, I agree with. One is that nanotechnology, or more specifically molecular manufacturing, holds the potential (if it is possible at all) to alter a great many things that we currently take for granted about the shape of human life. It may not yet be clear how exactly we might find ourselves in a world where something like the replicators from Star Trek are possible, but a fair amount of research and development is currently pointed directly or indirectly in that direction, and I would not want to bet against it. I’m not sure whether this belief makes me a technological optimist or a technological pessimist, which is one reason why I don’t find those terms very helpful when we try to think seriously about the future of technology.

A while back I did a phone interview with a reporter from the Miami Herald about the nano-future, and recently I found his story online. The comments that follow the story are worth noting, because they are common responses to the point I tried to make to the reporter — that not all the potential of nanotechnology is for the good, and that some of the things that sound good may not really, on reflection, be good.

At first glance, the criticisms in the comments section sound contradictory. One writer notes that all technologies have good uses and bad uses, and that since there is nothing new about that, the Herald, as a newspaper, should not bother to feature stories making the point. But a second commenter argues, in effect, that since molecular manufacturing could put an end to scarcity, it will have the very good effect of putting an end to all conflict, and the only bad remaining will be people (like me, apparently) who want to tell others what they should or should not do. So from the first point of view we should just go ahead with nanotechnology because it doesn’t really change anything, and from the second point of view we should go ahead because it will change (nearly) everything.

The link between these two arguments is moral relativism. The second author speaks in quasi-nonrelativistic terms of a “fundamental right” to life, but he or she seems to mean by that a right not to die, or a right to do whatever one wants with one’s life. That is quite a distance from the meaning intended by those who articulated a natural right to life. What is so attractive about libertarian utopianism unless one believes that all “lifestyle” choices are morally incommensurable, that the height of moral wisdom is “do your own thing” (and for as long as possible)?

On the other hand, the truism that all technology has good and bad uses is trite only if one believes that judging between them is uninteresting — that such judgments are nothing more than matters of subjective opinion. Otherwise one might think it very important indeed to find ways to maximize the good and minimize the bad.

What is really at stake here is not whether some people want to boss others around, but whether technological change is worth thinking about at all. Moral relativism makes it easy not to think about it, to just sit back and let things happen while reserving the right to protest when some arbitrary personal line is crossed. I’m skeptical that disarming our moral judgment is the best way to deal with the challenges of our ever-increasing powers over nature.


  1. Everyone knows that morality is contractual in nature.

  2. @kurt9 -- everyone since Locke, that is. But if he's right, then why adhere to it?
    My money's with Aristotle on this one. By nature, we're political animals, and ethics is part and parcel of politics.

  3. You could have just titled your post "What Are Nanotechnology's Externalities?" By that I mean what the free market will fail to capture. A company invests in R&D with some certainty of return. If a company is investing in R&D in the area of nanotechnology, then there is some near-term expectation of returns on that investment, i.e., there is an identifiable market. Generally, a company has no interest in killing its customers, but there are externalities associated with most technologies that the market fails to capture, and potential liabilities to the manufacturer that are unknowable. Some of these externalities are positive and some are negative. It is often not clear when regulation is necessary, and with a fast-moving technology like nanotech, regulation might come too late to be effective. How do you anticipate these negative externalities? We can have an open and free discussion of what it means to be human. Humanity today is defined by family and community, and morality is buttressed by mortality. Innovation is driven by want of something, and prosperity is driven by innovation. If these foundations are somehow challenged by technologies that we see on the horizon, then we must be certain that the evolution of our tools is what we want, because they will define who we are.

    One negative social externality of molecular manufacturing could be the loss of the concept of tools as we view them today. Molecular manufacturing might be used to build everything from infrastructure to food and clothing. Today we value simple tools and more complex tools for what they can do to help us make something, but if tools lose their meaning and change into an amorphous general category of replicator, then we might change as a civilization in ways that diminish our humanity. Technology is simply knowledge plus skills, and each of us grasps this because most of us know that to exploit knowledge and skill we must have tools. I might value a hammer more right now because I need it to lay some hardwood flooring, and tomorrow less. But I grasp the meaning of this tool. I don't know how I would look at a replicator and nanobots, but it would be substantially different from how I now view the collection of tools in my basement.

  4. I see nothing wrong with the notion that moral wisdom is defined as doing one's own thing as long as you do not cause intentional harm to others. In fact, I believe in this 100%. I fail to see why there is anything wrong with this.

    We all have different dreams and goals, different likes and dislikes. It is simply not possible to craft a single moral standard that can satisfy everyone. There is no one perfect social or moral system that is suitable for everyone.

    Libertarian transhumanists are not the utopians. We have no desire to "convert" others to our ways. We simply want to be left alone to do our own thing and to pursue our own objectives. It is those of you who criticize us who are the utopians. You think that you can define one perfect system for everyone and force everyone to accept it. You are so clueless as not to recognize this as the road to tyranny.
