Monday, November 30, 2009
The executive order formally creating the commission — what you might think of as the charter explaining the commission’s purpose and powers — was published today. It emphasizes policy-relevance: the commission is tasked with “recommend[ing] legal, regulatory, or policy actions” related to bioethics. This stands in contrast to its immediate predecessor, the President’s Council on Bioethics, the charter for which emphasized exploring and discussing over recommending. Since the former council’s website (bioethics.gov) has been taken down, we are pleased to announce that we have archived all of its publications here on the New Atlantis site. (The Council’s impressive website, which included transcripts of all its public meetings, will hopefully be restored somewhere online in its entirety soon; in the meantime, interested parties will have to make do with the incomplete record in the Internet Archive.)
The former council’s report that is most relevant to this blog is Beyond Therapy, a 2003 consideration of human enhancement. Perhaps most striking about that report is its modus operandi: instead of beginning with an analysis of novel and controversial enhancement technologies, the council chose to begin by examining human functions and activities that have been targeted for enhancement. “By structuring the inquiry around the desires and goals of human beings, we adopt the perspective of human experience and human aspiration, rather than the perspective of technique and power. By beginning with long-standing and worthy human desires, we avoid premature adverse judgment on using biotechnologies to help satisfy them.” Beyond Therapy is a powerful document, and it rewards careful attention. (We published a symposium of essays in response to the book.)
We will have more to say about the former council in the months ahead. But for now, one final amusing observation about the new commission: If you look closely at the executive order creating it, you will see that among the issues it is invited to discuss is “the application of neuro- and robotic sciences.” That’s right — President Obama’s new bioethics commission has been explicitly invited to take a look at robotics. Just the latest indication that the administration is worried about the looming robot threat.
Sunday, November 29, 2009
Monday, November 23, 2009
There is something about pictures of Earth from space that seems to call forth this judgment all the time; it is equivalent, I suppose, to the “those people look like ants” wonderment that used to be so common when viewing a city from the top of its tallest building. That humans are insignificant is a particularly common idea among those environmentalists and atheists who consider that their opinions are founded in a scientific worldview. It is also widely shared by transhumanists, who use it all the time, if only implicitly, when they debunk such pretensions as might make us satisfied with not making the leap to posthumanity.
But in fact, just as those people were not really ants, so it is not clear that we are so insignificant, even from the point of view of a science that teaches us that we are a vanishingly small part of what Michael Frayn, in his classic novel Sweet Dreams, called “a universe of zeros.” Let’s leave aside all the amazing human accomplishments in science and technology (let alone literature and the arts) that are required for Mr. Diaz to be able to call our attention to the video, and the amazing human accomplishments likewise necessary to produce the video. The bottom line is, we are the only beings out there observing what Earth’s weather looks like from space. Until we find alien intelligence, there is arguably no “observing” at all without us, and certainly no observations that would culminate in a judgment about how beautiful something is. At the moment, so far as we know (that is, leaving aside faith in God or aliens) we are the way in which the universe is coming to know itself, whether through the lens of science or aesthetics. That hardly seems like small potatoes.
Sometimes transhumanists play this side of the field, too. Perhaps we are the enlivening intelligence of a universe of otherwise dead matter, and it is the great task of humanity to spread intelligence throughout the cosmos, a task for which we are plainly unsuited in our present form. So onward, posthuman soldiers, following your self-willed evolutionary imperative! Those of us left behind may at least come to find some satisfaction that we were of the race that gave birth to you dancing stars.
It is interesting how quickly we come back to human insignificance; in this case, it is transhumanism’s belief in our vast potential to become what we are not, which makes what we are look so small.
Tuesday, November 17, 2009
1. Agriculture: decreased/more focused inputs, increased crop yields, quality, diversity, and reliability
2. Water: increased quality of water supply and more reliable and efficient distribution
3. Energy: more efficient energy production, transport and consumption, greater diversity of energy supplies
4. Transportation: increased speed and safety, greater energy efficiency
5. Food supply: improved quality and diversity along with increased safety and storage time
6. Space travel: reduced cost of routine manned space operations, increased capacity in exploratory efforts
7. Construction: more durable materials, increased simplicity and speed of commercial, residential and infrastructure construction
8. Military: increased ability to detect explosives and weapons of mass destruction, and to preempt their use or contain their consequences; increased reliability and precision of munitions
9. Medicine: increased safety, scope and availability of vaccination; less invasive and more precise surgery; personalized and/or narrowly targeted medical treatments; simplified diagnostics and treatments; better prosthetic devices for physical and neurological disabilities (and yes, I know about the thin line between therapy and enhancement, but we'll deal with that another time).
10. Nature: improved ability to predict extreme weather and geological events
11. Waste: treating waste products as resources
12. Communication: continued increases in speed, bandwidth and information connectivity
I’m not trying to be controversial or surprising here, nor to suggest that transhumanists are against any of these developments. But I am pointing out that progress is not the same as “the latest thing” or the most outré imaginings. Progress is not about being at the bleeding edge for its own sake, or having an idea that only a few people believe in, or being attracted to what is strange and unique. Let’s try not to confuse being for some less-than-controversial kinds of progress with being against progress simply. Any and all change is not progress, and somebody’s claim that a given change is “progress” should be taken as an invitation to critical thinking about what would make human life better — and not as the last word.
Monday, November 16, 2009
• Squishy but necessary: Last month, Athena Andreadis, the author of the book The Biology of Star Trek, had a piece in H+ Magazine throwing cold water on some visions of brain uploading and downloading. Money quote: “It came to me in a flash that many transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons.... The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.”
• Thanks, guys: We’re pleased to have fans over at the “Fight Aging” website, where they say we “write well.” The praise warms our hearts, it truly does. We only wish that those guys were capable of reading well. Their post elicited this response from our Futurisms coauthor Charles Rubin: “Questioning what look to us to be harebrained ideas of progress does not make us ‘against progress.’ Nor does skepticism about ill-considered notions of the benefits of immortality make us ‘for suffering’ or ‘pro-death.’ It may be that the transhumanists really cannot grasp those distinctions, perhaps because of their apparently absolute (yet completely unjustified) confidence in their ability to foretell the future. Only if they have a reliable crystal ball — if they can know with certainty that their vision of the future will come to pass — does opposition to their vision of progress make us ‘anti-progress’ and does acknowledging the consequences of mortality make us ‘pro-death.’” Indeed. And I might add that such confidence in unproven predictive powers seems less like the rationality transhumanists claim to espouse than like uncritical faith.
• A sporting chance: Gizmodo has an essay by Aimee Mullins — an actress, model, former athlete, and double amputee — about technology, disability, and competition. Her key argument: “Advantage is just something that is part of sports. No athletes are created equal. They simply aren’t, due to a multitude of factors including geography, access to training, facilities, health care, injury prevention, and sure, technology.” Mullins concedes that it might be appropriate to keep certain technological enhancements out of sport, but she is “not sure” where to draw the line, and she advises not making any decisions about technologies before they actually exist.
• On ‘Neuro-Trash’: A remarkable essay in the New Humanist by Raymond Tallis on the abuse of brain research. Tallis starts off by describing how neuroscience is being applied to ever more aspects of human affairs. “This might be regarded as harmless nonsense, were it not for the fact that it is increasingly being suggested ... that we should use the findings of neurosciences to guide policymakers. The return of political scientism, particularly of a biological variety, should strike a chill in the heart.” Beneath this trend, Tallis writes, lies the incorrect “fundamental assumption” that “we are our brains.” (Vaughan over at MindHacks describes Tallis’s essay as “barnstorming and somewhat bad-tempered.” Readers looking for more along these lines might also enjoy our friend Matt Crawford’s New Atlantis essay on “The Limits of Neuro-Talk.”)
• Calling Ringling Bros.: We’ve known for a long time that people talking on cell phones get so distracted that they can become oblivious to what’s physically around them — entering a state sometimes called “absent presence.” In the October issue of Applied Cognitive Psychology, a team of researchers from Western Washington University reported the results of an experiment observing and interviewing pedestrians to see if they noticed a nearby clown wearing “a vivid purple and yellow outfit, large shoes, and a bright red nose” as he rode a bicycle. As you would expect, cell phone users were pretty oblivious. Does this suggest that we’ll suffer from increasing “inattentional blindness” as we are bombarded with ever more stimuli from increasingly ubiquitous gadgets? Not necessarily: it turns out that pedestrians listening to music tended to notice the clown more than those walking in silence. The cohort likeliest to see the clown consisted of people walking in pairs.
• Metaphor creep: “If the brain is like a set of computers that control different tasks,” says an SFSU psychology professor, then “consciousness is the Wi-Fi network that allows different parts of the brain to talk to each other and decide which action ‘wins’ and is carried out.”
• Another kind of ‘Futurism’: This year marks the centenary of the international Futurist art movement. The 1909 Futurist Manifesto that kicked it all off is explicitly violent and even sexist in its aims (“we want to exalt movements of aggression, feverish sleeplessness, the double march, the perilous leap, the slap and the blow with the fist ... we want to glorify war — the only cure for the world...”) and critical of any conservative institutions (professors and antiquaries are called “gangrene”; museums, libraries, and academies are called “cemeteries of wasted effort, calvaries of crucified dreams, registers of false starts”). Central to the Futurist vision was a love of new technologies — and of all the speed, noise, and violence of the machine age.
Saturday, November 14, 2009
The options that the poll offers are mostly percentages — 10 percent, 20 percent, and so on — which is pretty silly, since it suggests that percentages are a useful way of talking about the human organism. (What is “20 percent” of a human body? Is that by mass? Or volume? Or perhaps surface area?) The logic of the poll’s options leads the respondent to think about the question along the lines of the Sorites paradox: “Well,” a respondent might think, “I don’t see what the big difference would be between 30 percent and 40 percent... or between 40 and 50 percent... or between 50 and 60...”
Given those options, it’s no surprise that three quarters of the respondents (as of this writing) have instead picked the following choice: “You can take away nearly everything, but if our brains are replaced by machines, we cease being human.”
In the comments beneath the poll, some readers objected to the options they were given. Commenter “newgalactic” argues that the correct answer to the question is a figure lower than any among the poll’s options: “The ‘body,’” he writes, “has more ties to the ‘mind/soul’ than we realize.” (He also wonders whether the poll results might be skewed by the fact that most Gizmodo readers are “dorks/nerds” who are “less physically blessed,” while someone with a body like Brad Pitt might “be more inclined to attach his humanity to his physical body.”)
The comments also suggest that any attempt to take the poll’s silly question seriously must start by asking a deeper question: What does it mean to be human? In the first issue of The New Atlantis, bioethicist Gilbert Meilaender described some of the difficulties that the deeper question entails:
We might try to think of human beings (or the other animals) [as collections of parts], and, indeed, we are often invited to think of them as collections of genes (or as collections of organs possibly available for transplant), but we might also wonder whether doing so loses a sense of ourselves as integrated, organic wholes.
Even if we think of the human being as an integrated organism, the nature of its unity remains puzzling in a second way. The seeming duality of person and body has played a significant role in bioethics. As the language of “personhood” gradually came to prominence in bioethical reflection, attention has often been directed to circumstances in which the duality of body and person seems pronounced. Suppose a child is born who, throughout his life, will be profoundly retarded. Or suppose an elderly woman has now become severely demented. Suppose because of trauma a person lapses into a permanent vegetative state. How shall we describe such human beings? Is it best to say that they are no longer persons? Or is it more revealing to describe them as severely disabled persons? Similar questions arise with embryos and fetuses. Are they human organisms that have not yet attained personhood? Or are they the weakest and most vulnerable of human beings?
Related questions arise when we think of conditions often, but controversially, regarded as disabilities.... Notice that the harder we press such views the less significant becomes any normative human form. A head, or a brain, might be sufficient, if it could find ways to carry out at a high level the functions important to our life.
Such puzzles are inherent in the human condition, and they are sufficiently puzzling that we may struggle to find the right language in which to discuss that aspect of the human being which cannot be reduced to body. Within the unity of the human being a duality remains, and I will here use the language of “spirit” to gesture toward it. As embodied spirits (or inspirited bodies) we stand at the juncture of nature and spirit, tempted by reductionisms of various sorts. We have no access to the spirit — the person — apart from the body, which is the locus of personal presence; yet, we are deeply ill at ease in the presence of a living human body from which all that is personal seems absent. It is fair to say, I think, that, in reflecting upon the duality of our nature, we have traditionally given a kind of primacy to the living human body. Thus, uneasy as we might be with the living body from which the person seems absent, we would be very reluctant indeed to bury that body while its heart still beat.
A definition of human being based only on biological parts will fail to capture the unique nature of the living human. A definition based on biological functions will fail to include human beings who lack those specific functions. Indeed, any strictly biological definition will miss the qualitative aspects of what it means to be human — how we live and behave over the course of our lives; what we do and are capable of doing; what we feel and experience; and how all of it changes — in short, the phenomena of life. And so a rich understanding of what it means to be human might start with science but must go beyond it, seeking wisdom especially in the disciplines rightly called “the humanities.”
There can be no honest answer to the Gizmodo poll as it is phrased, and there is no easy answer to the deeper question of what it means to be human. But the search is rewarding — and, in a way, the search may itself be part of the answer.
Friday, November 13, 2009
Thursday, November 12, 2009
As far as tyrannicide goes, like many transhumanists de Grey stops well short of thinking through the possible consequences of the change he proposes (we are all speculating here, but we can try to be thorough speculators). Remember that tyrants already tend to be fairly security-conscious, knowing that whatever happens they are still mortal. Why would the prospect of having power and immortality to lose make them less risk-averse? It seems rather more likely that the immortal tyrant will be extremely risk-averse and hence security-conscious, and therefore represent a very “hard target” for the assassin — who will have just as much to lose if his mission is unsuccessful. As it is, most people living under a tyrant just do their best to keep their heads down; tyrannicides are rare. Throw immortality into the mix, and they are likely to be rarer still.
As far as democracy goes, de Grey exhibits a confidence characteristic of transhumanists generally: he knows what the future holds. I would certainly join him in hoping that democracy is here to stay and increasingly the wave of the future, but I don’t know that to be true and I don’t know how anyone could know that to be true. The victory of democracy over tyranny in the twentieth century was a near thing. History tells us that good times readily give way to bad times. The belief that democracy represents a permanent cure to the problem of tyranny is facile, in the way that all easy confidence about the direction of history is facile.
Finally, de Grey falls back on the proposition ‘better the devil you know than the devil you don’t’ — better Lenin than Stalin, to use his example. Leaving aside the question of how different the two leaders actually were, here de Grey is apparently trying to be hard-headed: It may not be all sweetness and light when we’re all immortal after all! Like many transhumanists, he is not very good at moral realism. You have to wonder: would the character of the immortal tyrant really stay the same over time? If, as the old maxim holds, absolute power corrupts absolutely, it would seem very much more likely that life under an immortal tyrant would get worse.
And the problem is not really just tyranny; it is evil. In his Wisconsin State Fair speech of 1859, Lincoln notes, “It is said an Eastern monarch once charged his wise men to invent him a sentence, to be ever in view, and which should be true and appropriate in all times and situations. They presented him the words: ‘And this, too, shall pass away.’ How much it expresses! How chastening in the hour of pride! — how consoling in the depths of affliction!” Immortal evil means a world where the prideful will never be chastened, and the afflicted only consoled by giving up the very boon that de Grey promises us.
Immortality lasts a long time. It is not for nothing that in his story “The Immortal” Jorge Luis Borges pictures the immortal characters as unconcerned with their lives or their surroundings. Once you’ve followed your passion — playing the saxophone, loving men or women, traveling, writing poetry — for, say, 10,000 years, it will likely begin to lose its grip. There may be more to say or to do than anyone can ever accomplish. But each of us develops particular interests, engages in particular pursuits. When we have been at them long enough, we are likely to find ourselves just filling time. In the case of immortality, an inexhaustible period of time.
And when there is always time for everything, there is no urgency for anything. It may well be that life is not long enough. But it is equally true that a life without limits would lose the beauty of its moments. It would become boring, but more deeply it would become shapeless. Just one damn thing after another.
This is the paradox death imposes upon us: it grants us the possibility of a meaningful life even as it takes it away. It gives us the promise of each moment, even as it threatens to steal that moment, or at least reminds us that some time our moments will be gone. It allows each moment to insist upon itself, because there are only a limited number of them. And none of us knows how many.
Wednesday, November 11, 2009
Called Stats Monkey, the new computer software analyzes the box scores, and play by plays to automatically generate the news article. It highlights key players and clutch plays and will even write an appropriate headline and find a matching photo for a [key] player!... [I]t could work for every sport humans like to read about. Moreover, Stats Monkey could be adapted to write business stories, or conference updates, or other forms of professional journalism that rely heavily on numbers and analytics. Writing, it seems, is no longer immune from automation.
Tuesday, November 10, 2009
Courage of course requires being in harm’s way, and what we might hope would be the normal qualms of a decent chain of command about putting people in harm’s way are only heightened in our particular cultural environment. This point was brought home to me with great force when a student directed me to a recruitment video at the United States Navy Memorial website with the tagline “working every day to unman the front lines,” featuring the Navy’s remotely-piloted drone technology. It would be churlish and wrongheaded to deny that such marvels are a wonderful way to avoid putting the lives of our sons and daughters at risk. But it would be foolish to ignore the double entendre as well. With the front lines unmanned, there will be less need of nerve, courage, and spiritedness — manly virtues that Officer Kimberly Munley, who took down the Fort Hood shooter, reminds us are not exclusively the province of men. And it is not the Navy alone, of course. The push to replace human soldiers and first responders with robotic devices is well underway in nearly all services (I don’t know that much is happening on the fire or emergency medicine fronts).
Battling ’bots may still be only a distant prospect, and right at the moment we plainly have no lack of fellow citizens willing and able to serve as our guardians (although some first responders, like volunteer fire services, might be an exception). But in her provocative book Systems of Survival, Jane Jacobs warns that the guardian virtues hang together, and if you tamper with one you risk undermining them all — a point Plato might well agree with. So we should be asking ourselves: What happens to virtues like honor, loyalty, or discipline when they are not only challenged from without by the bourgeois virtues, but from within; when a need for courage is seen by the guardians themselves as a sign of a defect in their ability to protect us without putting themselves in harm’s way? It is an awesome task to be responsible for the lives of others at the risk of one’s own life, and through the guardian virtues the terrible power of that task is directed and constrained. As much as we hope for a day when all men will live in peace, we are entitled to wonder whether that day will be brought closer by replacing the traditional terrors of battle with innovative methods of cold-blooded killing.
Monday, November 9, 2009
- David Chalmers on principles of simulation and the Singularity (video / post)
- Peter Thiel making the economic case for the Singularity (video / post)
- And the discussion with Stephen Wolfram on the Singularity at the cosmic scale (video / post)
- Brad Templeton’s talk was one of the most entertaining, ambitious, and plausible; the audience question segment was also particularly good (video / post)
- Juergen Schmidhuber’s talk on digitizing creativity was lively and engaging, if silly (video / post)
- The segment of Michael Nielsen’s talk where he describes the principles of quantum computing (video / post)
Friday, November 6, 2009
Wired Science has a story by Brandon Keim featuring the work of University of Chicago geoscientist Patrick McGuire. McGuire is working on “wearable AI systems and digital eyes that see what human eyes can’t.” So equipped, “space explorers of the future could be not just astronauts, but ‘cyborg astrobiologists.’” That phrase — “cyborg astrobiologist” — comes from the title McGuire and his team gave to the paper reporting their early results. In their paper, they describe developing a “real-time computer-vision system” that has helped them successfully to identify “lichens as novel within a series of images acquired in semi‑arid desert environments.” Their system also quickly learned to distinguish between familiar and novel colored samples.
According to Keim, McGuire admits there is a long way to go before we get to the cyborg astrobiologist stage — a point that seems to have been missed by the folks at Wired Science, who gave Keim’s piece the headline “AI Spacesuits Turn Astronauts Into Cyborg Biologists” (note the present tense). But it’s true that the meaning of “cyborg” is contested ground. If Michael Chorost in his fine book Rebuilt (which I reviewed here) can decide that he is a cyborg because he has a cochlear implant, then perhaps those merely testing McGuire’s system are cyborgs, too.
But my point now isn’t to be one of those sticklers who tries to argue with Humpty Dumpty that it is better if words don’t mean whatever we individually want them to mean. Rather, I’m wondering why McGuire should have used this phrase, “cyborg astrobiologists,” in this recent paper and a number of earlier ones. The word “cyborg” was originally used to describe something similar to what McGuire is attempting, as Adam Keiper has noted:
In 1960, at the height of interest in cybernetics, the word cyborg—short for “cybernetic organism”—was coined by researcher Manfred E. Clynes in a paper he co-wrote for the journal Astronautics. The paper was a theoretical consideration of various ways in which fragile human bodies could be technologically adapted and improved to better withstand the rigors of space exploration. (Clynes’s co-author said the word cyborg “sounds like a town in Denmark.”)
But McGuire doesn’t seem to be aware of the word’s original connection to space exploration — he doesn’t acknowledge it anywhere, as far as I can tell — and instead he seems to be using the word “cyborg” in its more recent and sensationalistic science-fiction-ish sense of part-man, part-machine. So why use that word? The simple answer, I suppose, is that academics are far from immune to the lure of attention-getting titles for their work. But it is still noteworthy that for McGuire and his audience, “cyborg” is apparently something to strive for, not a monstrous hybrid like most iconic cyborgs (think Darth Vader, the Borg, or the Terminators). Deliberately or not, McGuire is engaged in a revaluation of values. One wonders whether in a transhumanist future there will be any “monsters” at all; perhaps that word will share the fate of other terms of distinction that have become outmoded or politically incorrect. “Monster,” after all, implies some norm or standard, and transhumanism is in revolt against norms and standards.
Or perhaps the unenhanced human being will become the monster, the literal embodiment of all that right-thinking intelligence rebels against, a dead-end abortion of mere nature. Their obstinate persistence would be fearful if they themselves were not so pitiful. We came from that?
Tuesday, November 3, 2009
But recording everything you do takes people out of the “here and now,” psychologists say. Constant documenting may make people less thoughtful about and engaged in what they’re doing because they are focused on the recording process, Schwartz said.

Moreover, if these documented memories are available to others, people may actually do things differently.

“If we have experiences with an eye toward the expectation that in the next five minutes, we’re going to tweet them, we may choose different experiences to have, ones that we can talk about rather than ones we have an interest in,” he said.

Similarly, a 1993 study led by researchers at the University of Virginia found that undergraduate students who were asked to think about their reasons for choosing posters chose differently and reported less satisfaction than those who did not have to justify their choices.
The opportunity to contact many people at once seems to encourage compartmentalization, as people try to establish different kinds of romantic attachments with different people at the same time.

It seems to encourage an attitude of contingency. If you have several options perpetually before you, and if technology makes it easier to jump from one option to another, you will naturally adopt the mentality of a comparison shopper.

It also seems to encourage an atmosphere of general disenchantment. Across the centuries the moral systems from medieval chivalry to Bruce Springsteen love anthems have worked the same basic way. They take immediate selfish interests and enmesh them within transcendent, spiritual meanings. Love becomes a holy cause, an act of self-sacrifice and selfless commitment.

But texting and the utilitarian mind-set are naturally corrosive toward poetry and imagination. A coat of ironic detachment is required for anyone who hopes to withstand the brutal feedback of the marketplace. In today’s world, the choice of a Prius can be a more sanctified act than the choice of an erotic partner.
Monday, November 2, 2009
Productive? Efficient? More like running up and down a beach repairing a row of sand castles as the tide comes rolling in and the rain comes pouring down. Multitasking, a definition: “The attempt by human beings to operate like computers, often done with the assistance of computers.” It begins by giving us more tasks to do, making each task harder to do, and dimming the mental powers required to do them.
... I quickly adjusted to the Kindle’s screen and mastered the scroll and page-turn buttons. Nevertheless, my eyes were restless and jumped around as they do when I try to read for a sustained time on the computer. Distractions abounded. I looked up Dickens on Wikipedia, then jumped straight down the Internet rabbit hole following a link about a Dickens short story, “Mugby Junction.” Twenty minutes later I still hadn’t returned to my reading of Nickleby on the Kindle.
The child’s imagination and children’s nascent sense of probity and introspection are no match for a medium that creates a sense of urgency to get to the next piece of stimulating information. The attention span of children may be one of the main reasons why an immersion in on-screen reading is so engaging, and it may also be why digital reading may ultimately prove antithetical to the long-in-development, reflective nature of the expert reading brain as we know it....

Aristotle worried about the three lives of the “good society”: the first life is the life of productivity and knowledge gathering; the second, the life of entertainment; and the third, the life of reflection and contemplation....

I have no doubt that the digital immersion of our children will provide a rich life of entertainment and information and knowledge. My concern is that they will not learn, with their passive immersion, the joy and the effort of the third life, of thinking one’s own thoughts and going beyond what is given.
Sunday, November 1, 2009
Well, thank goodness for the variation of media formats. If you’re looking for something short and clear, I found essentially a six-page version of Kurzweil’s book that he did as an article for The Futurist, the magazine of the World Future Society. It’s on pages 2-3 and 5-9 of the PDF here.
Charles T. Rubin, New Atlantis contributing editor.
Adam Keiper, New Atlantis editor.
Ari N. Schulman, New Atlantis senior editor.
Brendan Foht, New Atlantis assistant editor.