Part One
Expanding the Human
Introduction to Part One: Expanding the Human
Eric Schwitzgebel
Oh, this primate body—so limited! Future generations could maybe shed it, or at least improve it. How attached are you to your primate form? Do you want to stay forever here on the ground, hooting to your conspecifics in slow language, with two legs, a weak mind, and a body that fails after eighty years if not sooner?
There is something beautiful about natural, unaltered Homo sapiens, with all their joy and misery, ability and disability, evil and good. A conservative about human enhancement might say: Whatever future technology arises to potentially improve us, let’s have no part of it. We are good enough as is. Let’s keep technology outside of our bodies and minds, an external tool, while we ourselves remain the same. Let’s not treat humanity like some genetically modifiable crop to be enhanced for pickability, shelf-life, and resistance to herbicides. Homo sapiens ought to stand pat as the wonderful, if flawed, things we already are.
If you’re a moderate about human enhancement, this reasoning might seem absurd—as absurd as rejecting the invention of penicillin so as to retain our beautiful susceptibility to fatal diseases. If we can improve, without fundamentally changing ourselves, why not do so? If we could extend our longevity to, say, two hundred years instead of eighty, wouldn’t that be better? If we could enhance our cognitive capacities, holding more in memory, working better with complex ideas, being less susceptible to fallacies, wouldn’t we make better decisions? If we could communicate more directly through brain-to-brain interfaces (with provisions for privacy, of course) instead of being limited by slow, imperfect speech, why not go for it? If we can improve without leaving our core humanity behind, we should do it—or at least we should allow people to make such changes if they want.
If you’re a liberal about enhancement, you might ask, what is this supposed core humanity? And why not leave it behind? Why not, if the chance arises, allow something new and radically different to grow alongside, or even replace, traditional humans? Maybe we could become gods—something fundamentally different and better, something that defies our meager human understanding in the same way that we defy the meager understanding of rhesus monkeys. Imagine some primate 15 million years ago hoping for the end of evolutionary change!
Some of the reasons for conservatism about human enhancement are essentially the same reasons that moved Edmund Burke in his classic defense of political conservatism: Well-intended changes almost always have unforeseen consequences, and those unforeseen consequences can be disastrous.1 (Burke’s example was the French Revolution.) Even if existing institutions, traditions, and policies have some obvious bad consequences, they have stood the test of time and so are, Burke argued, at least minimally adequate. Slow and moderate change is best, if change is to be pursued at all. An extreme technological Burkean could argue that we might even someday regret the invention of penicillin, if an antibiotic-resistant superbug eventually destroys us all. Negative consequences might be non-obvious and slow in coming. The first story in this section, “Excerpt from Theuth,” explores the possible unforeseen consequences of initially innocuous-seeming cognitive enhancements for lawyers. The second story, “Adjoiners,” likewise starts with something seemingly innocent, even joyful—transporting oneself into the mind and body of a bird—and ends by illustrating how the traditional concepts of selfhood and responsibility can break when your body is no longer experienced as your own.
All our values, all our laws, and our whole sense of the human condition are grounded in our particular evolutionary and cultural history: a history of embodiment in primate form, one body at a time, one location at a time, one mind at a time, within a limited range of variation. The Burkean conservative about enhancement holds that we have little idea what disasters might follow from changing this. What might be the consequences for our minds, societies, and personal identities? What unforeseen risks or losses might await us if we create, or become, conscious computer programs? Or if we learn to upload and duplicate ourselves, or merge our minds, or create the illusion of anything we want at our fingertips? Are we ready for the destabilization that would result?
Liberalism about human enhancement vividly raises one of the most fundamental questions in philosophy: What, if anything, is ultimately good? If we can imagine improving ourselves in various different directions, or even radically departing from our human form and past, what direction or directions should we go?
Consider an extreme example. According to hedonists, the ultimate good is pleasure (and the avoidance of pain). If pleasure is the ultimate good, here’s something we could aspire to: Convert all of the matter of the Solar System into “hedonium”—whatever biological or computational substrate most efficiently generates pleasure.2 The whole Solar System could become an unfathomably large, intense, constant orgasm. Wouldn’t that be amazing? Such a system might know nothing about its human past, nothing about great art or literature or music. It might have no social relationships and no ethics. It might have no “higher” cognition whatsoever. The advocate of simple hedonism is unperturbed: None of that other stuff matters, as long as the pleasure is intense, secure, and durable. (We, the editors, guess that a small but non-trivial minority of our readers will embrace the Solar System orgasmatron as a worthy ideal to aspire to.3)
According to eudaimonists, in contrast, the ultimate good consists of flourishing in one’s distinctively human capacities, such as creativity, intellect, appreciation of beauty, and loving relationships.4 In improving humanity, we should aspire primarily to enhance these aspects of ourselves. A eudaimonist might welcome enhancements, maybe even radical enhancements, that enable our descendants to be wiser, more creative, more loving, and more appreciative of the world’s beauty. The third story in this section, “The Intended,” articulates one eudaimonist vision. On the surface, the eudaimonists in this story embrace traditional values: monogamous love relationships, gardening, appreciation of nature. But furthering these goals requires, behind the scenes, a radical technology that is arguably oppressive.
In reading “The Intended” you might wonder why societies in the distant future, with great technological capacity, would look so much like our own, populated with people who live in one body at a time and who communicate in oral language through their mouths—and even more specifically, people who love gardening and who act like jerks in love triangles. A possible Burkean explanation is this: We have evolved so stubbornly into the primates we are that the societies that work best for us and for the descendants we grow or build will always take that familiar shape.
All conservatism is tossed aside in the final and most radical story in this section, “The New Book of the Dead,” in which we transcend death to become godmachines. If in reading “The New Book of the Dead” you find that you only gain a glimpse of what it would be like to be a godmachine, and if you find the story to be full of metaphors that are hard to translate into literal language … well, of course that’s because you are still only a weak-minded primate, and everything must be explained to you with pant-hoots and bananas. The godmachines will someday reminisce about us with tenderness and pity.
Recommended Reading/Viewing
Fiction:
•Sirius (novel, 1944, Olaf Stapledon). A dog enhanced to have humanlike intelligence struggles to make sense of love, value, and beauty in a world he doesn’t fit into.
•Flowers for Algernon (short story, 1959; novel, 1966; Daniel Keyes). A low-IQ laborer is cognitively enhanced to have superhuman intelligence, but his life does not improve in the ways he expected.
•Gattaca (film, 1997, written and directed by Andrew Niccol). A dystopia in which people designed to be genetically superior are privileged over the rest.
•Diaspora (novel, 1997, Greg Egan). A future populated with cognitively enhanced artificial intelligences living in simulated worlds, biologically engineered humans of various types, and robots, exploring a wide variety of ways to create a meaningful existence.
•Feed (novel, 2002, M.T. Anderson). A teenage character’s perspective on a world where most of humanity has the internet piped directly into their minds.
Non-fiction:
•Haraway, Donna J. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In D. Haraway, Simians, cyborgs, and women: The reinvention of nature (pp. 149–181). New York, NY: Routledge. A feminist, anti-essentialist critique of scientific and technological approaches to the body and the blurry boundaries between animal, human, and machine.
•Humanity+ Board, Transhumanist Declaration (1998/2009), https://humanityplus.org/philosophy/transhumanist-declaration/. A brief but influential online statement of fundamental principles of transhumanism, affirming the value of “allowing individuals wide personal choice over how they enable their lives” through future technologies.
•Clarke, Steve, Julian Savulescu, Tony Coady, Alberto Giubilini, and Sagar Sanyal,...