What to Think About Machines That Think

Today's Leading Thinkers on the Age of Machine Intelligence

About This Book

Weighing in from the cutting-edge frontiers of science, today's most forward-thinking minds explore the rise of "machines that think."

Stephen Hawking recently made headlines by noting, "The development of full artificial intelligence could spell the end of the human race." Others, conversely, have trumpeted a new age of "superintelligence" in which smart devices will exponentially extend human capacities. Intelligent technology is no longer just a matter of science-fiction fantasy (2001, Blade Runner, The Terminator, Her, etc.); many forms of it are already being integrated into our daily lives, and it is time to consider its reality seriously. In that spirit, John Brockman, publisher of Edge.org ("the world's smartest website" – The Guardian), asked the world's most influential scientists, philosophers, and artists one of today's most consequential questions: What do you think about machines that think?

Information

Year: 2015
ISBN: 9780062425669
Pages: 576
Language: English
Format: ePub

Book preview

SELF-AWARE AI? NOT IN 1,000 YEARS!
ROLF DOBELLI
Founder, Zurich Minds; journalist; author, The Art of Thinking Clearly
The widespread fear that AI will endanger humanity and take over the world is irrational. Here’s why.
Conceptually, autonomous or artificial intelligence systems can develop in two ways: either as an extension of human thinking or as radically new thinking. Call the first “Humanoid Thinking,” or Humanoid AI, and the second “Alien Thinking,” or Alien AI.
Almost all AI today is Humanoid Thinking. We use AI to solve problems too difficult, time-consuming, or boring for our limited brains to process: electrical-grid balancing, recommendation engines, self-driving cars, face recognition, trading algorithms, and the like. These artificial agents work in narrow domains with clear goals their human creators specify. Such AI aims to accomplish human objectives—often better, with fewer cognitive errors, distractions, outbursts of bad temper, or processing limitations. In a couple of decades, AI agents might serve as virtual insurance sellers, doctors, psychotherapists, and maybe even virtual spouses and children.
But such AI agents will be our slaves, with no self-concept of their own. They’ll happily perform the functions we set them up to do. If screwups happen, they’ll be our screwups, due to software bugs or overreliance on these agents (Dan Dennett’s point). Yes, Humanoid AIs might surprise us once in a while with novel solutions to specific optimization problems. But in most cases novel solutions are the last thing we want from AI (creativity in nuclear-missile navigation, anyone?). That said, Humanoid AI solutions will always fit a narrow domain. They’ll be understandable, either because we understand what they achieve or because we understand their inner workings. Sometimes the code will become too enormous and jumbled for one person to understand, because it’s continually patched. In these cases, we can turn it off and program a more elegant version. Humanoid AI will bring us closer to the age-old aspiration of having robots do most of the work while humans are free to be creative—or amused to death.
Alien Thinking is radically different. Alien Thinking could conceivably become a danger to Humanoid Thinking; it could take over the planet, outsmart us, outrun us, enslave us—and we might not even recognize the onslaught. What sort of thinking will Alien Thinking be? By definition, we can’t tell. It will encompass functionality we cannot remotely understand. Will it be conscious? Most likely, but it needn’t be. Will it experience emotion? Will it write bestselling novels? If so, bestselling to us or bestselling to it and its spawn? Will cognitive errors mar its thinking? Will it be social? Will it have a Theory of Mind? If so, will it make jokes, will it gossip, will it worry about its reputation, will it rally around a flag? Will it create its own version of AI (AI-AI)? We can’t say.
All we can say is that humans cannot construct truly Alien Thinking. Whatever we create will reflect our goals and values, so it won’t stray far from human thinking. You’d need real evolution, not just evolutionary algorithms, for self-aware Alien Thinking to arise. You’d need an evolutionary path radically different from the one that led to human intelligence and Humanoid AI.
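To make the distinction concrete: an "evolutionary algorithm" in this sense is just an optimization loop of replicators, variation, and selection run toward a goal a human programmer chose. A minimal sketch in Python follows; every name and parameter in it is illustrative, not taken from the essay.

```python
import random

# Toy evolutionary algorithm: replicators (bit-string genomes),
# variation (mutation), and selection (keep the fittest), all aimed
# at a target fixed in advance by the programmer. That fixed goal is
# why such a loop never strays far from human thinking.
TARGET = [1] * 20                        # the human-chosen goal

def fitness(genome):
    # Count positions where the genome matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):           # variation
    return [1 - g if random.random() < rate else g for g in genome]

# Replicators: a random starting population of 50 genomes.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]

for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("best fitness:", max(fitness(g) for g in population))
```

Real, open-ended evolution has no fixed TARGET at all; sizing up that gap is what the calculation below attempts.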
So, how do you get real evolution to kick in? Replicators, variation, and selection. Once these three components are in place, evolution arises inevitably. How likely is it that Alien Thinking will evolve? Here’s a back-of-the-envelope calculation:
First, consider what getting from magnificently complex eukaryotic cells to human-level thinking involved. Achieving human thought required a large part of the Earth’s biomass (roughly 500 billion tons of eukaryotically bound carbon) over approximately 2 billion years. That’s a lot of evolutionary work! True, human-level thinking might have happened in half the time. With a lot of luck, even in 10 percent of the time, but it’s unlikely to have happened any faster. You not only need massive amounts of time for evolution to generate complex behavior; you also need a petri dish the size of Earth’s surface to sustain this level of experimentation.
Assume that Alien Thinking will be silicon-based, as all current AI is. A eukaryotic cell is vastly more complex than, say, Intel’s latest i7 CPU chip—both in hardware and software. Further assume that you could shrink that CPU chip to the size of a eukaryote. Leave aside the quantum effects that would stop the transistors from working reliably. Leave aside the question of the energy source. You’d have to cover the globe with 10³⁰ microscopic CPUs and let them communicate and fight for 2 billion years for true thought to emerge.
Yes, processing speed is faster in CPUs than in biological cells, because electrons are easier to shuttle around than atoms. But eukaryotes work massively in parallel, whereas Intel’s i7 is only four-way parallel (four cores). Eventually, at least to dominate the world, these electrons would need to move atoms to store their software and data in more and more physical places. This would slow their evolution dramatically. It’s hard to say whether, overall, silicon evolution will be faster than biological; we don’t know enough about it. I don’t see why this sort of evolution would be more than two or three orders of magnitude faster than biological evolution (if at all)—which would bring the emergence of self-aware Alien AI down to roughly a million years.
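The arithmetic behind that last step, made explicit as a short sketch. It uses only the figures stated above; the speedup bound is the essay’s own generous assumption, not a measured quantity.

```python
# Back-of-the-envelope estimate, restating the essay's own figures.
biological_years = 2e9   # ~2 billion years from eukaryotic cells to human-level thought
max_speedup = 10 ** 3    # generous bound: silicon evolution at most two to three orders
                         # of magnitude faster than biological evolution

silicon_years = biological_years / max_speedup
print(f"{silicon_years:,.0f} years")  # 2,000,000 -> "roughly a million years"
```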
What if Humanoid AI becomes so smart it could create Alien AI from the top down? That’s where Leslie Orgel’s Second Rule kicks in: “Evolution is smarter than you are.” It’s smarter than human thinking. It’s even smarter than Humanoid Thinking. And it’s much slower than you think.
Thus, the danger of AI is not inherent to AI but rests on our overreliance on it. Artificial thinking won’t evolve to self-awareness in our lifetime. In fact, it won’t happen in 1,000 years.
I might be wrong, of course. After all, this back-of-the-envelope calculation applies legacy human thinking to Alien AI—which by definition we won’t understand. But that’s all we can do at this stage.
Toward the end of the 1930s, Samuel Beckett wrote in a diary, “We feel with terrible resignation that reason is not a superhuman gift . . . that reason evolved into what it is, but that it also, however, could have evolved differently.” Replace “reason” with “AI” and you have my argument.
MACHINES DON’T THINK, BUT NEITHER DO PEOPLE
CESAR HIDALGO
Associate professor, MIT Media Lab; author, Why Information Grows: The Evolution of Order, from Atoms to Economies
Machines that think? That’s as fallacious as people who think! Thinking involves processing information, begetting new physical order from incoming streams of physical order. Thinking is a precious ability, which unfortunately is not the privilege of single units such as machines or people but a property of the systems in which these units come to “life.”
Of course I’m being provocative here, since at the individual level we do process information. We do think—sometimes—or at least we feel like we do. But “our” ability to think is not entirely “ours”—it’s borrowed, since the hardware and software we use to think weren’t begotten by us. You and I did not evolve the genes that helped organize our brains or the language we use to structure our thoughts. Our ability to think depends on events that happened prior to our mundane existence: the past chapters of biological and cultural evolution. So we can only understand our ability to think, and the ability of machines to mimic thought, by considering how the ability of a unit to process information relates to its context.
Think of a human born in the dark solitude of empty space. She’d have nothing to think about. The same would be true of an isolated and inputless computing machine. In this context, we can call our borrowed ability to process information “little” thinking—since it’s a context-dependent ability that happens at the individual level. “Large” thinking, by contrast, is the ability to process information embodied in systems, where units like machines or us are mere pawns.
Separating the little thinking of humans from the larger thinking of systems (which involves the process that begets the hardware and software that allow units to “little think”) helps us understand the role of thinking machines in this larger context. Our ability to think isn’t only borrowed; it also hinges on the use and abuse of mediated interactions. For human/machine systems to think, humans need to eat and regurgitate one another’s mental vomit, which sometimes takes the form of words. But since words vanish in the wind, our species’ enormous ability to think hinges on more sophisticated techniques to communicate and preserve the information we generate: our ability to encode information in matter.
For 100,000 years, our species has been busy transforming our planet into a giant tape player. The planet Earth is the medium wherein we print our ideas: sometimes in symbolic form, such as text and paintings, but, more important, in objects—like hair dryers, vacuum cleaners, buildings, and cars—built from the mineral loins of planet Earth. Our society has a great collective ability to process information because our communication involves more than words: It involves the creation of objects, which transmit not something as flimsy as an idea but something as concrete as know-how and the uses of knowledge. Objects augment us; they allow us to do things without knowing how. We all get to enjoy the teeth-preserving powers of toothpaste without knowing how to synthesize sodium fluoride, or the benefits of long-distance travel without knowing how to build a plane. By the same token, we all enjoy the benefits of sending texts throughout the world in seconds through social media or of performing complex mathematical operations by pressing a few keys on a laptop computer.
But our ability to create the trinkets that augment us has also evolved, of course, as a result of our collective willingness to eat one another’s mental vomit. It is this evolution that now brings us to the point where we have “media” beginning to rival our ability to process information, or “little think.”
For most of our history, our trinkets were static objects. Even our tools were solidified chunks of order, such as stone axes, knives, and knitting needles. A few centuries ago, we developed the ability to outsource muscle and motion to machines, causing one of the greatest economic expansions in history. Now we’ve evolved our collective ability to process information by creating objects endowed with the ability to beget and recombine physical order. These are machines that can process information—engines that produce numbers, like the engines Charles Babbage dreamed about.
So we’ve evolved our ability to think collectively by first gaining dominion over matter, then over energy, and now over physical order, or information. Yet this shouldn’t fool us into believing that we think or that machines do. The large evolution of human thought requires mediated interactions, and the future of thinking machines will also happen at the interface where humans connect with humans through objects.
As we speak, nerds in the best universities of the world are mapping out the brain, building robotic limbs, and developing primitive versions of technologies that will open up the future when your great-grandchild will get high by plugging his brain directly into the Web. The augmentation these kids will get is unimaginable to us—and so bizarre by our modern ethical standards that we’re not even in a position to properly judge it; it would be like a sixteenth-century Puritan judging present-day San Francisco. Yet in the grand scheme of the universe, these new human/machine networks will be nothing other than the next natural step in the evolution of our species’ ability to beget information. Together, humans and our extensions—machines—will continue to evolve networks that are enslaved to the universe’s main glorious purpose: the creation of pockets where information does not dwindle but grows.
TANGLED UP IN THE QUESTION
JAMES J. O’DONNELL
Classical scholar; University Professor, Georgetown University; author, Augustine, The Ruin of the Roman Empire, Pagans
Thinking is a word we apply with no discipline whatsoever to a huge variety of reported behaviors. “I think I’ll go to the store,” and “I think it’s raining,” and “I think, therefore I am,” and “I think the Yankees will win the World Series,” and “I think I’m Napoleon,” and “I think he said he would be here, but I’m not sure”—all use the same word to mean entirely different things. Which of them might a machine do someday? I think that’s an important question.
Could a machine get confused? Experience cognitive dissonance? Dream? Wonder? Forget the name of that guy over there and at the same time know that it really knows the answer and if it just thinks about something else for a while, it might remember? Lose track of time? Decide to get a puppy? Have low self-esteem? Have suicidal thoughts? Get bored? Worry? Pray? I think not.
Can artificial mechanisms be constructed to play the part in gathering information and making decisions that human beings now play? Sure, they already do. The ones controlling the fuel injection in my car are a lot smarter than I am. I think I’d do a lousy job of that.
Could we create machines that go further and act without human supervision in ways that prove good or bad for human beings? I guess so. I think I’ll love them, except when they do things that make me mad—then they’re really being like people. I suppose they could run amok and create mass havoc, but I have my doubts. (Of course, if they do, nobody will care what I think.)
But nobody would ever ask a machine what it thinks about machines that think. That’s a question that makes sense only if we care about the thinker as an autonomous and interesting being, like ourselves. If somebody ever does ask a machine this question, it won’t be a machine anymore. I think I’m not going to worry about that for a while. You may think I’m in denial.
When we get tangled up in this question, we need to ask ourselves just what it is we’re really thinking about.
MISTAKING PERFORMANCE FOR COMPETENCE
RODNEY A. BROOKS
Panasonic Professor of Robotics, emeritus, MIT; founder, chair, and CTO, Rethink Robotics; author, Flesh and Machines
Think and intelligence are both what Marvin Minsky has called suitcase words—words into which we pack many meanings so we can talk about complex issues in shorthand. When we look inside these words, we find many different aspects, mechanisms, and levels of understanding. This makes answering the perennial questions of “Can machines think?” or “When will machines reach human-level intelligence?” difficult. The suitcase words are used to cover both specific performance demonstrations by machines and the more general competence that humans might have. We generalize from performance to competence and grossly overestimate the capabilities of machin...

Table of contents

  1. Dedication
  2. Contents
  3. Acknowledgments
  4. Preface: The 2015 Edge Question
  5. Murray Shanahan: Consciousness in Human-Level AI
  6. Steven Pinker: Thinking Does Not Imply Subjugating
  7. Martin Rees: Organic Intelligence Has No Long-Term Future
  8. Steve Omohundro: A Turning Point in Artificial Intelligence
  9. Dimitar D. Sasselov: AI Is I
  10. Frank Tipler: If You Can’t Beat ’em, Join ’em
  11. Mario Livio: Intelligent Machines on Earth and Beyond
  12. Antony Garrett Lisi: I, for One, Welcome Our Machine Overlords
  13. John Markoff: Our Masters, Slaves, or Partners?
  14. Paul Davies: Designed Intelligence
  15. Kevin P. Hand: The Superintelligent Loner
  16. John C. Mather: It’s Going to Be a Wild Ride
  17. David Christian: Is Anyone in Charge of This Thing?
  18. Timo Hannay: Witness to the Universe
  19. Max Tegmark: Let’s Get Prepared!
  20. Tomaso Poggio: “Turing+” Questions
  21. Pamela McCorduck: An Epochal Human Event
  22. Marcelo Gleiser: Welcome to Your Transhuman Self
  23. Sean Carroll: We Are All Machines That Think
  24. Nicholas G. Carr: The Control Crisis
  25. Jon Kleinberg & Sendhil Mullainathan: We Built Them, but We Don’t Understand Them
  26. Jaan Tallinn: We Need to Do Our Homework
  27. George Church: What Do You Care What Other Machines Think?
  28. Arnold Trehub: Machines Cannot Think
  29. Roy Baumeister: No “I” and No Capacity for Malice
  30. Keith Devlin: Leveraging Human Intelligence
  31. Emanuel Derman: A Machine Is a “Matter” Thing
  32. Freeman Dyson: I Could Be Wrong
  33. David Gelernter: Why Can’t “Being” or “Happiness” Be Computed?
  34. Leo M. Chalupa: No Machine Thinks About the Eternal Questions
  35. Daniel C. Dennett: The Singularity—an Urban Legend?
  36. W. Tecumseh Fitch: Nano-Intentionality
  37. Irene Pepperberg: A Beautiful (Visionary) Mind
  38. Nicholas Humphrey: The Colossus Is a BFG
  39. Rolf Dobelli: Self-Aware AI? Not in 1,000 Years!
  40. Cesar Hidalgo: Machines Don’t Think, but Neither Do People
  41. James J. O’Donnell: Tangled Up in the Question
  42. Rodney A. Brooks: Mistaking Performance for Competence
  43. Terrence J. Sejnowski: AI Will Make You Smarter
  44. Seth Lloyd: Shallow Learning
  45. Carlo Rovelli: Natural Creatures of a Natural World
  46. Frank Wilczek: Three Observations on Artificial Intelligence
  47. John Naughton: When I Say “Bruno Latour,” I Don’t Mean “Banana Till”
  48. Nick Bostrom: It’s Still Early Days
  49. Donald D. Hoffman: Evolving AI
  50. Roger Schank: Machines That Think Are in the Movies
  51. Juan Enriquez: Head Transplants?
  52. Esther Dyson: AI/AL
  53. Tom Griffiths: Brains and Other Thinking Machines
  54. Mark Pagel: They’ll Do More Good Than Harm
  55. Robert Provine: Keeping Them on a Leash
  56. Susan Blackmore: The Next Replicator
  57. Tim O’Reilly: What If We’re the Microbiome of the Silicon AI?
  58. Andy Clark: You Are What You Eat
  59. Moshe Hoffman: AI’s System of Rights and Government
  60. Brian Knutson: The Robot with a Hidden Agenda
  61. William Poundstone: Can Submarines Swim?
  62. Gregory Benford: Fear Not the AI
  63. Lawrence M. Krauss: What, Me Worry?
  64. Peter Norvig: Design Machines to Deal with the World’s Complexity
  65. Jonathan Gottschall: The Rise of Storytelling Machines
  66. Michael Shermer: Think Protopia, Not Utopia or Dystopia
  67. Chris Dibona: The Limits of Biological Intelligence
  68. Joscha Bach: Every Society Gets the AI It Deserves
  69. Quentin Hardy: The Beasts of AI Island
  70. Clifford Pickover: We Will Become One
  71. Ernst Pöppel: An Extraterrestrial Observation on Human Hubris
  72. Ross Anderson: He Who Pays the AI Calls the Tune
  73. W. Daniel Hillis: I Think, Therefore AI
  74. Paul Saffo: What Will the Place of Humans Be?
  75. Dylan Evans: The Great AI Swindle
  76. Anthony Aguirre: The Odds on AI
  77. Eric J. Topol: A New Wisdom of the Body
  78. Roger Highfield: From Regular-I to AI
  79. Gordon Kane: We Need More Than Thought
  80. Scott Atran: Are We Going in the Wrong Direction?
  81. Stanislas Dehaene: Two Cognitive Functions Machines Still Lack
  82. Matt Ridley: Among the Machines, Not Within the Machines
  83. Stephen M. Kosslyn: Another Kind of Diversity
  84. Luca De Biase: Narratives and Our Civilization
  85. Margaret Levi: Human Responsibility
  86. D. A. Wallach: Amplifiers/Implementers of Human Choices
  87. Rory Sutherland: Make the Thing Impossible to Hate
  88. Bruce Sterling: Actress Machines
  89. Kevin Kelly: Call Them Artificial Aliens
  90. Martin Seligman: Do Machines Do?
  91. Timothy Taylor: Denkraumverlust
  92. George Dyson: Analog, the Revolution That Dares Not Speak Its Name
  93. S. Abbas Raza: The Values of Artificial Intelligence
  94. Bruce Parker: Artificial Selection and Our Grandchildren
  95. Neil Gershenfeld: Really Good Hacks
  96. Daniel L. Everett: The Airbus and the Eagle
  97. Douglas Coupland: Humanness
  98. Josh Bongard: Manipulators and Manipulanda
  99. Ziyad Marar: Are We Thinking More Like Machines?
  100. Brian Eno: Just a New Fractal Detail in the Big Picture
  101. Marti Hearst: eGaia, a Distributed Technical-Social Mental System
  102. Chris Anderson: The Hive Mind
  103. Alex (Sandy) Pentland: The Global Artificial Intelligence Is Here
  104. Randolph Nesse: Will Computers Become Like Thinking, Talking Dogs?
  105. Richard E. Nisbett: Thinking Machines and Ennui
  106. Samuel Arbesman: Naches from Our Machines
  107. Gerald Smallberg: No Shared Theory of Mind
  108. Eldar Shafir: Blind to the Core of Human Experience
  109. Christopher Chabris: An Intuitive Theory of Machine
  110. Ursula Martin: Thinking Saltmarshes
  111. Kurt Gray: Killer Thinking Machines Keep Our Conscience Clean
  112. Bruce Schneier: When Thinking Machines Break the Law
  113. Rebecca MacKinnon: Electric Brains
  114. Gerd Gigerenzer: Robodoctors
  115. Alison Gopnik: Can Machines Ever Be As Smart As Three-Year-Olds?
  116. Kevin Slavin: Tic-Tac-Toe Chicken
  117. Alun Anderson: AI Will Make Us Smart and Robots Afraid
  118. Mary Catherine Bateson: When Thinking Machines Are Not a Boon
  119. Steve Fuller: Justice for Machines in an Organicist World
  120. Tania Lombrozo: Don’t Be a Chauvinist About Thinking
  121. Virginia Heffernan: This Sounds Like Heaven
  122. Barbara Strauch: Machines That Work Until They Don’t
  123. Sheizaf Rafaeli: The Moving Goalposts
  124. Edward Slingerland: Directionless Intelligence
  125. Nicholas A. Christakis: Human Culture As the First AI
  126. Joichi Ito: Beyond the Uncanny Valley
  127. Douglas Rushkoff: The Figure or the Ground?
  128. Helen Fisher: Fast, Accurate, and Stupid
  129. Stuart Russell: Will They Make Us Better People?
  130. Eliezer S. Yudkowsky: The Value-Loading Problem
  131. Kate Jeffery: In Our Image
  132. Maria Popova: The Umwelt of the Unanswerable
  133. Jessica L. Tracy & Kristin Laurin: Will They Think About Themselves?
  134. June Gruber & Raul Saucedo: Organic Versus Artifactual Thinking
  135. Paul Dolan: Context Surely Matters
  136. Thomas G. Dietterich: How to Prevent an Intelligence Explosion
  137. Matthew D. Lieberman: Thinking from the Inside or the Outside?
  138. Michael Vassar: Soft Authoritarianism
  139. Gregory Paul: What Will AIs Think About Us?
  140. Andrian Kreye: A John Henry Moment
  141. N. J. Enfield: Machines Aren’t into Relationships
  142. Nina Jablonski: The Next Phase of Human Evolution
  143. Gary Klein: Domination Versus Domestication
  144. Gary Marcus: Machines Won’t Be Thinking Anytime Soon
  145. Sam Harris: Can We Avoid a Digital Apocalypse?
  146. Molly Crockett: Could Thinking Machines Bridge the Empathy Gap?
  147. Abigail Marsh: Caring Machines
  148. Alexander Wissner-Gross: Engines of Freedom
  149. Sarah Demers: Any Questions?
  150. Bart Kosko: Thinking Machines = Old Algorithms on Faster Computers
  151. Julia Clarke: The Disadvantages of Metaphor
  152. Michael McCullough: A Universal Basis for Human Dignity
  153. Haim Harari: Thinking About People Who Think Like Machines
  154. Hans Halvorson: Metathinking
  155. Christine Finn: The Value of Anticipation
  156. Dirk Helbing: An Ecosystem of Ideas
  157. John Tooby: The Iron Law of Intelligence
  158. Maximilian Schich: Thought-Stealing Machines
  159. Satyajit Das: Unintended Consequences
  160. Robert Sapolsky: It Depends
  161. Athena Vouloumanos: Will Machines Do Our Thinking for Us?
  162. Brian Christian: Sorry to Bother You
  163. Benjamin K. Bergen: Moral Machines
  164. Laurence C. Smith: After the Plug Is Pulled
  165. Giulio Boccaletti: Monitoring and Managing the Planet
  166. Ian Bogost: Panexperientialism
  167. Aubrey De Grey: When Is a Minion Not a Minion?
  168. Michael I. Norton: Not Buggy Enough
  169. Thomas A. Bass: More Funk, More Soul, More Poetry and Art
  170. Hans Ulrich Obrist: The Future Is Blocked to Us
  171. Koo Jeong-A: An Immaterial Thinkable Machine
  172. Richard Foreman: Baffled and Obsessed
  173. Richard H. Thaler: Who’s Afraid of Artificial Intelligence?
  174. Scott Draves: I See a Symbiosis Developing
  175. Matthew Ritchie: Reimagining the Self in a Distributed World
  176. Raphael Bousso: It’s Easy to Predict the Future
  177. James Croak: Fear of a God, Redux
  178. Andrés Roemer: Tulips on My Robot’s Tomb
  179. Lee Smolin: Toward a Naturalistic Account of Mind
  180. Stuart A. Kauffman: Machines That Think? Nuts!
  181. Melanie Swan: The Future Possibility-Space of Intelligence
  182. Tor Nørretranders: Love
  183. Kai Krause: An Uncanny Three-Ring Test for Machina sapiens
  184. Georg Diez: Free from Us
  185. Eduardo Salcedo-Albarán: Flawless AI Seems Like Science Fiction
  186. Maria Spiropulu: Emergent Hybrid Human/Machine Chimeras
  187. Thomas Metzinger: What If They Need to Suffer?
  188. Beatrice Golomb: Will We Recognize It When It Happens?
  189. Noga Arikha: Metarepresentation
  190. Demis Hassabis, Shane Legg & Mustafa Suleyman: Envoi: A Short Distance Ahead—and Plenty to Be Done
  191. Notes
  192. About the Author
  193. Also by John Brockman
  194. Credits
  195. Back Ads
  196. Copyright
  197. About the Publisher