The Fall of the Human Empire

Memoirs of a Robot

eBook - ePub

  1. 200 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About This Book

Machines that are smarter than people? A utopian dream of science-fiction novelists and Hollywood screenwriters perhaps, but one which technological progress is turning into reality. Two trends are coming together: exponential growth in the processing power of supercomputers, and new software which can copy the way neurons in the human brain work and give machines the ability to learn. Smart systems will soon be commonplace in homes, businesses, factories, administrations, hospitals and the armed forces. How autonomous will they be? How free to make decisions? What place will human beings still have in a world controlled by robots? After the atom bomb, is artificial intelligence the second lethal weapon capable of destroying mankind, its inventor? The Fall of the Human Empire traces the little-known history of artificial intelligence from the standpoint of a robot called Lucy. She – or it? – recounts her adventures and reveals the mysteries of her long journey with humans, and provides a thought-provoking storyline of what developments in A.I. may mean for both humans and robots.

Information

Author: Charles-Edouard Bouée
Year: 2019
ISBN: 9781472971807
Edition: 1

Chapter 1

Dartmouth College, 1956

The dream of a few men

‘I am seeking nothing but a very ordinary brain’
ALAN TURING, 1943
My brief story starts here, in Hanover, New Hampshire. Nestling on the shores of the Connecticut River, in the heart of the Appalachians, it is a typical peaceful New England small town, surrounded by forests and lakes, a haven of peace sheltered from the furious pace of the major cities, a refuge for writers, lovers of nature and wealthy bankers. A Main Street lined with shops, its colonnaded Town Hall standing proud, its gabled brick houses and spacious and elegant chalet bungalows nestled in the woods.
On the border between New Hampshire and Vermont, about 125 miles north-west of Boston, Hanover was founded in 1761 by Benning Wentworth, an English businessman and the first Governor of New Hampshire. He named it after the House of Hanover, from which the then British monarch, George III, was descended. Creating a town in Colonial America was a great property venture for its founder, but it also carried a certain number of obligations: opening a school, building a place of worship, welcoming the Society for the Propagation of the Gospel in Foreign Parts. All these obligations were scrupulously fulfilled by Governor Wentworth, who donated almost 500 acres of good land for the construction of Dartmouth College. It opened in 1769 and is one of the oldest universities in North America, one of only nine built before the American Revolution. It was named in honour of William Legge, Second Earl of Dartmouth, Lord of Trade and then Secretary of State for the Colonies under George III, whose intention was that the children of the colonists and the Amerindians in the newly conquered land that became New Hampshire should benefit from good-quality teaching under the supervision of two pastors, one English, the Rev. Eleazar Wheelock, the other Amerindian, Samson Occom, a member of the Mohegan Indian Tribe.
There was no spectacular event to mark the first steps of Dartmouth College, which was dedicated from the very beginning to religious study, medicine and ‘engineering’, an emerging discipline which embraced the new techniques being used in the nascent industries of the time. Equally significant is that the College, at its foundation, had a professor of mathematics and ‘natural philosophy’ (a discipline which at the time covered astronomy, physics, chemistry and biology), Bezaleel Woodward, a young pastor aged just 24, who became a pre-eminent figure in the university and the small community of Hanover. As the decades passed, Dartmouth College proved its worth as a member of the Ivy League, the cream of the private universities of the eastern United States, which today includes Columbia, Cornell, Harvard, Princeton and Yale. It gained an outstanding reputation in a number of fields, including medicine, engineering, mathematics and physics, and was involved in the birth of computing in the 1940s, a gamble that proved successful: the college became world-famous in this new scientific field.
In August 1956, during the summer holidays, while Hanover dozed under a heavy, humid heat, an unusual group of people came together at Dartmouth College. They were not businessmen gathering for a seminar, or parents coming to check out the university as a suitable place for their children. They were the most brilliant mathematical brains in all America. They had given up part of their vacation to accept an invitation from John McCarthy, a 29-year-old mathematician born in Boston to John Patrick McCarthy, an Irish immigrant from County Kerry, and Ida Glatt, who was of Lithuanian origin. John’s family was neither rich nor famous. They suffered badly in the Great Depression, and frequently moved in search of work, eventually reaching California where John Patrick found a job as a foreman in a textile factory.
The young John’s parents soon discovered that their son was gifted. After becoming a student at Belmont High School in Los Angeles, he graduated from the school two years early. Still a teenager, he feverishly immersed himself in the mathematics textbooks of Caltech, the California Institute of Technology, where he was admitted straight into the third-year math course in 1944. California was, however, a place for lovers of sport, a field in which John showed a worrying level of mediocrity, which eventually got him expelled from Caltech, only to be readmitted after military service. After that, nothing stood in the way of the stellar career of ‘Uncle John’, as he was known affectionately to the students of Stanford University, where he taught for almost forty years, laying the foundations for great leaps forward in computer science, and inventing in particular the concept of process time-sharing, which many years later led to the development of both servers and cloud computing.
In 1955, and after a spell at Princeton, John McCarthy became a lecturer at Dartmouth College. Despite his young age, he was already considered one of America’s most promising mathematicians and computer scientists. The ability of computers to perform calculations fascinated this new generation of scientists. McCarthy, however, went further, deducing that the new machines could extend the scope of their calculations and even be taught to reason if properly programmed and given a ‘language’. To help crystallise these ideas in his mind, he therefore decided to invite his colleagues to a kind of seminar, to be held in Dartmouth. He sent them a letter, dated 31 August 1955, jointly signed by three of his colleagues, themselves also renowned scientists: Marvin Minsky, 29, a neural network specialist from Harvard; Nathaniel Rochester, 35, an expert in radar and computers and the co-designer of IBM’s first computer, the 701; and Claude Shannon, 39, an engineer from Bell Laboratories with a fascination for ‘learning machines’ and author of the first mathematical theory of communication, subjects which he often discussed with the British mathematician Alan Turing during the war. It was in this letter that McCarthy first used the term ‘artificial intelligence’ to explain his thinking. ‘Since we can now describe so well the learning mechanisms and other aspects of human intelligence, we should be capable of developing a machine that can simulate them, and ensure this machine is gifted with language, is able to form concepts and abstractions, and is capable of resolving problems that currently only man is able to handle.’ He therefore invited his colleagues to meet the next summer, in July and August, in return for a fee of $1,200 paid by the Rockefeller Foundation, whose President, Nelson Rockefeller, was a former Dartmouth College student. For the first time, therefore, the American scientific community, at the cutting edge of the new science of computing, met to embrace the still revolutionary and much debated concept of intelligent machines and their ability to imitate human intelligence. They started, however, with a misunderstanding: despite the wording of the letter of invitation to the seminar, they did not in fact understand the workings of the human brain that well, and it was particularly presumptuous to claim that a machine would be capable of reproducing them. So why launch into this scientific adventure at all? Because it appealed to a dream almost as old as the human race, and at that time the boundaries between the disciplines of mathematics and intelligence were still blurred.
By the summer of 1956, the war had already been over for more than ten years. However, another war was raging, potentially even more dangerous for mankind: the Cold War, in which the United States and the Soviet Union vied with each other for the mastery of atomic weapons. It was above all a battle of calculating capacity, and therefore a battle for computers. For many years, war had been waged in two dimensions only: land and sea. However, the First World War of 1914–18 brought in a third dimension, that of air, and ushered in the age of ‘hardware’ (aeroplanes, lorries, tanks and oil) as a new strategic weapon.
The Second World War saw the advent of mathematics and ‘software’. In both Britain and the United States, the finest brains dedicated themselves to the war effort. Probably the best remembered is Alan Turing, a Cambridge mathematician and admirer of Einstein, on whose work he had given a significant lecture while still in his teens. In 1938 he was hired by the British organisation responsible for unlocking the communication secrets of foreign powers.
At the beginning of the war, the secret services in London could decode German messages only very imperfectly.
The enemy had its formidable Enigma machine, whose progressively elaborate versions completely outfoxed the efforts of British mathematicians. The problem was even thought to be insoluble, given the exponential growth in the numbers of German messages. Their coding used a completely new method, based on random ‘keys’ for which the ‘system’ had to be discovered. The encoding principle was deceptively simple: one letter signified another. An ‘A’ was in reality an ‘S’ or a ‘Z’, depending on the key being used by the issuer and receiver of the message. With a 26-letter alphabet, the possible combinations were endless. The most elaborate versions of ‘Enigma’ had three or five rotors that multiplied the stages in the encoding process: an ‘A’ became a ‘B’, which then became an ‘M’ and finally an ‘R’, before designating the final letter. The combinations, therefore, multiplied almost infinitely. In addition, the coding keys were changed on a regular basis, sometimes every day, and there were different versions for the air force, the army and the navy. The secret was to get inside the mind of the enemy and search out the weaknesses of Enigma. The Allies sought to exploit the errors of its handlers and ‘crack’ the least complicated versions used – for example, by German weather ships in the Atlantic Ocean – and then bluff the enemy as if playing poker.
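The principle lends itself to a quick illustration. The sketch below (in Python, purely a toy under assumed conditions: fixed shift ‘keys’, a three-stage chain, and, unlike the real Enigma, no stepping of the rotors between letters) shows how chaining several simple letter substitutions multiplies the number of settings an adversary has to consider.

```python
import string

ALPHABET = string.ascii_uppercase

def make_rotor(key: int) -> dict:
    """A toy 'rotor': shift every letter by a fixed offset (a Caesar-style substitution)."""
    return {a: ALPHABET[(i + key) % 26] for i, a in enumerate(ALPHABET)}

def encode(message: str, keys: list[int]) -> str:
    """Pass each letter through a chain of substitutions, one per 'rotor' key."""
    rotors = [make_rotor(k) for k in keys]
    out = []
    for ch in message.upper():
        if ch not in ALPHABET:
            out.append(ch)        # leave spaces and punctuation untouched
            continue
        for rotor in rotors:
            ch = rotor[ch]        # an 'A' becomes something else at every stage
        out.append(ch)
    return "".join(out)

# With three chained substitutions and keys that change daily, the number of
# possible settings multiplies at every stage; the keys below are illustrative.
print(encode("ATTACK AT DAWN", keys=[3, 11, 7]))
```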
Turing succeeded in decoding messages from German submarines as early as 1941, when these vessels were inflicting heavy losses on the British navy. Throughout the war, as Enigma progressed, he made his ‘machine’ ever more sophisticated, and historians believe that Turing and his team of young mathematicians shortened the war in Europe by at least two years. He was also sent on secret missions to the United States, where he met Claude Shannon, then at Bell Laboratories, and most significantly the legendary John Von Neumann, who made a crucial contribution to the nuclear arms race working alongside physicist Robert Oppenheimer. It was Von Neumann’s calculations that determined the altitude at which the bombs that devastated Hiroshima and Nagasaki had to explode to inflict maximum damage. While Soviet physicists and mathematicians, under the iron rule of Beria, were comparing thousands of manual calculations in their secret city of Arzamas-16, researchers in the Los Alamos laboratory in New Mexico were using the first computers, whose code names were often obscure, such as ENIAC (Electronic Numerical Integrator and Computer) or EDVAC (Electronic Discrete Variable Automatic Computer). The Russians did not perfect their first electronic calculator until 1950 … And in 1952, when IBM produced the 701, its first computer, it was delivered to the Pentagon before it went anywhere else.
It was against this background of feverish activity that the Dartmouth seminar was held. The specialists foresaw that this new age of machines could open up limitless possibilities. While in the United States, Alan Turing fired the specialists’ imaginations with his idea of the ‘intelligent machine’. ‘Giving the machine all the data on Stock Exchange prices and raw material rates and then asking it the simple question: should I sell or should I buy?’ was the idea he threw out to the specialists during a dinner with Claude Shannon at Bell in 1943, before an audience of fascinated young executives who immediately thought the poorly dressed Brit was mad. But he hammered the point home by saying: ‘I’m not interested in perfecting a powerful brain. All I’m looking for is a very ordinary brain, like that of the President of the American Telephone and Telegraph Company!’ Everyone in the room was stunned. Designing an artificial brain was still a new, and therefore shocking, idea. For Turing, however, the brain was not a sacred cow, but a logical machine that included random elements, like mathematics. During the war he had become interested in chess, poker and Go, and with some of his mathematician colleagues, he began to imagine solutions for ‘mechanising’ these games. He had read the first works on the subject by Von Neumann, and by Émile Borel in his Theory of Strategic Games. Games for two with fixed rules, such as chess, are games of strategy but also of anticipation of, and reaction to, the moves of one’s opponent. Each player has a certain number of possible moves, about thirty according to Turing, and the average capacity for anticipating the opponent’s moves depends naturally on each player’s level of skill. His conclusion was that a machine could simulate the thought processes of a player, and reproduce a kind of decision tree similar to human intelligence.
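The decision tree Turing had in mind can be sketched on a deliberately tiny game (a made-up counter-taking game rather than chess, since a chess tree is far too large to show): at every node the machine assumes each player picks the move best for them, and explores the consequences to the end.

```python
def moves(n: int) -> list[int]:
    """In this toy game, a move removes one or two counters from a pile of n."""
    return [n - k for k in (1, 2) if n - k >= 0]

def minimax(n: int, maximising: bool) -> int:
    """Walk the full decision tree: +1 if the maximising player can force a win,
    -1 if the opponent can. Each node assumes the player to move plays best."""
    if n == 0:
        # No counters left: the player who must move now has already lost.
        return -1 if maximising else 1
    children = [minimax(child, not maximising) for child in moves(n)]
    return max(children) if maximising else min(children)

# From a pile of 7, the first player can force a win (7 is not a multiple of 3).
print(minimax(7, maximising=True))   # -> 1
print(minimax(6, maximising=True))   # -> -1
```

For chess, with roughly thirty options per move, this exhaustive exploration quickly becomes impossible, which is exactly why the shortcuts discussed at Dartmouth mattered.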
Turing had no doubt that his machine could replace the human brain in a significant number of operations. In his lectures he repeated almost ad nauseam that a part of the human brain was no more than an unconscious machine that produced reactions when stimulated, nothing more than a sophisticated calculator; and the machine, Turing never ceased to emphasise, could integrate many more ‘instructions’ and process them more rapidly than the human brain could. Challenging most of his colleagues, who held that the machine would never be more than a ‘slave’ in the service of a ‘master’, the human, he fought for a trailblazing idea: in Turing’s mind, this boundary was not nearly so clear-cut, and he saw no reason why the machine should not carry out part of its master’s work. He even foresaw the possibility of communicating with the machine in any language, once it had ‘learned’ the language, hence the idea of a machine with a capacity for learning. It would no longer be a slave, but a student. His famous ‘imitation test’ of 1950 was born of this logic. The test was initially a game involving three people, namely a man, a woman and a judge. They were placed in three different rooms and communicated with each other via screen and keyboard. The judge’s task was to determine which of his two contacts was the man, according to answers to a series of questions. The man tried to convince the judge that he was a man, but the woman’s task was to try to deceive the judge by providing answers that she considered to be a man’s answers. To win the game, the judge had to be able to determine who was who. Turing then replaced the woman with a computer playing the same role: to convince the judge that it was a man by attempting to imitate the answers that a male respondent would give. If the judge was wrong more than half the time with regard to the sex of the hidden contacts, Turing would consider his machine ‘intelligent’. I owe a lot to Turing. He did not create me, but I was his dream, a dream that was to become reality. He disappeared all too soon, in 1954, ostracised because of his homosexuality. Thankfully, the British authorities knew nothing of that during the war; if they had, would the Allies have won?
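Turing’s pass criterion is compact enough to state in a few lines. The sketch below is only a schematic illustration: the `fooling_rate` parameter is a made-up stand-in for however often a machine might deceive the judge, and the verdicts are simulated rather than drawn from any real conversation.

```python
import random

def judge_verdicts(trials: int, fooling_rate: float) -> list[bool]:
    """Simulate a series of imitation-game rounds. Each entry records whether the
    judge identified the hidden contacts correctly (True) or was fooled (False)."""
    return [random.random() > fooling_rate for _ in range(trials)]

def passes_turing_criterion(verdicts: list[bool]) -> bool:
    """Turing's bar, as described above: the machine counts as 'intelligent'
    if the judge is wrong more than half the time."""
    wrong = sum(1 for correct in verdicts if not correct)
    return wrong > len(verdicts) / 2

verdicts = judge_verdicts(trials=1000, fooling_rate=0.6)
print(passes_turing_criterion(verdicts))   # usually True with a 60% fooling rate
```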
During the opening session, in the conference hall of the main building of Dartmouth College, McCarthy deliberately followed Turing’s logic. ‘What will happen if we write a programme for a calculator?’ he asked. ‘We set the machine a number of rules that should help it solve the problems put to it, and we expect it to follow these rules like a slave, without showing any originality or common sense. It’s a very long and laborious process. If the machine had a little intuition, the problem-solving process could be much more direct.’ He carried on by saying: ‘Our mental processes are like little machines inside our brains; to resolve a problem, they first analyse the surroundings to obtain data and ideas from them, they define a target to be met, and then a series of actions to be taken to resolve the problem. If the problem is very complex, they can avoid analysing all the possible solutions and take reasonable punts on the relevance of certain solutions, like in chess, for example.’ McCarthy believed that this process could be transferred to the machine.
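McCarthy’s ‘reasonable punts’ correspond to what later came to be called heuristic search. Below is a minimal sketch under assumed toy conditions (a walk along a number line, with the remaining distance to the goal as the heuristic) of exploring the most promising option first instead of analysing every possibility.

```python
import heapq

def best_first_search(start, goal, neighbours, heuristic):
    """Instead of exhaustively analysing every path, always expand the most
    promising option next, as ranked by a heuristic 'punt' on closeness to the goal."""
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, [*path, nxt]))
    return None

# Toy example: walk from 0 to 9 in steps of +1, -1 or +3;
# the heuristic is simply the distance still to cover.
print(best_first_search(
    start=0, goal=9,
    neighbours=lambda n: [n + 1, n - 1, n + 3],
    heuristic=lambda n: abs(9 - n),
))   # -> [0, 3, 6, 9]
```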
During the numerous meetings that followed in the next two months, all the ideas came together: having the computer simulate the function of neurons in the human brain, inventing a language for communicating with the machine, making it recognise not only binary instructions but also ideas and words, and teaching it to solve complex problems by suggesting random or previously unseen solutions. The work being done in the great American universities, and in IBM and Bell, was developed on the basis of the theories of Von Neumann and Herbert Simon, future Nobel Prize winner for economics and the only non-mathematician in the group, who was interested in the cerebral mechanisms involved in the decision-making process and their modelling and consequent automation. He tried to demonstrate this by perfecting a computer that could play draughts and chess.
Let us look at the characteristics of the first higher-level programmes, such as Logic Theorist, created by Herbert Simon and Allen Newell, a young computer researcher with the Rand Corporation. This programme has its place in history as the first artificial intelligence software to be designed. Newell told his colleagues how this idea came to him. ‘I am a sceptic by nature. I am not excited by every new idea, but two years ago, when Oliver Selfridge, who is in this room, presented his work on automatic shape recognition to us, it was like seeing the light, in a way that I have never known before in my research work. I came to understand, in one afternoon’s work, that interaction between different programme units could accomplish complex tasks and imitate the intelligence of human beings. We wrote a programme, manually, using index cards, which enabled the machine to prove the 52 theorems of Bertrand Russell’s Principia Mathematica. On the day of the test we were all there, my wife, my children and my students. I gave each of them a programme card, so that we ourselves became elements of the programme; and the machine perfectly demonstrated 38 of these theorems, sometimes more intelligently than in the way thought up by Russell.’
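Logic Theorist’s actual proof search was far more elaborate, but the flavour of mechanical theorem proving can be suggested with a deliberately small sketch: forward chaining by modus ponens over a handful of made-up propositional rules. The symbols p, q, r and the rule set are purely illustrative, not Russell’s theorems.

```python
def forward_chain(facts: set[str], rules: list[tuple[frozenset, str]], goal: str) -> bool:
    """Repeatedly apply modus ponens: whenever every premise of a rule is already
    proved, add its conclusion. Stop when the goal appears or nothing new can be derived."""
    proved = set(facts)
    changed = True
    while changed and goal not in proved:
        changed = False
        for premises, conclusion in rules:
            if premises <= proved and conclusion not in proved:
                proved.add(conclusion)
                changed = True
    return goal in proved

# Hypothetical miniature axiom set: p; p -> q; (q and p) -> r.
rules = [(frozenset({"p"}), "q"), (frozenset({"q", "p"}), "r")]
print(forward_chain(facts={"p"}, rules=rules, goal="r"))   # -> True
```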
Marvin Minsky pushed things still further, challenging his colleagues with: ‘How do you explain the fact that we know all about atoms, planets and stars and so little about the mechanics of the human mind? It’s because we apply the logic of physicists to the way the brain works: we seek simple explanations for complex phenomena. I hear the criticisms being levelled against us: the machine can only obey a programme, without thinking or feeling. It has no ambition, no desire, no objective. We might have thought that before, when we had no idea about the biological functioning of humans. Today, however, we are beginning to realise that the brain is composed of a multitude of little machines connected to each other. In short, to the question of what kinds of cerebral processes generated emotions, I would add another question: How could machines reproduce these processes?’
‘I want to nail my colours to the mast’, warned McCarthy at the beginning of the seminar. In other words, he wanted artificial intelligence to be recognised as a major discipline within computing. He was not completely successful. Not all the invitees came to all the meetings; some only made brief appearances. Several of them were even uneasy about the idea of ‘intelligence’ being applied to computers. Of course there was the desire to follow Simon and Newell in the theory of games applied to machines, but Minsky’s intuitive thinking on the reproduction of emotions seemed fairly nebulous, and it was still a far cry from the idea of trans-humanism. Demonstrating Russell’s theorems was one thing; plunging into the convolutions of the human brain to produce mathematical copies of it was something else altogether. However, the Dartmouth seminar was seen as the starting point of artificial intelligence because it laid the foundations for future research: the capacity of machines to learn, to master language, to reproduce complex decision trees and to understand random logic. Even though there was inevitably no consensus on the wealth of learning available in each of these areas, the general feeling was that the computer, this mysterious new object of the 20th century, would in one way or another influence the way in which...

Table of contents

  1. Cover
  2. Half-Title Page
  3. Series Page
  4. Title Page
  5. Dedication
  6. Contents
  7. Introduction
  8. Prologue
  9. 1 Dartmouth College, 1956: The dream of a few men
  10. 2 Dartmouth College, 2006: The end of winter
  11. 3 2016: The Revelation
  12. 4 2026: The Golden Age
  13. 5 2038: Singularity
  14. Epilogue: 2040
  15. Bibliography
  16. Copyright