
Artificial Intelligence

An Introduction

Alan Garnham


About This Book

First published in 1987, this book provides a stimulating introduction to artificial intelligence (AI) - the science of thinking machines. After a general introduction to AI, including its history, tools, research methods, and its relation to psychology, Garnham gives an account of AI research in five major areas: knowledge representation, vision, thinking and reasoning, language, and learning. He then describes the more important applications of AI and discusses the broader philosophical issues raised by the possibility of thinking machines. In the final chapter, he speculates about future research in AI, and more generally in cognitive science. Suitable for psychology students, the book also provides useful background reading for courses on vision, thinking and reasoning, language and learning.


Information

Publisher
Routledge
Year
2017
ISBN
9781351337861

1 Introduction

Artificial intelligence (AI) is an approach to understanding behaviour based on the assumption that intelligence can best be analysed by trying to reproduce it. In practice, reproduction means simulation by computer. AI is, therefore, part of computer science. Its history is a relatively short one - as an independent field of study it dates back to the mid-1950s. The AI approach contrasts with an older method of studying cognition, that of experimental psychology. Psychology has long had intelligence among its central concerns, intelligence not just as measured in IQ tests, but in the broader sense in which it is required for thinking, reasoning and learning, and in their prerequisites — high-level perceptual skills, the mental representation of information and the ability to use language.
AI and psychology have inevitably interacted with each other. Psychologists have borrowed concepts from AI, and AI workers have taken an interest in psychological findings. Nevertheless, there has been a certain amount of antagonism between the two approaches, with proponents of each pointing out the strengths of their own methodology and the weaknesses of their opponents'. This uneasy relationship lasted until the late 1970s, when many people on both sides felt the need for a more constructive amalgam of these different approaches to the same problems. A new discipline, cognitive science, came into being, combining the strengths of psychology, AI and other subjects, in particular linguistics, formal logic and philosophy. Cognitive science attempts to answer some of the unsolved problems about intelligent behaviour, in the widest sense of that term. The importance of its interdisciplinary approach is reflected in the fact that it is now difficult for psychologists to understand new work in perception and cognition if they are ignorant of AI.
This book is an introduction to AI. It describes research in AI, and discusses the strengths and weaknesses of the AI approach to cognition. Only by familiarising themselves with AI can psychologists, and others, judge for themselves what contribution it can make to the study of mental functioning, and in what way it complements and reinforces more traditional psychological techniques.
The present chapter provides a general introduction to AI, its history, its tools and research methods, and its relation to psychology. Chapters 2 to 6 describe AI research in five major areas: knowledge representation, vision, thinking and reasoning, language, and learning. The more important applications of AI are described in chapter 7, and some of its wider implications for a theory of mind are discussed in chapter 8. The final chapter speculates about future research in cognitive science.

What is 'artificial intelligence'?

Artificial intelligence is the study of intelligent behaviour. One of its goals is to understand human intelligence. Another is to produce useful machines. In some ways the term artificial intelligence is an unfortunate one. Both parts of it are misleading. On the one hand, as many people have pointed out, artificial implies not real. Although many critics of AI have claimed that artificial intelligences are not really intelligent (see chapter 8), most AI researchers disagree with them. On the other hand, the word intelligence suggests that AI is restricted to the study of behaviour that is indicative of intelligence, in the everyday sense of that term - behaviour such as solving problems, playing chess and proving theorems in geometry and predicate calculus. These kinds of behaviour are the ones that particularly interested the AI pioneers of the mid-1950s. However, in writing programs to simulate these skills, they developed a battery of programming techniques that could be applied to aspects of behaviour not normally thought of as requiring any great intelligence - recognising objects and understanding simple text, for example. More recently, particularly in the study of visual perception and speech recognition, programming techniques have been introduced that are not immediately applicable to the simulation of problem solving. Nevertheless, it is usual to extend the term artificial intelligence to include this work.
Even when AI research aims to reproduce human behaviour, it need not necessarily attempt to reproduce the mechanisms underlying it. AI is more immediately relevant to psychology when it does try to model those mechanisms, but other types of AI research may be of psychological interest if they suggest general principles for describing or modelling cognitive functions. Therefore, although this book is addressed primarily to psychologists, the term artificial intelligence will be taken in its most general sense, and the full range of AI research will be discussed.

Artificial intelligence — a brief history

The term artificial intelligence was first used in print by John McCarthy, in a proposal for a conference at Dartmouth College, New Hampshire, to discuss the simulation of intelligent behaviour by machines. As an academic discipline, AI had its origins in the mid-1950s, at about the time of the Dartmouth conference. Its history is, therefore, comparatively short. However, AI did not emerge from a theoretical vacuum. Before that time there had been many investigations of the nature of intelligence. However, until the advent of the digital computer, most of this effort was directed towards understanding intelligence as manifested by people.
It is possible, and in many ways useful, to view current AI research as a continuation of previous work in philosophy, science, and technology. Two authors who explore this idea in detail are McCorduck (1979) and Gregory (1981), who lays particular emphasis on the role of technological advances in understanding the human mind.
Perhaps the principal line of development that led to AI was the attempt to produce machines that took the drudgery out of human intellectual endeavour and, at the same time, eliminated some of the errors to which it is prone. Gregory (1981) describes an ancient Greek device, the so-called Antikythera Mechanism (c. 80 BC), which models the movements of heavenly bodies. Its discovery shows that the Greeks were more technologically advanced than has often been assumed. However, the gap between ancient Greek culture and our own is comparatively wide. It is difficult to be sure what the existence of such mechanisms meant to the Greeks, or how they shaped ideas about the mind.
The Antikythera Mechanism is the ancestor of medieval orreries and Renaissance clocks. It is not on the direct line to the digital computer. That device, as its name suggests, developed from aids to numerical calculation, which can be traced back through the abacus to groups of pebbles. Calculating machines, whose principal components were cogwheels, were first constructed by the philosophers Pascal (1623-1662) and Leibniz (1646-1716). However, the capabilities of these machines were severely limited (by today's standards) by the fact that their parts were mechanical. These same limitations thwarted the ambitions of Charles Babbage (1792-1871) to produce a much more powerful machine. Babbage's first project, the Difference Engine, was designed to perform the relatively modest task of compiling tables of logarithms, whose principal use was in nautical computations. In the early nineteenth century these tables were produced by teams of human computers and were often error-ridden, sometimes with fatal results. In the early 1830s, when its construction was almost finished, Babbage lost interest in the Difference Engine, because he had conceived the much more ambitious Analytical Engine, which never came close to realisation. The Analytical Engine was intended to compute any mathematical function, not just logarithms. It was to be programmable by punched cards, in much the same way as the recently invented Jacquard loom. As well as performing mathematical computations, Babbage realised that the Analytical Engine would be able to play games such as noughts-and-crosses (tic-tac-toe) and chess, and at one time he proposed to build a game-playing version of the machine to raise funds.
In the event, the all-purpose computer, such as the Analytical Engine was intended to be, did not become a reality until mass-produced electronic components were available. Even then the earliest machines, conceived during the Second World War and constructed shortly afterwards, were unreliable and difficult to operate. The real breakthrough came with the discovery of semi-conductors and the development of the transistor as a replacement for the vacuum-tube diode.
The first programmable computer was constructed in Germany by Konrad Zuse before the war was over (see McCorduck, 1979, p. 50). However, Zuse was not taken seriously by the German authorities, and Germany's defeat meant that his efforts came to nothing. In Britain and the USA, on the other hand, special purpose computing machines had contributed to the war effort and, when the war was over, funding was made available for further development. In the USA, scientists at the Moore School of the University of Pennsylvania had developed the ENIAC, a machine for calculating bombing tables. After the war, they explored its use as a general purpose computer. In Britain, Alan Turing and a team of cryptanalysts at Bletchley Park had used electromechanical computing machines for code breaking. Only recently has it become public knowledge how crucial this work was in avoiding defeat in the early years of the war. Turing, also, obtained funds for the design of general purpose electronic computers after the war (see Hodges, 1983, for an account of his work).
The first electronic computers, although physically very large, were severely limited in their capabilities compared with even a modest home microcomputer of the late 1980s. Several developments, in addition to the change from valves to transistors, were needed before AI programming became a reality. Two of these were particularly important. The first was the idea of storing a program, rather than just data, in the computer's memory. The second was that of a high-level programming language, from which programs could be translated automatically into a form that the machine could use. This process of translation is called compilation.
The first programs that could be called AI programs, though that term had not yet been invented, were game players. Turing in Britain and Shannon (1950) in the USA, like Babbage before them, explored the idea of a chess-playing computer, and Turing had simulated the performance of such a machine by hand before he could program a real computer to play. However, these early projects were hampered by the lack of high-level programming languages. Programming a machine to perform any task was extremely laborious before the invention of such languages.
1956 is a crucial date in the history of AI. That was the year of the conference at Dartmouth College, which its organisers, in particular John McCarthy, hoped would give a large and immediate impetus to AI research. If the effects of the conference were not as dramatic as had been anticipated, with hindsight they still appear highly significant. If nothing else, the use of the term artificial intelligence in the proposal for the conference was instrumental in its gaining currency.
Although McCarthy was disappointed that the conference did not produce more immediate results, it brought together many of those who became prominent in the early days of AI, and laid the foundations for work that was very soon underway. Among the people at Dartmouth were Allen Newell and Herbert Simon, who had already implemented a high-level programming language designed for AI research. Using this language they had written a program, called the Logic Theory Machine, which could prove theorems of formal logic (see chapter 4).
In developing this program, Newell and Simon, together with their colleague Shaw, had largely ignored two lines of research that many of the other delegates at the conference considered important. The first was neural nets (McCulloch and Pitts, 1943). Neural nets are models of the logical properties of interconnected sets of nerve cells. By investigating the properties of such nets, neural net theorists hoped to show how the brain could mediate intelligent behaviour. However, this research was based on what later turned out to be a simplistic view of the properties of neurons, and AI researchers soon lost interest in it. Newell, Shaw and Simon emphasised the importance of studying intelligence at a functional, rather than a physiological, level, and it is only recently that ideas similar to those of McCulloch and Pitts have been revived in connectionist and parallel distributed processing (PDP) models.
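The logical character of a McCulloch-Pitts net can be conveyed with a minimal sketch (an illustration added here, not drawn from the original text): each unit fires when the weighted sum of its binary inputs reaches a threshold, and hand-chosen weights let a single unit compute a simple logical function. The function names and the choice of weights are illustrative assumptions.

```python
# Minimal sketch of a McCulloch-Pitts threshold unit. The unit outputs 1
# (fires) when the weighted sum of its binary inputs reaches a threshold.
# Weights and thresholds are fixed by hand for illustration, not learned.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
def AND(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

# Logical OR: either active input is enough to fire the unit.
def OR(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)
```

Units of this kind can be wired together so that the outputs of some serve as the inputs of others, which is how neural net theorists hoped to build up arbitrary logical functions from neuron-like elements.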
The second idea that Newell, Shaw and Simon repudiated was that programs should work according to the principles of formal logic (rather than prove theorems in it). Their claim that people do not reason using logical rules but use heuristics, or rules of thumb, remains important to this day.
Although the implications of Newell, Shaw and Simon's work had not become fully apparent by the end of the Dartmouth conference, their ideas became the dominant influence in the early years of AI, as is evidenced by Feigenbaum and Feldman's (1963) Computers and Thought, the first collected volume of AI papers, and one which gives a good overview of the early work.
In the early 1960s the Massachusetts Institute of Technology (MIT) became the leading centre for AI research. This period is often referred to as the era of semantic information processing, a name taken from a book summarising its most important work (Minsky, 1968). The term semantic information processing indicates that the meaning of the information being processed, and not just its structure, is important for the task in hand. For example, in language processing, interest shifted from the syntax-based attempts at machine translation of the 1950s to meaning-based language-understanding systems.
The work of the early 1960s retained the assumption, made by Newell, Shaw and Simon, that AI research should result in general models of intelligent behaviour. Later in the 1960s it was realised that such behaviour requires large amounts of background knowledge, often knowledge that is specific to a particular task. This observation led to the writing of programs that worked in restricted domains, most notably the MIT BLOCKSWORLD, which comprised prismatic blocks on a table top. However, many people continued to believe that principles discovered by solving problems in one domain would carry over to others. This era produced some programs that performed impressively, though it later turned out that they often depended on domain-specific tricks, and that the ideas on which they were based could not, after all, be generalised to other domains.
Another set of programs on which work began in the 1960s were unashamed specialists, whose performance was deliberately restricted to a single type of problem, such as diagnosing a particular class of diseases. These programs, which were intended to be used in the real world, initially met with some hostility - they were not regarded as genuine AI. Now renamed expert systems, they form one of the central areas of AI research (see chapter 7).
The 1960s also saw a revival of interest in formal logic as a tool in problem solving. The principal reason for this revival was the invention by Alan Robinson of the resolution method for deriving conclusions from premises stated in predicate calculus (see chapter 4).
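The core of the resolution rule can be sketched in a few lines (an illustration added here, restricted to propositional logic; Robinson's full method handles predicate calculus by way of unification, which is omitted). Clauses are represented as sets of string literals, with a leading '-' marking negation; this representation, and the function name, are assumptions made for the example.

```python
# Propositional sketch of the resolution rule: from two clauses containing
# a complementary pair of literals (p and -p), derive the clause formed by
# the remaining literals of both.

def resolve(clause1, clause2):
    """Return all resolvents of two clauses.

    Each clause is a frozenset of literals; a literal is a string,
    and '-p' denotes the negation of 'p'.
    """
    resolvents = []
    for lit in clause1:
        # Form the complementary literal and look for it in the other clause.
        neg = lit[1:] if lit.startswith('-') else '-' + lit
        if neg in clause2:
            resolvents.append((clause1 - {lit}) | (clause2 - {neg}))
    return resolvents

# From (p or q) and (not-p or r), resolution derives (q or r).
print(resolve(frozenset({'p', 'q'}), frozenset({'-p', 'r'})))
```

Repeatedly applying this rule to a set of clauses that includes the negation of the desired conclusion, until the empty clause is derived, is the refutation strategy on which resolution theorem provers are built.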
The early 1970s was a comparatively quiet period. Although detailed accounts of some of the most impressive semantic information-processing systems were published (e.g. Winograd, 1972) and work on expert systems continued, there was little sense of progress. In Britain, the Lighthill report concluded that AI was not a priority area for research. However, the late 1970s saw a renaissance. After a lengthy period of evolution, the first expert systems were put into everyday use, working out the structure of organic molecules, configuring computer systems and diagnosing diseases. This development signalled that AI had potentially lucrative applications, and was partly responsible for an upsurge in funding. On the theoretical side, much of the work of the preceding twenty-five years was systematised, and a welcome attempt was made to identify underlying principles, and to dispense with ad hoc solutions to problems.
Because AI is a young discipline, the age of a piece of research is a poor guide to its current relevance. This book will, therefore, describe some of the earliest work in AI, as well as more recent studies. There will, however, be an emphasis on later work.

The goals of AI research

AI researchers try both to understand intelligent behaviour and to build clever machines. Indeed, a single AI project may have both of these goals. The fact that many projects aim to produce a specific product - a machine that is, or could be, used in the real world - suggests that AI is more like engineering than physics, that it is more of an applied than a pure science (see e.g. Feigenbaum, 1977). The truth is rather that it is more difficult to distinguish between pure and applied AI than between physics and engineering. In some AI projects the primary goal is to produce an intelligent artefact. The principles underlying its behaviour are of secondary importance, and there is no intention to search for new principles. However, in other projects, perhaps the majority, the aim is to understand the mechanisms underlying behaviour, to make general statements about knowledge representation, vision, thinking, language use or learning. In particular, AI programs that try to simulate human behaviour are often written in an attempt to give a principled account of that behaviour.
AI i...
