
Hypothetical Thinking

Dual Processes in Reasoning and Judgement

Jonathan St. B. T. Evans

192 pages · English · ePUB (mobile friendly)

About This Book

Hypothetical thought involves the imagination of possibilities and the exploration of their consequences by a process of mental simulation. In this Classic Edition, Jonathan St B T Evans presents his pioneering Hypothetical Thinking Theory: an integrated theoretical account of a wide range of psychological studies on hypothesis testing, reasoning, judgement and decision making.

Hypothetical Thinking Theory is built on three key principles and implemented in a version of Evans' well-known heuristic–analytic theory of reasoning. The central claim of this book is that this theory can provide an integrated account of apparently diverse phenomena including confirmation bias in hypothesis testing, acceptance of fallacies in deductive reasoning, belief biases in reasoning and judgement, biases of statistical judgement and numerous characteristic findings in the study of decision making.

Featuring a reflective and insightful new introduction to the book, this Classic Edition discusses contemporary theory on cognitive biases, human rationality and dual-process theories of higher cognition. It will be of great interest to researchers, postgraduate students and advanced undergraduates.


Information

Publisher: Taylor and Francis
Year: 2007
ISBN: 9781135419523

CHAPTER ONE
Introduction and theoretical framework

It is evident that the human species is highly intelligent and well adapted. Some of our intelligence we clearly share with many other animals: we have well-developed visual and other perceptual systems, complex motor skills and the ability to learn in many ways to adapt to the environment around us. We also seem to be smart in ways that other creatures are not: we have a language system that is complex and sophisticated in its ability both to represent knowledge and to communicate with other humans; we study and attempt to understand a multitude of subjects including our own history and that of the universe; we have devised systems of mathematics and logic; we design and build a huge range of structures and artifacts; we have constructed and mostly live our lives within highly complex economic and social structures. All of these distinctively human things imply an extraordinary ability to reason, entertain hypotheses and make decisions based upon complex mental simulations of future possibilities. I will use the term “hypothetical thinking” as a catch-all phrase for thought of this kind.
It is equally apparent that evidence of human error and fallibility surrounds us. The world is plagued by wars, famines and diseases that in many cases appear preventable. Stock markets collapse under panic selling when each individual acts to bring about the outcome that none of them wants. Doctors sometimes make disastrous misjudgements that result in the disability or death of their patients. Experts often fail to agree with each other and may be shown in hindsight to have made judgements that were both mistaken and overconfident. At the present time, governments of the world are well informed about the likely progress of global warming and its consequences but seem to be making minimal progress in doing anything to prevent it. Criminal courts continue to convict the innocent and acquit the guilty, with alarming regularity. And so on, and so forth.
It seems vital that psychologists should be able to provide understanding of the mental processes of reasoning and judgements that underlie the actions and decisions that people take. A fundamental premise of the current book is that there are two distinct kinds of thought, which for the moment I will call intuitive and deliberative. Many of our everyday decisions are made rapidly and intuitively because they just feel right. Others are made much more slowly, involving conscious deliberative thinking. Sometimes we have no time for deliberative thought and just have to react quickly to some situation. In fact, the great bulk of our everyday cognitive processing is carried out rapidly and implicitly without conscious thought. Such processes enable us to accomplish a multitude of necessary tasks, as, for example, when we recognize a face, extract the meaning from a sentence, keep our car safely on the road when driving to work (and thinking consciously about something quite different) or attend to the voice of one person in a room containing the babble of many conversations.
Much of our judgement and decision making takes place at this level also. A lot of our behaviour is habitual, so we are not conscious of choosing our direction at a junction on a familiar drive to work. However, something very different happens when we drive to a new location in an unfamiliar town, following verbal directions or trying to read a map. Now we have to engage conscious and deliberative thinking and reasoning to work out the route, identify landmarks, turn at the correct places and so on. In general, novel problems require much more deliberative thought than do familiar ones. When we have to do this kind of thinking it takes time, it requires effort and it prevents us from thinking about other things. Conscious, deliberative thinking is a singular resource that can only be applied to one task at a time. This is one reason that we allocate this kind of thought to tasks and decisions that have great importance for us and make snap intuitive decisions about less important things. However, there is no guarantee that thinking about our decisions will necessarily improve them (see Chapter 5).
Folk psychology – the common-sense beliefs that we all hold about our own behaviour and that of our fellow human beings – involves the idea that we are consciously in control of our own behaviour – we think, therefore we do. The opinion polling industry, for example, is built on the common-sense belief that people have conscious reasons for their actions which they can accurately report. Psychological research, however, seriously undermines this idea (Wilson, 2002). Not only is much of our behaviour unconsciously controlled, but many of our introspections provide us with unreliable information about the extent and the ways in which our conscious thinking controls our actions. Working out the relative influence of intuitive and deliberative thinking and the interaction between the two systems is a complex problem that must be addressed with the methods of experimental psychology. This enterprise lies at the heart of the current book.
Many of the phenomena to be discussed in this book are described as cognitive biases. It may appear that the demonstration of bias implies evidence for irrationality, and it is impossible to study these topics without taking some view on whether and in what way people are rational. Cognitive psychology as a whole studies the workings of the mind at a number of levels. Basic cognitive processes (still incredibly complex and sophisticated) form the building blocks for our behaviour and thought. These include such functions as pattern recognition, language comprehension, memory for events and the acquisition of conceptual knowledge about the world around us. None of these topics has generated debate about human rationality. Our visual systems have limited acuity and our memory systems limited capacity, we assume, because that is simply the way our brains are designed: the way they were shaped by evolution to be. The study of higher cognitive processes, on the other hand – thinking, reasoning, decision making and social cognition – has been somewhat obsessed by the notions of bias, error and irrationality. Author after author provides us with evidence of “bad” thinking: illogical reasoning, inconsequential decision making, prejudice and stereotyping in our view of people in the social world. The study of cognitive biases is something of a major industry.
What exactly is a cognitive bias? One definition is that it is a systematic (not random) error of some kind. This in turn raises the question of what counts as an error. Psychologists have largely answered the second question by reference to normative systems. Thus reasoning is judged by formal logic; judgement under uncertainty by probability theory; choice behaviour by formal decision theory; and so on. Some authors go further and claim that people who fail to conform to such normative standards are irrational. Most of the biases studied in cognitive psychology have been defined in this way, and yet this notion is today highly controversial. Some authors claim that people’s behaviour only appears biased or irrational because the wrong normative theory is being applied. For example, if standard logic requires that propositions are clearly true or false, then people’s reasoning in an uncertain world might better be assessed by norms based on probability theory (see Oaksford & Chater, 2001).
In fact, we do not necessarily need to invoke normative rationality in order to think about cognitive biases. We have much lower visual acuity than does a bird of prey, but vision researchers do not accuse us of being biased against distant objects. Similarly, memory researchers do not accuse us of irrationality if we cannot remember a phone number more than seven or eight digits in length. Researchers in this area rarely use the term “bias”, but their findings certainly indicate the constraints and limitations of human information processing. So we could think about biases of thought and judgement also as indicators of the design limitations of the brain. This is an approach that emphasizes what is known as bounded rationality (Simon, 1982). According to this view, we are not inherently irrational but we are cognitively constrained in the way we can reason about the world. For example, it may not be possible to calculate the best choice of action in a given situation, so we settle for one that is good enough.
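Simon’s idea of settling for an option that is “good enough” is usually called satisficing, and the contrast with exhaustive optimization can be written out in a few lines of code. The sketch below is purely illustrative and not drawn from the book: the flat-hunting scenario, the scoring function and the aspiration threshold of 80 are all invented for the example.

    import random

    def satisfice(options, evaluate, aspiration):
        """Bounded agent: accept the first option that is 'good enough'."""
        for option in options:
            if evaluate(option) >= aspiration:
                return option      # stop searching here
        return None                # nothing met the aspiration level

    def optimize(options, evaluate):
        """Normative ideal: evaluate every option and pick the best."""
        return max(options, key=evaluate)

    # Hypothetical example: 1,000 flats, each given a 0-100 desirability score.
    flats = [random.randint(0, 100) for _ in range(1000)]
    print(satisfice(flats, evaluate=lambda f: f, aspiration=80))  # first flat scoring >= 80
    print(optimize(flats, evaluate=lambda f: f))                  # best of all 1,000 flats

The point of the contrast is computational cost: the satisficer typically inspects only a handful of options before stopping, whereas the optimizer must score every one of them, which is precisely the kind of effort that, on the bounded rationality view, a real agent often cannot afford.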
Another concept of cognitive bias is dispositional: for example, people have different styles of thinking that may be related to personality or to culture. A widely cited claim is that Western people have a more analytic style of thinking, while Eastern people are more holistic or intuitive (Nisbett, Peng, Choi, & Norenzayan, 2001). One style is not necessarily better than the other, but each may fare better or worse on different kinds of task. Combining the dispositional and bounded rationality approaches, we might conclude that people’s ability to think in particular ways is biased or constrained not only biologically, in the design of our brains, but also culturally. Either or both kinds of explanation might be invoked to account for biases in social cognition. For example, people seem compulsively to employ stereotypes when thinking about people from an “out-group” with whom they do not share social membership (Hinton, 2000). This could reflect some innate form of social intelligence shaped by evolution, learning of cultural norms passed from one generation to the next, or an interaction of the two.
As we shall see in this book, psychologists studying higher cognitive processes have discovered and documented a wide range of biases. In most cases, these biases have been defined as deviation from a normative standard, leading to a debate about whether or not they should be termed irrational. I have discussed the rationality issue in detail elsewhere (Evans & Over, 1996a), and it will not be the main focus of interest in this book. (I will, however, consider the issue in my final chapter.) The study of cognitive biases should be seen as important for two reasons, whether or not they are deemed to provide evidence of irrationality. First, they establish the phenomena that have to be explained. Second, they may have practical implications for reasoning and decision making in the everyday world. Hence, each bias gives rise both to a theoretical question: “Why do people think in this way?” and to a practical question: “How will this bias manifest itself in real-world behaviour and with what consequences?”
As an example, psychologists have accumulated much evidence that people’s evaluation of logical arguments is biased by whether or not they believe the conclusions given (Chapter 4). This is regarded as a bias because logical validity depends only on whether a conclusion follows necessarily from some assumptions and not on whether assumptions or conclusion are actually true. I suppose one could try to move directly from this result to its practical implications without any real theoretical analysis of the cause of the bias. Such an analysis might, however, conclude that human reasoning is automatically contextualized by prior knowledge and belief and that only a strong effort of deliberative conscious reasoning will overcome this. In my view, understanding of the likely practical implications of the bias is greatly assisted by this kind of theorizing.
In this book, I shall be viewing the phenomena discussed within both a broad and a more specific theoretical framework to be introduced later in this chapter. The broad framework, generally known as “dual-process” theory, has been applied to a wide range of cognitive studies, including learning (Reber, 1993), reasoning (Evans & Over, 1996a; Stanovich, 1999), conceptual thinking (Sloman, 1996), decision making (Kahneman & Frederick, 2002) and social cognition (Chaiken & Trope, 1999). Dual-processing approaches assert the existence of two kinds of mental processes corresponding broadly to the idea of intuitive and deliberative thinking and to more general distinctions between implicit and explicit cognitive processes, such as those involved in learning and memory. Within this general framework, I will, however, present a more specific dual-process theory of hypothetical thinking that updates and extends my earlier heuristic–analytic theory of reasoning (Evans, 1989). In support of this theory, I will discuss phenomena that are drawn mostly (but not exclusively) from two separate but related literatures: the psychology of reasoning on the one hand; and the study of judgement and decision making on the other. Before presenting my general and specific theoretical framework, I shall outline the nature of these two fields of study, including the methods and theoretical approaches that have tended to dominate them.

THE PSYCHOLOGY OF REASONING

I ought, perhaps, to start with the distinction between implicit and explicit inference (see also Johnson-Laird, 1983). Any kind of inference involves going beyond the information given and may technically be regarded as deductive or inductive. Inductive inferences add new information, whereas deductive inferences draw out only what was implicit in assumptions or premises. Both deductive and inductive inferences may be either implicit or explicit in terms of cognitive processing. I shall illustrate this with some examples.
Pragmatic inferences are almost always involved in the comprehension of linguistic statements (see Sperber & Wilson, 1995, for discussion of many examples). Because they typically add information from prior knowledge relevant to the context, they are generally inductive as well as implicit. As a result, such inferences are plausible or probable but not logically sound and may turn out to be incorrect. In accordance with the communicative principle of relevance (Sperber & Wilson, 1995), every utterance conveys a guarantee of its own relevance, and this licenses many pragmatic inferences. Consider the following dialogue between an adult son and his mother:

“I think I am going to be late for work”
“My car keys are in the usual place”
“Thanks, Mum”.

There will be a context behind this exchange that is mutually manifest to both parties. For example, the son usually travels to work but his mother sometimes lets him borrow her car, which takes 15 minutes off the journey. Hence, the first statement is interpreted as a request to borrow the car, and the reply as acquiescence to this request. Neither speaker has actually stated that the car is to be borrowed, so the inferencing is clearly implicit. It is also hardly deductive and can be incorrect. Suppose the dialogue actually went like this:

“I think I am going to be late for work”
“My car keys are in the usual place”
“I am going for a drink after work. Can’t you drop me off?”

The son’s reply clearly signals that the mother’s original inference that he wanted to borrow the car was wrong. That inference is cancelled, and the reply licenses a further implicit inference: the son wishes to drink and will therefore not drive home afterwards. This kind of inferencing occurs all the time in everyday dialogue, but it is not what the psychology of reasoning is (apparently) concerned with, as we shall see shortly. Note that such implicit inferences can be deductive in nature, as in:

“I can’t play golf this weekend; my sister is visiting”
“Surely, she can spare her brother for a few hours?”

By the conventions of relationships, it follows logically that if X (male) has a sister Y, then X is the brother of Y. This inference is included in the riposte above, but it is most unlikely that either party would have required any conscious reasoning to deduce it. Such inferences are also implicit or automatic but cannot normally be cancelled, unless the premise on which they are based is withdrawn.
What is described as the psychology of reasoning should really be known as the psychology of explicit reasoning as it has, at least on the face of it, nothing to do with these kinds of conversational inferences. Instead, psychologists in this field have concentrated on giving participants in their experiments verbal statements from which explicit conclusions need to be inferred. Explicit reasoning tasks can in principle be deductive or inductive, but the field has been generally dominated by the former, using what is known as the “deduction paradigm” (Evans, 2002a). This method involves giving people some premises, asking them to assume that they are true and then asking them to decide whether some conclusions necessarily follow. This method allows people’s reasoning to be assessed against the normative framework of formal logic. For example, people might be presented with a syllogism that has two premises and a conclusion, such as:

Some of the blue books are geography books
None of the large books are geography books
Therefore, some of the blue books are not large.
1.1

The logical question to be asked is: does the conclusion of this argument necessarily follow from its premises? To put it another way, if the premises of the argument are true, must the conclusion be true as well, no matter what else we assume about the state of the world? The above argument is valid in this sense. The first premise establishes that there exists at least one blue geography book. Since none of the large books are geography books, there exists at least one blue book that is not large. Hence, some (meaning at least one) of the blue books are not large. Suppose we reorder the terms of the conclusion:

Some of the blue books are geography books
None of the large books are geography books
Therefore, some of the large books are not blue.
1.2

Is Argument 1.2 still valid? The answer is no. The conclusion would be false if all of the large books were blue. Although there is at least one blue geography book (that is not large), it is perfectly possible that all of the large books are blue. The actual state of affairs, for example, might be:

10 small blue geography books
20 small red geography books
30 large blue history books.

Given this collection of books, both premises of both arguments hold: some of the blue books are geography books, and none of the large books are geography books. The conclusion of 1.1 also holds: some of the blue books are not large. However, the conclusion of 1.2 is demonstrably fallacious because all of the large books are blue. What this illustrates is the semantic principle of validity: an argument is valid if there is no counterexample to it. This principle is favoured by psychologists in the mental model tradition (Johnson-Laird, 1983; Johnson-Laird & Byrne, 1991), who have built a popular theory of human deductive reasoning around it.
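Because the semantic principle defines validity by the absence of counterexamples, it can be checked mechanically for small arguments of this kind. The sketch below is an illustration in Python, not anything taken from the book or from the mental models literature: each possible book is coded as a triple of properties (blue, large, geography), every possible collection of book types is enumerated, and an argument counts as valid only if no collection makes both premises true while making the conclusion false.

    from itertools import product

    # Each book "type" is a triple of booleans: (blue, large, geography).
    BOOK_TYPES = list(product([False, True], repeat=3))

    def worlds():
        """Every non-empty collection of book types. For monadic arguments like
        these, duplicate books never matter, so this enumeration is exhaustive."""
        for bits in range(1, 2 ** len(BOOK_TYPES)):
            yield [t for i, t in enumerate(BOOK_TYPES) if bits & (1 << i)]

    def some(books, p):        # "Some X are Y": at least one book satisfies p
        return any(p(b) for b in books)

    def none_are(books, p):    # "No X are Y": no book satisfies p
        return not any(p(b) for b in books)

    def premises(books):
        some_blue_geo = some(books, lambda b: b[0] and b[2])      # some blue books are geography books
        no_large_geo = none_are(books, lambda b: b[1] and b[2])   # none of the large books are geography books
        return some_blue_geo and no_large_geo

    def conclusion_1_1(books):  # some of the blue books are not large
        return some(books, lambda b: b[0] and not b[1])

    def conclusion_1_2(books):  # some of the large books are not blue
        return some(books, lambda b: b[1] and not b[0])

    def valid(conclusion):
        """Semantic principle: valid iff no world makes the premises true and the conclusion false."""
        return not any(premises(w) and not conclusion(w) for w in worlds())

    print("Argument 1.1 valid:", valid(conclusion_1_1))   # True: no counterexample exists
    print("Argument 1.2 valid:", valid(conclusion_1_2))   # False: counterexamples exist

Run as written, the check reports Argument 1.1 as valid and Argument 1.2 as invalid; for 1.2, collections like the one listed above, in which every large book is blue, are exactly the counterexamples that defeat it.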
In contrast with conversational inferences, which are automatic and effortless (though not necessarily logically valid), explicit deductive reasoning tasks of this kind are slow and difficult for most people to solve. In fact, psychological experiments on deductive reasoning show that ordinary participants make many mistakes (Evans, Newstead, & Byrne, 1993). In particular, people endorse many fallacies: that is, they declare arguments valid when their conclusions could be true given the premises but do not need to be true. Hence, many people would indicate, if asked, that both 1.1 and 1.2 above are valid arguments. Syllogistic reasoning is also known to be systematically biased by several factors, as we will see later in this book.
How do ordinary people engage in deductive reasoning? For many years, the psychology of reasoning was dominated by two apparently contrasting theories. According to a tradition known ...

Table of contents

  1. Cover Page
  2. HYPOTHETICAL THINKING
  3. ESSAYS IN COGNITIVE PSYCHOLOGY
  4. Title Page
  5. Copyright Page
  6. Foreword and acknowledgements
  7. CHAPTER ONE: Introduction and theoretical framework
  8. CHAPTER TWO: Hypothesis testing
  9. CHAPTER THREE: Suppositional reasoning: if and or
  10. CHAPTER FOUR: The role of knowledge and belief in reasoning
  11. CHAPTER FIVE: Dual processes in judgement and decision making
  12. CHAPTER SIX: Thinking about chance and probability
  13. CHAPTER SEVEN: Broader issues
  14. References
  15. Notes