Mind and Body
Robert Kirk

  1. 208 pages
  2. English
  3. ePUB

About This Book

A great deal of work in philosophy today is concerned with some aspect of the complex tangle of problems and puzzles roughly labelled the mind-body problem. This book is an introduction to it. It is a readable, lucid and accessible guide that provides readers with authoritative exposition, and a solid and reliable framework which can be built on as needed. The first chapter briefly introduces the subject and moves on to discuss mechanism - the idea that minds are machines - focusing on Searle's Chinese Room argument. The next three chapters discuss dualism, physicalism, and some hard problems for physicalism, especially those concerning phenomenal consciousness. Chapters on behaviourism and functionalism follow. The central mind-body topics are then each given deeper consideration in separate chapters. Intentionality is investigated via Fodor's doctrine of the Language of Thought, taking account of connectionism. The main theories of consciousness are examined and the author's own approach outlined. The concluding chapter briefly resumes the theme of psychological explanation, linking it to further topics. Each chapter ends with a summary of the main points together with suggestions for further reading.

Information

Publisher: Routledge
Year: 2014
ISBN: 9781317489252

1 Introduction: are we just machines?
1.1 Clockwork Snoopy
Here is a wind-up toy dog, Snoopy, about as big as my thumb. He stands on two legs, and when I wind him up the clockwork motor makes his legs move and he walks. When he hits an obstacle he sometimes stops, sometimes rocks gently and moves off in a different direction. If you were very simple-minded you might think Snoopy decided to stop walking, then decided to move off again; and that generally he knew what he was doing. But we know this clockwork toy really has no thoughts or feelings.
Why are we so confident? Do we know what it takes to have thoughts and feelings? Plenty of philosophers would say we do, but plenty would disagree. As a preliminary to studying the mind–body problem it will be useful to consider the following question:
What reasons are there for thinking that the clockwork dog has neither thoughts nor feelings?
Here are some of the replies people typically offer:
A. It isn’t conscious.
B. It hasn’t got a mind (or a soul).
C. It hasn’t got a brain.
D. It’s just a machine.
E. It’s made of the wrong stuff.
F. It doesn’t behave in the right ways.
Discussing these suggestions will help to expose some main strands in the complex tangle of the mind–body problem.
1.2 Why doesn’t the clockwork dog have thoughts or feelings?
Reply A: “It isn’t conscious.”
That is surely true. However, plenty of people accept that we have unconscious thoughts; and some philosophers argue there can be unconscious experiences too. So even though it seems clear that clockwork Snoopy isn’t conscious, that doesn’t settle the matter. Besides, to say it isn’t conscious immediately raises the question of how we know. The answers tend to be like the ones given to the first question, so we’re not much further forward.
Reply B: “It hasn’t got a mind or a soul.”
You might object that minds and souls are very different kinds of thing, not to be lumped together. But whatever exactly may be meant by those words, it seems clear that clockwork Snoopy has neither. Unfortunately this too doesn’t seem to move us forward. Consider what is supposed to be involved in having a mind or a soul. Many who take the idea seriously will agree that the whole point of having one is that it enables you to have thoughts and feelings. If that is right, the present suggestion amounts to no more than saying that the clockwork dog lacks thoughts and feelings because it lacks whatever would have enabled it to have thoughts or feelings.
Reply C: “It hasn’t got a brain.”
When someone tells the Scarecrow in The Wizard of Oz to make up his mind, he replies, “I haven’t got a brain, only straw. So I haven’t got a mind to make up.” Cogent reasoning – assuming a brain is necessary for having a mind. Is it?
We know that with us and other terrestrial animals the brain is heavily involved in the control of behaviour. But perhaps things could have been different. Perhaps robotic systems or alien organisms could be adequately controlled in some different way, without brains. It’s one thing to say a very simple system like clockwork Snoopy has no thoughts or feelings; something else to say no possible robot or computer-controlled system whatever could have them. If it could, then brains are not necessary for thinking and feeling.
Reply D: “It’s just a machine.”
Of course the machines we are familiar with don’t even seem capable of having thoughts or feelings. But that doesn’t mean no machine at all could have thoughts or feelings. Many scientists and philosophers claim it might eventually become possible to construct sophisticated robots with genuine intelligence and even consciousness.
It’s also true that clockwork Snoopy isn’t alive. But we know that being alive is not sufficient for having thoughts or feelings (unless we happen to believe that even plants can feel). If sophisticated robots can be intelligent, it isn’t necessary either.
Reply E: “It’s made of the wrong stuff.”
We are strongly inclined to suppose that nothing made of plastic and metal could have thoughts or feelings. But why not? Two reasons for challenging this reply can be noted straight away. First, if you inspect a brain, living or dead, it looks an unpromising source of thoughts and feelings. If there has to be one bodily organ to do the job, then perhaps, as the ancients supposed, the heart is the more promising candidate. We can at least feel it beating; and loud insistent heartbeats signify strong emotion. Regardless of whether we choose the heart or the brain, though, there seems to be a huge gap between the physiological facts about those organs on the one hand, and thoughts and feelings on the other. The inclination to say that plastic and metal are not the right sort of stuff could turn out to be prejudice. What do brains have that metal and plastic lack?
You might suggest that brains can do things that plastic and metal can’t. That’s true: for one thing, brains bleed. But so what? Evidently the question has to be whether plastic and metal can do the right kinds of things. Perhaps further advances in technology will enable them to do what brains can do. Even today bits of plastic and metal can be inserted into people’s heads to take over some functions of the inner ear. Why shouldn’t it be possible to have totally synthetic brains? Perhaps the trouble is not that the stuff is the wrong kind, but that the mechanisms are the wrong kind. If so, again we meet a further question: what makes the difference between right and wrong kinds of mechanism?
Many people find it so mysterious that anything material should have thoughts or feelings that they think something extra must be involved, something beyond the merely physical. That idea, which arose in very ancient times and is still influential, no doubt partly helps to explain the persistence of the mind–body problem.
It is not the whole of the explanation. Even if we are purely physical organisms and nothing beyond the physical is involved in thoughts and feelings, that leaves most of the main philosophical problems still unsolved, as we shall see.
Reply F: “It doesn’t behave in the right ways.”
Clockwork Snoopy has a very limited behavioural repertoire, while human beings seem capable of indefinitely many different types of behaviour. That is particularly clear when we consider language. We are able to understand an indefinitely large number of sentences, and similarly we can construct and produce an indefinitely large number of sentences, which others can understand. Clockwork Snoopy has nothing approaching those capacities. But that still doesn’t settle the matter. After all, young infants and dumb animals have no language to speak of, and generally a very restricted behavioural repertoire. Why shouldn’t clockwork Snoopy have a limited repertoire of thoughts and feelings to go with its limited behavioural repertoire, rather than none at all?
You might think the point is not so much that clockwork Snoopy can’t do enough, as that he can’t do the right sorts of things. But what are they? What sorts of behaviour, if any, reveal that the system (organism, robot, Martian, whatever) is a genuine subject of thoughts and feelings, not just a clever mimic? That question becomes especially urgent if linguistic behaviour is not necessary.
1.3 Going deeper
You may have been getting impatient with the replies I have discussed so far. You will probably be sympathetic to the last one, but point out that the main trouble with Snoopy is not so much that he doesn’t do the right sorts of things as that he is (a) insensitive to the world around him, and (b) lacks the ability to work out fresh ways of behaving. If observing his actual behaviour isn’t enough to convince us of (a) and (b), then studying his insides will clinch the matter. There is only a simple clockwork motor, whose cogwheels turn a cam which lifts Snoopy’s legs one after the other. Nothing else; and in particular, no mechanisms for receiving information from the outside world, and none for processing that information and working out appropriate behaviour.
It is at least highly plausible that unless an organism or an artificial behaving system is sensitive to the world outside it, and also able to work out fresh ways of behaving, it can’t qualify as having thoughts or feelings. (I say only that this is “highly plausible” because some people maintain there could be thoughts and feelings in a thing without those seemingly vital features.)
Mentioning Snoopy’s insides reminds us that although a great deal is still not known about the detailed workings of human and animal brains and nervous systems, a great deal has already been discovered, and more is being discovered every day. In spite of the enormous difficulty of the task it might eventually be successfully completed, in the sense that scientists will thoroughly understand how the brain and central nervous system generally function. Will the philosophical problems thereby be solved? Some influential philosophers think so: Daniel Dennett, for example (see his Consciousness Explained, 1991). However, even if that is correct, it is far from obvious.
One reason is that discovering the workings of animal bodies and nervous systems will not automatically tell us which features of these systems matter from the point of view of an interest in the nature of thinking and feeling in general. It will not automatically enable us to tell whether a complicated robot – which we can take to be any system controlled by a standard type of computer – has thoughts or feelings, for example. Nor will it automatically enable us to decide whether all that matters is how the system behaves, as some philosophers – philosophical “behaviourists” – still maintain (“If it looks like a duck, flies like a duck, walks like a duck, and quacks like a duck, it’s a duck”). The scientists themselves discuss such questions – but by doing so they engage in philosophy.
Yet if science alone won’t solve all the problems, philosophy alone isn’t going to do it either. Some of the philosophical problems arise only because of our acquaintance with scientific and technological achievements. An example is provided by the work of Ludwig Wittgenstein, who died in 1951. Although research in artificial intelligence had started before Wittgenstein stopped working, and although Alan Turing had already published a significant non-technical paper on these matters in 1950, developments in computing were very far from the stage they have reached today. Those developments in both hardware and software raise possibilities which Wittgenstein just did not consider, as we shall see later in this chapter and in Chapters 5 and 7.
1.4 Consciousness, intentionality and explanation
There is no single clearly defined “mind–body problem”. As the examples of the toy dog and the sophisticated robot suggest, we face a complex of interrelated problems and puzzles. But the following double question will serve as a brisk statement of the problem:
(a) What is it to have thoughts and feelings? and
(b) how are thoughts and feelings related to the physical world?
You might reckon we could leave out (a); but surely we can’t hope to understand how thoughts and feelings are related to the physical world unless we have some understanding of the nature of thoughts and feelings themselves.
I take feelings to include sensations, emotions and perceptions; I will also use “consciousness” to cover this large aspect of mental life. Experiences, feelings, emotions and the like have “subjective character”: there is something it is like to have them. This feature is sometimes called “phenomenal consciousness”. We want to know what it is for something to be phenomenally conscious. Can a purely physical system (organism, artefact, extraterrestrial) be phenomenally conscious? How can a mere mound of molecules have mentality? We shall be constantly returning to these questions; they will receive extended treatment in Chapter 8.
The particularly problematic feature of thinking is “intentionality”. This is the technical-sounding word standardly applied to that feature of our thoughts by which they can be about things, and in some cases true or false. (Note that intentionality is not confined to intentions.) Can a purely physical system have intentionality? How is that possible? Again, we shall be considering this and related questions throughout, with an extended treatment in Chapter 7.
There are some reasons to suppose that thinking and feeling are fundamentally different; but many philosophers argue that they are different sides of the same coin. Further, our problems are not just a result of the fact that the things physics tells us about seem very different...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. Preface
  7. 1 Introduction: are we just machines?
  8. 2 Is there something extra?
  9. 3 Physicalism
  10. 4 Some objections to physicalism
  11. 5 Behaviourism
  12. 6 Functionalism
  13. 7 More about thinking
  14. 8 More about feeling
  15. 9 Conclusion
  16. Websites
  17. References
  18. Index