The Routledge Handbook of Philosophy of Animal Minds

Edited by Kristin Andrews and Jacob Beck

About This Book

While philosophers have been interested in animals since ancient times, in the last few decades the subject of animal minds has emerged as a major topic in philosophy. The Routledge Handbook of Philosophy of Animal Minds is an outstanding reference source to the key topics, problems, and debates in this exciting subject and is the first collection of its kind. Comprising nearly fifty chapters by a team of international contributors, the Handbook is divided into eight parts:



  • Mental representation
  • Reasoning and metacognition
  • Consciousness
  • Mindreading
  • Communication
  • Social cognition and culture
  • Association, simplicity, and modeling
  • Ethics.

Within these sections, central issues, debates, and problems are examined, including: whether and how animals represent and reason about the world; how animal cognition differs from human cognition; whether animals are conscious; whether animals represent their own mental states or those of others; how animals communicate; the extent to which animals have cultures; how to choose among competing models and explanations of animal behavior; and whether animals are moral agents and/or moral patients.

The Routledge Handbook of Philosophy of Animal Minds is essential reading for students and researchers in philosophy of mind, philosophy of psychology, ethics, and related disciplines such as ethology, biology, psychology, linguistics, and anthropology.


Information

Publisher: Routledge
Year: 2017
ISBN: 9781317585602

Part I
Mental representation

1
Arthropod intentionality?1

Andrew Knoll and Georges Rey

Introduction

A ubiquitous idiom in cognitive science is:
  • x represents y
Thus, one reads of the visual system representing edges, surfaces, color and objects; birds representing the stars; bees the azimuth of the sun, and ants the direction of food from their nest. We will presume here that such talk treats the system as in some way mental, involving the puzzling phenomenon of intentionality: representations are about a certain subject matter, and they may be non-factive, non-existential and aspective: i.e., they can be false, represent non-existing things, and usually represent things “as” one way or another, e.g., a four-sided figure as a square or as a diamond. That is, representations have “correctness” (or “veridicality”) conditions which specify when they’re correct and in what way. We will regard those conditions as providing the “content” of the representation.2
An obviously important question is when talk of such intentional representation is literally true, and when it is merely metaphorical or a façon de parler. Sunflowers “follow” the sun through the course of a day, presumably not because they literally represent the sun, but simply because cells on the sunless side of the stem grow faster, causing the plant to droop towards it.3 Drooping sunflowers at night don’t misrepresent the position of the sun.
In this short entry, we want to address this question by focusing on some of the simplest animal “minds” that have so far been investigated: those of arthropods, specifically ants and bees. This is partly because their relative simplicity permits a clearer understanding of what’s relevant to literal intentional ascription than is easy to acquire of more complex animals, particularly human beings; and partly because they seem very near – or past – the limits of such ascription. Getting clearer about them should help clarify what’s essential in the more complex cases. Moreover, ants and bees have been the subject of quite exciting, detailed research, with which we think any philosopher of mind ought to be acquainted.
Whether a system has literal intentionality has sometimes been thought to turn on its cognitive architecture. For example, Carruthers (2004) argues that some insects (ticks, caterpillars, Sphex and digger wasps) have an inflexible architecture, which is unamenable to explanation in terms of intentional attitude states, while the behavior of insects that exhibit flexible navigational capacities, such as honeybees, is best explained in terms of practical syllogisms operating over states with intentional content. We agree with Carruthers’ general point that the flexibility of a system may be a good guide to whether it involves states with intentional content. But we think that much turns on the details of this flexibility, in particular on how much it involves a certain kind of information integration, which we think in turn requires the features of intentionality we have mentioned. The empirical jury is still out on the cognitive architecture of ants and bees; however, there is a prima facie case that some ant navigation involves a flexible architecture that doesn’t require intentional explanation, while honeybees have one that does.

The desert ant (Cataglyphis fortis)

First, consider the desert ant, Cataglyphis fortis, which lives in the relatively featureless Tunisian desert. It goes on meandering foraging expeditions from its nest that can cover 100 m. After finding food, it can return on a direct path to its nest, despite its tortuous outbound route.
Cataglyphis relies on several systems to perform its navigational feats, including a sun compass, wind compass, odor beaconing system, and retinotopic landmark guidance system. Its principal navigation system, however, is a “dead reckoning” or path integration (“PI”4) system. This system keeps track of the steps the ant has taken, and of the polarization of incoming light, which usually changes as a result of changes in the ant’s direction of travel. By realizing a simple vector algebra, the system computationally transforms these correlates of distance and direction, and generates the vector sum of the distance-direction vectors that describe its outward walk. It then follows the inverse of that vector back to its nest.
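As a minimal sketch of that computation (our own illustration in Python, not a model of the ant’s neural implementation; the leg lengths and headings are invented for exposition), path integration can be captured as a running sum of displacement vectors, with the homebound course given by the inverse of that sum:

```python
import math

def path_integrate(legs):
    """Sum outbound displacement vectors (distance, heading in radians)
    and return the homebound vector as (distance, heading)."""
    x = sum(d * math.cos(a) for d, a in legs)
    y = sum(d * math.sin(a) for d, a in legs)
    home_distance = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)  # inverse of the summed vector
    return home_distance, home_heading

# A meandering outbound walk: three legs of (distance_m, heading_rad)
outbound = [(40, 0.0), (30, math.pi / 2), (20, math.pi)]
dist, heading = path_integrate(outbound)
print(f"home vector: {dist:.1f} m at {math.degrees(heading):.0f} deg")
```

The point to notice is that nothing in such a computation needs to mention the nest or the terrain: it is defined entirely over the internally registered distance and direction values.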
Our question is whether ascribing states with intentional content to this computational process is part of the best explanation of the PI system. For Charles Gallistel (1990: 58–83), a representation just is a state in a computational system that stands in an isomorphic relation to the structure of the environment and functions to regulate the behavior of an animal in that environment. By these lights, the ant has states that are functionally isomorphic to the distance and direction it traverses, and therefore has “representations” of distance and direction.
Tyler Burge (2010: 502) complains that such states are “representational” in name only. That is, the states are not about actual spatial distance and direction, in the interesting sense that they have correctness conditions that do any explanatory work. The ant would behave the same whether or not it had representations with those or any other correctness conditions.
One needs to be careful here. It just begs the question against Gallistel to claim that his “representations” play no explanatory role in his theory. Insofar as he thinks representations just are states isomorphically related to and triggered by environmental phenomena, they consequently play an explanatory role if the correlations do. Theoretical reduction is not elimination: if talk of “salt” can be replaced by talk of “NaCl,” salt will play precisely the same explanatory role as NaCl! The substantive issue is whether properties that are arguably constitutive of being an intentional representation – e.g., the properties we mentioned at the start – are essential to the explanation. But we can give Gallistel the word “representation” for functional isomorphisms, and use “i-representation” for the representations that exhibit intentional properties.5
Burge’s complaint is nevertheless on the right track. Isomorphic correlations don’t exhibit the intentional properties that make the representational idiom distinctively interesting. Correlations are factive and relate existing phenomena: no state of an animal can be correlated with features of non-existent landscapes, but animals might represent them nonetheless. Moreover, if a state is sometimes mistakenly correlated with environmental features that fail to take an ant back to its nest, then that’s as real a part of the correlation as features that do take it back.
This latter fact raises what has been called the “disjunction” problem (Fodor 1987), a consequence of an i-representation’s non-factivity. If an i-representation can be erroneous, what determines when that might be? To take a much-discussed example, suppose a frog flicks its tongue at flies, but equally at beebees and mosquitos. Is the content of the relevant representation [fly], and the beebees are errors, or is it [fly or beebee] – “[flybee]”? – and flies and beebees are right and the mosquitos errors? Or perhaps it is merely [moving black dot], and all of those are correct but [moving square] would be wrong. Generally, any non-factive representation can be entokened under conditions C1, C2, C3, …, Cn, for indefinitely large n: what determines which of these conditions are the correct ones and which mistakes?6
Many philosophers have proposed solving the disjunction problem by imposing further constraints on correlations or other natural relations – say, that they must be law-like, obtaining under “normal” circumstances (Dretske 1987); that they must be specified by an “interpretation function” (Cummins 1989); that the correctness conditions involve evolutionary selection (Millikan 1984; Dretske 1986; Neander 1995, 2017); or that erroneous conditions asymmetrically depend upon the correct ones (Fodor 1987, 1990). Any of these constraints might succeed abstractly in distinguishing correct from incorrect uses of an i-representation. But, although defining correctness conditions is certainly an important issue, Burge’s point is an additional one. The question he raises is whether any assignment of correctness conditions, appropriately idealized or not, would be explanatory. We want to agree with Burge that, insofar as the ant seems insensitive to whether any of its states are in error, correctness conditions seem irrelevant to that explanation, however they might be defined.

Cataglyphis: navigation without i-representations

As we’ve noted, the ant’s navigational capacities are sensitive to a wide variety of proximal inputs beyond those that factor directly into the PI system.7 The ant can follow odor concentrations emanating from food and nests (Buehlmann, Hansson, and Knaden 2012, 2013); its antennae are sensitive to displacement, which ordinarily correlates well with wind direction, and which the ant can use to set its direction of travel (Müller and Wehner 2007; Wolf and Wehner 2000); and it has systems that track changes in polarized light and also photoscopic patterns that track the position of the sun in the sky (Wehner and Müller 2006). Additionally, it is able to perform its PI in three dimensions when its foraging path takes it up and down hills, and even when it is forced to trip and stumble over corrugations on its foraging paths.8 More surprisingly still, Steck, Wittlinger, and Wolf (2009) showed that amputation of two of the ant’s six legs doesn’t impede the ant’s successful navigation, even though such amputations cause the ant to stumble and use irregular gaits.
The ant is also sensitive to visual stimuli that correlate with landmarks. The prevailing view is that it responds to stored retinotopic “snapshots” of landmarks in its terrain,9 which it can match with current retinal stimuli in order to influence navigation in a variety of ways (Collett, Chittka, and Collett 2013). Cartwright and Collett (1983, 1987) have described an algorithm that operates only upon proximal retinal stimuli to implement these capacities.
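To make the “proximal-only” character of such an algorithm vivid, here is a toy sketch (our simplification, not Cartwright and Collett’s actual model, which pairs snapshot and image sectors to derive movement vectors): candidate moves are scored by how well the predicted retinal image matches the stored snapshot, and the whole computation is defined over retinal values alone:

```python
def mismatch(snapshot, image):
    """Sum of absolute differences between two retinotopic arrays."""
    return sum(abs(s - i) for s, i in zip(snapshot, image))

def best_move(snapshot, candidate_views):
    """Choose the move whose predicted retinal image best matches the
    stored snapshot. Everything here is proximal (retinal values);
    no state refers to the distal landmarks themselves."""
    return min(candidate_views,
               key=lambda move: mismatch(snapshot, candidate_views[move]))

stored = [0, 1, 1, 0, 0, 1, 0, 0]         # snapshot stored near the nest
predicted = {                              # hypothetical images after each move
    "left":    [0, 1, 0, 0, 1, 1, 0, 0],
    "right":   [0, 1, 1, 0, 0, 1, 0, 0],  # matches the snapshot exactly
    "forward": [1, 1, 1, 0, 0, 0, 0, 0],
}
print(best_move(stored, predicted))        # -> right
```

Nothing in the procedure refers to the landmarks themselves: swap the distal scene while preserving the retinal arrays, and the output is unchanged.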
It might be supposed that this sensitivity to a wide variety of stimuli is enough to establish that Cataglyphis makes use of intentional states, i-representing the wind direction, sun position, and landmarks that are germane to computing its location. This inference is too hasty. Insofar as intentionality is genuinely explanatory, its properties, e.g., non-factivity, should support counterfactuals, indicating how, ceteris paribus, the ant would respond if the representation were false.10 On the face of it, if a system’s computations are counterfactually defined only over purely proximal inputs, then it would behave the same whenever different distal stimuli had the same proximal effects – e.g., in the case of an ant navigating by PI, a vector trajectory toward the nest vs. the same trajectory away from the nest. The fact that it’s a distal error would make no difference to the ant: it wouldn’t lead the ant to correct it.11 Classifying states of the ant as “true” or “false” relative to the distal stimuli they are responding to would capture no generalizations not already accounted for by their response to proximal stimuli.
Indeed, not being able to recover from error seems precisely to be Cataglyphis’ plight. Ants that are displaced after finding food will walk in the direction that would have taken them back to their nest had they not been moved (Wehner and Srinivasan 1981). Ants that have pig bristle stilts attached to their legs after they find food end up overshooting their nest on their homebound walk, whereas ants whose legs are amputated end up undershooting it (Wittlinger, Wehner, and Wolf 2006, 2007). One might think that, given enough time, the ants would eventually recover from such displacements. But Andel and Wehner (2004) gathered data indicating that, even given ample time, ants can’t so correct. They manipulated the ant’s PI system so that it ran in the direction away from its nest upon getting food,12 and then recorded the behavior of the ants for three minutes after they had executed this PI vector. For those three minutes, the ants did run back and forth parallel to their PI vector. But ants execute this back and forth after completing all of their PI walks, whether or not they succeed in taking them toward the nest. The behavior seems to be not a correction from error, but mere execution of the motor routine triggered by activation of the PI vector. Displaced ants have been observed persisting in this motor routine for up to two hours without finding their nest (Müller and Wehner 1994: 529). They seem to lack the cognitive resources to recover from error.
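The stilt and amputation results fall out naturally if the odometer stores a step count and multiplies it by the current stride length. A back-of-the-envelope sketch (the stride values are invented for illustration; see Wittlinger, Wehner, and Wolf for the actual manipulations):

```python
# Hypothetical stride lengths, for illustration only.
normal_stride = 1.0   # arbitrary units per step
stilt_stride  = 1.4   # pig-bristle stilts lengthen each step
stump_stride  = 0.7   # amputation shortens each step

nest_distance = 100.0                          # outbound walk on normal legs
steps_counted = nest_distance / normal_stride  # odometer stores a step count

# Homebound distance actually walked = stored step count x new stride length
print(steps_counted * stilt_stride)   # 140.0 -> overshoots the nest
print(steps_counted * stump_stride)   # 70.0  -> undershoots the nest
```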
Of course, it’s still possible in these instances that there just isn’t enough information available to the ant to allow it to revise course. But there are instances in which the ants are unable to use available proximal stimuli to orient themselves even if those same stimuli can orient them in other circumstances. For example, ants deprived of polarized light can use the sun compass to navigate just as accurately to and from the nest. However, if an ant uses polarized light to chart a course from nest to food, and then is deprived of polarized light cues, it cannot use its sun compass to navigate back home, even though sun compass cues are still readily available (Wehner and Müller 2006). The ant can’t use the sun compass to correct its course, though it could have had it been deprived of polarized light from the start. Perhaps it just doesn’t like using the sun compass, or it falsely believes the sun compass is inaccurate under these conditions – but absent such alternate accounts, the best explanation is that the ant is not i-representing the direction to its nest, but executing a routine that’s sensitive only to stimulation of the polarization detectors.
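One way to picture the architecture this suggests (a purely schematic sketch of the hypothesis, not a claim about the ant’s circuitry): the homing routine consults only the cue channel that was active on the outbound walk, and simply stalls when that channel goes silent, rather than falling back on the other, still-informative compass:

```python
def homebound_heading(trained_cue, polarization_reading, sun_compass_reading):
    """Schematic cue-bound controller: homing consults only the channel
    used on the outbound walk, never the other compass, even when the
    other cue remains available. Readings are headings in degrees,
    or None if that cue is currently blocked."""
    if trained_cue == "polarization":
        return polarization_reading
    if trained_cue == "sun":
        return sun_compass_reading
    return None

# Trained under polarized light, then deprived of it on the homebound run:
print(homebound_heading("polarization", None, sun_compass_reading=42.0))
# -> None: the routine stalls even though a sun compass reading is available
```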
The similar Australian desert ant, Melophorus bagoti, also demonstrates insensitivity to stimuli that in other circumstances would allow it to recover from displacements (Wehner et al. 2006). These ant...
