Part I
Mental representation
1
Arthropod intentionality?1
Andrew Knoll and Georges Rey
Introduction
A ubiquitous idiom in cognitive science is that of "representation."
Thus, one reads of the visual system representing edges, surfaces, color and objects; birds representing the stars; bees the azimuth of the sun, and ants the direction of food from their nest. We will presume here that such talk treats the system as in some way mental, involving the puzzling phenomenon of intentionality: representations are about a certain subject matter, and they may be non-factive, non-existential and aspective: i.e., they can be false, represent non-existing things, and usually represent things "as" one way or another, e.g., a four-sided figure as a square or as a diamond. That is, representations have "correctness" (or "veridicality") conditions which specify when they're correct and in what way. We will regard those conditions as providing the "content" of the representation.2
An obviously important question is when talk of such intentional representation is literally true, and when it is merely metaphorical or a façon de parler. Sunflowers "follow" the sun through the course of a day, presumably not because they literally represent the sun, but simply because cells on the sunless side of the stem grow faster, causing the plant to droop towards it.3 Drooping sunflowers at night don't misrepresent the position of the sun.
In this short entry, we want to address this question by focusing on some of the simplest animal "minds" that have so far been investigated: those of arthropods, specifically ants and bees. This is partly because their relative simplicity permits a clearer understanding of what's relevant to literal intentional ascription than is easy to acquire of more complex animals, particularly human beings; and partly because they seem very near – or past – the limits of such ascription. Getting clearer about them should help clarify what's essential in the more complex cases. Moreover, ants and bees have been the subject of quite exciting, detailed research, with which we think any philosopher of mind ought to be acquainted.
Whether a system has literal intentionality has sometimes been thought to turn on its cognitive architecture. For example, Carruthers (2004) argues that some insects (ticks, caterpillars, Sphex and digger wasps) have an inflexible architecture, which is unamenable to explanation in terms of intentional attitude states, while the behavior of insects that exhibit flexible navigational capacities, such as honeybees, is best explained in terms of practical syllogisms operating over states with intentional content. We agree with Carruthers' general point that the flexibility of a system may be a good guide to whether it involves states with intentional content. But we think that much turns on the details of this flexibility, in particular on how much it involves a certain kind of information integration, which we think in turn requires the features of intentionality we have mentioned. The empirical jury is still out on the cognitive architecture of ants and bees; however, there is a prima facie case that some ant navigation involves a flexible architecture that doesn't require intentional explanation, while honeybees have one that does.
The desert ant (Cataglyphis fortis)
First, consider the desert ant, Cataglyphis fortis, which lives in the relatively featureless Tunisian desert. It goes on meandering foraging expeditions from its nest that can cover 100 m. After finding food, it can return on a direct path to its nest, despite its tortuous outbound route.
Cataglyphis relies on several systems to perform its navigational feats, including a sun compass, wind compass, odor beaconing system, and retinotopic landmark guidance system. Its principal navigation system, however, is a "dead reckoning" or path integration ("PI"4) system. This system keeps track of the steps the ant has taken, and of the polarization of incoming light, which usually changes as a result of changes in the ant's direction of travel. By realizing a simple vector algebra, the system computationally transforms these correlates of distance and direction, and generates the vector sum of the distance-direction vectors that describe its outward walk. It then follows the inverse of that vector back to its nest.
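The vector algebra involved can be illustrated with a toy sketch (ours, not a model from the literature): each stretch of the outbound walk contributes a displacement vector, a distance (from the step counter) paired with a heading (from the polarization compass), and the home vector is simply the negation of their running sum.

```python
import math

def path_integrate(steps):
    """Toy path integration: 'steps' is a list of (distance, heading) pairs
    describing the outbound walk, with headings in radians.
    Returns the (distance, heading) of the direct homeward vector."""
    # Sum the outbound displacement vectors component-wise.
    x = sum(d * math.cos(h) for d, h in steps)
    y = sum(d * math.sin(h) for d, h in steps)
    # The homeward vector is the inverse (negation) of that sum.
    return math.hypot(x, y), math.atan2(-y, -x)

# A meandering outbound walk: 3 m east, then 4 m north.
dist, heading = path_integrate([(3.0, 0.0), (4.0, math.pi / 2)])
# The direct route home is 5 m long, heading back southwest toward the start.
```

Note that nothing in this computation mentions the nest or the terrain: it operates entirely over proximal correlates of distance and direction, which is just the point at issue in what follows.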
Our question is whether ascribing states with intentional content to this computational process is part of the best explanation of the PI system. For Charles Gallistel (1990: 58–83), a representation just is a state in a computational system that stands in an isomorphic relation to the structure of the environment and functions to regulate the behavior of an animal in that environment. By these lights, because the ant has states that are functionally isomorphic to the distance and direction it traverses, it therefore has "representations" of distance and direction.
Tyler Burge (2010: 502) complains that such states are "representational" in name only. That is, the states are not about actual spatial distance and direction, in the interesting sense that they have correctness conditions that do any explanatory work. The ant would behave the same whether or not it had representations with those or any other correctness conditions.
One needs to be careful here. It just begs the question against Gallistel to claim that his "representations" play no explanatory role in his theory. Insofar as he thinks representations just are states isomorphically related to and triggered by environmental phenomena, they consequently play an explanatory role if the correlations do. Theoretical reduction is not elimination: if talk of "salt" can be replaced by talk of "NaCl," salt will play precisely the same explanatory role as NaCl! The substantive issue is whether properties that are arguably constitutive of being an intentional representation – e.g., the properties we mentioned at the start – are essential to the explanation. But we can give Gallistel the word "representation" for functional isomorphisms, and use "i-representation" for the representations that exhibit intentional properties.5
Burge's complaint is nevertheless on the right track. Isomorphic correlations don't exhibit the intentional properties that make the representational idiom distinctively interesting. Correlations are factive and relate existing phenomena: no state of an animal can be correlated with features of non-existent landscapes, but animals might represent them nonetheless. Moreover, if a state is sometimes mistakenly correlated with environmental features that fail to take an ant back to its nest, then that's as real a part of the correlation as features that do take it back.
This latter fact raises what has been called the "disjunction" problem (Fodor 1987), a consequence of an i-representation's non-factivity. If an i-representation can be erroneous, what determines when that might be? To take a much-discussed example, suppose a frog flicks its tongue at flies, but equally at beebees and mosquitos. Is the content of the relevant representation [fly], and the beebees are errors? Or is it [fly or beebee] – "[flybee]" – and flies and beebees are right and the mosquitos errors? Or perhaps it is merely [moving black dot], and all of those are correct but [moving square] would be wrong. Generally, any non-factive representation can be entokened under conditions C1, C2, C3, … Cn, for indefinitely large n: what determines which of these conditions are the correct ones and which mistakes?6
Many philosophers have proposed solving the disjunction problem by imposing further constraints on correlations or other natural relations – say, that they must be law-like, obtaining under "normal" circumstances (Dretske 1987); that they must be specified by an "interpretation function" (Cummins 1989); that the correctness conditions involve evolutionary selection (Millikan 1984; Dretske 1986; Neander 1995, 2017); or that erroneous conditions asymmetrically depend upon the correct ones (Fodor 1987, 1990). Any of these constraints might succeed abstractly in distinguishing correct from incorrect uses of an i-representation. But, although defining correctness conditions is certainly an important issue, Burge's point is an additional one. The question he raises is whether any assignment of correctness conditions, appropriately idealized or not, would be explanatory. We want to agree with Burge that, insofar as the ant seems insensitive to whether any of its states are in error, correctness conditions seem irrelevant to that explanation, however they might be defined.
Cataglyphis: navigation without i-representations
As we've noted, the ant's navigational capacities are sensitive to a wide variety of proximal inputs beyond those that factor directly into the PI system.7 The ant can follow odor concentrations emanating from food and nests (Buehlmann, Hansson, and Knaden 2012, 2013); its antennae are sensitive to displacement, which ordinarily correlates well with wind direction, and which the ant can use to set its direction of travel (Müller and Wehner 2007; Wolf and Wehner 2000); and it has systems that track changes in polarized light and also photoscopic patterns that track the position of the sun in the sky (Wehner and Müller 2006). Additionally, it is able to perform its PI in three dimensions, when its foraging path takes it up and down hills, and even when it is forced to trip and stumble over corrugations on its foraging paths.8 More surprisingly still, Steck, Wittlinger, and Wolf (2009) showed that amputation of two of the ant's six legs doesn't impede its successful navigation, even though such amputations cause the ant to stumble and use irregular gaits.
The ant is also sensitive to visual stimuli that correlate with landmarks. The prevailing view is that it responds to stored retinotopic "snapshots" of landmarks in its terrain,9 which it can match with current retinal stimuli in order to influence navigation in a variety of ways (Collett, Chittka, and Collett 2013). Cartwright and Collett (1983, 1987) have described an algorithm that operates only upon proximal retinal stimuli to implement these capacities.
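To convey the flavor of such a purely proximal algorithm, here is a deliberately simplified sketch (our illustration, far cruder than Cartwright and Collett's actual model): the agent stores a "snapshot" as an array of retinal activations and, at each moment, selects whichever candidate movement would minimize the pixel-wise mismatch between the current retinal array and the stored one.

```python
def mismatch(snapshot, current):
    """Sum of absolute differences between two equal-length retinal arrays."""
    return sum(abs(a - b) for a, b in zip(snapshot, current))

def best_move(snapshot, views):
    """'views' maps each candidate movement to the retinal array it would
    produce; choose the movement minimizing mismatch with the snapshot."""
    return min(views, key=lambda move: mismatch(snapshot, views[move]))

# A stored snapshot, and the retinal arrays two candidate moves would yield.
snapshot = [1, 0, 1, 0]
views = {"left": [1, 0, 1, 0], "right": [0, 1, 0, 1]}
chosen = best_move(snapshot, views)  # "left": a perfect retinal match
```

Everything here is defined over retinal activations alone; the distal landmarks themselves figure nowhere in the computation.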
It might be supposed that this sensitivity to a wide variety of stimuli is enough to establish that Cataglyphis makes use of intentional states, i-representing the wind direction, sun position, and landmarks that are germane to computing its location. This inference is too hasty. Insofar as intentionality is genuinely explanatory, its properties, e.g., non-factivity, should support counterfactuals, indicating how, ceteris paribus, the ant would respond if the representation were false.10 On the face of it, if a system's computations are counterfactually defined only over purely proximal inputs, then it would behave the same whenever different distal stimuli had the same proximal effects – e.g., in the case of an ant navigating by PI, a vector trajectory toward the nest vs. the same trajectory away from the nest. The fact that it's a distal error would make no difference to the ant: it wouldn't lead the ant to correct it.11 Classifying states of the ant as "true" or "false" relative to the distal stimuli they are responding to would capture no generalizations not already accounted for by their response to proximal stimuli.
Indeed, not being able to recover from error seems precisely to be Cataglyphis' plight. Ants that are displaced after finding food will walk in the direction that would have taken them back to their nest had they not been moved (Wehner and Srinivasan 1981). Ants that have pig-bristle stilts attached to their legs after they find food end up overshooting their nest on their homebound walk, whereas ants whose legs are amputated end up undershooting it (Wittlinger, Wehner, and Wolf 2006, 2007). One might think that, given enough time, the ants will eventually be able to recover from such displacements. But Andel and Wehner (2004) gathered data indicating that, even given ample time, ants can't so correct. They manipulated the ant's PI system so that it ran in the direction away from its nest upon getting food,12 and then recorded the behavior of the ants for three minutes after they had executed this PI vector. For those three minutes, the ants did run back and forth parallel to their PI vector. But ants execute this back-and-forth after completing all of their PI walks, whether or not those walks succeed in taking them toward the nest. The behavior seems to be not a correction of error, but mere execution of the motor routine triggered by activation of the PI vector. Displaced ants have been observed persisting in this motor routine for up to two hours without finding their nest (Müller and Wehner 1994: 529). They seem to lack the cognitive resources to recover from error.
Of course, it's still possible in these instances that there just isn't enough information available to the ant to allow it to revise course. But there are instances in which the ants are unable to use available proximal stimuli to orient themselves, even though those same stimuli can orient them in other circumstances. For example, ants deprived of polarized light can use the sun compass to navigate just as accurately to and from the nest. However, if an ant uses polarized light to chart a course from nest to food, and is then deprived of polarized light cues, it cannot use its sun compass to navigate back home, even though sun compass cues are still readily available (Wehner and Müller 2006). The ant can't use the sun compass to correct its course, though it could have had it been deprived of polarized light from the start. Perhaps it just doesn't like using the sun compass, or it falsely believes the sun compass is inaccurate under these conditions – but absent such alternate accounts, the best explanation is that the ant is not i-representing the direction to its nest, but executing a routine that's sensitive only to stimulation of the polarization detectors.
The similar Australian desert ant, Melophorus bagoti, also demonstrates insensitivity to stimuli that in other circumstances would allow it to recover from displacements (Wehner et al. 2006). These ant...