Mathematical Models of Perception and Cognition Volume II

A Festschrift for James T. Townsend

About This Book

In this two-volume festschrift, contributors explore theoretical developments (Volume I) and applications (Volume II) in traditional cognitive psychology domains, and model other areas of human performance that benefit from rigorous mathematical approaches. It brings together former classmates, students, and colleagues of Dr. James T. Townsend, a pioneering researcher in the field since the early 1960s, to provide a current overview of mathematical modeling in psychology. Townsend's research has critically emphasized the need for rigor in the practice of cognitive modeling and for providing mathematical definition and structure to ill-defined psychological topics. The research captured here demonstrates how the interplay of theory and application, bridged by rigorous mathematics, can move cognitive modeling forward.


Information

Editors: Joseph W. Houpt and Leslie M. Blaha
Publisher: Routledge
Year: 2016
ISBN: 9781317297482
Edition: 1

1 The Neural Basis of General Recognition Theory

F. Gregory Ashby and Fabian A. Soto

1.1 Introduction

General recognition theory (GRT; Ashby & Townsend, 1986) is a multivariate extension of signal detection theory to cases in which there is more than one perceptual dimension. GRT has all the advantages of univariate signal detection theory (i.e., it separates perceptual and decision processes), but it also offers the best existing method for examining interactions among perceptual dimensions (or components). Since its inception, hundreds of articles have applied GRT to a wide variety of phenomena, including categorization (e.g., Ashby & Gott, 1988; Maddox & Ashby, 1993), similarity judgment (Ashby & Perrin, 1988), face perception (Blaha, Silbert, & Townsend, 2011; Soto, Vucovich, Musgrave, & Ashby, 2014; Thomas, 2001; Wenger & Ingvalson, 2002), recognition and source memory (Banks, 2000; Rotello, Macmillan, & Reeder, 2004), source monitoring (DeCarlo, 2003), attention (Maddox, Ashby, & Waldron, 2002), object recognition (Cohen, 1997; Demeyer, Zaenen, & Wagemans, 2007), feature binding (Ashby et al., 1996), perception/action interactions (Amazeen & DaSilva, 2005), auditory and speech perception (Silbert, 2012; Silbert, Townsend, & Lentz, 2009), haptic perception (Giordano et al., 2012; Louw, Kappers, & Koenderink, 2002), and the perception of sexual interest (Farris, Viken, & Treat, 2010). Townsend has been at the forefront of this movement, coauthoring the article that introduced and named GRT and advancing the theory with many subsequent contributions.
Of course, the perceptual and cognitive processes modeled by GRT are mediated by circuits in the brain. During the past decade or two, much has been learned about the architecture and functioning of these circuits. This chapter reviews this neuroscience literature, with a focus on three separate questions. First, what does the neuroscience literature say about the validity of GRT? In other words, do the recent discoveries in neuroscience support or disconfirm the fundamental assumptions of GRT? Second, how can we use results from the neuroscience literature to improve GRT applications? For example, are there experimental conditions that improve the validity of GRT analyses? Third, how can GRT analyses be extended to neuroscience data and especially to data from neuroimaging experiments?

1.2 Supporting Evidence for the Neural Feasibility of GRT

GRT is an extremely general model of perception and decision making that has been applied to a great variety of tasks and behaviors. Nevertheless, it makes several core assumptions that are expected to hold in all applications. In particular, it assumes two separate stages of processing, with a sensory/perceptual stage that generally precedes a decision stage. It also assumes that all sensory representations are inherently noisy, that every behavior, no matter how trivial, requires a decision, and that decision processes can be modeled via a decision bound. All but the last of these assumptions also define univariate signal detection theory.
When GRT was first proposed nearly 30 years ago, the only one of these assumptions with any independent support was that all sensory representations are noisy. The other assumptions were justified almost exclusively on the basis of intuitive appeal. During the intervening three decades, however, an explosion of new neuroscience knowledge has provided strong tests of all GRT assumptions. This section reviews these assumptions and the relevant neuroscience data. As we will see, for the most part, neuroscience has solidified the foundations of GRT.

1.2.1 Separate Sensory and Decision Processes

GRT and signal detection theory both assume that decision processes act on a percept that may depend on the nature of the task, but not on the actual response that is made. GRT can be applied to virtually any task. The most common applications, however, are to tasks in which the stimuli vary on two stimulus components or dimensions; call these A and B. A common practice is then to let $A_iB_j$ denote the stimulus in which component A is at level i and component B is at level j. GRT models the sensory or perceptual effects of stimulus $A_iB_j$ via the joint probability density function (pdf) $f_{ij}(x_1, x_2)$. On any trial on which stimulus $A_iB_j$ is presented, GRT assumes that the subject's percept can be modeled as a random sample from this joint pdf and that the subject uses the sampled values to select a response. Thus the pdf $f_{ij}(x_1, x_2)$ is the same on every trial on which stimulus $A_iB_j$ is presented, regardless of what response is made. In other words, GRT assumes that a sensory representation is formed first, and that decision processes then use this representation to select a response. A possible neural implementation of the theory would therefore require that the neural networks used to represent the perceptual properties of stimuli be relatively separate from the networks used to set the criteria for decision making (which determine response biases).
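To make these two stages concrete, the following sketch (an editorial illustration, not material from the chapter; all means, covariances, and criteria are hypothetical) simulates identification trials under a simple GRT model for a 2 x 2 design: each stimulus $A_iB_j$ has a bivariate Gaussian perceptual distribution, and the decision stage applies a fixed criterion on each perceptual dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical perceptual distributions for the four stimuli A_iB_j:
# each is a bivariate Gaussian with mean mu_ij and a shared covariance matrix.
means = {
    (1, 1): np.array([0.0, 0.0]),
    (1, 2): np.array([0.0, 1.0]),
    (2, 1): np.array([1.0, 0.0]),
    (2, 2): np.array([1.0, 1.0]),
}
cov = np.array([[0.15, 0.05],
                [0.05, 0.15]])  # illustrative covariance, shared for simplicity

def simulate_trial(i, j):
    """One GRT trial: draw a percept from f_ij and apply decision criteria."""
    percept = rng.multivariate_normal(means[(i, j)], cov)
    # Decision stage: a criterion at 0.5 on each perceptual dimension
    # determines the reported level of components A and B.
    resp_a = 1 if percept[0] < 0.5 else 2
    resp_b = 1 if percept[1] < 0.5 else 2
    return percept, (resp_a, resp_b)

# Estimate a confusion matrix by simulating many identification trials.
n_trials = 10_000
for (i, j) in means:
    counts = {}
    for _ in range(n_trials):
        _, resp = simulate_trial(i, j)
        counts[resp] = counts.get(resp, 0) + 1
    print(f"stimulus A{i}B{j}:",
          {r: c / n_trials for r, c in sorted(counts.items())})
```

In this sketch the perceptual distributions are fixed regardless of the response that will be made, and all response selection happens in the separate decision step, which is the assumption at issue in this section.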
Even 30 years ago, it was known that the flow of information from sensory receptors up through the brain passes through sensory cortical regions before reaching motor areas that initiate behaviors. In the case of vision, for example, it was known that retinal ganglion cells project from the retina to the lateral geniculate nucleus of the thalamus, which projects to V1 (primary visual cortex), and then to V2, V4, and many other regions before reaching the M1 (primary motor cortex) neurons that cause the subject to press one response key or the other. So the general neuroanatomy seemed consistent with the GRT assumption of separate sensory and decision processes. Even so, virtually nothing was known about whether visual cortex plays a significant role in the decision process. For example, in 1986, knowledge of neuroanatomy was also consistent with a theory in which the response was actually selected as the representation moved up through higher levels of visual cortex, and in which later non-visual areas (e.g., premotor cortex and M1) served, in many psychophysical tasks, simply as a relay between the visual areas and the effectors that would execute the selected behavior. This type of intermingling would violate the GRT assumption that sensory/perceptual and decision processes are relatively separate.
In fact, there is now good evidence that decisions are not mediated within visual cortex. For a while, however, evidence against the GRT assumption of separate sensory and decision processes seemed strong. The most damning evidence came from reports of a variety of category-specific agnosias that result from lesions in inferotemporal cortex (IT) and other high-level visual areas. Category-specific agnosia refers to a deficit in which most visual stimuli are perceived and categorized normally, but the ability to recognize exemplars from some specific category, such as inanimate objects (e.g., tools or fruits), is reduced. The most widely known such deficit, which occurs with human faces (i.e., prosopagnosia), is associated with lesions to the fusiform gyrus in IT. In GRT, a category is defined by a response region, not by a perceptual distribution. So the association of category-specific agnosias with lesions in visual cortex seemed to suggest that the visual areas were also learning the decision bounds that defined the categories.
Of course, a category-specific agnosia that results from an IT lesion does not logically imply that category representations are stored in IT. For example, although such agnosias are consistent with the hypothesis that category learning occurs in IT, they are also generally consistent with the hypothesis that visually similar objects are represented in nearby areas of visual cortex. In particular, it is well known that neighboring neurons in IT tend to fire to similar stimuli.
Take the example of the most anterior IT region in the monkey brain: area TE. This is the final stage of purely visual processing in the primate brain; thus if high-level categorical representations were stored in visual cortex, TE would be a likely place for their storage. Research indicates that most neurons in this area are maximally activated by moderately complex shapes or object parts (for reviews, see Tanaka, 1996, 2004). More specifically, they are maximally activated by features that are more complex than simple edges or textures, but not complex enough to represent a whole natural object or object category. Because neurons in TE are selective to partial object features, the representation of a whole object requires the combined activation of at least several of these neurons. In other words, anterior IT seems to code for objects in a sparsely distributed manner (Rolls, 2009; E. Thomas, Van Hulle, & Vogels, 2001), which is confirmed by analyses showing that the way in which information about a stimulus increases with the number of IT neurons that are sampled is in line with a sparsely distributed code (Abbott, Rolls, & Tovee, 1996; Hung, Kreiman, Poggio, & DiCarlo, 2005; Rolls, Treves, & Tovee, 1997). It appears that TE neurons that code for similar features cluster in columns (Fujita, Tanaka, Ito, & Cheng, 1992), that a single object activates neurons in several columns (Wang, Tanifuji, & Tanaka, 1998; Yamane, Tsunoda, Matsumoto, Phillips, & Tanifuji, 2006), and that the columns that are activated by two similar objects represent features that are common to both (Tsunoda, Yamane, Nishizaki, & Tanifuji, 2001). Thus damage to some contiguous region of IT (or any other visual cortical area) is likely to lead to perception deficits within a class of similar stimuli due to their shared perceptual features.
More direct evidence that decision processes are not implemented within visual cortex comes from single-cell recording studies. For example, Rolls, Judge, and Sanghera (1977) recorded from IT neurons in monkeys. In these experiments, one visual stimulus was associated with a reward and one with a mildly aversive taste; after training, these assignments were switched. Thus, in effect, the animals were taught two simple categories (i.e., "good" and "bad"), and then the category assignments were switched. If the categorical decision were represented in visual cortex, then the firing properties of visual cortical neurons should have changed when the category memberships were switched. However, Rolls et al. found no change in the responses of any of these cortical neurons, although similar studies have found changes in the responses of neurons in downstream brain areas (e.g., orbitofrontal cortex).
More recent studies have found similar null results with more traditional categorization tasks (Freedman, Riesenhuber, Poggio, & Miller, 2003; Op de Beeck, Wagemans, & Vogels, 2001; Sigala, 2004; E. Thomas et al., 2001; Vogels, 1999). In each of these studies, monkeys were taught to classify visual objects into one of two categories (e.g., tree versus non-tree, two categories of arbitrary complex shapes). Single-cell recordings showed that the firing properties of IT neurons did not change with learning. In particular, IT neurons showed sensitivity to specific visual images, but category training did not make them more likely to respond to other stimuli in the same category or less likely to respond to stimuli belonging to the contrasting category.
Similar results have been found in neurobiological studies of visual perceptual learning. The standard model of perceptual learning includes an early stage of sensory processing that is separate from a later stage of decision making (Amitay, Zhang, Jones, & Moore, 2014; Law & Gold, 2010). Theories proposing changes in the later decision stage of processing have been particularly successful in accounting for the available data (Amitay et al., 2014), including findings of heightened behavioral sensitivity that are not associated with changes in early sensory areas, but instead with the way that sensory information is used to form a decision variable at later stages of processing (e.g., Kahnt, Grueschow, Speck, & Haynes, 2011; Law & Gold, 2008).
On the other hand, under certain conditions categorization training can change the firing properties of IT neurons. Sigala and Logothetis (2002; see also De Baene, Ons, Wagemans, & Vogels, 2008; Sigala, 2004) trained two monkeys to classify faces into one of two categories and then, in a separate condition, to classify fish. In both conditions, some stimulus features were diagnostic of category membership and some were irrelevant to the categorization response. After categorization training, many IT neurons showed enhanced sensitivity to the diagnostic features relative to the irrelevant features. Such changes are consistent with the widely held view that category learning is often associated with changes in the allocation of perceptual attention (Nosofsky, 1986). Accounting for such shifts in perceptual attention is straightforward in GRT: the typical approach is to assume that increasing the attention allocated to a perceptual dimension reduces the perceptual variance on that dimension (Maddox et al., 2002; Soto et al., 2014).
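As a brief illustration of this modeling convention (a sketch with hypothetical numbers, not an analysis from the chapter), the snippet below shrinks the perceptual variance on an attended dimension and summarizes the resulting gain in discriminability along that dimension with a d'-like statistic.

```python
import numpy as np

def d_prime(mean_1, mean_2, var):
    """Discriminability of two levels along one dimension (equal variance)."""
    return abs(mean_2 - mean_1) / np.sqrt(var)

# Hypothetical perceptual means for levels 1 and 2 of component A (dimension 1).
mu_a1, mu_a2 = 0.0, 1.0

var_unattended = 0.30  # baseline perceptual variance on dimension 1
var_attended = 0.15    # attention to dimension 1 modeled as reduced variance

print("d' without attention:", d_prime(mu_a1, mu_a2, var_unattended))
print("d' with attention:   ", d_prime(mu_a1, mu_a2, var_attended))
```

Reducing the variance on the attended dimension increases d' without moving the perceptual means, which is how attentional effects are typically absorbed into the perceptual distributions rather than into the decision bounds.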
Changes in the selectivity of IT neurons after categorization training are consistent with the hypothesis that category learning is mediated outside the visual system and that the attentional effects of categorization training are propagated back to visual areas through feedback projections (see Gilbert & Sigman, 2007; Kastner & Ungerleider, 2000). In support of this hypothesis, the effect of category learning on neural responses is stronger in non-visual areas, such as the striatum and prefrontal cortex (PFC), than in IT (De Baene et al., 2008; Seger & Miller, 2010). Simultaneous recordings from PFC and IT neurons during category learning show that, although IT neurons change their firing after learning, the changes are weaker than i...

Table of contents

  1. Cover
  2. Title
  3. Copyright
  4. Dedication
  5. Contents
  6. Figures and Tables
  7. 1 The Neural Basis of General Recognition Theory
  8. 2 Visual Processing Capacity
  9. 3 On the Relationship Between Perceived Structural Complexity and Temporal Judgments
  10. 4 The Mental Representation of Roman Letters: Revisiting Townsend’s 1971 Letter-Identification Data
  11. 5 Exposing the Hidden Ideal
  12. 6 Hearing What We See: The Temporal Dynamics of Audiovisual Speech Integration
  13. 7 Processing Characteristics of Monaural Tone Detection: A Reaction-Time Perspective on a Classic Psychoacoustic Problem
  14. 8 Characterizing and Quantifying Human Bandwidth: On the Utility and Criticality of the Construct of Capacity
  15. 9 Modeling Stress Effects on Coping-Related Cognition
  16. 10 Systems Factorial Technology Provides New Insights on the Perceptual Comparison and Decision Process in Change Detection
  17. 11 Combining the Capacity Coefficient with Statistical Learning to Explore Individual Differences
  18. Index