Basic Processes in Reading

Visual Word Recognition
About This Book

The chapters in this new book span the range of reading processes from early visual analysis to semantic influences on word identification, thus providing a state-of-the-art summary of current work and offering important contributions to prospective reading research. Basic Processes in Reading examines both future plans and past accomplishments in word identification research. Three chapters provide a forward-looking view, taking a parallel distributed processing approach to semantic priming, phonology, and the identification of old words and the learning of new words. Reviews of eye movements in reading and of semantic priming in word identification provide a retrospective summary of work on these issues as well as solid pointers for future investigations. Other chapters provide new demonstrations of the importance of phonological contributions to word identification and of interactive processes in the identification of handwritten words, and a re-evaluation of the processes involved in the neuropsychological syndrome described as "letter-by-letter" reading.

Information

Publisher: Routledge
Year: 2012
ISBN: 9781136464089
Edition: 1
Pages: 352
Language: English

BASIC PROCESSES IN WORD RECOGNITION AND IDENTIFICATION: AN OVERVIEW

DEREK BESNER
Department of Psychology
University of Waterloo
Waterloo, Ontario, Canada
GLYN W. HUMPHREYS
Department of Psychology
University of Birmingham
Birmingham, England
I. Word recognition and identification as a growth industry.
II. Some hitches.
III. The book.
IV. Conclusions.

I. WORD RECOGNITION AND IDENTIFICATION AS A GROWTH INDUSTRY

Over the past two decades, cognitive psychologists have paid more attention to the processes involved in visual word recognition than to almost any other subject in their field. The annals of cognitive psychology have thus burgeoned with papers on word recognition, while work on other topics, many relating to other aspects of reading such as syntactic parsing or discourse memory, has been substantially less popular. Why has this been so? Has the work had any theoretical or practical success? And why another book devoted to the topic now?
There are many reasons why work in one research area can take off and flourish; reasons which are sociological and pragmatic rather than just scientific. As far as visual word recognition is concerned, there are several sociological/pragmatic factors. One relates to the advent of new technology. The development of the microcomputer provided ready access to procedures for on-line control of reaction time (RT) and tachistoscopic experiments, and there are few simpler stimuli to present on-line than single printed words. With simplicity comes some degree of popularity. The advent of the microcomputer stimulated research into visual word recognition in a less trivial way too, because microcomputers allowed more sophisticated experimental procedures to develop than were hitherto possible. In particular, by linking computer-controlled displays to eye movement recording apparatus, experimenters began for the first time to gain direct evidence of the relations between eye movements and reading (e.g., Balota & Rayner, this volume; Rayner & Pollatsek, 1987).
A second reason for the popularity of visual word recognition is that simple tasks are at hand, for which accurate and sensitive measures can be derived, such as lexical decision, naming, and semantic classification. Further, and perhaps most importantly, these tasks can be related to models of word recognition, in which task performance is decomposed into a series of processing stages characterized by access to different knowledge representations. An example of this is the logogen model in its revised form (e.g., Morton & Patterson, 1980). This model hypothesizes separate stored orthographic, semantic, and phonological representations of words. Different tasks may tap into different levels of representation. For example, lexical decisions may be accomplished by monitoring activation in the orthographic lexicon (e.g., Coltheart, Davelaar, Jonasson & Besner, 1977); word naming will require access to the phonological lexicon (at least for words with irregular spelling-sound correspondences); and semantic classification requires access to stored semantic knowledge. By using such tasks, investigators could attempt to tap and test the characteristics of the different stages in the processing system (see Besner & McCann, 1987, for one application of this approach). Thus, visual word recognition has proved attractive because it has a broadly specified multistage architecture, with the stages apparently open to testing via the judicious use of different tasks. Consequently, it can serve as a test-bed for experiments concerned with such general issues as how stored knowledge influences perception.
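To make the stage-to-task mapping concrete, here is a deliberately minimal sketch in Python. The lexicon entries, the transcriptions, and the task functions are all invented for illustration; this is a toy paraphrase of the logogen-style architecture described above, not an implementation of Morton and Patterson's (1980) model.

```python
from typing import Optional

# Toy logogen-style architecture: three separate stored representations.
# All entries are invented placeholders for illustration only.
ORTHOGRAPHIC_LEXICON = {"PINT", "MINT", "YACHT", "CAT"}
PHONOLOGICAL_LEXICON = {"PINT": "/paint/", "MINT": "/mint/",
                        "YACHT": "/yot/", "CAT": "/kat/"}
SEMANTIC_MEMORY = {"PINT": "a unit of volume", "MINT": "an aromatic plant",
                   "YACHT": "a sailing vessel", "CAT": "a domestic feline"}

def lexical_decision(letter_string: str) -> bool:
    """'Word' responses based on a match in the orthographic lexicon
    (cf. the monitoring account attributed to Coltheart et al., 1977)."""
    return letter_string in ORTHOGRAPHIC_LEXICON

def name_word(letter_string: str) -> Optional[str]:
    """Naming reads out the stored phonological form; for irregular words
    (e.g., YACHT, PINT) no simple spelling-to-sound rule would do."""
    return PHONOLOGICAL_LEXICON.get(letter_string)

def semantic_classification(letter_string: str, probe: str) -> bool:
    """Classification consults stored semantic knowledge."""
    return probe in SEMANTIC_MEMORY.get(letter_string, "")

if __name__ == "__main__":
    print(lexical_decision("MINT"))                   # True: stored orthographic entry
    print(lexical_decision("MINZ"))                   # False: nonword
    print(name_word("YACHT"))                         # lexical phonology for an irregular word
    print(semantic_classification("CAT", "feline"))   # True: semantic access
```

The point of the toy is simply that each task is routed to a different stored representation, which is what licenses the inference from task performance to processing stage.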
A third reason for the large body of research on word recognition is that it is a basic process in reading upon which all other reading processes are predicated. Moreover, other processes in reading, such as syntactic parsing, sentence comprehension and so on, may exert only relatively weak influences on the recognition of fixated words, at least in skilled readers (see Humphreys, 1985). In essence, skilled word identification may operate as a relatively freestanding module, and so can be studied in isolation from factors affecting other reading processes.
A fourth reason is that word identification is the interface between higher-order cognitive processes (such as those concerned with text comprehension) and eye movements. The effect of such higher-order cognitive processes on eye movements can be assessed by testing whether saccadic and fixation patterns on particular words vary according to the syntactic ambiguity of the sentence or according to whether the sentence contains a "garden path." Studies of the relations between eye movements and word processing therefore speak to the general issue of how the eye movement system is controlled (Balota & Rayner, this volume).

II. SOME HITCHES

Nonetheless, word recognition research is not without its problems. Though inferences concerning how a particular task may be performed seem clear, given the assumed "functional architecture" of a system, the evidence is often muddier. For example, lexical decisions may be affected by the imageability of words (James, 1975), word frequency effects can be larger in lexical decision than in tasks such as naming and semantic classification (Balota & Chumbley, 1984; though see Monsell, this volume), and particular forms of priming are more pronounced in lexical decision than in naming and vice versa (Neely, this volume). All this suggests the general point that we cannot take it for granted that all subjects perform a particular task in a given way that maps directly onto an assumed functional architecture. For example, when performing lexical decisions, subjects may check whether the letter string is meaningful or whether it is visually familiar, in addition to checking activation levels in the orthographic lexicon (cf. Besner & McCann, 1987). Although a task may, in theory, tap a particular processing stage, in reality performance may be based on a number of processing stages whose precise contribution may alter on a trial-by-trial basis.
A second problem is that there are many functional architectures and many models of visual word recognition. The problem with this is that, as the models tend to be broadly defined, it becomes increasingly difficult to generate predictions that are uniquely tied to one account. Given the multi-stage nature of most models, and that performance at any given time could be derived from a number of processing stages (see above), there are sufficient degrees of freedom to fit data to any number of apparently competing accounts of performance (see Humphreys & Evett, 1985, and the associated commentaries, for an example of this problem relating to the existence of an independent “nonlexical” processing route in word naming).
One way to address the latter problem is to assess performance when the reading system has been lesioned (e.g., in cases of acquired dyslexia). Providing we assume that a lesioned system behaves essentially as a reduced normal system (see Patterson, 1981), we gain insight into the working of the model operating with fewer degrees of freedom. Indeed, it can be argued that predictions concerning acquired dyslexia provide some of the clearest tests of current reading models (with some patterns of dyslexia being distinctly uncomfortable for some models; see Patterson, Seidenberg & McClelland, 1989). This is not to say that tests of the models using neuropsychological data do not have problems of their own (e.g., concerning the vexing question of whether any individual brain damaged patient uses a reduced but otherwise normal reading system; see Seidenberg, 1988); it is simply that such evidence can be used to help constrain accounts of normal performance. For instance, models of normal reading that assume that the pronunciation of a word can only be accessed after the word's meaning has been retrieved have severe difficulties explaining how patients may read aloud irregular words (requiring lexical access to phonology) and yet show no comprehension (e.g., Schwartz, Saffran, & Marin, 1979).
It can also be argued that the research into normal word recognition has had practical benefits in fostering a better understanding of both acquired and developmental reading problems (see Coltheart, 1985). The benefits are of two kinds.
The first is in assessing different reading disorders. The various tasks and variables (such as word frequency, imageability, spelling-to-sound regularity, etc.) used to tap different stages of normal reading provide clear diagnoses of different reading disorders. Most accounts agree that there are differences between surface dyslexics, whose reading is phonologically based (so they have problems reading irregular words), and phonological dyslexics, who are impaired at assembling phonological representations (contrast Coltheart et al., 1983, with Funnell, 1983). Patients with these different reading problems can be teased apart by contrasting the reading of regular words, irregular words and nonwords, using a logic similar to that used to assess the components of the "normal" reading system.
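The contrastive logic can be made explicit. The following Python sketch is an illustration only: the accuracy cut-off and the profile labels are invented, and no clinical assessment reduces to a few comparisons like this.

```python
# Illustrative only: the regular/irregular/nonword contrast described in the
# text, with a made-up accuracy cut-off. Not a clinical instrument.
def classify_reading_profile(regular_acc: float,
                             irregular_acc: float,
                             nonword_acc: float,
                             cutoff: float = 0.7) -> str:
    """Return a rough label from accuracy (0-1) on three item types."""
    if irregular_acc < cutoff <= nonword_acc:
        # Reading relies on spelling-to-sound assembly: irregular words suffer.
        return "surface-dyslexic pattern"
    if nonword_acc < cutoff <= irregular_acc:
        # Lexical reading preserved, phonological assembly impaired.
        return "phonological-dyslexic pattern"
    if min(regular_acc, irregular_acc, nonword_acc) >= cutoff:
        return "within normal range"
    return "mixed/other pattern"

print(classify_reading_profile(0.95, 0.40, 0.90))  # surface-dyslexic pattern
print(classify_reading_profile(0.90, 0.85, 0.30))  # phonological-dyslexic pattern
```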
The second benefit concerns the guidance for rehabilitation programs given by the processing models. Here therapy may be directed at reconstituting or bypassing the supposed deficit within one component of the reading system (e.g., see de Partz, 1986).
A second way around the problem of increased degrees of explanatory freedom in complex processing accounts is to construct detailed working simulations of performance. In this respect, recent “connectionist” models have been influential. In such models, knowledge is represented in terms of the pattern of weights on the connections between processing units at different levels within the system. Such models can be “trained” to associate input and output patterns, so that a given output will be established more easily for patterns presented more frequently. They can “learn” regular mappings between input spellings and output pronunciations, and, following training, they can perform such transformations more easily for words than for nonwords, even though the model is not explicitly given a spelling-to-sound rule system (e.g., Seidenberg & McClelland, 1989). Such models give us some indications of the patterns of performance that emerge from systems adopting the simplest possible representational assumptions (see also Masson, this volume). To the extent that the models are consistent with or are contradicted by data drawn from both skilled readers and patients with acquired reading disorders, they are open to future amendment and development. Thus, computer simulations should not be thought of as a one-way development, based around the simulation of already acquired data, but as a valuable part of an interactive research exercise, in which the models evolve as more specific empirical tests are produced. The trend toward simulation also indicates a second practical benefit of research on visual word recognition, in that in the long term it may be possible to develop machines capable of fast and accurate automatic reading. Further, such machines could also learn, so that they operate as “experts” with their own specialized vocabularies and domain-specific knowledge.
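As a rough illustration of this class of model, the sketch below (Python with NumPy) trains a tiny single-layer associator with the delta rule to map invented spelling codes onto matching pronunciation codes, and then tests it on a trained word and an untrained, regularly spelled nonword. The coding scheme, corpus, and learning parameters are all made up for the example; this is not the Seidenberg and McClelland (1989) network.

```python
# Toy connectionist associator: knowledge is stored only in the connection
# weights, adjusted by the delta rule so that trained spelling patterns
# reproduce their pronunciation patterns. Codes are invented for illustration.
import numpy as np

LETTERS = "bchpait"
PHONEMES = "bchpait"  # simplification: each letter corresponds to one phoneme symbol

def encode(symbols: str, inventory: str) -> np.ndarray:
    """Concatenate position-specific one-hot vectors, one slot per position."""
    vec = np.zeros(len(symbols) * len(inventory))
    for pos, s in enumerate(symbols):
        vec[pos * len(inventory) + inventory.index(s)] = 1.0
    return vec

# Tiny training corpus of regularly spelled words.
words = ["bat", "cat", "hat", "pit", "bit"]
X = np.stack([encode(w, LETTERS) for w in words])
Y = np.stack([encode(w, PHONEMES) for w in words])

# Single layer of weights, trained with the delta (Widrow-Hoff) rule.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(X.shape[1], Y.shape[1]))
lr = 0.1
for epoch in range(200):
    for x, y in zip(X, Y):
        error = y - x @ W            # discrepancy between target and produced pattern
        W += lr * np.outer(x, error)  # strengthen connections that reduce the error

def pronounce(spelling: str) -> str:
    """Read out the most active phoneme in each position slot."""
    out = encode(spelling, LETTERS) @ W
    slots = out.reshape(len(spelling), len(PHONEMES))
    return "".join(PHONEMES[i] for i in slots.argmax(axis=1))

print(pronounce("cat"))  # trained word: reproduced easily
print(pronounce("hit"))  # untrained nonword: the regular mapping generalizes
```

The design point the toy makes is the one in the text: nothing resembling a spelling-to-sound rule is ever given to the system, yet regular mappings generalize to untrained strings because the relevant structure is carried by the learned weights.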
We have thus argued that there has been considerable research into visual word recognition for both theoretical and sociological/pragmatic reasons, and that in spite of the difficulties involved, the work has had some practical successes (e.g., concerning the understanding and retraining of reading disorders).

III. THE BOOK

In the present book, we bring together chapters relevant to recent debates and what we think are likely to be future developments in the field. It is thus no coincidence that the chapters pick up on many of the themes that we believe are relevant to the growth in research on the topic. In particular, these themes include (a) the development of processing accounts of performance that have either been implemented as computer simulations or are sufficiently well specified to enable them to be so, (b) further examination of the contrasting patterns of performance that emerge under different task conditions, and (c) the application of processing models to patients with acquired reading disorders.
The chapters are organized in an order roughly reflecting a sequence of processes from the early visual analysis of words to semantic influences on word identification.
The chapter by Manso de Zuniga, Humphreys and Evett is concerned with the effects of handwriting on visual word identification. These authors relate attempts to build computational models capable of reading handwriting to empirical studies examining how handwriting affects word identification in humans. They propose that handwriting requires a specific process not required in the reading of printed words, which they term cursive normalization, and which acts to segment letters in cursive script. The empirical work reported in the chapter shows that cursive normalization operates subsequent to the effects of contrast reduction, and that it interacts with lexical processes (as indicated by the effects of word frequency and long-lasting word repetition). The reading of handwriting can also be guided by the use of stored episodic representations, which may be matched against incoming script in order to bypass the cursive normalization process. The evidence for lexical interactions and episodic matching in humans fits with some of the ideas developed from computational modelling, and illustrates how computational and empirical studies can converge on a common solution to practical information-processing problems. The chapter also shows how our knowledge of early visual processing in word identification can be increased by more detailed studies examining the effects of visual factors, such as handwriting, that are commonly encountered outside the laboratory.
Handwriting is particularly difficult to read for patients who, following brain damage, suffer the reading disorder described as "letter-by-letter" reading (see Warrington & Shallice, 1980; see also Dejerine, 1892, for the original description of the syndrome). Because such patients often read words by first naming their constituent letters aloud, it is commonly assumed that the patients are using a serial letter identification process in order to read words (e.g., Patterson & Kay, 1982; Warrington & Shallice, 1980). In Chapter 3, Howard presents a reevaluation of this disorder, in which he concludes that such patients, like normal subjects, read using a parallel letter identification process. Howard argues convincingly that, for the patients, parallel letter identification is an abnormally noisy and error-prone process, thereby producing large effects of word length upon single word reading time. This chapter illustrates the usefulness of the interplay between theory, data and methodology from both the experimental and neuropsychological literature.
Most current accounts of visual word identification assume that, in normal subjects, letter processing takes place in parallel across the word. A much more controversial issue concerns the nature of the representation that mediates lexical access. This controversy has a long history in both experimental psychology and education. In recent years, the traditional view that reading is parasitic upon some form of speech code has given way to the view that orthographic codes (at least in skilled readers) dominate lexical access. In Chapters 4 and 5, Van Orden and Brown reassert the traditional view that phonological coding is a necessary preliminary to word recognition.
Van Orden extends his original line of work (see Van Orden, 1987; Van Orden, Johnston & Hale, 1988) to proofreading. The experiments reported here demonstrate that misspelled words which preserve phonology (e.g., SLEAT for SLEET) are particularly difficult to detect in proofreading. These results are interpreted as strong evidence that phonology is dominant not only in single word identification but also in...

Table of contents

  1. Front Cover
  2. Half Title
  3. Title Page
  4. Copyright
  5. Contents
  6. PREFACE
  7. 1. BASIC PROCESSES IN WORD RECOGNITION AND IDENTIFICATION: AN OVERVIEW
  9. 2. ADDITIVE AND INTERACTIVE EFFECTS OF REPETITION, DEGRADATION, AND WORD FREQUENCY IN THE READING OF HANDWRITING
  9. 3. LETTER-BY-LETTER READERS: EVIDENCE FOR PARALLEL PROCESSING
  10. 4. PHONOLOGIC MEDIATION IS FUNDAMENTAL TO READING
  11. 5. DEREK: THE DIRECT ENCODING ROUTINE FOR EVOKING KNOWLEDGE
  12. 6. THE NATURE AND LOCUS OF WORD FREQUENCY EFFECTS IN READING
  13. 7. WORD RECOGNITION PROCESSES IN FOVEAL AND PARAFOVEAL VISION: THE RANGE OF INFLUENCE OF LEXICAL VARIABLES
  14. 8. A DISTRIBUTED MEMORY MODEL OF CONTEXT EFFECTS IN WORD IDENTIFICATION
  15. 9. SEMANTIC PRIMING EFFECTS IN VISUAL WORD RECOGNITION: A SELECTIVE REVIEW OF CURRENT FINDINGS AND THEORIES
  16. AUTHOR INDEX
  17. SUBJECT INDEX