A Theory of Situation Assessment: Implications for Measuring Situation Awareness
MARTIN L. FRACKER CAPTAIN, USAF
Human Engineering Laboratory-Armstrong Aerospace Medical Research Laboratory Wright-Patterson Air Force Base, Ohio 45433
ABSTRACT
Measures of pilot situation awareness (SA) are needed in order to know whether new concepts in display design help pilots keep track of rapidly changing tactical situations. In order to measure SA, a theory of situation assessment is needed. In this paper, I summarize such a theory encompassing both a definition of SA and a model of situation assessment. SA is defined as the pilot's knowledge about a zone of interest at a given level of abstraction. Pilots develop this knowledge by sampling data from the environment and matching the sampled data to knowledge structures stored in long-term memory. Matched knowledge structures then provide the pilot's assessment of the situation and serve to guide his attention. A number of cognitive biases that result from the knowledge matching process are discussed, as are implications for partial report measures of situation awareness.
Introduction
Under the intense stress of combat, military pilots will need to keep track of a rapidly evolving tactical situation. Helping the pilot to maintain his knowledge of the situation from moment to moment, referred to as situation awareness (SA), has become a matter of considerable interest. Measures of pilot SA are needed in order to know whether new concepts in avionics and display design improve SA or not, but psychologists are only now beginning to explore whether and how SA can be measured. Two fundamental questions must be answered before appropriate measures can be developed: precisely what is situation awareness, and how do pilots maintain it? A clear definition of SA is needed because otherwise we do not know what to measure. A model of how pilots maintain SA is needed in order to show how SA can be measured.
In this paper, I will summarize a theory of situation assessment encompassing a definition of SA and a model of how SA is maintained. In the course of this summary, I will show how the theory accounts for certain well-known biases in human cognition. Then, I will describe some of the theory's implications for an increasingly popular approach to SA measurement.
What is Situation Awareness?
Several attempts at defining situation awareness have appeared both in and out of the published literature. Some of these definitions are given in Table 1.
Table 1 Definitions of Situation Awareness
Tolk & Keether, 1982: "the ability to envision the current and future disposition of both Red and Blue aircraft and surface threats".
McKinnon, 1986: The pilot's knowledge of: 1. Where both friendly and enemy aircraft are and what they are doing. 2. What the pilot's flight knows and what the flight's options are for offense and defense. 3. What other flights know and what their intentions are. 4. What information from above is missing.
Endsley (1988): "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future".
Harwood, Barnett, and Wickens (1988): Where: the pilot's knowledge of the spatial relationships among aircraft and other objects. What: the pilot's knowledge of the presence of threats and their objectives, and of his own aircraft system state variables. Who: the pilot's knowledge of who is in charge, him or an automated system. When: the pilot's knowledge of the evolution of events over time.
Whitaker & Klein (1988): "the pilot's knowledge about his surroundings in light of his mission's goals" (p. 321).
In this paper, I define situation awareness as the knowledge that results when attention is allocated to a zone of interest at a level of abstraction. For convenience, I refer to the intersection of zones of interest with levels of abstraction as the "focal region". Assuming that attention is a scarce resource (Kahneman, 1973; Wickens, 1984), situation awareness should be better within a small focal region than within a large one (cf., Eriksen & Yeh, 1985).
Endsley (1988) has defined zones of interest as concentric volumes of space surrounding the pilot throughout which he distributes his attention. But these zones need not be spatially defined. For example, the pilotâs own aircraft could define one zone, his own flight could define a larger zone, and the overall battle may define yet a larger zone. In either case, an important feature of zones of interest is that they may or may not be nested within each other.
Levels of abstraction refer to the level at which a zone of interest is analyzed. At the highest level of abstraction may be mission goals (Brewer & Dupree, 1983; Vallacher & Wegner, 1987; Wyer & Srull, 1986; cf., Pryor, McDaniel, and Kott-Russo, 1986). At the lowest level may be the moment-to-moment states of specific situation variables. In between these two extremes may be mediators such as organizations, functions, and processes.
Because each level of abstraction in some sense causes the next level down, understanding a situation at a higher level should enhance understanding at a lower level. That is, the better a pilot understands the goals, organizations, and functions of two opposing forces, the more clearly he should understand the processes at work in the engagement, and the better able he should be to anticipate future changes in situation states.
Levels of abstraction may be applicable within any particular zone of interest. For example, suppose the pilot's own aircraft is his zone of interest. The pilot has his own individual mission goals. To achieve those goals, his aircraft has been organized and configured in a certain way. Each instrument, aircraft component, and weapons system has a certain function to perform in order to serve the mission objectives. And the processes that result from those functions control aircraft states such as speed, altitude, fuel pressure, and so on.
Although levels of abstraction (goals, organization, functions, processes, and states) can apparently be applied to any zone of interest, abstraction levels are not independent of zones of interest. For example, the mission objectives of the individual pilot are subservient to the objectives of his flight, and flight objectives are subservient to the objectives of the overall friendly force. Thus, when one zone of interest is nested within another, the same nominal abstraction level is said to be lower in the nested zone than in the larger zone.
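The structure described above can be made concrete with a small sketch. This is purely illustrative: the names Zone, Level, and FocalRegion, and the rule of subtracting one level per nesting step, are my assumptions for illustration, not constructs from the theory itself.

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    """Levels of abstraction, highest to lowest (assumed ordering)."""
    GOALS = 4
    ORGANIZATION = 3
    FUNCTIONS = 2
    PROCESSES = 1
    STATES = 0

@dataclass(frozen=True)
class Zone:
    """A zone of interest, optionally nested within a larger zone."""
    name: str
    parent: "Zone | None" = None

    def depth(self) -> int:
        """How many zones this one is nested within."""
        return 0 if self.parent is None else 1 + self.parent.depth()

@dataclass(frozen=True)
class FocalRegion:
    """The intersection of a zone of interest and an abstraction level."""
    zone: Zone
    level: Level

    def effective_level(self) -> int:
        # Hypothetical rule: the same nominal level counts as lower in a
        # nested zone than in the zone that contains it (one step per nesting).
        return int(self.level) - self.zone.depth()

battle = Zone("overall battle")
flight = Zone("own flight", parent=battle)
aircraft = Zone("own aircraft", parent=flight)

# The pilot's own mission goals sit below the goals of the overall force.
assert FocalRegion(aircraft, Level.GOALS).effective_level() \
     < FocalRegion(battle, Level.GOALS).effective_level()
```

Nothing in the theory fixes the numeric spacing between levels; the sketch only encodes the ordinal claim that nesting lowers the effective abstraction level.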
A Model of Situation Assessment
Defining situation awareness determines what is to be measured but does not suggest how it should be measured. For this latter purpose, a model of situation assessment is needed. Ideally, such a model will indicate what kinds of measurement operations will target SA and what kinds will miss the target altogether.
Some models of situation assessment stress that pilots develop and maintain a mental representation of the situation in working memory (Endsley, 1988). Because SA is maintained in working memory, these models predict that pilot SA should improve as the pilot's working memory capacity increases. Wickens, Stokes, Barnett, and Davis (1987) have recently provided evidence in support of this prediction. But a strictly increasing monotonic relation between working memory capacity and the quality of SA is expected only if all critical information about the situation must be represented in working memory at all times. This condition would exist only if the environment were the pilot's only source of information. But many theorists propose that recognized patterns among incoming sensory data may identify knowledge structures stored in long-term memory and that these identified structures are also a source of knowledge about the situation (Anderson, 1983; Rumelhart, 1984; Schank, 1982; Wyer & Srull, 1986). The knowledge structures in long-term memory go by different names, depending upon the theorist: associative networks (Anderson, 1983), memory organization packets (Schank, 1982), referent bins (Wyer & Srull, 1986), or schemata (Rumelhart, 1984). In recent years, the term "schemata" has gained some acceptance as a theoretically neutral reference to the knowledge structures in long-term memory (Hastie, 1981; Pryor et al., 1986), and is adopted here.
If schemata can provide substantial information about a situation, then the pilot need not attend to every detail of the environment in order to have a reasonably complete assessment of the situation. Rather, he needs to have schemata that accurately fill in many of the details, and he needs to recognize patterns in the incoming sensory data adequate to identify these schemata. Once a schema has been identified, the pilot needs only to search the schema for items of information not currently in working memory. When an appropriate schema is not found in long-term memory, then the pilot must resort to a backup procedure that greatly increases the load on working memory. This backup procedure has been described by Wyer and Srull (1986). Essentially, the pilot must attend to a larger amount of information in the environment, identify multiple schemata that may be appropriate, place information from these several schemata into working memory, and then integrate the information into a single result.
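The two-route assessment process described above can be sketched as a toy model. The schema store, the match threshold, and the merge-based integration step are all my assumptions for illustration; the theory specifies only that a good match lets one schema fill in unobserved details, while a failed match forces an effortful integration of several partial matches in working memory.

```python
# Toy sketch of the assessment loop; all names and values are hypothetical.

def match_score(schema: dict, data: dict) -> float:
    """Goodness-of-fit: fraction of observed cues the schema predicts."""
    if not data:
        return 0.0
    hits = sum(1 for cue, value in data.items() if schema.get(cue) == value)
    return hits / len(data)

def assess(schemata: list, data: dict, threshold: float = 0.6) -> dict:
    """Return a situation assessment from sampled environmental data."""
    best = max(schemata, key=lambda s: match_score(s, data))
    if match_score(best, data) >= threshold:
        # Primary route: the matched schema fills in unobserved details,
        # so working memory need only hold the sampled data.
        return {**best, **data}
    # Backup route (after Wyer & Srull, 1986): integrate several partial
    # matches in working memory -- a far heavier load.
    merged = {}
    for s in (s for s in schemata if match_score(s, data) > 0):
        merged.update(s)
    return {**merged, **data}

schemata = [
    {"contact": "fast", "emitter": "hostile", "intent": "intercept"},
    {"contact": "slow", "emitter": "friendly", "intent": "transit"},
]
# Two sampled cues match the first schema, which supplies the unobserved
# "intent" detail without further sampling.
print(assess(schemata, {"contact": "fast", "emitter": "hostile"}))
```

The model's working-memory prediction falls out of the branch structure: the primary route carries only the sampled data, while the backup route must hold every partially matching schema at once.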
This model of situation assessment predicts that the relationship between working memory capacity and quality of SA is dependent on the completeness of the knowledge the pilot has stored in long-term memory. If that knowledge is sufficiently complete with respect to a particular focal region, then the quality of SA should be less sensitive to working memory capacity. This dependence on long-term memory suggests that working memory capacity should have a greater impact on the SA of novice pilots than of highly trained expert pilots.
The Model in Operation: Cognitive Biases in Situation Assessment
Although schemata can facilitate situation assessment and relieve the load on working memory, they can also lead to biases that degrade the quality of situation assessment. These biases are representativeness, availability, the confirmation bias, cue salience, and the "as if" heuristic (see Kahneman, Slovic, & Tversky, 1982; Wickens, 1984; Wickens et al., 1987). These heuristics and biases can be divided into two groups: those that operate when incoming data match some schema, and those that operate when no match is found. Representativeness, availability, and the confirmation bias belong to the first group and are natural consequences of the situation assessment model. Cue salience and the "as if" heuristic belong to the second group and result from the demands of the backup assessment process on limited working memory and attentional resources.
"Representativeness" is defined in Kahneman et al. (1982) as the process of matching the pattern of incoming data with a typical pattern for a particular situation stored in long-term memory. Such a matching process is not a computational short-cut as it is sometimes said to be (Wickens et al., 1988) but is instead the central mechanism of situation assessment. Nevertheless, that such pattern matches can sometimes lead to errors in assessment seems indisputable.
One way such matches can go wrong is captured by the availability heuristic. "Availability" occurs when pilots select the most accessible schema rather than the "best" schema. Within the model, availability results when two or more schemata identify themselves as matching incoming data and the schema with the strongest level of activation provides the pilot with his situation assessment. Activation strength may be high for several reasons. One is that activation strength should increase as the goodness-of-fit between the data and the schema increases. Another is that a schema may have been primed by earlier events and so already have a high base-line level of activation. If so, then a partial match may result in a higher level of activation than that found in another, u...