Human Error in Aviation

eBook - ePub

R. Key Dismukes

  1. 608 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS and Android

About This Book

Most aviation accidents are attributed to human error, especially pilot error. Human error also greatly affects productivity and profitability. In his overview of this collection of papers, the editor points out that these facts are often misinterpreted as evidence of deficiency on the part of the operators involved in accidents. Human factors research reveals a more accurate and useful perspective: the errors made by skilled human operators, such as pilots, controllers, and mechanics, are not root causes but symptoms of the way the industry operates. The papers selected for this volume have strongly influenced modern thinking about why skilled experts make errors and how to make aviation error-resilient.

Information

Publisher
Routledge
Year
2017
ISBN
9781351563468

Part I
Conceptual Frameworks for Thinking About Human Error

Chapter 1 is a ground-breaking essay by James Reason that led the applied science community to realize that most accidents attributed to errors of human operators in fact involved a long trail of organizational factors. Reason introduced the concept of latent failures: organizational conditions and actions whose damaging consequences often lie dormant for years until they happen to combine with active failures, such as operator errors or equipment malfunctions, to cause an unwanted event. (Reason's 1990 book, Human Error, is an expanded treatment of these ideas.)
In their 1998 book, Beyond Aviation Human Factors, Dan Maurino, James Reason, Neil Johnston, and Rob Lee extend Reason's theoretical perspective and apply it to aviation safety. They show how this perspective can be used both to analyze accidents retrospectively and to assess the health of an organization by identifying the potential for latent failures. Because of space limitations, only the book's first chapter, which provides an overview, is included here; it appears as Chapter 2.
The next essay (Chapter 3) is the concluding chapter of The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents, by Key Dismukes, Ben Berman, and Loukia Loukopoulos. This book was the product of a NASA study of the 19 major U.S. airline accidents from 1991 through 2000 in which the NTSB found pilot error to be a causal factor. The authors framed their analysis of these accidents by asking why any well-trained, highly motivated cockpit crew in the position of the accident pilots, and knowing only what the accident pilots knew at each moment, might be vulnerable to going down the same paths. Strongly influenced by Reason's perspective of latent failures, the study traces the probabilistic interaction of multiple factors leading to accidents. The interplay of factors is at least as important as the presence of multiple factors acting separately. The concluding chapter draws together the study's findings and discusses the implications for aviation safety.
The essay by Scott Shappell, Cristy Detwiler, Kali Holcomb, Carla Hackworth, Albert Boquet, and Douglas Wiegmann (Chapter 4) is representative of a series of studies using the Human Factors Analysis and Classification System (HFACS), which have drawn considerable attention in recent years. Shappell and Wiegmann (2001) drew upon Reason's theoretical perspective in developing HFACS as a way of systematically describing human error at each of four levels: organizational, unsafe supervision (middle management), preconditions for unsafe acts, and unsafe acts of operators. The essay reprinted here extends previous studies of military and general aviation to commercial aviation and emphasizes HFACS as a tool for tracking trends in large accident data sets. Curiously, this analysis found organizational factors to be present in less than six percent of the accidents - a finding at variance with the perspectives of the three preceding essays.
I suspect that the low reported occurrence of organizational factors stems from the fact that Shappell et al. deliberately coded only factors explicitly identified in the accident investigation reports. As discussed in the overview of this book, accident investigators have historically given greater weight to pilot error than to latent organizational factors. This is not because investigators are uninterested in all factors contributing to accidents, but because of the greater difficulty of determining the influence of latent factors in a given accident. It is fairly easy in hindsight to see what the accident pilots could have done differently to prevent the accident. It is also possible to identify organizational factors that might have influenced the pilots' actions, but it is rarely possible to be certain that altering those factors would have prevented the accident. Thus investigators are reluctant to name organizational factors as 'causal' or contributing unless the organization clearly deviated from industry regulations or norms. But this leaves open the question of whether those regulations and norms are adequate, given that organizational practices are probably the biggest single influence on pilot behavior (Reason, 1997a).
This discussion notwithstanding, in recent years NTSB reports have more frequently identified organizational factors as contributing to accidents (e.g., NTSB, 2003). This gradual shift illustrates a concern one might have with any taxonomy used to evaluate trends in broad categories of factors, including organizational ones. Changes in actual rates of particular categories may be confounded with drift in how investigators analyze causal and contributing factors as industry understanding of the complex interaction of factors leading to accidents evolves.
In the next essay (Chapter 5) Sidney Dekker cautions that error classification schemes have little value unless they provide a way to dig into why errors occur. He points out that simple models of causality fail to capture the probabilistic interplay of factors leading to accidents (see also the chapter by Dismukes et al. on this point). Dekker advocates seeking recurring patterns in this interplay, such as the 'normalization of deviance' (Vaughan, 1996) and 'practical drift' (Snook, 2000).
Returning to HFACS, I think it can be useful if it leads accident investigators and others to systematically consider influences and actions at different levels within organizations that interact to produce vulnerability to accidents. However, it is crucial to go beyond enumerating contributing factors and to find some way to characterize the interplay among those factors in each accident. This will be a considerable challenge, given the large number of permutations and combinations possible among factors. The level of granularity in categorizing errors is also an issue. For example, Shappell et al. found skill-based errors to occur in over half their sample of commercial accidents, but both the nature of those errors and the reasons they occurred may be heterogeneous.
Perhaps a way of bridging the perspectives of Shappell et al. and Dekker can be found. HFACS might be used to guide investigations to consider each type of factor systematically, while the specific factors are retained within each type (e.g., retaining information about an improperly executed landing flare as one of many kinds of skill-based error). The interplay among factors might then be captured with some formal system for describing interactions, perhaps similar to the citation analysis methods used to describe interconnections among scientific publications in a given domain. This could provide a way to identify recurring patterns of interaction in large accident databases.
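To make the idea concrete, the following is a minimal sketch, not drawn from any of the essays, of how factor co-occurrence might be counted across a set of accident records while each specific factor stays tagged with its HFACS-style level. The level names, factor labels, and records are all hypothetical.

    # Hypothetical sketch: count co-occurrences of specific contributing factors
    # across accident records, keeping each factor tagged with its HFACS-style level.
    from collections import Counter
    from itertools import combinations

    # Each accident is coded as a list of (hfacs_level, specific_factor) pairs.
    # These records are illustrative, not taken from any real accident database.
    accidents = [
        [("unsafe_acts", "improperly executed landing flare"),
         ("preconditions", "fatigue"),
         ("organizational", "inadequate training program")],
        [("unsafe_acts", "improperly executed landing flare"),
         ("preconditions", "fatigue"),
         ("supervision", "inadequate oversight of line checks")],
        [("unsafe_acts", "unstabilized approach continued"),
         ("preconditions", "fatigue"),
         ("organizational", "schedule pressure")],
    ]

    # Count how often each pair of specific factors appears in the same accident.
    pair_counts = Counter()
    for factors in accidents:
        specifics = sorted(factor for _, factor in factors)
        for pair in combinations(specifics, 2):
            pair_counts[pair] += 1

    # Pairs seen in more than one accident hint at recurring patterns of interplay,
    # e.g., fatigue combining with a particular skill-based error.
    for (a, b), n in pair_counts.most_common():
        if n > 1:
            print(f"{a} & {b}: {n} accidents")

On a real accident database the same counting could be extended to triples of factors or to network measures over the co-occurrence graph, which is where the analogy to citation analysis would come in.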
The final essay in this section (Chapter 6), by R. Amalberti, argues that the measures used to improve system safety become less effective and even counterproductive as the system becomes ultra-safe. In an ultra-safe system, attempts to stamp out the last vestiges of human error may make the system less adaptive if protective measures add too many constraints.

References

National Transportation Safety Board (NTSB) (2003), 'Loss of Pitch Control on Takeoff', Emery Worldwide Airlines Inc., McDonnell Douglas DC-8-71F, N8079U, Rancho Cordova, California, February 16, 2000 (Report No. AAR-03-02), Washington, DC: Author.
Reason, J. (1990), Human Error, New York: Cambridge University Press.
Reason, J. (1997a), Managing the Risks of Organizational Accidents, Aldershot: Ashgate.
Shappell, S. and Wiegmann, D. (2001), 'Applying Reason: The Human Factors Analysis and Classification System (HFACS)', Human Factors and Aerospace Safety, 1, pp. 59-86.
Snook, S.A. (2000), Friendly Fire: The Accidental Shootdown of U.S. Black Hawks over Northern Iraq, Princeton, NJ: Princeton University Press.
Vaughan, D. (1996), The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, Chicago, IL: University of Chicago Press.

[1]
The contribution of latent human failures to the breakdown of complex systems

BY J. REASON
Department of Psychology, University of Manchester, Manchester M13 9PL, U.K.
Several recent accidents in complex high-risk technologies had their primary origins in a variety of delayed-action human failures committed long before an emergency state could be recognized. These disasters were due to the adverse conjunction of a large number of causal factors, each one necessary but singly insufficient to achieve the catastrophic outcome. Although the errors and violations of those at the immediate human-system interface often feature large in the post-accident investigations, it is evident that these 'front-line' operators are rarely the principal instigators of system breakdown. Their part is often to provide just those local triggering conditions necessary to manifest systemic weaknesses created by fallible decisions made earlier in the organizational and managerial spheres.
The challenge facing the human reliability community is to find ways of identifying and neutralizing these latent failures before they combine with local triggering events to breach the system's defences. New methods of risk assessment and risk management are needed if we are to achieve any significant improvements in the safety of complex, well-defended, sociotechnical systems. This paper distinguishes between active and latent human failures and proposes a general framework for understanding the dynamics of accident causation. It also suggests ways in which current methods of protection may be enhanced, and concludes by discussing the unusual structural features of 'high-reliability' organizations.

1. INTRODUCTION

The past few years have seen a succession of major disasters afflicting a wide range of complex technologies: nuclear power plants, chemical installations, spacecraft, 'roll-on-roll-off' ferries, commercial and military aircraft, off-shore oil platforms and railway networks. If we were to focus only upon the surface details, each of these accidents could be regarded as a singular event, unique in its aetiology and consequences. At a more general level, however, these catastrophes are seen to share a number of important features.
(i) They occurred within complex sociotechnical systems, most of which possessed elaborate safety devices. That is, these systems required the precise coordination of a large number of human and mechanical elements, and were defended against the uncontrolled release of mass and energy by the deliberate redundancy and diversity of equipment, by automatic shut-down mechanisms and by physical barriers.
(ii) These accidents arose from the adverse conjunction of several diverse causal sequences, each necessary but none sufficient to breach the system's defences by itself. Moreover, a large number of the root causes were present within the system long before the accident sequence was apparent.
(iii) Human rather than technical failures played the dominant roles in all of these accidents. Even when they involved faulty components, it was subsequently judged that appropriate human action could have avoided or mitigated the tragic outcome.
Thanks to the abundance and sophistication of engineered safety measures, many high-risk technologies are now largely proof against single failures, either of humans or components. This represents an enormous engineering achievement. But it carries a penalty. The existence of elaborate 'defences in depth' renders the system opaque to those who control it. The availability of cheap computing power (which provided many of these defences) means that, in several modern technologies, human operators are increasingly remote from the processes that they nominally govern. For much of the time, their task entails little more than monitoring the system to ensure that it functions within acceptable limits.
A point has been reached in the development of technology where the greatest dangers ...

Contents

  1. Cover
  2. Half Title
  3. Title
  4. Copyright
  5. Contents
  6. Acknowledgements
  7. Series Preface
  8. Introduction
  9. PART I CONCEPTUAL FRAMEWORKS FOR THINKING ABOUT HUMAN ERROR
  10. PART II SPECIFIC ASPECTS OF SKILLED HUMAN PERFORMANCE
  11. PART III FACTORS AFFECTING SKILLED PERFORMANCE
  12. PART IV BEYOND THE COCKPIT
  13. Name Index