The Culture and Politics of Health Care Work

What Else Health Care Can Learn from Aviation Teamwork and Safety

About This Book

The U.S. healthcare system is now spending many millions of dollars to improve "patient safety" and "inter-professional practice." Nevertheless, an estimated 100,000 patients still succumb to preventable medical errors or infections every year. How can health care providers reduce the terrible financial and human toll of medical errors and injuries that harm rather than heal?

Beyond the Checklist argues that lives could be saved and patient care enhanced by adapting the relevant lessons of aviation safety and teamwork. In response to a series of crashes caused by human error, the airline industry developed the system of job training and information sharing known as Crew Resource Management (CRM). Under the new industry-wide system of CRM, pilots, flight attendants, and ground crews now communicate and cooperate in ways that have greatly reduced the hazards of commercial air travel.

The coauthors of this book sought out the aviation professionals who made this transformation possible. Beyond the Checklist gives us an inside look at CRM training and shows how airline staff interaction, which once suffered from the same dysfunction that too often undermines real teamwork in health care today, has dramatically improved. Drawing on the experience of doctors, nurses, medical educators, and administrators, this book demonstrates how CRM can be adapted, more widely and effectively, to health care delivery.

The authors provide case studies of three institutions that have successfully incorporated CRM-like principles into the fabric of their clinical culture by embracing practices that promote common patient safety knowledge and skills. They infuse this study with their own diverse experience and collaborative spirit: Patrick Mendenhall is a commercial airline pilot who teaches CRM; Suzanne Gordon is a nationally known health care journalist, training consultant, and speaker on issues related to nursing; and Bonnie Blair O'Connor is an ethnographer and medical educator who has spent more than two decades observing medical training and teamwork from the inside.

Information

Publisher: ILR Press
Year: 2012
ISBN: 9780801465345
Pages: 272
Language: English

Chapter 1

History of Crew Resource Management

UNITED 173, DECEMBER 28, 1978: A DEFINING MOMENT

18:06:40—First Officer: I think you just lost number four 
 better get some cross-feeds open there or something.
18:06:46—First Officer: We’re going to lose an engine
.
18:06:49—Captain: Why?
18:06:49—First Officer: We’re losing an engine.
Captain: Why?
First Officer: Fuel.
18:07:06—First Officer: It’s flamed out.
18:07:12—Captain [to Portland Approach]: 
 would like clearance for an approach into two eight left, now.
18:07:27—Flight Engineer: We’re going to lose number three in a minute, too.
18:07:31—Flight Engineer: It’s showing zero.
Captain: You got a thousand pounds. You got to.
Flight Engineer: Five thousand in there 
 but we lost it.
Captain: All right.
18:07:38—Flight Engineer: Are you getting it back?
18:07:40—First Officer: No number four. You got that cross-feed open?
18:07:41—Flight Engineer: No, I haven’t got it open. Which one?
18:07:42—Captain: Open ’em both—get some fuel in there. Got some fuel pressure?
Flight Engineer: Yes sir.
18:07:48—Captain: Rotation. Now she’s coming.
18:07:52—Captain: Okay, watch one and two. We’re showing down to zero or a thousand.
Flight Engineer: Yeah.
Captain: On number one?
Flight Engineer: Right.
18:08:08—First Officer: Still not getting it.
18:08:11—Captain: Well, open all four cross-feeds.
Flight Engineer: All four?
Captain: Yeah.
18:08:14—First Officer: All right, now it’s coming.
18:08:19—First Officer: It’s going to be—on approach though.
Unknown Voice: Yeah.
18:08:42—Captain: You gotta keep ’em running
.
Flight Engineer: Yes, sir.
18:08:45—First Officer: Get this [expletive] on the ground.
Flight Engineer: Yeah. It’s showing not very much more fuel.
18:09:16—Flight Engineer: We’re down to one on the totalizer. Number two is empty.
18:13:21—Flight Engineer: We’ve lost two engines, guys.
18:13:25—Engineer: We just lost two engines—one and two.
18:13:38—Captain: They’re all going. We can’t make Troutdale.
First Officer: We can’t make [expletive]!1
At 6:15 p.m. on December 28, 1978, United Airlines Flight 173 crashed into a wooded area near Portland, Oregon, about six miles short of the airport. Incredibly, of the 189 souls on board, only 13 were killed (including 2 crew members) and 23 were seriously injured, in part because there was no fire: the aircraft had run out of fuel. Two unoccupied homes were destroyed, and the aircraft itself was destroyed.
This incident became a defining moment in commercial aviation, a tipping point that captured the attention of aviation safety experts and agencies throughout the industry. Aside from the obvious—the spectacular crash of an aircraft and associated loss of life—UA 173 focused a very bright light on a culture that twentieth-century aviators had inherited from the pioneers of the field: a culture that, while purposeful in the past, had become increasingly dysfunctional in the world of modern jet aircraft.
Put very simply, the airline crew culture in 1978 was extremely hierarchical and autocratic. United 173 was flown by a crew that was socialized in what is referred to as the “captain is king” tradition. According to the investigation report written by the National Transportation Safety Board (the federal agency that investigates airline accidents and makes recommendations about how to prevent them), the cause of the crash was the “failure of the captain to monitor properly the aircraft’s fuel state and to properly respond to the low fuel state and the crewmembers’ advisories regarding the fuel state
. Contributing to the accident was the failure of the other two crewmembers to fully comprehend the criticality of the fuel state or to successfully communicate their concern to the captain.”2
“The captain,” the report states, “had a management style that precluded eliciting or accepting feedback.” The first officer and the flight engineer (FE), who paid with his life, failed to “monitor the captain,” give effective feedback, and provide sufficient redundancy.3 It was only when it was too late that the first officer expressed a direct view, ‘Get this 
 on the ground!’4 In this kind of environment, the crisis was neither prevented, managed, nor contained. Why? Because, as the NTSB reported, “the landing gear problem had a seemingly disorganizing [our italics] effect on the flight crew’s performance
. The Safety Board believes that this accident exemplifies a recurring problem—a breakdown in cockpit management and teamwork during a situation involving malfunctions of aircraft systems in flight.”5
Because of their spectacular nature in terms of potential—and actual—loss of life and damage, commercial aircraft accidents get a great deal of attention. In reality, accidents are incredibly rare compared with the high number of positive outcomes when a flight is challenged by mechanical or other threats. Aviation successes receive scant attention in all but the most stunning cases.

JETBLUE 292, SEPTEMBER 21, 2005: A SUCCESS STORY

One such success occurred on September 21, 2005, when JetBlue Flight 292, an Airbus A-320 with 140 passengers and a crew of 6, took off from Southern California’s Bob Hope Airport (Burbank) headed for New York’s John F. Kennedy International Airport. After the plane lifted off the runway and the captain tried to retract the landing gear, two error messages indicated that there was a problem. The first officer (FO) continued to fly the plane while the captain responded to the electronic centralized aircraft monitor (ECAM) prompts.6 The captain then consulted the flight crew operating manual (COM), which suggested that the nose gear—the wheels directly under the aircraft’s cockpit—had somehow rotated ninety degrees from its normal “aligned” position, making it physically impossible to retract. The captain informed—and later continued to update—the flight attendants (FAs) about the problem. They in turn advised passengers about the development and kept them informed.
In an effort to get a visual confirmation of the situation, the captain decided to do a “fly-by” or “low pass” in front of the air traffic control tower in Long Beach, California, so that observers could verify the problem with the nose gear. The tower, JetBlue ground personnel, and a local news helicopter confirmed that the nose gear was indeed cocked ninety degrees to the left. With the nose gear in that position, the plane could not land normally; there would be no alternative but to execute an emergency landing.
As the FO continued as the “pilot flying,” the captain, in consultation with safety personnel in New York, decided to divert the plane to Los Angeles International Airport (LAX), where the airline has a maintenance hub. Because it was making a transcontinental flight, the plane had taken on a large quantity of fuel. All agreed that it would not be safe for it to land with the existing fuel load, and the decision was made to delay the landing until the majority of the fuel on board had been burned off to reduce the possibility of a fire and to make the plane lighter and an emergency landing safer.
The aircraft circled for three hours until the fuel had burned down. During this time, the pilots consulted with JetBlue in New York and with maintenance personnel, as well as with engineers in France at Airbus and Messier-Dowty, the manufacturers of the plane and its landing gear. They also thoroughly briefed the flight attendants on the aircraft status and what they could expect. The captain requested their assistance in trying to shift the center of gravity (CG) of the aircraft as far aft as the structural limits would allow. This would allow the captain to hold the defective nose gear off the ground as long as possible after touchdown. The flight attendants worked with passengers to move them and their luggage toward the rear of the aircraft.
They spoke to all the passengers individually prior to the landing to ensure that they knew the emergency procedures that would take place and how to properly brace themselves. The flight attendants checked and double-checked each other’s work to ensure that everything was completed and would go according to plan. This kind of communication is critical in reassuring passengers and preventing panic in the cabin. The captain briefed the FAs that they could not evacuate passengers through the doors in the rear of the aircraft and advised them that they would have to use the forward doors. After three hours, the plane approached LAX for its emergency landing.
With emergency equipment at the ready on the ground, the plane touched down at 120 knots (138.2 miles per hour).7 As the aircraft slowed down, the nose gear touched the ground. With the nose wheels perpendicular to the direction of motion, the tires quickly shredded until the metal wheels were scraping the ground at such high speed that they created a plume of white smoke, which made it difficult to see the plane. Although no one could see what had happened to the wheels, the flight attendants in the front of the aircraft could smell the strong odor of burning rubber. Air traffic controllers were able to observe that there was no fire and reported this to the captain, who in turn relayed the information to the cabin crew.
While media and airport observers held their collective breath, the aircraft skittered to a halt a thousand feet short of the end of the runway. The nose tires completely shredded during the landing, and “about half of the two wheels were ground off.” Otherwise the plane was undamaged, and none of the 145 people on board were hurt. Because of the clear communication among all parties—air traffic control (ATC), cockpit, FAs, and passengers—regarding the aircraft’s status once it was on the ground, everyone understood the condition of the aircraft and recognized that it was no longer in danger. There was no panic among the passengers, and it was quickly determined that an emergency evacuation was not necessary.
To understand the significance of this successful outcome, it is necessary to travel back several decades to contrast this incident with a seemingly relentless series of aviation disasters that too often captured the headlines during that time. Let’s return to the disaster that was United Flight 173 and compare it with the success of the JetBlue flight. Like the crew of JetBlue 292, the United 173 crew had the luxury of time, but the similarities end there. In Portland in 1978, the crew of United Flight 173 failed to monitor the captain; the flight attendants were not alerted to the problem; the captain failed to listen to the flight engineer, who in turn failed to alert the captain to the seriousness of the situation. The crew, as the NTSB report emphasized, became disorganized instead of hyperorganized in a crisis. Why? Why weren’t flight attendants made aware of the significance of the problem? Why did no one listen to the first officer, and why did he fail to signal the problem with sufficient urgency? There was a lot of time available to create and utilize crew communication, to enlist the crew as a resource, and to manage the emergency. None of that happened. Again, the question is, Why?
The answer is that, for the most part, the assumption in the cockpit during that era was that the captain was responsible for everything and had all the answers. In spite of the fact that there were two other highly qualified and experienced crew members available in the same small space, one person made all the decisions.8 That person apparently did not recognize that others had relevant expertise and could contribute to solving a very pressing problem. The reason that the United 173 scenario did not repeat itself thirty years later is that in the ensuing decades commercial aviation began to reevaluate its modes of operation and to drastically reconsider its dominant hierarchical structure. Reports of flights like United 173—among many others—led to a dramatic reconsideration of the culture of aviation, a culture whose strengths were offset by as many weaknesses and whose heroic aviators had learned a whole lot of “wrong stuff” along with the Right Stuff.

A BIT OF HISTORY

It is almost a cliché to say that flying a plane requires a lot of skill and occasional heroism and is, by its very nature, risky. This statement was certainly true for the early years of powered flight. Statistics from the past fifty years document that today it’s considerably safer to fly an airplane at 36,000 feet than to drive a car at sea level. The new cliché is that you have a much greater chance of being struck by lightning (about 1 per million in a given year)9 or dying in a car crash (1.4 per million miles) than you do of getting harmed in a commercial airliner (0.2 per million miles).10
So does that mean that flying is inherently safe? Should we conclude that technology has made it possible for a plane to fly itself or for a smart flight attendant or even passenger—guided, we would assume, by a qualified individual on the ground—to land the plane if both pilots are somehow incapacitated? This may be a common Hollywood fantasy, but in actuality it is not likely. As we noted in the introduction, airline safety isn’t a result only of better technology but also of hard work with the human beings who train to interface with that technology and with each other. It’s the result of a decades-long reassessment of the heroic, “captain-as-king-who-need-not-listen-to-the-commoners” model of aviation. It’s also a result of a reevaluation of acceptable risk, of conceptualizations and understandings of human error, and of accepted definitions of who is and who is not a member of the team and how the team is formed. All of this began in the 1970s when crash after crash mobilized the industry for dramatic change.

RISKY BUSINESS

In the early days of aviation, there was a tacitly granted ...

Table of contents

  1. Foreword
  2. Acknowledgments
  3. Introduction
  4. 1 History of Crew Resource Management
  5. 2 Communication
  6. 3 Case Study
  7. 4 Team Building
  8. 5 Case Study
  9. 6 Workload Management
  10. 7 Case Study
  11. 8 Threat and Error Management
  12. 9 Why CRM Worked
  13. 10 The Problems in Medicine
  14. 11 Conclusion
  15. Appendix
  16. Glossary
  17. Notes