Once upon a time, Korean Air had trouble getting its pilots to talk to each other.
Flying a plane is complicated. When something goes awry, pilots have to communicate with each other, with flight engineers, and with air traffic controllers to get everyone home safe and sound. In Korean Air’s case, though, the pilots weren’t communicating when their partner misread an instrument, ignored a warning bell, or failed to heed a cautionary cue. The results were bad: at one point, fatal air traffic incidents were 17 times more common in South Korea than in America.1
Then all of a sudden, Korean Air got its act together. They figured out that — owing mostly to well-worn cultural antecedents — junior pilots didn’t feel empowered to point out when their seniors were asleep at the stick. There was an unquestioned hierarchy in place, and less experienced pilots knew that to criticize their superiors would be a massive breach of etiquette, doubtless provoking all manner of sanction. Because tight lips crash planes, airline executives resolved to shift the company’s internal culture so that pilots, regardless of seniority, could communicate in the cockpit as equals. The hierarchy was flattened, the lines of communication opened, and Korean Air’s performance quickly rose to among the best in the world — all because Korean Air pilots were allowed to be whistleblowers.
* * *
Airplane cockpits have a lot in common with operating rooms: both host a small team that is trusted to perform an extraordinarily complex task with the clear understanding that any misstep could spell doom. There is also a fairly rigid hierarchical structure in the operating room, especially when an attending physician and a resident are working together. And just as with pilots, that power differential can sometimes keep residents who have concerns about patient safety from crying foul.2
All of that leads naturally to the idea that hospitals should implement aviation-style initiatives to open up communication in the operating room,3 particularly between residents and attending physicians. After all, at the very worst, commercial pilot errors cause only a few dozen deaths per year4; iatrogenic deaths, on the other hand, top that figure every single day.5 Surely, we have plenty of room to improve.
This idea of cutting through hierarchy levels and streamlining safety-related communication falls under the umbrella of the “no-blame” approach to error management.6 In essence, the no-blame concept is to recharacterize errors not as products of individual mismanagement but as lessons from which everyone in the organization can learn. The principle grew out of the study of so-called “highly reliable organizations” — air traffic control systems and nuclear plants are the most common examples — that exhibit long periods of uninterrupted safety and depend on nearly error-free operation. Even though they are quite reliable on average, these institutions are still considered high risk because occasionally an error (or, really, a series of errors) will cause the system to grind to a halt. The general idea is that the no-blame standard is well suited to highly reliable organizations, where errors are thought to arise primarily from systemic rather than individual failures, but not necessarily applicable beyond that.
The problem, of course, is that it’s not at all clear that hospitals — any hospital anywhere, much less all hospitals in general — are properly classified as highly reliable organizations. Take another look at the criteria defining highly reliable organizations: long periods of safety and dependence on error-free operation. Hospitals make catastrophic, life-ending errors every single day. When those errors come to light (if they do at all), they rarely cause more than a hiccup. Iatrogenic deaths certainly don’t routinely cause hospitals to shut down, even temporarily. Hospitals just aren’t highly reliable in the way that, say, nuclear power plants are.
Which isn’t to say that hospitals, or even operating rooms, are devoid of systemic issues that lead to errors. They’re not, and efforts to excise process-type mistakes — such as the World Health Organization Surgical Safety Checklist — have proven worthwhile. But where there is a phalanx of safeguards preventing a rogue air traffic controller from wreaking havoc in the skies, there are no such barriers between a doctor and a patient. An air traffic controller error is necessarily a failure of the system; it simply can’t happen without the elaborate safeguards somehow breaking down. Medical errors, on the other hand, can be the direct and exclusive fault of the doctor.
The issue isn’t just that it’s epistemologically suspect to uniformly classify hospital errors as systemic rather than individual. No-blame regimes also assume expertise where none exists. A permissive stance on whistleblowing relies on the notion that everyone can make an equally valid contribution to the system’s overall performance. This makes a lot of sense in a highly reliable organization: a given employee may not have a holistic, bird’s-eye view of the entire nuclear plant, but they can still figure out how a particular subsection is supposed to work and recognize when something seems off. In this sense, no-blame whistleblowing leverages the wisdom of the crowd: it sums the expertise of a whole staff of mini-experts to help sidestep systemic catastrophes. The same thinking recommends encouraging the whole team — nurses, doctors, technicians, even patients — to spot potential systemic failures on the wards and elsewhere in the hospital.
But that concept is an awkward fit inside the surgical theater. When an attending surgeon operates with a resident, there is, by definition, 1 expert and 1 nonexpert at the table. This is not at all like the case of the Korean Air pilots, where the copilot is certified and capable of flying the plane if need be. The resident is neither qualified nor credentialed to take over the surgery. There is an unquestioned hierarchy in the operating room for good reason: there is, almost certainly, a substantial difference in relevant knowledge and skills. What’s more, where the pilot hierarchy seemed immutable, surgical hierarchies turn out not to be. As residents advance through postgraduate training, they become more and more willing to question attending physicians and assert their own expertise.7 In the operating room, whistles are a bit like scalpels: you don’t get to use them before you’re ready.
References
- Gladwell M. Outliers: The Story of Success. New York, NY: Little, Brown and Company; 2008.
- Fleming CA, Humm G, Wild JR, et al. Supporting doctors as healthcare quality and safety advocates: recommendations from the Association of Surgeons in Training (ASiT). Int J Surg. 2018;52:349-354.
- Gordon H. Aviation: a pilot study for safety in gastroenterology? Frontline Gastroenterol. 2012;3(2):90-91.
- Shepardson D. 2017 safest year on record for commercial passenger air travel: groups. Reuters. https://www.reuters.com/article/us-aviation-safety/2017-safest-year-on-record-for-commercial-passenger-air-travel-groups-idUSKBN1EQ17L. Published January 1, 2018. Accessed May 1, 2018.
- Makary MA, Daniel M. Medical error — the third leading cause of death in the US. BMJ. 2016;353:i2139.
- Provera B, Montefusco A, Canato A. A ‘no blame’ approach to organizational learning. Br J Manag. 2010;21(4):1057-1074.
- Sydor DT, Bould MD, Naik VN, et al. Challenging authority during a life-threatening crisis: the effect of operating theatre hierarchy. Br J Anaesth. 2013;110(3):463-471.