
What happens if we fail to learn from our near-misses? WITH NEW UPDATES

March 13, 2019

“The day soldiers stop bringing you their problems is the day you have stopped leading them. They have either lost confidence that you can help them or concluded that you do not care. Either case is a failure of leadership.” - Gen. Colin Powell

At Experiential Consulting, LLC, we have focused on the importance of learning from near-misses for many years and have helped clients integrate near-miss reporting into their organizational culture. We believe that sharing the lessons from near-misses is the gateway for organizations to develop a culture of openness, feedback, problem solving, and continuous learning. Experts debate whether the factors that cause near-misses are the same ones that ultimately lead to catastrophes or fatalities, but in the outdoor programs we work with, we find that a near-miss can serve as an accident precursor, and that there is much to be gained by learning to talk about our near-misses. We have written about this concept and presented it extensively at conferences.

 

Several recent events (in early 2019) led us to revisit this topic today. The most newsworthy (and obvious) example can be found in the tragic crashes involving Boeing's 737 MAX, and the subsequent global grounding of those planes. As the news continues to come in, we see some themes worth highlighting, including systems thinking, learning from near-misses, and ultimately, culture.

What happened to the planes? The two Boeing crashes, which FAA Administrator Daniel Elwell has said appear correlated and connected, both involved pilots struggling to maintain control during takeoff. That struggle may be attributed to a new technological feature on the planes, the Maneuvering Characteristics Augmentation System (MCAS), a safety mechanism that automatically corrects for a plane entering a stall pattern. If the plane loses lift under its wings during takeoff and the nose begins to point too far upward, MCAS kicks in and automatically forces the nose back down. When functioning correctly, this can help prevent the plane from stalling (and eliminate the human error of climbing at too steep an angle). In the first crash, MCAS kicked in and forced the nose of the plane abruptly down during takeoff at a critical and irrecoverable moment. At the time of this writing, more and more evidence is connecting the factors between the two crashes, though the investigations are ongoing.
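To make the single-point-of-dependence idea concrete, here is a deliberately simplified, hypothetical sketch. It is not Boeing's actual logic, and the sensor threshold and trim values are invented for illustration. It only shows how an automated correction that trusts one sensor reading can keep issuing nose-down commands when that sensor is faulty:

```python
# Hypothetical illustration only -- not the real MCAS algorithm.
# It shows how an automated correction that trusts a single sensor
# can keep commanding nose-down trim when that sensor is stuck high.

AOA_LIMIT_DEG = 15.0   # invented stall-warning threshold
TRIM_STEP_DEG = 2.5    # invented nose-down trim increment per cycle

def automated_pitch_correction(aoa_sensor_deg: float, current_trim_deg: float) -> float:
    """Return a new trim setting based on a single angle-of-attack reading."""
    if aoa_sensor_deg > AOA_LIMIT_DEG:
        # The logic cannot distinguish a genuine stall risk from a bad
        # sensor, so a stuck-high reading triggers nose-down trim every cycle.
        return current_trim_deg - TRIM_STEP_DEG
    return current_trim_deg

# A faulty sensor stuck at 25 degrees drives the trim steadily downward,
# regardless of what the aircraft is actually doing.
trim = 0.0
for _ in range(5):
    trim = automated_pitch_correction(aoa_sensor_deg=25.0, current_trim_deg=trim)
print(trim)  # -12.5
```

The point of the sketch is the systems lesson that follows: the failure is not in any one line of logic, but in how the design, the sensor, the training, and the documentation interact.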

Systems thinking: It's easy to simply say that the planes crashed due to operator (cockpit) error. Or we can back up a step and blame the training the pilots did or didn't receive, or even the plane's manual, which some pilots have called "criminally insufficient." If we keep going, we find a software problem that was discovered in the wake of the first crash, in October 2018 (Lion Air). That software issue was reportedly in the midst of being resolved between Boeing and the FAA when the United States government shut down for 35 days, stalling the fix. Backing up even further, the FAA has been led by an interim (acting) administrator for the past two years, as no permanent administrator has been successfully appointed.
 

The captain who questioned the 737 Max 8's flight manual had this to add: "The fact that this airplane requires such jury rigging to fly is a red flag. Now we know the systems employed are error-prone — even if the pilots aren't sure what those systems are, what redundancies are in place and failure modes. I am left to wonder: what else don't I know?"

 

So, what caused the accidents? Was it operator error? Lack of training? Poor instructions? A software problem? The federal government shutdown? Leading safety experts are learning to resist the natural human desire to isolate a single cause and instead look at incidents like this in more complex, interconnected ways, taking a holistic view. Applying root cause analysis (RCA) would lead us to isolate one or two problems we can fix, but experts believe this approach satisfies our need for optics at the expense of actual learning, leaving us more prone to recurrence. As safety author Charles Perrow has written, accidents are caused by complex factors tightly coupled together, not by single causes that we can isolate and simply fix. When we do try to isolate root causes, power dynamics and biases often lead us to focus on front-line elements such as workers, operator error, or training, rather than on the larger system within which those humans, errors, and trainings operate. Safety author Dr. Sidney Dekker has said that there are no root causes for why an accident occurs, in the same way there are no root causes for why an accident doesn't occur. Rather than focus on blame, retraining, and other simple fixes, we are better served by asking: what in the work environment made that error possible, and why did it make sense to the frontline worker at the time?


Taking it a step further, safety expert Dr. Todd Conklin states it more bluntly: "When investigating an accident, don't limit yourself to human error or non-compliance -- you will always find both." Error is normal, and so commonplace that it is present not only in the small number of events that go catastrophically wrong, but in almost all of the others too. In most cases, despite our mistakes, things don't go wrong -- which can lead us to learn the wrong lessons (breeding complacency, as recreation law attorney Charles "Reb" Gregg writes). However, if we are diligent and focus our attention on why things go right, we can learn deeper lessons. We can aim our efforts towards resilience so that when errors are inevitably made, they do not escalate into tragedy.