How 6 psychological biases are leaving companies exposed to catastrophic risk
High-profile industrial disasters have been all too common in the past few years. So here’s a question: why are events like these still happening? And why are they happening in some of the most industrially advanced countries in the world, under some of the most stringent standards?
What is catastrophic risk?
Catastrophic risk refers to ultra-low-likelihood, ultra-high-consequence events that fall outside standard risk matrices. Often called materially unwanted events (MUEs), they are perceived as extremely rare but are truly catastrophic when they occur.
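To see why these events fall outside standard risk matrices, consider a hypothetical 5×5 matrix that scores risk as the product of ordinal likelihood and consequence ratings. The numbers below are illustrative assumptions, not from any real assessment:

```python
# Illustrative only: a hypothetical 5x5 risk-matrix scoring, showing how a
# materially unwanted event (MUE) can rank *below* a routine hazard.

def matrix_score(likelihood: int, consequence: int) -> int:
    """Classic risk-matrix score: product of 1-5 ordinal ratings."""
    return likelihood * consequence

# A routine hazard: moderately likely, moderate consequence.
routine = matrix_score(likelihood=3, consequence=3)  # score 9

# An MUE: likelihood rated at the matrix floor (1) because it is "ultra-rare",
# consequence capped at the matrix ceiling (5) even if the true loss is total.
mue = matrix_score(likelihood=1, consequence=5)      # score 5

print(routine, mue)  # 9 5 -- the matrix ranks the MUE below the routine hazard
```

Because the ordinal scale caps consequence at 5, a plant-destroying event and a serious-but-recoverable incident can land in the same cell, and the "ultra-rare" likelihood rating then pushes the MUE down the priority list.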
6 psychological barriers to understanding catastrophic risk
In conducting comprehensive catastrophic risk assessments for clients across a broad range of industries, I’ve become convinced that the root of the problem begins with our own psychological biases. What’s most concerning is that we often aren’t aware that these biases exist, let alone the degree to which they skew our understanding of the level of risk to which we’re exposed.
1. Overconfidence

Overconfidence bias comes into play when an individual’s subjective confidence in their judgments, or those of their organization, is greater than the objective accuracy of those judgments.
It often reveals itself in statements like “that could never happen here.” Overconfidence is at play when catastrophic events are treated and referred to as “outliers” and discussions of risk focus on the low likelihood rather than the high consequences of such events.
We all want to think we’re doing a better job than our competitors. But relying on this assumption can make us overlook important objective information when it comes to assessing risk.
2. Anchoring

In decision making, anchoring occurs when an individual relies too heavily on an initial piece of information to make subsequent judgments. This bias can be particularly pernicious when predicting future probabilities using past information.
Just because a catastrophic event hasn’t happened in the past doesn’t mean it can’t happen in the future. Much as we might like to pat ourselves on the back for a flawless safety track record, such information has no bearing on assessments of future risk for catastrophic events.
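A small probability calculation makes the point concrete. Suppose, purely as an assumption for illustration, that a site faces a 1-in-10,000 annual chance of a catastrophic event:

```python
# Illustrative only: how little a clean track record says about catastrophic risk.
# Assumed figures, not real data.

p_event = 1e-4   # assumed annual probability of the catastrophic event
years = 25       # length of the "flawless" track record

# Probability of observing zero events over the whole period,
# treating each year as independent.
p_clean_record = (1 - p_event) ** years

print(f"{p_clean_record:.4f}")  # ~0.9975
```

Even if the risk is entirely real and unmitigated, a clean 25-year record was expected with roughly 99.75% probability, so observing one tells us almost nothing about whether the hazard exists. Anchoring on that record is anchoring on noise.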
3. Confirmation bias

Confirmation bias is the tendency for an individual to favour information that affirms their prior beliefs.
We can unconsciously fudge the data, removing any outliers from our assessments and over-emphasizing positive data points that favour our belief that we’re doing enough to mitigate risk. We see solutions without truly understanding the problems.
4. Escalating commitment
In the face of increasing negative outcomes, individuals have a tendency to escalate commitment to an existing course of action rather than changing tack.
Repeatedly escalating commitment to inadequate solutions in response to small incidents ignores the root cause of the problem, leaving organizations exposed to unmitigated risks. This often manifests as more training for people rather than dealing with underlying technical gaps. Personnel behaviour should be the last barrier of defence, not the only one.
5. Groupthink

Groupthink occurs when the desire for harmony and conformity in a group of individuals leads to irrational decision making.
This can often happen within groups where there is a hierarchical difference between members such as age or seniority. New hires, for example, are often reluctant to speak up when they have information or opinions that differ from the group, for fear of sounding impertinent. The groupthink effect is so strong that we can even sometimes believe we must be mistaken when our information differs, even if it is factual.
When aspects of risk are brushed under the rug in the name of conformity, we are not performing adequate assessments and mitigations based on a full set of information.
6. Normalization of deviance
Normalization of deviance happens when individuals in an organization become accustomed to behaviour that deviates from what it should be, because the deviation produces no negative feedback.
When it comes to safety, organizations tend to normalize behaviour that deviates from the original protection layer put in place but hasn’t led to an incident. Over time, these seemingly insignificant deviations can lead to small failures. These small failures erode the layers of protection until, eventually, enough of them line up to cause a catastrophic failure. This is the proverbial straw that breaks the camel’s back.
This is particularly dangerous when important but hidden safety protocols and procedures grow lax over time. Small incidents can then escalate into catastrophic events far more devastating than they would have been had the proper procedures remained in place.
How to limit your organization’s risk exposure
The first steps towards correcting our approach to catastrophic risks are:
- Acknowledging and counteracting our psychological biases
- Understanding the extent to which risk exists for our specific operating environment
- Investing resources in earnest to actually do something about it
Working with an experienced technical industry partner is critical in helping organizations understand the complexities of potential risk pathways and triggers.
Our modern operating environments are more advanced and complex than ever before. Knowing and managing your risk profile requires a well-established framework and true technical depth. The combination is paramount to implementing an effective and efficient risk mitigation solution. We simply can’t afford not to step up our game.