
Strategy

Keeping a Cap on a Crisis

A close look at certain aspects of your company’s procedures could prevent a minor incident turning into a full-blown disaster.

Korean Air Cargo Flight 8509 had just taken off from London in 1999 when its chief pilot, responding to an erroneous reading on his navigation equipment, banked so far to the left that the plane's wing touched the ground. The aircraft crashed to earth, killing all four crew members.

An investigation into the disaster found that while the pilot’s attitude director indicator (ADI) had received the wrong data, his co-pilot’s ADI was working properly and, in fact, a comparator alarm had sounded, calling attention to the discrepancy. Evidence suggests the 33-year-old junior officer, influenced by South Korea’s hierarchical culture, simply lacked the confidence to challenge or correct his superior.

Major crises often grow from something small: a situation that could have remained just that, had the organisation been better prepared to handle unforeseen events.

What’s putting your company at risk?

As organisations and economies increase in complexity, so does the likelihood of small incidents mushrooming into full-scale crises. In my studies in this area, I have noted four aspects of companies that affect how they respond to unforeseen circumstances and that play a pivotal role in the outcome.

The first is culture. The cultural dynamics illustrated above were also at play in the lead-up to the disaster at TEPCO’s Fukushima Daiichi nuclear power plant. The Fukushima Nuclear Accident Independent Investigation Commission found that the accident was "manmade" and that its direct causes were all foreseeable. The company had failed to act on prior reports of falsified safety records and on an internal study pinpointing the risk of a tsunami. Perhaps the root cause of this catastrophic under-reaction lay in TEPCO’s governance system. The company was run by a very culturally homogeneous board, which made it difficult for executives to correct or speak out about safety concerns brought to their attention almost a decade before the accident.

A second aspect is the organisational blind spot. While firms can prepare themselves for the likelihood of bad things happening, whether an earthquake, an economic downturn, a mechanical failure or staff malfeasance, it is the unanticipated, or as Donald Rumsfeld once put it, “the unknown unknowns … the things we don’t know we don’t know”, that can put a company at risk of full-blown disaster.

Individuals and divisions operating in isolation within a company often, perhaps unintentionally, hide information from each other, so when a situation does occur those involved may be taken by surprise and unable to make the necessary connections. The September 11 terror attacks are an extreme example: U.S. intelligence agencies had a great deal of information but were working in silos and unable to connect the dots in time. Or, as Hewlett-Packard CEO Lew Platt put it, “If HP knew what HP knows, it would be three times more productive.”

One way of mitigating this issue is to rally the organisation around a common goal and develop a culture where sharing information is expected. (Bureaucratic organisations, which are heavy on procedures, individual monetary compensation and organisational fragmentation, are unlikely to fare well in this dimension.) Naturally, directors and senior executives have a crucial role to play in fostering this culture.

A third challenge for organisations is the perception of risk, particularly at the top of the organisation. There are two main issues here. The first is the ability to understand the risk appetite of subordinates; the second is the illusion of control, the tendency for people to overestimate their ability to control events and to feel in charge of outcomes over which they demonstrably have no influence.

Individuals often have difficulty putting themselves in someone else’s mind. This is particularly true when trying to predict the level of risk others will take. Senior executives may be surprised to discover just how much the risk appetite of their employees differs from their own. Gen Y subordinates, for example, may be much more relaxed about taking on cyber risk.

A fourth aspect is systemic thinking, which is critical to managing risk because things rarely fail in isolation. In 1996, ValuJet Flight 592 crashed into the Florida Everglades. Investigations revealed that the accident was caused by a complex combination of human and mechanical failures. The National Transportation Safety Board apportioned the blame to the maintenance contractor, for improperly packaging and storing hazardous materials; to ValuJet, for failing to supervise the contractor; and to the Federal Aviation Administration, for not mandating smoke detection and fire suppression systems in cargo holds. It found that the deaths of 110 passengers and crew were the result of a concatenation of events that began when an employee mislabeled a canister. Chain reactions like this may be easy to see in hindsight but are very difficult to predict given the growing complexity of organisations and products.

Naturally, whatever an organisation can do to identify how each piece of the firm interacts with the others will help mitigate the risk of disaster. Companies can plan ahead, introduce simulations into their crisis management practices and stress test the organisation to see how the different divisions react under certain conditions.

When everything else fails

Proactively thinking about risk is extremely useful. But no matter how many resources are spent preparing a company, improving information-sharing, addressing cultural intricacies and understanding how an organisation’s parts interact, there will always be totally unanticipated incidents for which there are no set procedures. In these cases, managers must be given the freedom to think for themselves, take initiative, act quickly and address the challenges at hand. In short, organisations must be prepared to expect the unexpected.

The clever handling of the explosion that crippled Apollo 13’s 1970 lunar voyage, averting disaster and saving the crew, provides an excellent example of this real-time thinking … and material for a good movie.

Gilles Hilary is Associate Professor of Accounting and Control at INSEAD.
