Cardano Black Box Series Part 1: The Challenger Disaster

January 2018
The Challenger Disaster
The term black box comes from aviation, where each incident and crash is investigated in detail. “Black box thinking” centres on a willingness and tenacity to investigate the lessons that often exist when we fail, but that we rarely exploit.

Inspired by black box thinking, we will try to understand the events leading up to the Challenger Disaster and study some of the lessons learned to prevent future mistakes. Human nature is often a contributing factor in disasters, so we have found this exercise highly relevant to making DB pension schemes more robust in an uncertain world.

By Stefan Lundbergh and Nicole Xu

It was a particularly chilly morning on January 28, 1986. The seven astronauts waved at the staff on site as they entered the Challenger Space Shuttle at Kennedy Space Centre. This was not a routine mission – one of the astronauts, Christa McAuliffe, was a school teacher. She had been selected from among 11,000 applicants to become the first teacher in space, a marketing idea conceived by NASA's Public Relations department. McAuliffe was 37 years old, and her husband and their two children, a nine-year-old son and a six-year-old daughter, were at Kennedy Space Centre for the launch. Her class in New Hampshire and half of the American population were watching the countdown live on television. Seventy-three seconds after lift-off, disaster struck: Challenger turned into a ball of fire and broke apart. A whole nation was left in a state of shock.

“What went wrong?” asked the nation. A Presidential commission, chaired by William Rogers, was charged with investigating the Challenger Disaster. The fourteen-member commission included the Nobel Prize-winning physicist Richard Feynman. Feynman identified that the O-rings sealing a joint on the solid rocket booster failed to seal properly at low temperatures, allowing flames to pass through the joint. To convince the other commission members, Feynman demonstrated his point at the hearing using a glass of ice water and an O-ring.

Feynman was not the first to notice this fatal design fault. The engineers at Thiokol, manufacturer of the solid rocket booster, had warned about the potential failure of the O-rings in cold weather and suggested delaying the launch. “Take off your engineering hat and put the management hat on,” said Jerald Mason, a senior vice president at Thiokol, when overruling his engineers. At NASA, engineers raised the same concerns and were overruled by Lawrence Mulloy, the solid rocket booster project manager – “when do you want me to launch it – next April?”

The commission concluded that the root cause of the disaster was to be found in the managerial culture at NASA. During the investigation, Feynman learned that engineers estimated the likelihood of a mission failure to be about 1/100, while management assessed it to be 1/100,000. It turned out that this ‘convenient’ risk assessment helped NASA’s management to maintain a tight launch schedule by relaxing the original maintenance and safety protocols. Unfortunately, NASA’s management did not learn its lesson, and 17 years later similar organisational causes led to the Columbia disaster, in which another seven astronauts died.

A half-sigma or 54-sigma event?

After 135 space shuttle missions, the 30-year NASA space shuttle programme ended in 2011. In retrospect, we know that there were two flight mission failures: Challenger in 1986 and Columbia in 2003. If the engineers’ risk estimate (1/100) was correct, then observing at least two accidents is roughly a half-sigma event. In other words, it is about as likely as the FTSE 100 falling by more than 0.5 per cent at the daily close.

If the management’s risk estimate (1/100,000) was correct, how likely then was it to observe two shuttle accidents? The answer is a staggering 54-sigma event. To put this into perspective, a mere 8-sigma event would mean that you would expect two accidents if there had been a space shuttle mission every day since the inception of the universe – the Big Bang. Either the management of NASA had extremely bad luck, or their risk estimate was way off the mark.
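For readers who want to check the arithmetic, the sketch below shows one way to reproduce both headline figures: treat each of the 135 missions as an independent trial and measure how far the two observed failures sit above the expected number of failures (a normal approximation to the binomial distribution). The code is purely illustrative; it is not NASA's own risk calculation.

```python
import math

N_MISSIONS = 135          # shuttle flights flown, 1981-2011
FAILURES_OBSERVED = 2     # Challenger (1986) and Columbia (2003)

def sigma_distance(p_failure, n=N_MISSIONS, observed=FAILURES_OBSERVED):
    """Standard deviations between the observed and expected number of
    failures, treating missions as independent Bernoulli trials."""
    mean = n * p_failure
    std = math.sqrt(n * p_failure * (1.0 - p_failure))
    return (observed - mean) / std

def prob_at_least_two(p_failure, n=N_MISSIONS):
    """Exact binomial probability of two or more failures in n missions."""
    p_none = (1.0 - p_failure) ** n
    p_one = n * p_failure * (1.0 - p_failure) ** (n - 1)
    return 1.0 - p_none - p_one

for label, p in [("Engineers, 1/100", 1 / 100), ("Management, 1/100,000", 1 / 100_000)]:
    print(f"{label:<22}: {sigma_distance(p):5.1f} sigma, "
          f"P(at least 2 failures) = {prob_at_least_two(p):.1e}")

# Engineers, 1/100      :   0.6 sigma, P(at least 2 failures) = 3.9e-01
# Management, 1/100,000 :  54.4 sigma, P(at least 2 failures) = 9.0e-07
```

On this reading, the engineers’ estimate makes two accidents in 135 flights entirely unremarkable, while the management estimate makes them all but impossible.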

This analysis is not just a statistical exercise; it has real implications, since managerial decisions are based on the assessment of risks. By conveniently assuming a very low risk, NASA’s management could justify an unrealistic launch schedule and relaxed maintenance and safety protocols. The price for this ‘convenient’ managerial assumption was ultimately paid by 13 professional astronauts and one school teacher.

 

Investing is not Rocket Science

The financial sector is littered with extremely ‘unlucky’ people who have experienced statistically near-impossible events. David Viniar, then CFO of Goldman Sachs, described the market movements of 2007 as “25-sigma moves, several days in a row”. Either Goldman Sachs was extremely unlucky or its risk assessments were significantly off the mark.
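To see just how heroic that description is, the sketch below assumes that daily returns are normally distributed (a modelling assumption of ours, not a description of Goldman Sachs’s actual risk systems) and asks how long one would expect to wait for a single daily move of a given sigma size.

```python
import math

TRADING_DAYS_PER_YEAR = 252

def one_sided_tail(sigmas):
    """Probability of a daily move beyond `sigmas` standard deviations,
    assuming normally distributed daily returns."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2.0))

for k in (5, 10, 25):
    p = one_sided_tail(k)
    wait_years = 1.0 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k:>2}-sigma day: probability {p:.1e}, expected wait ~{wait_years:.1e} years")

#  5-sigma day: probability 2.9e-07, expected wait ~1.4e+04 years
# 10-sigma day: probability 7.6e-24, expected wait ~5.2e+20 years
# 25-sigma day: probability 3.1e-138, expected wait ~1.3e+135 years
```

Several 25-sigma days in a row therefore says far more about the model than about luck.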

Running a DB scheme is not rocket science, but we can learn from NASA’s management failure and become more realistic in assessing financial risks. Are we not making the same ‘convenient’ managerial assumptions by lowballing financial market risks when putting together an optimistic recovery plan, especially when the scheme sponsor is financially weak? Getting the risk assessment wrong in pensions will not kill anyone, but it can be very expensive for members, sponsors and workers. In the last decade, the DB pension deficit in the UK has ballooned by £400 billion, roughly double the total cost of NASA’s 30-year space shuttle programme.

Are we not relying too much on our economic theories and financial market models? Should we not compare real-world outcomes with what our models predict? This way we could bring our assessments of risk more in line with reality, especially in long-term planning such as forming a recovery plan. Could it be that our willingness to discard data that falsify our financial models is driven by a managerial need for high returns and low risk? In those cases, we must tell ourselves: “Take off your management hat and put the investment professional hat on”.

A way to break this ‘convenient’ closed-loop thinking is to use the so-called pre-mortem analysis, developed by the psychologist Gary Klein and championed by the Nobel Prize-winning psychologist Daniel Kahneman. In such an analysis, we assume that the DB scheme has failed and work backwards to identify what could have led to the failure. This gives us valuable information about dominant risks and market sensitivities, which should keep us from reaching for the management hat.
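As a purely illustrative sketch of that back-tracking mindset (not the pre-mortem procedure itself, which is a structured discussion rather than a model), the code below takes a stylised scheme with entirely hypothetical figures and enumerates which combinations of market moves would have pushed its funding level below an unacceptable floor.

```python
from itertools import product

# A stylised DB scheme (all figures hypothetical, for illustration only).
ASSETS = 100.0            # £m of assets
LIABILITIES = 95.0        # £m present value of accrued benefits
GROWTH_WEIGHT = 0.60      # share of assets in growth (equity-like) holdings
LIABILITY_DURATION = 20   # interest-rate sensitivity of liabilities, in years
HEDGE_RATIO = 0.40        # share of the liability rate sensitivity that is hedged
FAILURE_LEVEL = 0.80      # 'failure' defined here as a funding ratio below 80%

def funding_ratio(equity_shock, rate_change):
    """Funding ratio after an instantaneous equity shock (e.g. -0.30 = -30%)
    and a parallel change in long rates (e.g. -0.01 = rates fall by 100bp)."""
    liabilities = LIABILITIES * (1 - LIABILITY_DURATION * rate_change)
    liability_move = liabilities - LIABILITIES
    growth_assets = ASSETS * GROWTH_WEIGHT * (1 + equity_shock)
    # Matching assets track the hedged share of the liability move.
    matching_assets = ASSETS * (1 - GROWTH_WEIGHT) + HEDGE_RATIO * liability_move
    return (growth_assets + matching_assets) / liabilities

print(f"Scenarios that would have 'killed' the scheme (funding < {FAILURE_LEVEL:.0%}):")
for eq, dr in product([0.0, -0.15, -0.30, -0.45], [0.0, -0.005, -0.010, -0.015]):
    ratio = funding_ratio(eq, dr)
    if ratio < FAILURE_LEVEL:
        print(f"  equities {eq:+.0%}, rates {dr * 100:+.1f}pp  ->  funding {ratio:.0%}")
```

The value of the exercise is not in the toy numbers but in the conversation they prompt: the dominant combinations (here, an equity drawdown landing on top of largely unhedged rate risk) are exactly the sensitivities a pre-mortem is meant to surface before they materialise.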

Feynman’s closing remark in Appendix F of the Presidential commission’s report contains a valuable lesson for all of us when it comes to assessing risks: “Reality must take precedence over public relations, for nature cannot be fooled.”