Proofiness - Charles Seife

of a solid rocket booster (SRB) failure destroying the shuttle was roughly 1 in 35 based on prior experience with this technology.” One in thirty-five was an enormous and unacceptable level of risk. After all, the shuttles were supposed to make hundreds of flights, returning their crews safely every single time. If the shuttle would, on average, lose a crew of seven astronauts once every thirty-five flights, the shuttle program was as good as dead. So NASA disregarded the study, instead deciding “to rely upon its engineering judgment and to use 1 in 100,000 as the SRB failure probability estimate.” In other words, NASA simply tossed out the 1-in-35 number and substituted a much more acceptable one—in which you could launch a shuttle every day for decades, totaling thousands upon thousands of launches, and expect not to have a single failure.

On January 28, 1986, a bit more than half a second after Challenger left the launch pad, a puff of gray smoke coming from its right solid rocket booster heralded disaster. Nobody knew it at the time, but a small rubber seal in the booster had failed. Fifty-nine seconds into the flight, a small flame erupted from the booster and the conflagration quickly grew out of control. Seventy-three seconds after launch, at an altitude of 46,000 feet, Challenger exploded in an enormous yellow-white ball of fire. It had taken only twenty-five shuttle launches before the risks caught up with NASA.
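The arithmetic behind these two risk figures is worth making explicit. A minimal sketch, using the per-flight probabilities quoted in the text (the 25-flight figure is from the text; the 10,000-flight figure is an illustrative stand-in for "a launch every day for decades"):

```python
def p_at_least_one_failure(p_per_flight, n_flights):
    """Probability of at least one failure across n independent flights,
    assuming each flight fails independently with probability p_per_flight."""
    return 1.0 - (1.0 - p_per_flight) ** n_flights

# The pre-Challenger engineering estimate of roughly 1 in 35 per flight,
# over the 25 launches that had actually flown:
risky = p_at_least_one_failure(1 / 35, 25)       # just over 50%

# NASA's substituted figure of 1 in 100,000, over 10,000 flights
# (a daily launch schedule sustained for about 27 years):
official = p_at_least_one_failure(1 / 100_000, 10_000)   # under 10%
```

At 1 in 35 per flight, a failure somewhere in the first 25 launches was more likely than not; at the invented 1-in-100,000 figure, even thousands of launches would probably all succeed. That gap is the whole difference between the two numbers.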

NASA management had deliberately understated the risks of a shuttle flight. Instead of facing the unpleasant reality that the shuttle boosters were risky, the agency decided to engineer a lie that was more acceptable. As physicist Richard Feynman, a member of the Challenger investigation panel, put it, “As far as I can tell, ‘engineering judgment’ means that they’re just going to make up numbers!” Instead of performing a genuine assessment of the probability that the shuttle would fail, the management would start with a level of risk that was acceptable and work backward. “It was clear that the numbers . . . were chosen so that when you add everything together, you get 1 in 100,000,” Feynman wrote. NASA’s risk estimates were complete fictions, and nobody noticed until disaster struck.

Risks are tricky. We’re pretty bad at estimating them. We spend our time worrying about graphic but uncommon events (meteor strikes, child abductions, and shark attacks) when we should really be worrying about—and preventing—more mundane risks (strokes, heart attacks, and diabetes) that are much more likely to cut our lives short. We spend our money chasing after faint hopes of winning the lottery or hitting pay dirt in a get-rich-quick scheme instead of paying off the credit cards that have a serious chance of driving us into ruin. We are terrified of dying in a plane crash but think nothing of speeding down the highway while talking on a cell phone. We don’t have an internal gauge of what behaviors are truly dangerous and what aren’t.

In the 1980s, psychologists Daniel Kahneman (who would later win the Nobel Prize in economics) and Amos Tversky showed how irrational humans can be when confronted with risk. They presented test subjects with a scenario in which they had to make a difficult choice:

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed.

The two programs are very different—one is conservative, with a high probability of saving a small number of people, and one is risky, with a small probability of saving a large number of people. The subject has to make a choice about whether to choose the conservative or the risky strategy. But there was a twist. Kahneman and Tversky presented the exact same choice, but with slightly different wordings, to two separate groups of subjects. For the first group of subjects, the wording emphasized saving people from the disease; for the second, the phrasing dwelled on the victims of the disease rather than the survivors.
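What makes the subjects' sensitivity to wording irrational is that the two programs are equivalent in expectation. A quick check, using the figures from the standard published version of the experiment (this excerpt does not quote them, so treat them as an assumption): the conservative program saves 200 of the 600 for certain, while the risky one saves all 600 with probability one-third and nobody otherwise.

```python
# Conservative program: 200 people saved with certainty.
sure_saved = 200

# Risky program: all 600 saved with probability 1/3, nobody with
# probability 2/3 (figures from the standard published version of the
# experiment, assumed here rather than taken from this excerpt).
risky_expected = (1 / 3) * 600 + (2 / 3) * 0

# The expected number of lives saved is identical; only the framing differs.
assert sure_saved == risky_expected
```

Since the expected outcomes are the same, a purely rational chooser should be indifferent between the framings—which is exactly what the subjects were not.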

These differences in wording were
