Being Wrong - Kathryn Schulz [151]
Among high-risk fields, commercial aviation currently sets the standard for error management. As often happens, the airline industry’s commitment to curtailing error grew out of a mistake of unprecedented and tragic proportions. In 1977, two Boeing 747s collided at the Tenerife airport in the Canary Islands, killing close to 600 people—then and now, the worst accident in aviation history. When safety officials investigated, they found that the collision was caused by a concatenation of errors, individually minor but collectively catastrophic. The airline industry responded by establishing strict protocols for every aspect of aviation—from how runways should be labeled to what phrases air traffic controllers and pilots can use to communicate with each other. These protocols succeeded in reducing significant commercial aviation accidents in the United States from 0.178 per million flight hours in 1998 to 0.104 per million flight hours in 2007.
Another well-known example of corporate efforts to prevent error is the quality-control process known as Six Sigma. Six Sigma was pioneered at Motorola in 1986 and is now used by the majority of Fortune 500 companies, plus countless smaller businesses. The protocol’s name comes from statistics: the Greek letter sigma (σ) indicates the amount of standard deviation from a given norm. In this case, all deviation is assumed to be undesirable—an error in a manufacturing process or in its end product. A company that has achieved Six Sigma experiences just 3.4 such errors per million opportunities to err, a laudably low failure rate (or, framed positively, a 99.9997 percent success rate). To get a sense of what this means, consider that a company that ships 300,000 packages per year with a 99 percent success rate sends 3,000 packages to the wrong place. If that same company achieved Six Sigma, only a single package would go astray.
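The package-shipping comparison above is simple arithmetic, and it can be checked directly. The sketch below (the shipping company and its volume are the text's hypothetical, not real data) computes the expected number of misdirected packages at each failure rate:

```python
# Hypothetical shipper from the text: 300,000 packages per year,
# compared at a 99 percent success rate vs. the Six Sigma defect
# rate of 3.4 errors per million opportunities.
PACKAGES_PER_YEAR = 300_000

# 99 percent success = 1 percent failure
misdirected_at_99_percent = PACKAGES_PER_YEAR * 0.01

# Six Sigma: 3.4 defects per million opportunities
misdirected_at_six_sigma = PACKAGES_PER_YEAR * 3.4 / 1_000_000

print(misdirected_at_99_percent)  # 3000.0
print(misdirected_at_six_sigma)   # 1.02 -- roughly one package
```

The 3.4-per-million figure yields an expected 1.02 stray packages per year, which is why the text says "only a single package would go astray."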
There are countless variations on Six Sigma in use today (and the program itself is a variation on many earlier quality-control measures), but they all share certain basic principles and protocols. Chief among these are a reliance on hard data and, as the name implies, a phobia of deviation. Traditionally, many companies evaluate their success based on how well they do on average—whether it takes an average of three days to deliver that package, say, or whether the brake pads you manufacture are an average of three-eighths of an inch thick. But the trouble with averages is that they can conceal many potential lapses and mistakes. If it takes an average of three days for your packages to reach their destination, some could be arriving in nine hours, others in two and a half weeks. If some of your brake pads are a half-inch thick and some are a quarter-inch thick, they might not fit with your other components, or they might not pass safety standards, or they might be rejected by the auto manufacturers you supply. With Six Sigma, then, the goal isn’t to improve the average per se, but to reduce the deviation from that average. To do this, Six Sigma analysts make use of a procedure that is usually encapsulated as “define, measure, analyze, improve, control.” In essence, that procedure involves isolating and assessing every single variable pertaining to a given process.
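The point that averages conceal deviation can be made concrete with a small sketch. The two carriers and their delivery times below are invented for illustration: both average exactly three days, but their standard deviations (the sigma in Six Sigma) differ enormously:

```python
import statistics

# Two hypothetical carriers, delivery times in days.
# Both average 3.0 days; only one is consistent.
carrier_a = [2.9, 3.0, 3.1, 2.8, 3.2]   # tight around the mean
carrier_b = [0.4, 9.0, 3.0, 0.5, 2.1]   # same mean, wildly inconsistent

mean_a = statistics.mean(carrier_a)     # 3.0
mean_b = statistics.mean(carrier_b)     # 3.0

# Population standard deviation: the deviation Six Sigma targets.
sigma_a = statistics.pstdev(carrier_a)  # small
sigma_b = statistics.pstdev(carrier_b)  # many times larger

print(f"means: {mean_a:.1f} vs {mean_b:.1f}")
print(f"sigmas: {sigma_a:.2f} vs {sigma_b:.2f}")
```

Judged by the average alone, the two carriers look identical; the standard deviation is what exposes the second one's nine-day stragglers and premature arrivals, which is precisely the lapse Six Sigma is designed to catch.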