Being Wrong - Kathryn Schulz
This is all very well and good—rah rah democracy, et cetera—but what happens when the errors of free and open societies are unconscionable? Back in Chapter Seven, we saw democratic Switzerland decline to extend the vote to women until 1971. Meanwhile, the democratic government of the United States has, in the course of its history, sanctioned the enslavement of human beings, detained its own citizens in internment camps, spied on internal political opposition, tortured suspected enemies of the state, and, in countless other ways, demonstrated the extent and severity of a democratic nation’s capacity to err.
One could argue that these ugly episodes represent the breakdown rather than the upshot of democracy, but that is, at best, a partisan’s comforting half-truth. If a system makes room for the inevitability of error and, accordingly, permits the expression of any and all beliefs; and if one of those beliefs—say, that debtors should be imprisoned or interracial marriage should be illegal—becomes entrenched policy, who’s to say that this is a perversion rather than a product of democracy? Granted, the failures of a democratic society remain preferable to the failures of societies where people cannot speak their minds and cast their votes in freedom. But these failures are still grave—and, what is more, future and equally grave failures are all but inevitable.
In this respect, too, political means for embracing and avoiding error resemble the other methods we saw in the first part of this chapter. Whether in politics, industry, or everyday life, you can implement a system to prevent mistakes, but you can bet your life you won’t prevent them all. For instance, many hospitals have some version of a “time-out” protocol, a checklist that surgical teams are supposed to review before beginning any procedure. Beth Israel Deaconess Medical Center had such a time-out system in place before the 2008 wrong-side surgery—but the team didn’t use it. Many hospitals now use a magic marker to indicate the correct surgical site on a patient before an operation begins. The correct side of the BIDMC patient was marked—but the doctor didn’t notice. For this patient, at least, the existing error-prevention system “wasn’t good enough,” said Paul Levy. He and his team went back to the drawing board to improve it (generating, among other things, the new time-out checklist on the next page), and, Levy said, “we think it will work better now. But undoubtedly something else will go wrong sometime in the future.” And that’s pretty much how it goes. You devise a system for preventing error, it works well for a while, and then an error occurs anyway and reveals a flaw in your design. So you modify your system and now it works better—but sooner or later another mistake will slip through and show you still more shortcomings you failed to detect. Error, it would seem, is always one step ahead of us.
So we can’t catch all our errors, or catch up to error in general. Nor, however, can we give up the chase, since the price of doing so—in lives, money, and sheer folly—is simply too steep. Our only choice, then, is to keep living with and looking for wrongness, in all its strangely evasive omnipresence. To help us think about how to do so, I want to turn to