Being Wrong - Kathryn Schulz
Error-studies practitioners are a motley crew, ranging from psychologists and economists to engineers and business consultants, and the work they do is similarly diverse. Some seek to reduce financial losses for corporations by eliminating mistakes in manufacturing processes. Others try to improve safety procedures in situations, ranging from angioplasties to air traffic control, where human error poses a major threat to life and health. As that suggests, error studies, unlike epistemology, is an applied science. Although its researchers look at the psychological as well as the structural reasons we get things wrong, their overall objective is practical: they seek to limit the likelihood and impact of future mistakes.
In service of this goal, these researchers have become remarkable taxonomists of error. A brief survey of their literature reveals a dizzying proliferation of categories of wrongness. There are slips and lapses and mistakes, errors of planning and errors of execution, errors of commission and errors of omission, design errors and operator errors, endogenous errors and exogenous errors. I could go on, but only at the expense of plunging you into obscure jargon and precise but—it must be said—painful explication. (A sample: “Mistakes may be defined as deficiencies or failures in the judgmental and/or inferential processes involved in the selection of an objective or in the specification of the means to achieve it, irrespective of whether or not the actions directed by this decision-scheme run according to plan.”)
Mistakes may be defined this way, but not by me. Don’t misunderstand: I’m grateful to the error-studies folks, as we all should be. At a moment in history when human error could easily unleash disaster on a global scale, they are trying to make our lives safer and easier. And, because they are among the few people who think long and hard about error, I count them as my colleagues in wrongology. The same goes for epistemologists, whose project has somewhat more in common with my own. Still, I depart from both groups of thinkers in important ways. My own interest lies neither in totalizing nor in atomizing error; and my aim is neither to eliminate mistakes nor to illuminate a single, capital-T Truth. Instead, I’m interested in error as an idea and as an experience: in how we think about being wrong, and how we feel about it.
This attention to how we think and feel about error casts a different light on some of the difficulties with defining it. Take the matter of stakes. The question I raised earlier was whether it ever makes sense to treat minor gaffes and world-altering errors—the car keys and the WMDs—as comparable phenomena. In their causes and consequences, these errors are so unalike that including them in the same category seems at best unhelpful and at worst unconscionable. But if we’re interested in the human experience of error, such comparisons become viable—in fact, invaluable. For example, we are usually much more willing to entertain the possibility that we are wrong about insignificant matters than about weighty ones. This has a certain emotional logic, but it is deeply lacking in garden-variety logic. In high-stakes situations, we should want to do everything possible to ensure that we are right—which, as we will see, we can only do by imagining all the ways we could be wrong. That we are able to do this when it hardly matters, yet unable to do so when the stakes are huge, suggests that we might learn something important by comparing these otherwise very different experiences. The same can be said