What was new and radical about this perspective wasn’t the recognition of how difficult it is to distinguish error from truth. That idea is at least as old as Plato. It appears in the Bible as well—for instance, as the question of how to tell false prophets from true. (“For Satan himself masquerades as an angel of light,” we read in 2 Corinthians.) Renaissance and Enlightenment thinkers would also have been familiar with this idea from the work of their medieval counterparts, who often characterized errors as ignes fatui—literally fool’s fires, although often translated as false or phantom fires. Today we know these false fires as will o’ the wisps: mysterious wandering lights that, in folklore, lead unwary travelers astray, typically into the depths of a swamp or over the edge of a cliff. Less romantically, false fires also referred to the ones lit by bandits to fool travelers into thinking they were approaching an inn or town. In either case, the metaphor says it all: error, disguised as the light of truth, leads directly into trouble. But Enlightenment thinkers mined a previously unnoticed aspect of this image. Error, they observed, wasn’t simply darkness, the absolute absence of the light of truth. Instead, it shed a light of its own. True, that light might be flickering or phantasmagoric, but it was still a source of illumination. In this model, error is not the opposite of truth so much as asymptotic to it—a kind of human approximation of truth, a truth-for-now.
This is another important dispute in the history of how we think about being wrong: whether error represents an obstacle in the path toward truth, or the path itself. The former idea is the conventional one. The latter, as we have seen, emerged during the Scientific Revolution and continued to evolve throughout the Enlightenment. But it didn’t really reach its zenith until the early nineteenth century, when the French mathematician and astronomer Pierre-Simon Laplace refined the theory of the distribution of errors, illustrated by the now-familiar bell curve. Also known as the error curve or the normal distribution, the bell curve is a way of aggregating individually meaningless, idiosyncratic, or inaccurate data points in order to generate a meaningful and accurate big picture.
Laplace, for instance, used the bell curve to determine the precise orbits of the planets. Such movements had been recorded since virtually the beginning of history, but those records were unreliable, afflicted by the distortion intrinsic to all human observation. By using the normal distribution to graph these individually imperfect data points, Laplace was able to generate a far more precise picture of the solar system. Unlike earlier thinkers, who had sought to improve their accuracy by getting rid of error, Laplace realized that you should try to get more error: aggregate enough flawed data, and you get a glimpse of the truth. “The genius of statistics, as Laplace defined it, was that it did not ignore errors; it quantified them,” the writer Louis Menand observed. “…The right answer is, in a sense, a function of the mistakes.” For thinkers of that particular historical moment, who believed in the existence of an ordained truth while simultaneously recognizing the omnipresence of error, the bell curve represented a kind of holy grail: wrongness contained, curtailed, and coaxed into revealing its opposite.*
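The statistical logic behind this can be put in modern shorthand (a rough sketch, not Laplace’s own notation): treat each recorded observation as the true value plus a random error,

$$x_i = \mu + \varepsilon_i, \qquad \mathbb{E}[\varepsilon_i] = 0, \quad \operatorname{Var}(\varepsilon_i) = \sigma^2.$$

The average of $n$ independent observations, $\bar{x}_n = \tfrac{1}{n}\sum_{i=1}^{n} x_i$, still centers on the truth $\mu$, but its spread shrinks to $\sigma/\sqrt{n}$, and, by the central limit theorem that Laplace himself helped establish, its distribution approaches the bell curve whatever the shape of the individual errors. Each measurement is suspect; their aggregate is not.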
A century later, the idea that errors reveal rather than obscure the truth gained a powerful new proponent in Freud. But while earlier thinkers had been interested primarily in external truths—in the facts of the world as ordained by nature or God—Freud’s domain was the internal. The truths he cared about are the ones we stash away in our unconscious. By definition, those truths are inaccessible to the reasoning mind—but, Freud argued in The Psychopathology of Everyday Life, we can catch occasional glimpses of them, and one way we do so is through error. Today, we know these truth-revealing errors as Freudian slips—as the