Being Wrong - Kathryn Schulz
I say “frustration,” and that’s what we often feel in the face of our mistakes: thwarted, aggravated, disoriented, disturbed. Without dismissing such feelings, one major point of this book has been to urge us to move beyond them—and sometimes forestall them—through the cultivation of a different attitude toward error. In this alternative attitude, wrongness reminds us that the human mind is far more valuable and versatile than it would be if it just passively reflected the precise contours of reality. For those who share this view, the fact that our beliefs inhere in our minds is a given, and a gift—one whose benefits (humor, imagination, intelligence, individuality) are so manifestly worthwhile that we willingly pay for them with our mistakes.
What other entity can lay claim to wrongness, after all? Not God, obviously, since the monotheistic versions, at least, are all-knowing and inerrant. And (as far as we know) not any animals other than ourselves, either. If there’s a sense in which a lion errs when it pounces too soon and misses its prey, or a sense in which an owl is somehow mistaken about its notion of the night sky, it is surely nothing like the sense in which we human beings are wrong. It seems safe to say that no lion has ever berated itself for making a mistake, or waxed defensive about it, or turned it into a funny story to recount to the rest of the pride. Nor, presumably, is there any variation between one owl’s idea of night and another’s, nor any way for them individually or collectively to revise their understanding of the cosmos. These creatures can no more get things wrong than they can make up stories about cowardly lions, or about owls that deliver the mail at Hogwarts School of Witchcraft and Wizardry. In both cases, the limitation is the same: they cannot imagine things that do not exist. We can, and so much the luckier for us.
Machines, too, are incapable of error in the human sense. However much a computer or a BlackBerry or an ATM might excel at revealing our mistakes (as both designers and users), neither they nor any of their electronic kin can make errors on their own. To begin with, error is contingent on belief, and while machines can arguably “know” things—in the sense of possessing accurate information—they can’t “believe” things in the way that you and I can. Granted, certain advanced forms of artificial intelligence have some capacity to generate theories about the world and revise those theories in the face of counterevidence, a capacity that could be said to amount to a crude form of belief. Even the most cutting-edge machines don’t do this very well, but that’s not the point. The point is that they don’t do it with emotion, and emotion is central to both the idea of belief and the idea of wrongness. Surprise, confusion, embarrassment, amusement, anguish, remorse, delight: take away all of that, and whatever process of belief collapse and reconstruction remains doesn’t look anything like error as you and I experience it. In the face of information that violates their (limited) representations of the world, machines do not go into denial or blame their programmer or turn red or laugh out loud. If they are sufficiently sophisticated, they update their representations; otherwise, they freeze, or fail. This is nicely captured by the two stock devices used in science fiction when an android is confronted with input that contradicts its existing database. The first is the freeze response: “does not compute.” The second is the fail response: instant and violent self-destruction.
As far as we know, then, error is uniquely ours. “To err is human,”