Everything Is Obvious: Once You Know the Answer - Duncan J. Watts
WE DON’T THINK THE WAY WE THINK WE THINK
The frame problem, however, isn’t just a problem for artificial intelligence—it’s a problem for human intelligence as well. As the psychologist Daniel Gilbert describes in Stumbling on Happiness, when we imagine ourselves, or someone else, confronting a particular situation, our brains do not generate a long list of questions about all the possible details that might be relevant. Rather, just as an industrious assistant might use stock footage to flesh out a drab PowerPoint presentation, our “mental simulation” of the event or the individual in question simply plumbs our extensive database of memories, images, experiences, cultural norms, and imagined outcomes, and seamlessly inserts whatever details are necessary in order to complete the picture. Survey respondents leaving restaurants, for example, readily described the outfits of the waiters inside, even in cases where the waitstaff had been entirely female. Students asked about the color of a classroom blackboard recalled it as being green—the normal color—even though the board in question was blue. In general, people systematically overestimate both the pain they will experience as a consequence of anticipated losses and the joy they will garner from anticipated gains. And when matched online with prospective dates, subjects report greater levels of liking for their matches when they are given less information about them. In all of these cases, a careful person ought to respond that he can’t answer the question accurately without being given more information. But because the “filling in” process happens instantaneously and effortlessly, we are typically unaware that it is even taking place; thus it doesn’t occur to us that anything is missing.19
The frame problem should warn us that when we do this, we are bound to make mistakes. And we do, all the time. But unlike the creations of the AI researchers, humans do not surprise us in ways that force us to rewrite our whole mental model of how we think. Rather, just as Paul Lazarsfeld’s imagined reader of The American Soldier found every result and its opposite equally obvious, once we know the outcome we can almost always identify previously overlooked aspects of the situation that then seem relevant. Perhaps we expected to be happy after winning the lottery, and instead find ourselves depressed—obviously a bad prediction. But by the time we realize our mistake, we also have new information, say about all the relatives who suddenly appeared wanting financial support. It will then seem to us that if we had only had that information earlier, we would have anticipated our future state of happiness correctly, and maybe never bought the lottery ticket. Rather than questioning our ability to make predictions about our future happiness, therefore, we simply conclude that we missed something important—a mistake we surely won’t make again. And yet we do make the mistake again. In fact, no matter how many times we fail to predict someone’s behavior correctly, we can always explain away our mistakes in terms of things that we didn’t know at the time. In this way, we manage to sweep the frame problem under the carpet—always convincing ourselves that this time we are going to get it right, without ever learning what it is that we are doing wrong.
Nowhere is this pattern more evident, and more difficult to expunge, than in the relationship between financial incentives and performance. It seems obvious, for example, that employee performance can be improved through the application of financial incentives, and in recent decades performance-based pay