The Irrational Economist: Making Decisions in a Dangerous World, Erwann Michel-Kerjan
What explains these seeming failures to learn from the past? The problem is perplexing because it cannot be written off as a simple consequence of people having short memories. Residents along the Texas Gulf Coast, after all, are constantly reminded of the omnipresent threat of hurricanes by the ubiquitous media coverage that such storms receive during hurricane season, and few social conversations in California don’t include some mention of earthquakes—concerning either damage that one suffered in the past or what one has heard about threats in the future. Likewise, our failure to optimally invest in protection cannot be attributed to the absence of incentives; as an example, consider that residents in the state of Florida can earn substantial reductions in their wind-storm insurance premiums if they undertake certain investments in mitigation—yet substantial numbers fail to take advantage of the program.
I contend that the reason we often fail to invest optimally in protection lies not in our inability to foresee the possibility of future losses but, rather, in our inability to foresee the consequences of how we decide to protect against these losses. In particular, we underinvest because of the combined effect of three forces: (1) an instinct to learn by trial and error that subconsciously rewards us for not mitigating more often than for mitigating; (2) a tendency to base decisions on poor mental models of the physical mechanics of hazards; and (3) a tendency to be lured to take risks by a misplaced confidence in our ability to survive hazards, no matter how severe. These natural psychological barriers to learning about mitigation are then compounded by a fourth, societal factor: a tendency to entrust decisions about how much (and when) to invest in mitigation to agents who are not likely to suffer the direct consequences of poor decisions.
WHEN GOOD DECISION PROCESSES PRODUCE BAD CONSEQUENCES
One of the more remarkable findings to emerge from the study of complex problem solving over the years is that we do not need to be particularly smart to make smart decisions. Good pool players, for example, manage to direct a ball into a pocket without any knowledge of the mechanics of force and relative motion. The explanation for this ability is that we can often get quite good at things if we are simply put in an environment that is favorable to learning by trial and error: one where we are offered repeated opportunities to learn, where the feedback we receive is clear (e.g., we can see whether or not the ball drops into the pocket), and where mistakes, when we make them, are not fatal.
The problem, however, is that if we apply this same principle when deciding whether to invest in protection against low-probability hazards, the result may be a pattern of behavior that is transparently dysfunctional: an unending cycle of underinvestment in mitigation followed by short bursts of mitigation, followed by a return to underinvestment. The reason is straightforward: During periods when there are no hazards present—which will be most of the