The Irrational Economist: Making Decisions in a Dangerous World - Erwann Michel-Kerjan
Two ideas from behavioral economics, prospect theory and the availability heuristic, help explain the over-updating of virgin risks and the under-updating of experienced risks after an extreme event. One finding of prospect theory is that individuals place excess weight on a probability of zero. The Russian Roulette problem illustrates this phenomenon. Most people are willing to pay more to remove one bullet from a six-chamber revolver when it is the only bullet than when there are two (or more) bullets in the gun. That is, a reduction in risk from 1/6 to zero is worth more to them than a reduction from 2/6 to 1/6, even though the two reductions in the probability of death are equal, and money is actually less valuable in the two-bullet case, since even after paying they face a 1/6 chance of dying.
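The excess weight on zero can be made concrete with a probability-weighting function of the kind used in the prospect-theory literature. The sketch below uses Prelec's one-parameter form with a curvature of α = 0.65; both the functional form and the parameter are our illustrative assumptions, not values from the chapter. Under these assumptions, the step from one bullet to zero carries roughly twice the decision weight of the step from two bullets to one.

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec probability-weighting function w(p) = exp(-(-ln p)^alpha).

    With alpha < 1 it overweights small probabilities, and w(0) = 0 and
    w(1) = 1 are honored exactly, which produces the certainty effect.
    The value alpha = 0.65 is an illustrative choice, not an estimate.
    """
    if p == 0:
        return 0.0
    if p == 1:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

# Perceived value of each bullet removal in the Russian Roulette problem.
drop_to_zero = prelec_weight(1/6) - prelec_weight(0)       # 1/6 -> 0
drop_one_bullet = prelec_weight(2/6) - prelec_weight(1/6)  # 2/6 -> 1/6

print(round(drop_to_zero, 3), round(drop_one_bullet, 3))
```

Although both moves remove the same 1/6 of objective death probability, the move that reaches zero gets about twice the weight under this curve. The same α < 1 curvature also overweights very small probabilities (for example, w(0.001) comes out near 0.03 here), which connects to the over-updating of virgin risks discussed next.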
Similarly, people perceive an increase in risk from, say, 0 percent to 0.1 percent as large, but an equal absolute increase from, say, 5 percent to 5.1 percent as small. This tendency leads to excessive updating for a previously virgin risk and to barely any updating for an experienced risk. Suppose, for example, that an uncontemplated event occurs and fully rational updating would raise the risk from 0.01 percent to 1 percent, a 100-fold increase. We conjecture that individuals might instead produce a posterior risk assessment of, say, 5 percent, a value five times too high.
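What "fully rational updating" means here can be illustrated with a minimal two-hypothesis Bayesian calculation. The specific numbers below are our own, chosen only so that the prior mean risk is 0.01 percent and a single occurrence of the event moves the rational posterior to about 1 percent, matching the 100-fold increase in the example.

```python
# Two candidate annual event probabilities: a benign world and a risky one.
probs = [0.00005, 0.02]           # 0.005% and 2% (illustrative values)
priors = [0.997494, 0.002506]     # prior weights; prior mean risk ~0.01%

prior_mean = sum(w * p for w, p in zip(priors, probs))

# Bayes' rule after observing one occurrence: posterior_i proportional to
# prior_i * p_i, since p_i is the likelihood of seeing the event that year.
joint = [w * p for w, p in zip(priors, probs)]
total = sum(joint)
posteriors = [j / total for j in joint]

posterior_mean = sum(w * p for w, p in zip(posteriors, probs))
print(f"prior mean risk:     {prior_mean:.4%}")      # about 0.01%
print(f"posterior mean risk: {posterior_mean:.4%}")  # about 1%
```

A single observation of a supposedly near-impossible event shifts most of the belief onto the risky hypothesis, producing roughly the 100-fold rational revision; the conjectured human answer of 5 percent overshoots even this large move.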
The enormous change in perception when a probability moves from zero to positive is consistent with evidence from other areas. The theory of just noticeable differences explores such phenomena. For instance, as a noise gets louder, an ever greater change in volume is needed for the change to be perceptible. This parallels our argument that as base probabilities grow, small changes in probability become harder to perceive.
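The usual formalization of just noticeable differences is Weber's law: the smallest detectable change in a stimulus is proportional to its current intensity. A two-line sketch, where the 10 percent Weber fraction is an arbitrary illustrative value:

```python
def just_noticeable_difference(intensity, weber_fraction=0.1):
    """Weber's law: the smallest detectable change grows in proportion
    to the base intensity. The fraction 0.1 is illustrative only."""
    return weber_fraction * intensity

print(just_noticeable_difference(10))   # a quiet sound: small change suffices
print(just_noticeable_difference(100))  # a loud sound: tenfold larger change needed
```

The analogy to risk perception is direct: against a base probability of zero, any positive probability is "noticeable," while against a base of 5 percent an extra 0.1 percent falls below the perceptual threshold.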
The availability heuristic also supports our conjecture. It asserts that individuals assess the probability of an event as higher when examples come to mind more readily. Once an event has occurred, it is much more salient, leading individuals to overestimate its probability. While the first occurrence of a risk makes it suddenly salient, the third occurrence, say, adds little to its availability. This would explain the substantial updating for virgin risks and the comparatively slight updating for previously experienced risks.
When dealing with experienced risks, people may suffer from heuristic confusion, assuming, even if incorrectly, that they are in a situation where data are extensive and the system is well understood. In such situations, another occurrence of the event in question does not add much information. Often, however, we act as though we have more information than we do. For instance, in areas where floods occur on average once every 100 years, even if the process were unchanging, thousands of years of data would be needed to accurately assess the probability of a flood in a given year. Such a long time series of data is rarely available.
The bottom line is that with most of the low-probability experienced risks of great interest that affect society as a whole, we have relatively little experience. This means that the updating of our assessments should often be substantial. Indeed, when the probability distribution of a risk is changing over time, the occurrence of an extreme event should lead to even greater updating.
CONCLUSION
We have made two conjectures about human failures when extrapolating from the observation of low-probability, high-consequence events to predictions about future events. First, we tend to overreact when virgin risks occur. The particular danger, now both available and salient, is likely to be overestimated in the future. Second, and by contrast, we tend to raise our probability estimate insufficiently when an experienced