The Irrational Economist: Making Decisions in a Dangerous World, Erwann Michel-Kerjan
Of course, this is a rather simple example, and the recommendation for School B comes with several important caveats. It assumes, first, that all conjectures and estimates are indeed correct and, second, that the person's risk attitude is measured in addition to just tradeoffs under certainty.3 Third, our basic analysis above ignores the value of collecting additional information on key features of each school (e.g., social life, academic rigor, job access), as well as the very important option of switching schools if the student is unhappy. This latter consideration introduces a dynamic element into the decision and is referred to as real options analysis. To do it justice would require building a decision tree in which the student can switch schools after the first, second, or perhaps even third year (if allowed), so as to assess the value of future information that may arrive. Such an analysis helps quantify the value of keeping one's options open by making flexible rather than rigid decisions.
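The real-options idea can be made concrete with a minimal one-stage decision-tree sketch. All probabilities and utility numbers below are invented purely for illustration (they do not come from the text); the point is only the structure: the flexible plan replaces the bad outcome with the better of "stay" and "switch," so its expected value can never be lower than the rigid plan's.

```python
# Hypothetical one-stage decision tree: commit to School B outright, versus
# attend School B with the option to switch after year one, once the student
# learns whether the school is a good fit. All numbers are illustrative.

def rigid_value(p_good, u_good, u_bad):
    """Expected utility of committing to School B with no option to switch."""
    return p_good * u_good + (1 - p_good) * u_bad

def flexible_value(p_good, u_good, u_bad, u_switch):
    """After year one the student observes the fit; if it is bad, she takes
    the better of staying (u_bad) and switching (u_switch)."""
    return p_good * u_good + (1 - p_good) * max(u_bad, u_switch)

p = 0.7               # assumed chance School B turns out to be a good fit
good, bad = 100, 20   # utility if it fits / if it does not and she stays
switch = 60           # utility if she transfers elsewhere

print(rigid_value(p, good, bad))             # 0.7*100 + 0.3*20 = 76.0
print(flexible_value(p, good, bad, switch))  # 0.7*100 + 0.3*60 = 88.0
```

The gap between the two numbers (here, 12 units of utility) is the value of keeping the option open, which is exactly what a real options analysis prices.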
NORMATIVE DEBATES
As this simple example shows, decisions can get complicated fast in real-world settings. But the field of decision theory initially focused on more abstract problems, such as what it means to make a rational decision under risk rather than in a setting characterized by real-world messiness. Early theorists conveniently assumed that all good options were already identified (so, little need for creativity) and that all possible states of nature and their consequences would be well defined (an exceedingly unrealistic assumption). The focus in those first few years of decision theory (i.e., in the 1960s) was mostly on the underlying decision principles that should govern a rational choice, akin to studying Newtonian physics in an idealized world of no friction, conservation of energy, and no quantum mechanical effects. Although many papers were written about theoretical issues, two major problems stood out. They concerned normative questions about preferences and beliefs, respectively, as explained next.
On the preference side, one serious problem was that quite a few smart people would knowingly and willingly violate a key assumption of the rational model, as shown by French economist and Nobel Laureate Maurice Allais. To illustrate, which would you prefer: option A, which offers $3,000 for sure, or option B, which entails a gamble offering an 80 percent chance of getting $4,000 and a 20 percent chance of getting zero? Even though the expected value of the gamble is $3,200, most people would prefer $3,000 for sure (option A) since they want to play it safe. But when these two options are embedded in two new gambles C and D, such that C offers a 25 percent chance of option A and D offers a 25 percent chance of option B, many people switch their preference. To most of us, a 25 percent chance of getting $3,000 (which is option C when worked out) is less attractive than a 20 percent chance of getting $4,000 (which is option D when calculated through). This switch of preference violates a key tenet of expected utility theory known as the independence axiom.
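The arithmetic behind the two choice pairs can be checked directly. A short calculation, using the dollar amounts and probabilities from the example above, makes the compounded probabilities and expected values explicit:

```python
# Allais-type choices from the text.
# A: $3,000 for sure; B: 80% chance of $4,000, 20% chance of nothing.
ev_A = 1.00 * 3000   # 3000.0
ev_B = 0.80 * 4000   # 3200.0 -- higher EV, yet most people pick A

# C and D embed A and B in a gamble played with only 25% probability:
# C = 25% chance of $3,000; D = 25% x 80% = 20% chance of $4,000.
p_C, ev_C = 0.25 * 1.00, 0.25 * 3000   # 0.25, 750.0
p_D, ev_D = 0.25 * 0.80, 0.20 * 4000   # 0.2, 800.0

print(ev_A, ev_B)   # 3000.0 3200.0
print(p_C, ev_C)    # 0.25 750.0
print(p_D, ev_D)    # 0.2 800.0
```

Because C and D are just A and B each scaled by the same 25 percent chance of playing at all, the independence axiom says anyone who prefers A over B must also prefer C over D; the common reversal is what makes the pattern a paradox.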
There were also problems on the probability side. One famous debate concerned a paradox posed by Daniel Ellsberg (of later fame for publishing the Pentagon Papers). It involved multiple urns, some with known and some with unknown odds of drawing a winning ball. Instead of estimating the expected value of the unknown probability, and sticking with that estimate, most people exhibit strong aversion to ambiguity, in violation of basic probability principles. A simpler version of the paradox runs as follows. You can choose one of two urns, each containing red and white balls. If you draw a red ball, you win $100; otherwise, you win nothing. You know that urn A has exactly a 50-50 ratio of red and white balls. In urn B, the ratio is unknown. From which
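Why the common preference for urn A is a paradox can be shown with a short sketch. Suppose, hypothetically, that your beliefs about urn B's unknown composition are captured by subjective weights over how many of its 100 balls are red. Under any beliefs symmetric around a 50-50 split, such as the uniform "complete ignorance" prior used below, urn B's expected payoff works out to exactly urn A's, so a strict preference for A reflects ambiguity aversion rather than expected value:

```python
# Ellsberg comparison. Urn A: known 50% chance of red, so expected payoff
# is 0.5 * $100 = $50. Urn B: unknown composition; weights[k] is the
# subjective probability that urn B holds k red balls out of 100.

def expected_payoff(weights, prize=100):
    """Expected winnings from urn B under the given subjective beliefs."""
    return sum(w * (k / 100) * prize for k, w in weights.items())

# Complete ignorance: every composition from 0 to 100 red balls equally likely.
uniform = {k: 1 / 101 for k in range(101)}

print(expected_payoff(uniform))   # ~50.0, the same as urn A
```

Standard probability theory therefore says a decision maker with such beliefs should be indifferent between the urns; the robust experimental preference for the known 50-50 urn is the violation Ellsberg highlighted.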