The Irrational Economist: Making Decisions in a Dangerous World - Erwann Michel-Kerjan
UNREASONABLE FRIENDS AND ENEMIES
Being recognized is no guarantee of success. There are honest brokers in the world of deeds, eager for scientifically sound proposals that they can carry forward. But there are also practitioners with foregone conclusions, looking for experts whose work they can invoke, in order to justify positions that they have already adopted. They may care little about the quality of our work, as long as it points in their direction and can be cited as a “neutral” source of truth. If our fame advances their cause, then they may help us to find better speaking engagements, better luck with our op-eds, and better consulting opportunities. But we are just means to their predetermined ends.
How can we tell whether we are being “kept” by the powerful, rather than getting well-deserved audiences? One positive sign is finding that our supporters have followed a discovery process paralleling our own, independently discovering a behavioral regularity that we have documented and explained. A second positive sign is finding that our supporters care about the details of our work and the science underlying it—and not just about our convenient truths. A third positive sign is finding that our supporters are committed to empirical evaluation, meaning that they care about how well a program works and not just that it exists, bringing them wealth or power.
Just as acclaim can be embarrassing, if it means that we are being used, so can scorn be an honor, if it means that our ideas are so powerful that they must be attacked. Social scientists study basic human activities, which others are already addressing (e.g., politicians, pundits, public affairs offices). Unless we fashion roles for those incumbents, we threaten them. A community activist once told me that she knew her views mattered when her life was threatened. Without worthy enemies, perhaps we're not saying much.
OUR PROGRAM?
If we get the chance to implement our work, we then need to ask whether the programs that emerge are faithful to our ideas. Implementation always requires some adaptation. However interdisciplinary our research groups might be, they are unlikely to include all the relevant expertise. For example, Howard Kunreuther's early work on the National Flood Insurance Program (NFIP) found that, despite its many attractive features, the NFIP had not adequately addressed insurance agents' compensation. Without appropriate commissions, agents had little incentive to sell a policy, however attractive it might be for their clients. The federal government later adjusted those commissions—a result of the research project's commitment to empirical research, capable of producing surprising results.
As mentioned, evaluation research is essential, if one cares about a program’s impacts (and not just its existence). Evaluation research begins by assessing how faithful a program’s implementation is to its underlying concept. That assessment keeps programs from being judged unfairly, when flawed imitations bear their names. For example, the failure of poorly marketed flood insurance does not prove that insurance can never work, as a strategy for internalizing costs, reducing moral hazard, and sending price signals. Similarly, the failure of poorly executed risk communications does not prove that “information doesn’t work.” On the other hand, if a program cannot be implemented faithfully, then it may just be an ivory-tower idea.
As a risk management program becomes more complicated, its implementation requires more and more kinds of expertise—to the point where the big idea may be just a banner under which multiple specialties ply their crafts. If so, then we should be proud of having opened the door to those experts, even if the final product is less distinctively our own. Complex programs provide less clear tests