When asked to explain their strategies, most people identify a norm of equality as dictating their choice in the first game and a commitment to fairness as dictating it in the second. In “cut the cake,” people rarely ask for more than 5 units even when they think they can get away with 6. When asked why, they say they do so out of a sense of fairness. They also say that they get angry when the other player makes choices that strike them as unequal. When they play the disposer role in the ultimatum game, people describe their satisfaction at rejecting offers they think are too low. That feeling of satisfaction is evidently worth more to them than the small amount of money they would have kept by accepting a low offer rather than walking away with nothing. These games tap into strong feelings that lead people to do things that reflect important parts of core morality—norms of reciprocity, fairness, and equality.
In the experiments, the “cut the cake” and ultimatum games are one-shot games, played just once by people who don’t know each other. Consider what happens when you program computers to play these two games over and over, using many different strategies and a wide range of payoffs in the individual games: demand 9, accept anything; demand 5, accept 4 or more; demand 4, accept 3 or more; and so on. Program a little evolution by natural selection into this simulation: if you start out with a lot of different competing strategies, have the simulation filter out the least rewarded strategies every ten or a hundred or a thousand or a million rounds. Replace each player using an eliminated strategy with a new one using one of the surviving strategies. It doesn’t even have to be the best one; program the computer to make a random choice. What you find is that if you play enough rounds, the strategies that do best overall are very often the ones that are “fair” or “equal.” In “cut the cake,” asking for half is a winning strategy most of the time. In the ultimatum game, asking for half and refusing anything much less does very well.
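The recipe is concrete enough to run. Below is a minimal Python sketch of the kind of simulation just described, for the ultimatum game over a 10-unit pie; the population size, the culling schedule, and the encoding of a strategy as a (demand, acceptance threshold) pair are illustrative assumptions, not parameters from any particular study.

    import random
    from collections import Counter

    PIE = 10  # units to divide each round

    # A strategy is (demand, threshold): as proposer, demand that many
    # units; as responder, accept any offer of at least `threshold`.
    STRATEGIES = [(d, a) for d in range(1, PIE) for a in range(0, PIE + 1)]

    def play(proposer, responder):
        """One ultimatum round; returns (proposer payoff, responder payoff)."""
        demand, _ = proposer
        _, threshold = responder
        offer = PIE - demand
        if offer >= threshold:
            return demand, offer  # offer accepted: the pie is split
        return 0, 0               # offer rejected: both get nothing

    def simulate(n_players=100, generations=200, rounds=1000, cull_frac=0.2):
        # Start with a population of randomly chosen strategies.
        population = [random.choice(STRATEGIES) for _ in range(n_players)]
        for _ in range(generations):
            scores = [0] * n_players
            for _ in range(rounds):
                i, j = random.sample(range(n_players), 2)
                gain_i, gain_j = play(population[i], population[j])
                scores[i] += gain_i
                scores[j] += gain_j
            # Filter out the least rewarded strategies and replace each
            # eliminated player with a *randomly chosen* survivor's
            # strategy -- not necessarily the best one.
            ranked = sorted(range(n_players), key=lambda k: scores[k])
            n_cull = int(cull_frac * n_players)
            survivors = [population[k] for k in ranked[n_cull:]]
            population = survivors + [random.choice(survivors)
                                      for _ in range(n_cull)]
        return population

    if __name__ == "__main__":
        final = simulate()
        # Tally the surviving demands; the text's claim is that "fair"
        # strategies (demand about half, refuse much less) tend to win.
        print(Counter(demand for demand, _ in final).most_common(5))

Replacing eliminated players with a random survivor’s strategy, rather than the top scorer’s, mirrors the point in the text: selection does not have to be optimizing for fair strategies to spread over enough rounds.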
What does best in the long run doesn’t always do best in the short run, but human evolution was (and is) a long-run process. These computer simulation results, along with similar ones for repeated prisoner’s dilemma games, strongly suggest one thing: if we evolved in circumstances that had these kinds of payoffs, there would have been strong selection for core morality. There would have been strong selection for anything that made people adopt norms of fairness, equity, and cooperation.
But how does natural selection get people to adopt such norms? How does it shape such adaptations? What is the quick and dirty solution to the design problems that arise in situations of iterated strategic choice? This problem looks like it’s too hard to be solved by genetically based natural selection. Maybe if there were genes for playing tit for tat, they would be selected for. But at least in the human case, if not in animal models, such genes seem unlikely. The solution has to involve some cultural natural selection. It can’t, however, help itself to very much culture.
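For concreteness, here is what a tit for tat player actually does: cooperate on the first move, then copy whatever the opponent did last. The sketch below, in the same spirit as the one above, assumes the standard textbook prisoner’s dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5 for defecting against a cooperator, 0 for being the sucker); those numbers and the round count are illustrative choices, not figures from the text.

    C, D = "cooperate", "defect"

    # PAYOFFS[(my move, their move)] -> my payoff (standard textbook values)
    PAYOFFS = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

    def tit_for_tat(opponent_moves):
        """Cooperate first; thereafter copy the opponent's last move."""
        return C if not opponent_moves else opponent_moves[-1]

    def always_defect(opponent_moves):
        return D

    def match(strategy_a, strategy_b, rounds=100):
        """Iterated play; returns each strategy's total payoff."""
        moves_a, moves_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strategy_a(moves_b)  # A sees B's past moves
            b = strategy_b(moves_a)  # B sees A's past moves
            score_a += PAYOFFS[(a, b)]
            score_b += PAYOFFS[(b, a)]
            moves_a.append(a)
            moves_b.append(b)
        return score_a, score_b

    if __name__ == "__main__":
        print(match(tit_for_tat, tit_for_tat))    # sustained cooperation: (300, 300)
        print(match(tit_for_tat, always_defect))  # loses only round one: (99, 104)

The rule is trivially easy to state in code, which only sharpens the worry in the text: being simple to program is a very different thing from being encoded in genes.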