What’s so good about tit for tat? Three things. First, this strategy always starts out being nice—that is, cooperating. Second, a player won’t be taken for a sucker; a player retaliates when taken advantage of. Third, a player doesn’t have to play against a tit-for-tat player many times to figure out what strategy he or she is using. And it’s easy to switch to tit for tat and to start making the same gains.
The message that emerges from human experiments and computer simulations is also clear and also nice. Under a wide variety of conditions in which people face the strategic problem posed by an iterated PD, there will be selection for certain kinds of strategies. The fittest strategies are all variants on tit for tat: they are initially nice—players using them start out cooperating and are quick to forgive and minimally retaliatory; if opponents switch from defecting to cooperating, the tit-for-tat players switch to cooperation; and the strategies are transparent to other players, so it’s easy to figure out how to do best when playing other tit-for-tat players—just do what they do.
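To make the mechanics concrete, here is a minimal sketch of an iterated prisoner's dilemma in Python. The payoff numbers (temptation 5, reward 3, punishment 1, sucker's payoff 0) are the standard illustrative values, not figures from the text, and the strategy and function names are hypothetical.

```python
# Sketch of an iterated prisoner's dilemma with a tit-for-tat player.
# Payoffs are the conventional illustrative values: T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(my_history, their_history):
    """Start nice; afterwards simply copy the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A 'nasty' strategy for comparison: defect on every round."""
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    """Return the two players' total payoffs over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    # Two tit-for-tat players lock into mutual cooperation,
    print(play(tit_for_tat, tit_for_tat))      # (600, 600)
    # tit for tat loses only the first round to a pure defector,
    print(play(tit_for_tat, always_defect))    # (199, 204)
    # and two pure defectors fare worst against each other.
    print(play(always_defect, always_defect))  # (200, 200)
```

The toy tournament shows the point of the prose: against itself, tit for tat earns the high cooperative payoff round after round, while the defector's one-round gain at the start is swamped over many repetitions.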
If many or even just the most crucial interactions in our evolutionary past were such iterated games, then there would have been strong selection for nice strategies. If blind variation could impose tit for tat or some other nice strategy on the members of one or more groups of hominins, then those groups would flourish. Their members would solve the huge design problem imposed on them, and groups of such cooperators would be immune to the sort of subversion from within that undermined Darwin’s group selection idea for how niceness could be selected for.
It’s a nice story of how niceness could have emerged, but is it any more than a story? Is there evidence that our hominin ancestors actually faced an iterated prisoner’s dilemma often enough, over the time scale needed, for cooperative strategies like tit for tat to be selected for? There is no direct evidence. There was no currency (natural or otherwise) to measure the payoffs, no way to measure the frequency and numbers of interactions, and even if there had been, strategic interaction problems don’t fossilize well. On the other hand, when the choice was between extinction and cooperation, the fact that we are here strongly suggests that cooperative strategies must somehow have been selected for.
Moreover, repeated PD games are not the only ones people face and are not the only ones that select for niceness. Experiments focusing on other games that people (and computers) play and that we know our ancestors faced strongly suggest that fitness and niceness really do go together.
Consider this strategic interaction problem, called “cut the cake”: Two players who don’t know each other and can’t communicate are each asked to bid for some portion of an amount of money, say, $10. Each is told that if the other player’s bid and theirs total more than $10, neither gets anything, and if the total of their two bids is equal to or less than $10, each receives what they bid. In this one-shot game, most people spontaneously bid an amount somewhere close to $5. In this case, rationality does not by itself point to any particular strategy. So why should one be predominant? What is more, the predominance of fair bids is cross-culturally constant. Across a broad range of Western and non-Western, agricultural, pastoral, slash-and-burn, nomadic, and even hunter-gatherer societies, the fair offer is the predominant one.
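The payoff rule of "cut the cake" is simple enough to state in a few lines of code. This is a minimal sketch of the one-shot rule just described; the pie size and the example bids are illustrative.

```python
# Sketch of the one-shot "cut the cake" payoff rule.

def cut_the_cake(bid_a, bid_b, pie=10.0):
    """If the two bids together exceed the pie, neither player gets anything;
    otherwise each player receives exactly what they bid."""
    if bid_a + bid_b > pie:
        return 0.0, 0.0
    return bid_a, bid_b

print(cut_the_cake(5, 5))   # (5, 5): the fair split most people spontaneously offer
print(cut_the_cake(7, 5))   # (0.0, 0.0): over-claiming leaves both with nothing
print(cut_the_cake(3, 5))   # (3, 5): a timid bid is safe but leaves money unclaimed
```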
Consider another game, this one a little more complicated. In this game, one player gets