In the case of hardwiring, nature usually has millions of years to fine-tune behavior to its environment; in the case of learning, it often has time only to hit on minimally satisfactory strategies.
Let’s set up the problem Darwin tried to solve with group selection in the way that game theorists think about it. Often, they start with a particular model, the famous prisoner’s dilemma game (PD for short). Suppose you and I set out to rob a bank and we are caught with our safecracking tools before we can break in. We are separated and informed of our rights as criminal suspects and then offered the following “deals.” If neither of us confesses, we’ll be charged with possession of safecracking tools and imprisoned for one year each. If we both confess to attempted bank robbery, a more serious crime, we’ll each receive a five-year sentence. If, however, only one of us confesses and the other remains silent, the confessor will receive a suspended one-year sentence in return for his or her confession, and the other will receive a ten-year sentence for attempted bank robbery. The question each of us faces is whether or not to confess.
As a rational agent, I want to minimize my time in jail. If I think you’re going to confess, then to minimize my prison sentence, I had better confess, too. Otherwise, I’ll end up with ten years and you’ll just get a suspended one-year sentence. But come to think of it, if I confess and you don’t, then I’ll get the suspended one-year sentence. Now it begins to dawn on me that whatever you do, I had better confess. If you keep quiet and I confess, I’ll get the shortest jail sentence possible. If you confess, then I’d be crazy not to confess as well; if I don’t, I’ll get the worst possible outcome, ten years. So, I conclude that no matter what you do, the only rational thing for me to do is to confess. Game theorists call a strategy like this, one that is best no matter what the other player does, a dominant strategy.
Now, how about your reasoning process? Well, it’s exactly the same as mine. If I confess, you’d be a fool to do otherwise, and if I don’t, you’d still be a fool to do otherwise. You have a dominant strategy—the most rational under the circumstances—and it’s the exact same strategy as mine.
The result is that we both confess and both get five years in the slammer. Where’s the dilemma? Well, if we had cooperated, we would both have gotten one year. Looking out for our own interests leads us to a “suboptimal” outcome, one less desirable than another that is attainable. Rationality leads to suboptimality; there is an outcome that both rational egoists prefer but can’t reach. Hence, the dilemma.
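To make the arithmetic concrete, here is a minimal sketch, not anything from the text itself, that encodes the jail terms described above and checks the two claims just made: confessing is each prisoner's dominant strategy, and the outcome it produces is worse for both than mutual silence. One assumption of the sketch is that a suspended one-year sentence counts as zero years actually served.

```python
# A minimal sketch of the prisoner's dilemma described above.
# Payoffs are years in jail for (my choice, your choice); lower is better.
# Assumption: a suspended one-year sentence is treated as 0 years served.

JAIL = {
    ("silent",  "silent"):  (1, 1),    # both charged only with possessing safecracking tools
    ("confess", "confess"): (5, 5),    # both confess to attempted bank robbery
    ("confess", "silent"):  (0, 10),   # I confess, you stay silent
    ("silent",  "confess"): (10, 0),   # you confess, I stay silent
}

ACTIONS = ("silent", "confess")

def my_best_reply(your_action):
    """Return the action that minimizes my jail time, given what you do."""
    return min(ACTIONS, key=lambda mine: JAIL[(mine, your_action)][0])

# Confessing is my best reply no matter what you do: a dominant strategy.
assert all(my_best_reply(yours) == "confess" for yours in ACTIONS)

# By symmetry the same holds for you, so both of us confess ...
equilibrium = JAIL[("confess", "confess")]   # (5, 5)

# ... even though mutual silence would leave each of us better off.
cooperative = JAIL[("silent", "silent")]     # (1, 1)
assert all(c < e for c, e in zip(cooperative, equilibrium))

print("what rational egoists reach (both confess):", equilibrium)
print("the better outcome they cannot reach (both silent):", cooperative)
```

The two assertions are just the dilemma restated: each of us does best by confessing whatever the other does, yet the pair of us would do better if neither did.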
Why should this model of strategic interaction have any bearing on an evolutionary account of social cooperation? Because prisoner’s dilemmas are all over the place in social and biological life, and they have been since before there were humans. They must have been especially frequent and important among scavenging hominins on the African savanna a million years ago. Any way you look at it, two hungry human scavengers coming on a recently killed carcass face several distinct prisoner’s dilemmas. They know that a fresh kill means that there is a serious predator around, one they can’t deal with. They know also that the fresh kill will attract other scavengers they can’t deal with either—hyena packs. If they both scavenge, they’ll get some food, but the chances of being surprised by predators or more