Of course, there are circumstances in which we may care about very small improvements in prediction accuracy. In online advertising or high-frequency stock trading, for example, one might be making millions or even billions of predictions every day, and large sums of money may be at stake. Under these circumstances, it’s probably worth the effort and expense to invest in sophisticated methods that can exploit the subtlest patterns. But in just about any other business, from making movies or publishing books to developing new technologies, where you get to make only dozens or at most hundreds of predictions a year, and where the predictions you are making are usually just one aspect of your overall decision-making process, you can probably predict about as well as possible with the help of a relatively simple method.
The one method you don’t want to use when making predictions is to rely on a single person’s opinion—especially not your own. The reason is that although humans are generally good at perceiving which factors are potentially relevant to a particular problem, they are generally bad at estimating how important one factor is relative to another. In predicting the opening weekend box office revenue for a movie, for example, you might think that variables such as the movie’s production and marketing budgets, the number of screens on which it will open, and advance ratings by reviewers are all highly relevant—and you’d be correct. But how much should you weight a slightly worse-than-average review against an extra $10 million marketing budget? It isn’t clear. Nor is it clear, when deciding how to allocate a marketing budget, how much people will be influenced by the ads they see online or in a magazine versus what they hear about the product from their friends—even though all these factors are likely to be relevant.
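As a rough illustration (my sketch, not from the text): the weighting question can be posed as a statistical estimation problem. The Python snippet below uses entirely made-up numbers for ten hypothetical movies, fits a simple linear model to the historical data, and reads the trade-off between review scores and marketing dollars directly off the fitted weights.

    import numpy as np

    # Hypothetical historical data for ten movies (invented numbers):
    # marketing budget ($M), average advance review score (0-10),
    # and opening-weekend revenue ($M).
    marketing = np.array([20, 35, 50, 15, 60, 40, 25, 55, 30, 45], dtype=float)
    reviews = np.array([6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0, 6.8, 7.2, 6.2])
    revenue = np.array([57, 76, 88, 59, 101, 86, 54, 101, 71, 86], dtype=float)

    # Fit a linear model: revenue ~ w0 + w_m * marketing + w_r * reviews.
    X = np.column_stack([np.ones_like(marketing), marketing, reviews])
    (w0, w_m, w_r), *_ = np.linalg.lstsq(X, revenue, rcond=None)

    # The fitted weights answer the question intuition struggles with:
    # how much marketing offsets a one-point drop in reviews?
    print(f"revenue per extra $1M of marketing: ${w_m:.2f}M")
    print(f"revenue per extra review point:     ${w_r:.2f}M")
    print(f"marketing that offsets a one-point review drop: ${w_r / w_m:.1f}M")

The specific numbers here are invented; the point is that fitting even a crude model makes the implicit trade-off explicit, which is exactly the step human intuition handles poorly.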
You might think that making these sorts of judgments accurately is what experts would be good at, but as Tetlock showed in his experiment, experts are just as bad at making quantitative predictions as nonexperts and maybe even worse.9 The real problem with relying on experts, however, is not that they are appreciably worse than nonexperts, but rather that because they are experts we tend to consult only one at a time. Instead, what we should do is poll many individual opinions—whether experts or not—and take the average. Precisely how you do this, it turns out, may not matter so much. With all their fancy bells and whistles, prediction markets may produce slightly better predictions than a simple method like a poll, but the difference between the two is much less important than the gain from simply averaging lots of opinions somehow. Alternatively, one can estimate the relative importance of the various predictors directly from historical data, which is really all a statistical model accomplishes. And once again, although a fancy model may work slightly better than a simple model, the difference is small relative to using no model at all.10 At the end of the day, both models and crowds accomplish the same objective. First, they rely on some version of human judgment to identify which factors are relevant to the prediction in question. And second, they estimate and weight the relative importance of each of these factors. As the psychologist Robyn Dawes once pointed out, “the whole trick is to know what variables to look at and then know how to add.”11
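To make Dawes's "know how to add" concrete, here is a toy simulation (again my addition, not the book's) of why averaging many opinions works: each forecaster is assumed to see the true value through independent, unbiased noise, and that noise largely cancels in the average.

    import random

    random.seed(0)
    true_value = 100.0  # the quantity being forecast
    crowd_size = 200

    # Each forecaster sees the truth plus independent, unbiased noise.
    forecasts = [true_value + random.gauss(0, 20) for _ in range(crowd_size)]

    # Compare a typical individual's error with the error of the average.
    mean_individual_error = sum(abs(f - true_value) for f in forecasts) / crowd_size
    crowd_forecast = sum(forecasts) / crowd_size
    crowd_error = abs(crowd_forecast - true_value)

    print(f"typical individual error:   {mean_individual_error:.1f}")
    print(f"error of averaged forecast: {crowd_error:.1f}")

Note the assumption doing the work: if every forecaster shared the same bias, averaging could not remove it. The gain comes from errors that are independent of one another, which is also why polling many opinions beats consulting one expert.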
By applying this trick consistently, one can also learn over time which predictions can be made with relatively low error, and which cannot. All else being equal, for example, the further in advance you predict the outcome of an event, the larger your error will be.