Everything Is Obvious: Once You Know the Answer - Duncan J. Watts
This seems like an obvious point, but it is widely misunderstood.19 Advertisers, in fact, often pay a premium to reach customers they think are most likely to buy their products—because they have bought their products (e.g., Pampers) in the past; or because they have bought products in the same category (e.g., a competitor to Pampers); or because their attributes and circumstances make them likely to do so soon (e.g., a young couple expecting their first child). Targeted advertising of this kind is often held up as the quintessence of a scientific approach. But again, at least some of those consumers, and possibly many of them, would have bought the products anyway. As a result, the ads were just as wasted on them as they were on consumers who saw the ads and weren’t interested. Viewed this way, the only ads that matter are those that sway the marginal consumer—the one who ends up buying the product, but who wouldn’t have bought it had they not seen the ad. And the only way to determine the effect on marginal consumers is to conduct an experiment in which the decision about who sees the ad and who doesn’t is made randomly.
FIELD EXPERIMENTS
A common objection to randomized experiments of this kind is that they can be difficult to run in practice. If you put up a billboard by the highway or place an ad in a magazine, it’s generally impossible to know who sees it—even consumers themselves are often unaware of the ads they have seen. Moreover, the effects can be hard to measure. Consumers may make a purchase days or even weeks later, by which stage the connection between seeing the ad and acting on it has been lost. These are reasonable objections, but increasingly they can be dealt with, as three of my colleagues at Yahoo!—David Reiley, Taylor Schreiner, and Randall Lewis—demonstrated recently in a pioneering “field experiment” involving 1.6 million customers of a large retailer who were also active Yahoo! users.
To perform the experiment, Reiley and company randomly assigned 1.3 million users to the “treatment” group, meaning that when they arrived at Yahoo!-operated websites, they were shown ads for the retailer. The remaining 300,000, meanwhile, were assigned to the “control” group, meaning that they did not see these ads even if they visited exactly the same pages as the treatment group members. Because the assignment of individuals to treatment and control groups was random, the differences in behavior between the two groups had to be caused by the advertising itself. And because all the participants in the experiment were also in the retailer’s database, the effect of the advertising could be measured in terms of their actual purchasing behavior—up to several weeks after the campaign itself concluded.20
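The logic of this design can be sketched in a few lines of Python. To be clear, this is an illustration, not Yahoo!’s actual code or data: the user IDs, spend figures, and helper names are hypothetical, and only the roughly 81 percent treatment share (1.3 million of 1.6 million) is taken from the experiment described above.

```python
import random

def assign_groups(user_ids, treatment_fraction=0.8125, seed=42):
    """Randomly split users into a treatment group (shown the ads)
    and a control group (not shown the ads), mirroring the experiment's
    1.3M / 300K split (~81% treatment)."""
    rng = random.Random(seed)
    treatment, control = [], []
    for uid in user_ids:
        (treatment if rng.random() < treatment_fraction else control).append(uid)
    return treatment, control

def estimate_lift(purchases, treatment, control):
    """purchases: dict mapping user id -> post-campaign spend (hypothetical).
    Because group assignment was random, the difference in average spend
    between the groups estimates the causal effect of the advertising."""
    mean = lambda ids: sum(purchases.get(u, 0.0) for u in ids) / len(ids)
    return mean(treatment) - mean(control)
```

The key point the sketch makes concrete is that randomness does the causal work: since nothing but chance decides who lands in which group, any systematic difference in purchasing between the two groups can be attributed to the ads.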
Using this method, the researchers estimated that the additional revenue generated by the advertising was roughly four times the cost of the campaign in the short run, and possibly much higher over the long run. Overall, therefore, they concluded that the campaign had in fact been effective—a result that was clearly good news both for Yahoo! and the retailer. But what they also discovered was that almost all the effect was for older consumers—the ads were largely ineffective for people under forty. At first, this latter result seems like bad news. But the right way to think about it is that finding out that something doesn’t work is also the first step toward learning what does work. For example, the advertiser could experiment with a variety of different approaches to appeal to younger people, including different formats, different styles, or even different sorts of incentives and offers. It’s entirely possible that something would work, and it would be valuable to figure out