The Filter Bubble - Eli Pariser
On the other hand, when Harvard researchers Terence Burnham and Brian Hare asked volunteers to play a game in which they could choose to donate money or keep it, a picture of the friendly-looking robot Kismet increased donations by 30 percent. Humanlike agents tend to make us clam up about the intimate details of our lives, because they make us feel as if we’re actually around other people. For elderly folks living alone or a child recovering in a hospital, a virtual or robotic friend can be a great relief from loneliness and boredom.
This is all to the good. But humanlike agents also have a great deal of power to shape our behavior. “Computers programmed to be polite, or to evidence certain personalities,” Calo writes, “have profound effects on the politeness, acceptance, and other behavior of test subjects.” And because they engage with people, they can pull out implicit information that we’d never intend to divulge. A flirty robot, for example, might be able to read subconscious cues—eye contact, body language—to quickly identify personality traits of its interlocutor.
The challenge, Calo says, is that it’s hard to remember that humanlike software and hardware aren’t human at all. Advertars or robotic assistants may have access to the whole set of personal data that exists online—they may know more about you, more precisely, than your best friend. And as persuasion and personality profiling get better, they’ll develop an increasingly nuanced sense of how to shift your behaviors.
Which brings us back to the advertar. In an attention-limited world, lifelike, and especially humanlike, signals stand out—we’re hardwired to pay attention to them. It’s far easier to ignore a billboard than an attractive person calling your name. And as a result, advertisers may well decide to invest in technology that allows them to insert human advertisements into social spaces. The next attractive man or woman who friends you on Facebook could turn out to be an ad for a bag of chips.
As Calo puts it, “people are not evolved to twentieth-century technology. The human brain evolved in a world in which only humans exhibited rich social behaviors, and a world in which all perceived objects were real physical objects.” Now all that’s shifting.
The Future Is Already Here
The future of personalization is driven by a simple economic calculation. Signals about our personal behavior and the computing power necessary to crunch through them are becoming cheaper than ever to acquire. And as that cost collapses, strange new possibilities come within reach.
Take facial recognition. Using MORIS, a $3,000 iPhone app, the police in Brockton, Massachusetts, can snap a photo of a suspect and check his or her identity and criminal record in seconds. Tag a few pictures with Picasa, Google’s photo-management tool, and the software can already pick out who’s who in a collection of photos. And according to Eric Schmidt, the same is true of Google’s cache of images from the entire Web. “Give us fourteen images of you,” he told a crowd of technologists at the Techonomy Conference in 2010, “and we can find other images of you with ninety-five percent accuracy.”
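The core idea behind this kind of face search can be sketched simply: each known photo is reduced to a numeric "embedding" vector, and a new photo is identified by finding the closest known vector. The toy sketch below (the names, vectors, and threshold are all made-up illustrations, not real face data or any system's actual method) shows the matching step using cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical "face embeddings" for people already tagged in photos.
known_faces = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(query_embedding, threshold=0.95):
    """Return the best-matching name, or None if nothing is close enough."""
    best_name, best_score = None, 0.0
    for name, embedding in known_faces.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A new photo whose embedding is very close to "alice"
print(identify([0.88, 0.12, 0.31]))  # → alice
```

Real systems compute embeddings from pixel data with trained models and search millions of vectors, but the match-against-a-database step is what turns a handful of tagged photos into a way to find every other photo of the same face.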
As of the end of 2010, however, this feature isn’t available in Google Image Search. Face.com, an Israeli start-up, may offer the service before the search giant does. It’s not every day that a company develops a highly useful and world-changing technology and then waits for a competitor to launch it first. But Google has good reason to be concerned: The ability to search by face will shatter many of our cultural illusions about privacy and anonymity.
Many of us will be caught in flagrante delicto. It’s not just that your friends (and enemies) will be able to easily find pictures other people have taken of you—as if the whole Internet has been tagged on Facebook. They will also