The Filter Bubble - Eli Pariser

that’s higher, and repeat.”

Programmers face problems like this all the time. What link is the best result for the search term “fish”? Which picture can Facebook show you to increase the likelihood that you’ll start a photo-surfing binge? The directions sound pretty obvious—you just tweak and tune in one direction or another until you’re in the sweet spot. But there’s a problem with these hill-climbing instructions: They’re as likely to end you up in the foothills—the local maximum—as they are to guide you to the apex of Mount Whitney.
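The procedure being described is what programmers call hill climbing, and a toy version makes the trap easy to see. The sketch below is purely illustrative (invented code, not anything a real recommender runs): it climbs a landscape with two peaks, and where it ends up depends entirely on where it starts.

```python
# Minimal hill climbing: nudge the knob in whichever direction improves
# the score, and repeat until no neighbor is better.
def hill_climb(score, x, step=0.01, iters=10_000):
    for _ in range(iters):
        if score(x + step) > score(x):
            x += step
        elif score(x - step) > score(x):
            x -= step
        else:
            break  # no neighbor is better: a peak, but maybe only a local one
    return x

# A two-peaked landscape: a foothill near x=1 and a higher summit near x=4.
def landscape(x):
    return 2 * max(0.0, 1 - abs(x - 1)) + 5 * max(0.0, 1 - abs(x - 4))

print(hill_climb(landscape, 0.5))  # starts near the foothill and stays there
print(hill_climb(landscape, 3.5))  # starts near the summit and finds it
```

The standard fix is the one the chapter itself gestures at: inject randomness, restarting from many different points and keeping the best result.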

This isn’t exactly harmful, but in the filter bubble, the same phenomenon can happen with any person or topic. I find it hard not to click on articles about gadgets, though I don’t actually think they’re that important. Personalized filters play to the most compulsive parts of you, creating “compulsive media” to get you to click things more. The technology mostly can’t distinguish compulsion from general interest—and if you’re generating page views that can be sold to advertisers, it might not care.

The faster the system learns from you, the more likely you are to get trapped in a kind of identity cascade, in which a small initial action—clicking on a link about gardening or anarchy or Ozzy Osbourne—indicates that you’re a person who likes those kinds of things. This in turn supplies you with more information on the topic, which you’re more inclined to click on because the topic has now been primed for you.
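The cascade can be mimicked with a rich-get-richer simulation (a toy model with invented parameters, not any real filter's logic): every time a topic gets clicked, its weight in the filter grows, so it gets shown, and clicked, still more often.

```python
import random

# Toy identity-cascade model (all numbers are invented for illustration):
# a click on a topic multiplies how heavily the filter favors that topic.
def simulate(clicks=200, boost=1.5, seed=0):
    rng = random.Random(seed)
    weights = {"gardening": 1.0, "anarchy": 1.0, "ozzy": 1.0}
    for _ in range(clicks):
        topics, w = zip(*weights.items())
        shown = rng.choices(topics, weights=w)[0]  # the filter picks a story
        weights[shown] *= boost                    # the click reinforces it
    return weights

print(simulate())  # one early random choice snowballs into near-total dominance
```

Starting from three equal weights, a handful of early clicks is enough to lock one topic in: the deep and narrow path.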

Especially once the second click has occurred, your brain is in on the act as well. Our brains act to reduce cognitive dissonance in a strange but compelling kind of unlogic—“Why would I have done x if I weren’t a person who does x—therefore I must be a person who does x.” Each click you take in this loop is another action to self-justify—“Boy, I guess I just really love ‘Crazy Train.’ ” When you use a recursive process that feeds on itself, Cohler tells me, “You’re going to end up down a deep and narrow path.” The reverb drowns out the tune. If identity loops aren’t counteracted through randomness and serendipity, you could end up stuck in the foothills of your identity, far away from the high peaks in the distance.

And that’s when these loops are relatively benign. Sometimes they’re not.

We know what happens when teachers think students are dumb: They get dumber. In an experiment done before the advent of ethics boards, teachers were given test results that supposedly indicated the IQ and aptitude of students entering their classes. They weren’t told, however, that the results had been randomly redistributed among students. After a year, the students who the teachers had been told were bright made big gains in IQ. The students who the teachers had been told were below average had no such improvement.

So what happens when the Internet thinks you’re dumb? Personalization based on perceived IQ isn’t such a far-fetched scenario—Google Docs even offers a helpful tool for automatically checking the grade-level of written text. If your education level isn’t already available through a tool like Acxiom, it’s easy enough for anyone with access to a few e-mails or Facebook posts to infer. Users whose writing indicates college-level literacy might see more articles from the New Yorker; users with only basic writing skills might see more from the New York Post.
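Grade-level checkers of this kind typically rely on a readability formula such as Flesch-Kincaid, which maps average sentence length and average word length onto a U.S. school grade. Here is a rough sketch (the syllable counter is a crude heuristic, and the sample sentences are invented):

```python
import re

def syllables(word):
    # Crude heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def grade_level(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

print(grade_level("The cat sat on the mat."))  # short words: very low grade
print(grade_level("Personalized filtration systems preferentially "
                  "amplify compulsive engagement behaviors."))  # very high grade
```

A filter with access to a few of your e-mails could run exactly this kind of arithmetic and sort you into the New Yorker bucket or the New York Post bucket accordingly.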

In a broadcast world, everyone is expected to read or process information at about the same level. In the filter bubble, there’s no need for that expectation. On one hand, this could be great—vast groups of people who have given up on reading because the newspaper goes over their heads may finally connect with written content. But without pressure to improve, it’s also possible to get stuck in a grade-three world for a long time.

Incidents and Adventures

In some cases, letting algorithms make decisions about what we see and what opportunities we’re offered gives us fairer results. A computer can be made blind to race and gender in ways that humans usually can’t. But that’s only if the relevant algorithms are designed

