The Filter Bubble - Eli Pariser [89]
In short, at the time of this writing, Twitter makes it pretty straightforward to manage your filter and understand what’s showing up and why, whereas Facebook makes it nearly impossible. All other things being equal, if you’re concerned about having control over your filter bubble, better to use services like Twitter than services like Facebook.
We live in an increasingly algorithmic society, where our public functions, from police databases to energy grids to schools, run on code. We need to recognize that societal values about justice, freedom, and opportunity are embedded in how code is written and what it solves for. Once we understand that, we can begin to figure out which variables we care about and imagine how we might solve for something different.
For example, advocates looking to solve the problem of political gerrymandering—the backroom process of carving up electoral districts to favor one party or another—have long suggested that we replace the politicians involved with software. It sounds pretty good: Start with some basic principles, input population data, and out pops a new political map. But it doesn’t necessarily solve the basic problem, because what the algorithm solves for has political consequences: Whether the software aims to group by cities or ethnic groups or natural boundaries can determine which party keeps its seats in Congress and which doesn’t. And if the public doesn’t pay close attention to what the algorithm is doing, it could have the opposite of the intended effect—sanctioning a partisan deal with the imprimatur of “neutral” code.
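The point can be made concrete with a toy sketch (not a real redistricting algorithm; the precinct numbers and groupings are invented for illustration): the same voters, partitioned into districts two different ways, hand a different number of seats to the same party.

```python
# Hypothetical precincts: each value is the percentage of that
# precinct's 100 voters supporting party A. Overall, A has exactly
# half the votes (300 of 600).
precincts = [70, 70, 40, 40, 40, 40]

def seats_for_a(districting):
    """Count districts where party A's average vote share exceeds 50%."""
    wins = 0
    for district in districting:
        share = sum(precincts[i] for i in district) / len(district)
        if share > 50:
            wins += 1
    return wins

# Plan 1 pairs adjacent precincts: A's supporters are packed into
# one overwhelming district and lose the other two.
plan_1 = [(0, 1), (2, 3), (4, 5)]
# Plan 2 groups the same precincts differently: A's supporters are
# spread across two districts, each won narrowly.
plan_2 = [(0, 2), (1, 3), (4, 5)]

print(seats_for_a(plan_1))  # → 1
print(seats_for_a(plan_2))  # → 2
```

Nothing about the vote totals changed between the two plans; only the grouping rule the "neutral" code was told to optimize for.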
In other words, it’s becoming more important to develop a basic level of algorithmic literacy. Increasingly, citizens will have to pass judgment on programmed systems that affect our public and national life. And even if you’re not fluent enough to read through thousands of lines of code, the building-block concepts—how to wrangle variables, loops, and memory—can illuminate how these systems work and where they might make errors.
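A minimal sketch of those building blocks, with arbitrary numbers chosen purely for illustration, fits in a few lines:

```python
scores = [3, 7, 2]    # a variable: a list of values held in memory
total = 0             # an accumulator variable, updated as we go
for score in scores:  # a loop: one pass per element of the list
    total += score
print(total)          # → 12
```

Reading even a fragment like this, one can ask the literate citizen's questions: what data goes in, what is being accumulated, and what the result is used to decide.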
Especially at the beginning, learning the basics of programming is even more rewarding than learning a foreign language. With a few hours and a basic platform, you can have that “Hello, World!” experience and start to see your ideas come alive. And within a few weeks, you can be sharing these ideas with the whole Web. Mastery, as in any profession, takes much longer, but the payoff for a limited investment in coding is fairly large: It doesn’t take long to become literate enough to understand what most basic bits of code are doing.
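In Python, for instance, the traditional "Hello, World!" first program is a single line:

```python
greeting = "Hello, World!"
print(greeting)  # prints: Hello, World!
```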
Changing our own behavior is a part of the process of bursting the filter bubble. But it’s of limited use unless the companies that are propelling personalization forward change as well.
What Companies Can Do
It’s understandable that, given their meteoric rises, the Googles and Facebooks of the online world have been slow to realize their responsibilities. But it’s critical that they recognize their public responsibility soon. It’s no longer sufficient to say that the personalized Internet is just a function of relevance-seeking machines doing their job.
The new filterers can start by making their filtering systems more transparent to the public, so that it’s possible to have a discussion about how they’re exercising their responsibilities in the first place.
As Larry Lessig says, “A political response is possible only when regulation is transparent.” And there’s more than a little irony in the fact that companies whose public ideologies revolve around openness and transparency are so opaque themselves.
Facebook, Google, and their