The Filter Bubble - Eli Pariser
A better approach is to use sites that give users more control over, and visibility into, how their filters work and how their personal information is used.
For example, consider the difference between Twitter and Facebook. In many ways, the two sites are very similar. They both offer people the opportunity to share blips of information and links to videos, news, and photographs. They both offer the opportunity to hear from the people you want to hear from and screen out the people you don’t.
But Twitter’s universe is based on a few very simple, mostly transparent rules—what one Twitter supporter called “a thin layer of regulation.” Unless you go out of your way to lock your account, everything you do is public to everyone. You can subscribe to anyone’s feed without their permission, and then you see a time-ordered stream of updates that includes everything the people you follow say.
In comparison, the rules that govern Facebook’s information universe are maddeningly opaque and seem to change almost daily. If you post a status update, your friends may or may not see it, and you may or may not see theirs. (This is true even in the Most Recent view that many users assume shows all of the updates—it doesn’t.) Different types of content are likely to show up at different rates—if you post a video, for example, it’s more likely to be seen by your friends than a status update. And the information you share with the site itself is private one day and public the next. There’s no excuse, for example, for asking users to declare which Web sites they’re “fans” of with the promise that it’ll be shown only to their friends, and then releasing that information to the world, as Facebook did in 2009.
Because Twitter operates on the basis of a few simple, easily understandable rules, it’s also less susceptible to what venture capitalist Brad Burnham (whose Union Square Ventures was Twitter’s primary early investor) calls the tyranny of the default. There’s great power in setting the default option when people are given a choice. Dan Ariely, the behavioral economist, illustrates the principle with a chart showing organ donation rates in different European countries. In England, the Netherlands, and Austria, the rates hover around 10 percent to 15 percent, but in France, Germany, and Belgium, donation rates are in the high 90s. Why? In the first set of countries, you have to check a box giving permission for your organs to be donated. In the second, you have to check a box to say you won’t give permission.
If we’ll let defaults determine the fate of friends who need lungs and hearts, we’ll certainly let them determine how we share information a lot of the time. That’s not because we’re stupid. It’s because we’re busy, have limited attention with which to make decisions, and generally trust that if everyone else is doing something, it’s OK for us to do it too. But this trust is often misplaced. Facebook has wielded this power with great intentionality—shifting the defaults on privacy settings in order to encourage masses of people to make their posts more public. And because software architects clearly understand the power of the default and use it to make their services more profitable, their claim that users can opt out of giving their personal information seems somewhat disingenuous. With fewer rules and a more transparent system, there are fewer defaults to set.
Facebook’s PR department didn’t return my e-mails requesting an interview (perhaps because MoveOn’s critical view of Facebook’s privacy practices is well known). But it would probably argue that it gives its users far more choice and control about how they use the service than Twitter does. And it’s true that Facebook’s options control panel lists scores of different options for Facebook users.
But to give people control, you have to make clearly evident what the options are,