The Net Delusion - Evgeny Morozov [57]
In the not-so-distant future, a banker who peruses nothing but Reuters and the Financial Times, and whose online friends are other bankers, would be left alone to do whatever she wants, even browse Wikipedia pages about human rights violations. In contrast, a person of unknown occupation who occasionally reads the Financial Times but is also connected to five well-known political activists through Facebook, and who has written blog comments containing words like “democracy” and “freedom,” would be allowed to visit only government-run websites (or, if she were an important intelligence target, she would be allowed to visit other sites, with her online activities closely monitored).
When Censors Understand You Better Than Your Mom Does
Is such customization of censorship actually possible? Would censors know so much about us that they might eventually be able to make automated decisions about not just each individual but each individual acting in a particular context?
If online advertising is anything to judge by, such behavioral precision is not far away. Google already bases the ads it shows us on our searches and the text of our emails; Facebook aspires to make its ads much more fine-grained, taking into account what kind of content we have previously “liked” on other sites and what our friends are “liking” and buying online. Imagine building censorship systems that are as detailed and fine-tuned to the information needs of their users as the behavioral advertising we encounter every day. The only difference between the two is that one system learns everything about us to show us more relevant advertisements, while the other one learns everything about us to ban us from accessing relevant pages. Dictators have been somewhat slow to realize that the customization mechanisms underpinning so much of Web 2.0 can be easily turned to purposes that are much more nefarious than behavioral advertising, but they are fast learners.
By paying so much attention to the most conventional and certainly blandest way of Internet control—blocking access to particular URLs—we may have missed more fundamental shifts in the field. Internet censorship is poised to grow in both depth, looking deeper and deeper into the kinds of things we do online and even offline, and breadth, incorporating more and more information indicators before a decision to censor something is made.
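The depth-and-breadth idea can be caricatured in a few lines of code. The sketch below is purely illustrative and hypothetical: the indicators, weights, and thresholds are invented for this example, and no real censorship system described in the text is being reproduced. It simply shows how a censor might, like a behavioral-advertising engine, combine many signals about a person and her context before deciding whether to block, monitor, or ignore her.

```python
def censorship_decision(profile):
    """Return 'allow', 'monitor', or 'block' from weighted indicators.

    All indicators and weights are hypothetical, chosen only to mirror
    the banker-versus-activist-contact scenario in the text.
    """
    score = 0
    # Social graph: ties to known activists weigh heavily.
    score += 3 * len(profile.get("activist_contacts", []))
    # Speech: flagged keywords in past blog comments.
    score += 2 * sum(profile.get("comments", "").lower().count(word)
                     for word in ("democracy", "freedom"))
    # Occupation: a "safe" profession lowers the score.
    if profile.get("occupation") == "banker":
        score -= 1

    if score <= 0:
        return "allow"    # left alone, like the banker
    if profile.get("intelligence_target"):
        return "monitor"  # allowed to browse, but closely watched
    return "block"        # restricted to approved sites


banker = {"occupation": "banker", "activist_contacts": [], "comments": ""}
unknown = {"activist_contacts": ["a1", "a2", "a3", "a4", "a5"],
           "comments": "we need democracy and freedom"}

print(censorship_decision(banker))   # → allow
print(censorship_decision(unknown))  # → block
```

The point of the caricature is the one Morozov makes: the scoring machinery is structurally identical to ad targeting; only the action taken on the score differs.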
When in the summer of 2009 the Chinese government announced that it would require all computers sold in the country to have one special piece of software called GreenDam installed on them, most media accounts focused on how monumental the plan seemed to be or how poorly the authorities handled GreenDam’s rollout. As a result of heavy domestic and international criticism, the plan was scrapped, but millions of computers in Chinese schools and Internet cafés still continue to use the software to this day.
Internal politics aside,