If we can take as a limit case the recording of every single impression experienced in the course of a life, then it seems fair to say that all the other issues we're interested in addressing will be found somewhere inside this envelope. And if this is so—and there's currently little reason to believe otherwise—we can safely assume that even devices with small form factors will be able to contain usefully large storage arrays.
Going a step further still, such high local information densities begin to suggest the Aleph of Borges (and William Gibson): a single, solid-state unit that contains high-fidelity representations of literally everything, "the only place on earth where all places are." As strange as this poetic notion may sound in the context of an engineering discussion, the numbers back it up; it's hard to avoid the conclusion that we are entering a regime in which arbitrarily large bodies of information can be efficiently cached locally, ready to hand for whatever application requires them.
If this is too rich for your blood, Roy Want, Gaetano Borriello, and their co-authors point out, in their 2002 paper "Disappearing Hardware," that we can at least "begin to use storage in extravagant ways, by prefetching, caching and archiving data that might be useful later, lessening the need for continuous network connectivity."
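Their suggestion translates into a familiar programming pattern. What follows is a minimal sketch of prefetch-and-cache behavior in Python; the function names and on-disk cache layout are hypothetical illustrations of the idea, not anything specified in "Disappearing Hardware."

```python
# A minimal sketch of "prefetching, caching and archiving data that
# might be useful later." Hypothetical illustration only: cached_fetch,
# prefetch, and CACHE_DIR are not APIs from the paper.
import os
import urllib.parse
import urllib.request

CACHE_DIR = "cache"  # local storage treated as effectively unlimited


def cached_fetch(url: str) -> bytes:
    """Serve from the local archive when possible; touch the network
    only on a miss, then keep the result for later offline use."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, urllib.parse.quote(url, safe=""))
    if os.path.exists(path):  # cache hit: no connectivity needed
        with open(path, "rb") as f:
            return f.read()
    data = urllib.request.urlopen(url).read()  # cache miss: fetch once
    with open(path, "wb") as f:  # ...and archive it, extravagantly
        f.write(data)
    return data


def prefetch(urls) -> None:
    """While connected, speculatively pull data that might be useful later."""
    for url in urls:
        cached_fetch(url)
```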
Want's suggestion is more conservative, and certainly less romantic, than Borges' Aleph, but it has the distinct advantage (for our immediate purposes, anyway) of referring to something real. Intel has demonstrated several iterations of a high-density mobile/wearable storage system based on these ideas, called a "personal server," the earliest versions of which were little more than a hard drive with a built-in wireless connection. Where Want's version of Alan Dix's calculation puts the total lifetime throughput figure at a starkly higher 97 TB ("80 years, 16 hours a day, at 512 Kbps"), he reckons that a personal server should be able to store that amount of data by the more optimistic date of 2017; some of the apparent optimism no doubt reflects the difference in scale between a grain of sand and the mobile-phone-sized personal server.
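The arithmetic behind that 97 TB figure is easy to check. A minimal sketch, assuming 365-day years and binary terabytes (1 TB = 2^40 bytes), using only the quantities quoted above:

```python
# Checking Want's lifetime-throughput figure: 80 years, 16 hours a day,
# at 512 Kbps. Assumptions: 365-day years, 1 TB = 2**40 bytes.
years = 80
hours_per_day = 16
bitrate_bps = 512_000  # 512 Kbps

seconds = years * 365 * hours_per_day * 3600   # 1,681,920,000 seconds
total_bytes = seconds * bitrate_bps / 8        # bits -> bytes
print(total_bytes / 2**40)                     # ~97.9, i.e. Want's "97 TB"
```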
But, again, the purpose of providing such calculations is merely to backstop ourselves. Any ubiquitous application that requires less in the way of local storage than that required by recording every sensation of an entire life in high fidelity would seem to present little problem from here on out.
Thesis 61
The necessary addressing scheme already exists.
As we considered earlier, a technology with the ambition to colonize much of the observable world has to offer some provision for addressing the very large number of nodes implied by such an ambition. We've seen that a provision along these lines appears to exist, in the form of something called IPv6, but what exactly does this cryptic little string mean?
In order to fully understand the implications of IPv6, we have to briefly consider what the Internet was supposed to be "for" in the minds of its original designers, engineers named Robert E. Kahn and Vint Cerf. As it turns out, Kahn and Cerf were unusually prescient, and they did not want to limit their creation to one particular use or set of uses. As a result, from the outset it was designed to be as agnostic as possible regarding the purposes and specifications of the devices connected to it, which has made it a particularly brilliant enabling technology.
The standard undergirding communication over the Internet—a network layer protocol known, rather sensibly, as Internet Protocol, or IP—doesn't stipulate anything but the rules by which packets of ones and zeroes get switched from one location to another. The model assumes that all the intelligence resides in the devices connected to the network, rather than in the network itself.
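To put numbers behind Thesis 61: IPv6 widens the address field from IPv4's 32 bits to 128, and Python's standard ipaddress module makes the resulting scale easy to see. A minimal sketch follows; the 2001:db8::/64 prefix is the reserved documentation range, chosen here purely for illustration:

```python
# The scale behind "the necessary addressing scheme": IPv6 addresses
# are 128 bits wide, versus 32 bits for IPv4.
import ipaddress

print(f"IPv4 addresses: {2**32:,}")     # 4,294,967,296
print(f"IPv6 addresses: {2**128:.3e}")  # ~3.403e+38

# Even a single "small" /64 subnet dwarfs the entire IPv4 Internet,
# leaving room to address every node a building full of everyware implies.
net = ipaddress.ip_network("2001:db8::/64")  # reserved documentation prefix
print(f"Addresses in one /64: {net.num_addresses:,}")  # 18,446,744,073,709,551,616
```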