At the MIT Media Lab, the Things That Think consortium and Professor Hiroshi Ishii's Tangible Media Group developed interfaces bridging the realms of bits and atoms, a "tangible media" extending computation out into the walls and doorways of everyday experience. At IBM, a whole research group grew up around a "pervasive computing" of smart objects, embedded sensors, and the always-on networks that connected them.
And as mobile phones began to percolate into the world, each of them nothing but a connected computing device, it was inevitable that someone would think to use them as a platform for the delivery of services beyond conversation. Philips and Samsung, Nokia and NTT DoCoMo—all offered visions of a mobile, interconnected computing in which, naturally, their products took center stage.
By the first years of the twenty-first century, with daily reality sometimes threatening to leapfrog even the more imaginative theorists of ubicomp, it was clear that all of these endeavors were pointing at something becoming real in the world.
Intriguingly, though, and maybe a little infuriatingly, none of these institutions understood the problem domain in quite the same way. In their attempts to grapple with the implications of computing in the post-PC era, some concerned themselves with ubiquitous networking: the effort to extend network access to just about anyplace people could think of to go. With available Internet addresses dwindling by the day, this required the development of a new-generation Internet protocol, IPv6; it also justified the efforts of companies ranging from Intel to GM to LG to imagine an array of "smart" consumer products designed with that network in mind.
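For a rough sense of the scale involved: IPv4's 32-bit addresses allow only about 4.3 billion endpoints, while IPv6's 128 bits allow on the order of 10^38. The snippet below is nothing more than a back-of-the-envelope illustration of that arithmetic; nothing in it beyond the two bit-widths comes from the text.

```python
# Back-of-the-envelope arithmetic: why a 32-bit address space could not
# cover a world of networked everyday objects, and a 128-bit one could.
ipv4_space = 2 ** 32    # ~4.3 billion addresses, fewer than one per person
ipv6_space = 2 ** 128   # ~3.4e38 addresses, ample for "smart" objects

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.2e}")
print(f"Growth factor:  {ipv6_space // ipv4_space:.2e}")
```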
Others concentrated on the engineering details of instrumenting physical space. In the late 1990s, researchers at UC Berkeley developed a range of wireless-enabled, embedded sensors and microcontrollers known generically as motes, as well as an operating system, TinyOS, for them to run on. All were specifically designed for use in ubicomp.
Thirty miles to the south, a team at Stanford addressed the absence in orthodox computer science of an infrastructural model appropriate for the ubiquitous case. In 2002, they published a paper describing the event heap, a way of allocating computational resources that better accounted for the arbitrary comings and goings of multiple simultaneous users than did the traditional "event queue."
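To make the contrast concrete, here is a minimal sketch of the idea in Python: an illustration of the general tuplespace-like approach the Stanford work drew on, not their implementation. The class name EventHeap, the methods post and get_matching, and the time-to-live scheme are all invented for the example.

```python
import time

class EventHeap:
    """Illustrative tuplespace-style event store. Events are plain dicts
    carrying a time-to-live; any number of consumers may match and read
    the same event, and expired events simply lapse instead of clogging
    a queue."""

    def __init__(self):
        self._events = []  # list of (expiry_time, event_dict) pairs

    def post(self, event, ttl=5.0):
        # Each event carries its own expiry, so a consumer that wanders
        # off mid-session never strands stale work in the store.
        self._events.append((time.time() + ttl, dict(event)))

    def get_matching(self, template):
        # Return live events whose fields include everything in the
        # template. Unlike a queue's pop(), reading does not consume the
        # event, so several devices can react to the same occurrence.
        now = time.time()
        self._events = [(exp, ev) for exp, ev in self._events if exp > now]
        return [ev for _, ev in self._events
                if all(ev.get(k) == v for k, v in template.items())]

# A projector and a laptop both pick up the same "display" event;
# a single FIFO event queue would have delivered it to only one of them.
heap = EventHeap()
heap.post({"type": "display", "url": "http://example.com/slide1"})
for consumer in ("projector", "laptop"):
    print(consumer, "sees", heap.get_matching({"type": "display"}))
```

The design point is that reading does not consume an event and events expire on their own, so a device can join or leave at any moment without another participant's event stream jamming or going astray.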
Developments elsewhere in the broader information technology field had clear implications for the ubiquitous model. Radio-frequency identification (RFID) tags and two-dimensional barcodes were just two of many technologies adapted from their original applications, pressed into service in ubicomp scenarios as bridges between the physical and virtual worlds. Meanwhile, at the human-machine interface, the plummeting cost of processing resources meant that long-dreamed-of but computationally intensive modes of interaction, such as gesture recognition and voice recognition, were becoming practical; they would prove irresistible as elements of a technology that was, after all, supposed to be invisible-but-everywhere.
And beyond that, there was clearly a ferment at work in many of the fields touching on ubicomp, even through the downturn that followed the crash of the "new economy" in early 2001. It had reached something like a critical mass of thought and innovation by 2005: an upwelling of novelty both intellectual and material, accompanied by a persistent sense, in many quarters, that ubicomp's hour had come 'round at last. Pieces of the puzzle kept falling into place. By the time I began doing the research for this book, the literature on ubicomp was a daily tide of press releases and new papers that was difficult to stay on top of: papers on wearable computing, augmented reality, locative media, near-field communication, body-area networking. In many cases, the fields were so new that the jargon hadn't even solidified yet.
Would all of these