In the latter case particularly—where the problem may indeed not reside in any one place at all, but rather arises out of the complex interaction of independent parts—resolving the issue is going to present unusual difficulties. Diagnosis of simple faults in ubiquitous systems will likely prove to be inordinately time-consuming by current standards, but systems that display emergent behavior may confound diagnosis entirely. Literally the only solution may be to power everything down and restart components one by one, in various combinations, until a workable and stable configuration is once again reached.
This will mean rebooting the car, or the kitchen, or your favorite sweater, maybe once and maybe several times, until every system that needs to do so has recognized the others and basic functionality has been restored to them all. And even then, of course, the interaction of their normal functioning may entrain the same breakdown. Especially when you consider how dependent on everyware we are likely to become, the prospect of having to cut through such a Gordian tangle of interconnected parts just to figure out which one has broken down is somewhat less than charming.
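To see why this kind of diagnosis scales so badly, consider a deliberately naive sketch of the procedure just described. Everything in it is hypothetical: the component list, the is_stable() predicate that powers up a given subset of devices and reports whether the result behaves, and the brute-force search itself. None of this reflects an interface any real everyware system exposes.

```python
from itertools import combinations

def find_stable_configuration(components, is_stable):
    """Naive diagnosis by rebooting: power on subsets of components,
    largest subsets first, until one combination proves workable.

    Both arguments are hypothetical stand-ins: `components` is any
    sequence of device identifiers, and `is_stable(subset)` restarts
    exactly that subset and reports whether the system behaves.
    """
    for size in range(len(components), 0, -1):
        for subset in combinations(components, size):
            if is_stable(subset):
                return subset      # a workable, stable configuration
    return ()                      # nothing works, even in isolation
```

In the worst case this examines every one of the 2^n - 1 non-empty subsets: with a mere twenty networked devices, that is over a million reboots. And even a successful run only finds a configuration that is stable now; as noted above, the interaction of the parts' normal functioning may entrain the same breakdown all over again.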
Thesis 45
Users will understand their transactions with everyware to be essentially social in nature.
There's good reason to believe that users will understand their transactions with ubiquitous systems to be essentially social in nature, whether consciously or otherwise—and this will be true even if there is only one human party to a given interaction.
Norbert Wiener, the "father of cybernetics," had already intuited something of this in his 1950 book, The Human Use of Human Beings: according to Wiener, when confronted with cybernetic machines, human beings found themselves behaving as if the systems possessed agency.
This early insight was confirmed and extended in the pioneering work of Byron Reeves and Clifford Nass, published in 1996 as The Media Equation. In an extensive series of studies, Reeves and Nass found that people treat computers more like other people than like anything else—that, in their words, computers "are close enough to human that they encourage social responses." (The emphasis is present in the original.) We'll flatter a computer, or try wheedling it into doing something we want, or insult it when it doesn't—even if, intellectually, we're perfectly aware how absurd this all is.
We also seem to have an easier time dealing with computers when they, in turn, treat us politely—when they apologize for interrupting our workflow or otherwise acknowledge the back-and-forth nature of communication in ways similar to those our human interlocutors might use. Reeves and Nass urge the designers of technical systems, therefore, to attend closely to the lessons we all learned in kindergarten and engineer their creations to observe at least the rudiments of interpersonal etiquette.
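As a concrete, if entirely invented, illustration of what those rudiments might look like in practice, the following sketch applies two kindergarten-grade rules to an interruption-handling routine. The Notification class, the user_is_busy() check, and the apologetic phrasing are all assumptions made for the example, not the API of any actual system.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    body: str
    urgent: bool = False

def deliver(note: Notification, user_is_busy, defer):
    """Two rudiments of etiquette for a notifying system:
    don't interrupt unless it matters, and apologize when you must.

    `user_is_busy` and `defer` are hypothetical callbacks: the first
    reports whether the user is in the middle of something, the second
    holds a notification back until a natural pause.
    """
    if user_is_busy():
        if not note.urgent:
            defer(note)            # wait rather than barging in
            return None
        # Acknowledge the interruption instead of ignoring it.
        return "Sorry to interrupt: " + note.body
    return note.body
```

The point is not the code but the asymmetry it encodes: the system bears the burden of deference, exactly as a polite human interlocutor would.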
Past attempts to incorporate these findings into the design of technical systems, while invariably well-intentioned, have been disappointing. From Clippy, Microsoft's widely loathed "Office Assistant" ("It looks like you're writing a letter"), to the screens of Japan Railways' ticket machines, which display an animated hostess bowing to the purchaser at the completion of each transaction, none of the various social interfaces have succeeded in doing anything more than reminding users of just how stilted and artificial the interaction is. Even Citibank's ATMs merely sound disconcerting, like some miserly cousin of HAL 9000, when they use the first person in apologizing for downtime or other violations of user expectations ("I'm sorry—I can only dispense cash in multiples of $20 right now.").
But genuinely