Chapter 1 Introduction
The middle of Moore's law
The history of technology is a history of unintended consequences, of revolutions that never happened, and of unforeseen disruptions. Take railroads, for instance. In addition to quickly moving things and people around, railroads brought a profound philosophical crisis of timekeeping. Before railroads, clock time followed the sun. "Noon" was when the sun was directly above, and local clock time was approximate. This was accurate enough for travel on horseback or foot, but setting clocks by the sun proved insufficient to synchronize railroad schedules. One town's noon would be a neighboring town's 12:02, and a distant town's 12:36. Trains traveled fast enough that these small differences added up. Arrival times now had to be determined not just by the time to travel between two places, but by the local time at the point of departure, which could be based on an inaccurate church clock set with a sundial. The effect was that trains would run at unpredictable times and, with terrifying regularity, crash into each other.
It was not surprising that railroads wanted to have a consistent way to measure time, but what did "consistent" mean? Their attempt to answer this question led to a crisis of timekeeping: Do the railroads dictate when noon is, does the government, or does nature? What does it mean to have the same time in different places? Do people in cities need a different timekeeping method than farmers? The engineers making small steam engines in the early nineteenth century could not possibly have predicted that by the end of the century their invention would lead to a revolution in commerce, politics, geography, philosophy and just about all human endeavors.1
We can compare the last twenty years of computer and networking technology to the earliest days of steam power. Once, giant steam engines ran textile mills and pumped water between canal locks. Miniaturized and made more efficient, steam engines became more dispersed throughout industrial countries, powering trains, machines in workplaces, and even personal carriages. As computers shrink, they too are getting integrated into more places and contexts than ever before.
We are at the beginning of the era of computation and data communication embedded in, and distributed through, our entire environment. Going far beyond how we now define "computers," the vision of ubiquitous computing (see Sidebar: The Many Names of Ubicomp) is of information processing and networking as key components in the design of everyday objects (Figure 1-1), using built-in computation and communication to make familiar tools and environments do their jobs better. It is the underlying (if unstated) principle guiding the development of toys that talk back, clothes that react to the environment, rooms that change shape depending on what their occupants are doing, electromechanical prosthetics that automatically manage chronic diseases and enhance people's capabilities beyond what is biologically possible, hand tools that dynamically adapt to their user, and (of course) many new ways for people to be bad to each other.2
The rest of this chapter discusses why the idea of ubiquitous computing is important now, and why user experience design is key to creating successful ubiquitous computing (ubicomp) devices and environments.
Sidebar: The Many Names of Ubicomp
There are many different terms applied to what I am calling ubiquitous computing (or ubicomp for short). Each term came from a different social and historical context. Although not designed to be complementary, each built on the definitions of those that came before (if only to help the group coining the term identify themselves). I consider them to be different aspects of the same phenomenon:
- Ubiquitous computing refers to the practice of embedding information processing and network communication into everyday, human environments to continuously provide services, information, and communication.
- Physical computing describes how people interact with computing through physical objects, rather than in an online environment or on monolithic, general purpose computers.
- Pervasive computing refers to the prevalence of this new mode of digital technology.
- Ambient intelligence describes how these devices appear to integrate algorithmic reasoning (intelligence) into human-built spaces so that it becomes part of the atmosphere (ambiance) of the environment.
- The Internet of Things suggests a world in which digitally identifiable physical objects relate to each other in a way that is analogous to how purely digital information is organized on the Internet (specifically, the Web).
Of course, applying such retroactive continuity (a term the comic book industry uses to describe the pretense of order grafted onto a disorderly existing narrative) attempts to add structure to something that never had one. In the end, I believe that all of these terms actually reference the same general idea. I prefer to use ubiquitous computing since it is the oldest.
1.1 The hidden middle of Moore's law
To understand why ubiquitous computing is particularly relevant today, it is valuable to look closely at an unexpected corollary of Moore's Law. As new information processing technology gets more powerful, older technology gets cheaper without becoming any less powerful.
First articulated by Intel Corporation founder Gordon Moore, today Moore's Law is usually paraphrased as a prediction that processor transistor densities will double every two years. This graph (Figure 1-2) is traditionally used to demonstrate how powerful the newest computers have become. As a visualization of the density of transistors that can be put on a single integrated circuit, it represents the way semiconductor manufacturers distill a complex industry into a single trend. The graph also illustrates a growing industry's internal narrative of progress without revealing how that progress is going to happen.
Moore's insight was dubbed a law, like a law of nature, but it does not actually describe the physical properties of semiconductors. Instead, it describes the number of transistors Gordon Moore believed would have to be put on a CPU for a semiconductor manufacturer to maintain a healthy profit margin given the industry trends he had observed in the previous five years. In other words, Moore's 1965 analysis, which is what his law is based on, was not a utopian vision of the limits of technology. Instead, the paper (Moore, 1965) described a pragmatic model of factors affecting profitability in semiconductor manufacturing. Moore's conclusion that "by 1975 economics may dictate squeezing as many as 65,000 components on a single silicon chip" is a prediction about how to compete in the semiconductor market. It is more of a business plan and a challenge to his colleagues than a scientific result.
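The arithmetic behind that 65,000-component figure is worth seeing spelled out. A minimal back-of-the-envelope sketch, assuming (as Moore's original extrapolation did) a starting point of roughly 64 components per chip in 1965 and one doubling per year:

```python
# Back-of-the-envelope check of Moore's 1965 extrapolation.
# Assumptions (mine, for illustration): ~64 components on a 1965 chip,
# and component counts doubling once per year.
components_1965 = 64              # roughly 2**6 components in 1965
doublings = 1975 - 1965           # ten annual doublings

projection = components_1965 * 2 ** doublings
print(projection)                 # 65536 -- "as many as 65,000 components"
```

Ten doublings multiply the count by 2^10 = 1,024, which is why a modest-sounding chip in 1965 extrapolates to tens of thousands of components a decade later; the same compounding is what makes the law's "hidden middle" of cheap older parts so significant.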
Fortunately for Moore, his model fit the behavior of the semiconductor industry so well that it was adopted as an actual development strategy by most of the other companies in the industry. Intel, which he co-founded soon after writing that article, followed his projection almost as if it were a genuine law of nature, and prospered.
The economics of this indus...