In October 2012, stuntman Felix Baumgartner jumped from a space capsule 24 miles above the Earth, becoming the first human to break the sound barrier in free fall. His jump broke previous records set by retired Air Force Colonel Joe Kittinger, who provided guidance to Baumgartner during the mission. Like Kittinger before him, the daredevil faced an early crisis when he entered a dreaded tailspin, though he quickly recovered. “At a certain R.P.M.,” he explained, “there's only one way for blood to leave your body, and that's through your eyeballs. That means you're dead.” While the jump was “harder than I expected,” Fearless Felix landed safely in New Mexico four minutes and twenty seconds later (Tierney 2012b).
The mission, dubbed “Red Bull Stratos,” was financed by the popular drink company and marked the culmination of a marketing relationship that began in 1988 (Badenhausen 2012). It involved the assistance of 300 technical and medical experts, including former NASA employees. A project five years in the making, scientists observing Stratos collected data for the benefit of pilots and astronauts. Meanwhile, in line with its other co-branded initiatives like Infiniti Red Bull Racing, the namesake company commodified Stratos by offering merchandise like T-shirts, hats, and backpacks. The mission yielded “tens of millions of dollars in global exposure” for the brand in what was “perhaps the greatest marketing stunt of all time” (Heitner 2012). Forbes magazine notes that “another winner came out of Sunday's jump besides Red Bull: the Internet”—more specifically YouTube, which broke its own record for concurrent live video streams as eight million viewers tuned in (Badenhausen 2012).
On Tailspins and Technical Leaps of Faith
Stratos symbolizes a broader trend, traceable to the late 1980s and early 1990s, in which the twin themes of commercialism and technical bravado combine. Then-President Bill Clinton and Vice President Al Gore, along with conservative Republicans, hitched their ideological wagon to the techno-libertarian dream of Silicon Valley. Their enchantment with cyberspace integrated the spirit of American exceptionalism, faith in free markets, and a view of computer networks as nature's next evolutionary leap. The surprising resonance between progressive and conservative ideologies culminated in the passage of the Telecommunications Act of 1996, which represented the joint effort of technology elites like Swiss-born American philanthropist Esther Dyson and conservatives like Newt Gingrich (McChesney 2013, p. 105). In fact, Gingrich described the policy as part of a divinely inspired “mission” (Turner 2006, p. 231).
This celebratory entrance of technical and political elites into the so-called New Economy constituted a leap of faith not unlike the breathtaking feat of Fearless Felix. While Colonel Kittinger praised Baumgartner for his courage, we might ask whether the combination of technophilia, or a strong enthusiasm for technology, and commercialism tends more toward an ethos of recklessness—one cultivated not simply through technical expertise but also by an apparatus of psychological and ideological coercion. Indeed, Baumgartner suffered panic attacks while training in his claustrophobic space suit, at one point fleeing the country by plane to avoid an endurance test. Psychologists worked with him to mitigate fears that appear to have been well-founded, considering his near-death tailspin (Tierney 2012a).
Rather than encouraging competition and market diversity, Silicon Valley's techno-centric ideology has triggered an economic tailspin of industry consolidation and a rapid decline of public interest values. This development has important implications since specific protocols, applications, and platforms tend to become “locked in” for generations along with any unforeseen consequences. In the digital era, the process of technological lock-in resembles Baumgartner's tailspin both in terms of the difficulty of restoring balance and of the danger posed by the potential loss of lifeblood. Giving in to techno-utopian ideology regularly leads to a willingness to jump recklessly into new technical environments, but the consequences of such imprudence are not uniformly distributed. In our culture of digital monopolies, users become locked into a limited range of commercial providers who, in turn, generate profit by extracting user data—the lifeblood of the digital economy—and packaging it for sale to marketers, government agencies, and banks. Such techniques entrench social stereotypes and exacerbate class inequality even as profits boom.
In recent years, critics have implicated digital technologies in a range of problems from psychological distraction to global financial crises. Several critics refer to an emerging “anti-net backlash.” These critics identify problems and suggest solutions at different levels, from modifying our personal habits to implementing technical and legal remedies. A common thread in all these critiques is that digital culture is driven by an overarching ideology, described in our book's introductory chapter as “the California Ideology” (Barbrook and Cameron 1998). As we demonstrate in this chapter, this ideology along with its “moral catechism” has significant psychological, moral, and spiritual dimensions. We begin with the epistemological dimensions of this ideology, that is, how Silicon Valley thinks about concepts like knowledge, progress, and expertise. We then move to consider its psychological, moral, and spiritual dimensions before concluding with reflections on how the virtue of prudence can redirect our focus away from solely instrumental or technical skills to non-instrumental values such as moderation, discernment, and humility that help confront the crisis.
The Crisis Is Ideological
The guiding principle of techno-utopianism (supported by a never-ending stream of “proof of progress” data) is that everything always has, always does, and always will continue to get better and better, forever and ever, amen. This “mythos of the digital sublime” depends on an inability to recognize the reality of limitations on anything: growth, potential, money, or natural resources (Schultze 2002, p. 116). Within this techno-religious worldview, if for some inexplicable reason our way of life is derailed, or evidence of our vulnerability is brought to our attention, it is only a temporary setback, just another problem to be overcome through the modern miracle of technological advancement. To be sure, not everyone accepts or adheres to this philosophy; people abstain and resist. What we are identifying here is a relative center of gravity, a defining ethos that makes resistance and abstinence meaningful and necessary in the first place.
Public confidence in a techno-centric way of life is based, in part, on the belief that technology can deliver on its promises by always satisfying our hungers, whatever their type or size, in the most convenient and efficient ways (Pacey 1983). Proof of progress is no farther away than the rote retelling of the most recent technological hurdles scaled: from the first manufactured automobile, to the first talking motion picture, to the first computer small enough to fit on our lap. When computer-processing speed doubles every 18 months, we marvel at our culture's ability to make our lives more efficient. Silicon Valley's “symbol brokers” rely on proof of progress in language like “fastest,” “easiest,” and “most efficient” to stir consumer frenzy and reinforce our belief in our system's superiority (Schultze 2002, p. 116). As the system rewards our extravagant demands with immediate supply, our identification with its power to deliver us from boredom and anonymity is strengthened (Boorstin 1992, pp. 3–6). “Get it now” and “Why wait?” are common mantras that reinforce our obsession with efficiency and immediacy, often measured in the seconds required to deliver the satisfaction. Immediate gratification is so foundational that it is often portrayed as something we are entitled to as citizens.
In most cases, however, proof-of-progress data are flawed. They are self-confirming, focusing on the wonder of human achievement while failing to show that progress in one area may be accompanied by less positive developments in another. Progress is taken out of context to support personal or corporate agendas (Pacey 1983, p. 14). For example, increases in openness and connectivity worldwide are used to demonstrate the power of Facebook's mission (“to give people the power to build community and bring the world closer together”), but such increases also come with corresponding reductions in privacy and security. For every technological benefit supporting proof of progress, there may be a technological burden that gets buried in the sea of high fives and virtual victory laps.
Undoubtedly, technological advancement fosters improvements in our quality of life, including greater comfort, luxury, convenience, and choice, all of which function as proof of progress and efficiency. But instead of seeing technological advancement and conspicuous digital consumption as “superficial indulgences” that contribute to major changes in social structures, personal identity, and bankruptcy, Silicon Valley enthusiasts promote them as a “strangely democratic and unifying force.” In this light, a technological purchase is not simply “one-dimensional or shallow,” narcissistic, or fragmenting. Besides, if what we want is peace on earth, then “a unifying system that transcends religious, cultural, and caste differences” is what we need (Cohen 2002; see also Twitchell 2003). And, thankfully, this is the system, at least “on paper,” that Silicon Valley promises.
In addition to a belief in progress, Silicon Valley's reigning ideology elevates expert knowledge over other ways of knowing. Hardware and software programmers, network administrators, and the like enjoy greater material wealth and elevated social status in the system. At places like Google and Facebook, “merit” means one's ability to create good software. Within this culture of expertise, a clear knowledge gap exists between technological, specialized forms of knowing, on the one hand, and more general and moral perspectives that rely on wisdom, on the other. Information managers “specialize” by relying on ostensibly detached, presumably objective, quantitative ways of knowing. Their job is to effectively manage and transmit information in bits and bytes, not to be responsible for using the information they transmit for good. The focus is on making something new, asking “What's next?” without pausing to consider the effects of “What is?”
We are reminded of Jeff Goldblum's character, Dr. Malcolm, in Steven Spielberg's now classic film Jurassic Park (1993) when he remarks, “The lack of humility before nature that's being displayed here staggers me.” Granted, he was talking about cloning dinosaurs at the time, but the same kind of logic explains this knowledge gap and Silicon Valley's technological hubris. Dr. Malcolm continues: “Your scientists were so preoccupied with whether or not they could, that they didn't stop to think if they should.” In other words, when the ethical “ought” (i.e., should we be doing this, or is this the right thing to do?) is subjugated to the technical “is” (i.e., is it scientifically possible to do this, or is it technically possible to do this?), the ethic of efficiency and progress rules the day and trumps all other ethical standards. The clear consequence of a society “lacking any clear ‘oughts’ is a religion of quick decisions and instant deletes” (Schultze 2002, p. 28). These kinds of instrumental values act tyrannically as “a spiritual guillotine, decapitating other values” that have cultural and transcendent staying power (Shriver 1972, p. 537). A society committed to instrumental values “eliminates all moral obstructions to their ascendency” (Christians et al. 1993, p. 171).
More importantly, perhaps, beliefs in progress, experts, and unlimited resources often mask, or obscure, the politics of science and technology. To the extent that scientific and technical development reflects the views and prerogatives of experts and elites, our technical environment will augment the power and privilege of some at the expense of others. Digital behemoths like Google and Apple have only recently begun to acknowledge the real-world implications of Silicon Valley's lack of racial, gender, and age diversity, not to mention the broader socio-economic disparities that their business models can exacerbate. In this context, the “myth of inevitability” around information technologies “performs an important role in their institutionalization, and in the broader effort to shape the future toward certain ends” (Gates 2011, p. 6). This myth assumes that the course of technological development is pre-ordained, set in stone, or otherwise unchangeable, as a form of techno-divine Providence.
Often, the rhetoric of inevitability is used by designers, inventors, or other proponents of a new technology. But it might also be found in pessimistic detractors. In either case, the assumption that the course of technological development is unchangeable and predictable carries the implication that since there is nothing we can do about it, there is no need (or, it is a waste of valuable time) to try to stop a new technology from developing or even to try to ensure that it is designed, developed, and deployed ethically and sustainably. One clear example of this rhetoric involves the implementation of body cameras for law enforcement officers. When policy makers or tech leaders describe the widespread availability, use, or implementation of a technology as a foregone conclusion, as simply a matter of time, or as something inescapable (usually for the better), this feeds into the myth of inevitability (Gates 2011, p. 6). However, while certain aspects of technical development may be inevitable, other important aspects are not. The future is not inevitable but contingent and contestable. A global information network of some kind may have been inevitable, but the World Wide Web was not. Social networking technologies may have been inevitable, but Facebook certainly was not.
These are subtle but hugely important distinctions. Especially on the part of civil authorities, the assumption of inevitability “encourages public acquiescence, while suppressing alternative, less technocratic ways to address complex social problems” (Gates 2011, p. 6). As such, assumptions that impact the design, implementation, regulation, and use of technology can be seen as “a form of social control” (Gates 2011, p. 6). Without proper attention to the human context of development and implementation (the politics of science and tech), emergent technologies may ultimately augment privilege for the few and power for the technocratic state.
Google is arguably the most successful organization to build a commercial enterprise based on these assumptions about the value of information and expertise. As noted in the Introduction, its mission is “to organize the world's information and make it universally accessible and useful.” Not every product has succeeded (Google Glass, for instance, did not), but the company holds near-monopoly control of the search-engine market and has leveraged that success to branch into other development areas like artificial intelligence, geospatial mapping, self-driving cars, smartphones, and more. As critics point out, though, Google's mission assumes a lot: first, that it is possible to collect and organize all of the world's information; second, that it is beneficial to do so; third, that commercial organizations can and should do so with all due haste. As we discuss later in the book (Chapter 6), Google further posits that algorithms can and should process such information in order to render it meaningful and useful....