The Exponential Age

How the Next Digital Revolution Will Rewire Life on Earth

Azeem Azhar

eBook - ePub


About This Book

A bold exploration and call-to-arms over the widening gap between AI, automation, and big data—and our ability to deal with its effects. We are living in the first exponential age. High-tech innovations are created at dazzling speeds; technological forces we barely understand remake our homes and workplaces; centuries-old tenets of politics and economics are upturned by new technologies. It all points to a world that is getting faster at a dizzying pace.
Azeem Azhar, renowned technology analyst and host of the Exponential View podcast, offers a revelatory new model for understanding how technology is evolving so fast, and why it fundamentally alters the world. He roots his analysis in the idea of an "exponential gap" in which technological developments rapidly outpace our society's ability to catch up. Azhar shows that this divide explains many problems of our time—from political polarization to ballooning inequality to unchecked corporate power. With stunning clarity of vision, he delves into how the exponential gap is a near-inevitable consequence of the rise of AI, automation, and other exponential technologies, like renewable energy, 3D printing, and synthetic biology, which loom over the horizon. And he offers a set of policy solutions that can prevent the growing exponential gap from fragmenting, weakening, or even destroying our societies. The result is a wholly new way to think about technology, one that will transform our understanding of the economy, politics, and the future.


Information

Year
2021
ISBN
9781635769081
Chapter One
The Harbinger
Before I knew what Silicon Valley was, I had seen a computer. It was December 1979, and our next-door neighbor had brought home a build-it-yourself computer kit. I remember him assembling the device on his living room floor and plugging it into a black-and-white television set. After my neighbor meticulously punched in a series of commands, the screen transformed into a tapestry of blocky pixels.
I took the machine in with all the wonder of a seven-year-old. Until then, I had only seen computers depicted in TV shows and movies. Here was one I could touch. But it was more remarkable, I think now, that such a contraption had even got to a small suburb of Lusaka in Zambia in the 1970s. The global supply chain was primordial, and remote shopping all but non-existent—and yet the first signs of the digital revolution were already visible.
The build-it-yourself kit piqued my interest. Two years later, I got my own first computer: a Sinclair ZX81, picked up in the autumn of 1981, a year after moving to a small town in the hinterlands beyond London. The ZX81 still sits on my bookshelf at home. It has the footprint of a seven-inch record sleeve and is about as deep as your index and middle fingers. Compared to the other electronic items in early-1980s living rooms—the vacuum-tubed television or large cassette deck—the ZX81 was compact and light. Pick-up-with-your-thumb-and-forefinger light. The built-in keyboard, unforgiving and taut when pressed, wasn’t something you could type on. It only responded to stiff, punctuated jabs of the kind you might use to admonish a friend. But you could get a lot out of this little box. I remember programming simple calculations, drawing basic shapes, and playing primitive games on it.
This device, advertised in daily newspapers across the UK, was a breakthrough. For £69 (or about $145 at the time), we got a fully functional computer. Its simple programming language was, in principle, capable of solving any computer problem, however complicated (although it might have taken a long time).10 But the ZX81 wasn’t around for long. Technology was developing quickly. Within a few years, my computer—with its blocky black-and-white graphics, clumsy keyboard, and slow processing—was approaching obsolescence. Within six years, my family had upgraded to a more modern device, made by Britain’s Acorn Computers. The Acorn BBC Master was an impressive beast, with a full-sized keyboard and a numeric keypad. Its row of orange special-function keys wouldn’t have looked out of place on a prop in a 1980s space opera.
If the exterior looked different from the ZX81’s, the interior had undergone a complete transformation. The BBC Master ran several times faster. It had 128 times as much memory. It could muster as many as sixteen different colors, although it was limited to displaying eight at a time. Its tiny speaker could emit up to four distinct tones, just enough for simple renditions of music—I recall it beeping its way through Bach’s Toccata and Fugue in D Minor. The BBC Master’s relative sophistication allowed for powerful applications, including spreadsheets (which I never used) and games (which I did).
Another six years later, in the early 1990s, I upgraded again. By then, the computer industry had been through a period of brutal consolidation. Devices like the TRS-80, Amiga 500, Atari ST, Osborne 1 and Sharp MZ-80 had vied for success in the market. Some small companies had short-lived success but found themselves losing out to a handful of ascendant new tech firms.
It was Microsoft and Intel that emerged from the evolutionary death match of the 1980s as the fittest of their respective species: the operating system and the central processing unit. They spent the next couple of decades in a symbiotic relationship, with Intel delivering more computational power and Microsoft using that power to deliver better software. Each generation of software taxed the computers a little more, forcing Intel to improve its subsequent processor. “What Andy giveth, Bill taketh away” went the industry joke (Andy Grove was Intel’s CEO; Bill Gates, Microsoft’s founder).
At the age of nineteen, I was oblivious to these industry dynamics. All I knew was that computers were getting faster and better, and I wanted to be able to afford one. Students tended to buy so-called PC clones—cheap, half-branded boxes which copied the eponymous IBM Personal Computer. These were computers based on various components that adhered to the PC standard, meaning they were equipped with Microsoft’s latest operating system—the software that allowed users (and programmers) to control the hardware.
My clone, an ugly cuboid, sported the latest Intel processor: an 80486. This processor could crunch through eleven million instructions per second, probably four or five times more than my previous computer. A button on the case marked “Turbo” could force the processor to run some 20 percent faster. Like a car where the driver keeps their foot on the accelerator, however, the added speed came at the cost of frequent crashes.
This computer came with four megabytes of memory (or RAM), a four-thousand-fold improvement on the ZX81. The graphics were jaw-dropping, though not state-of-the-art. I could throw 32,768 colors on the screen, using a not-quite cutting-edge graphics adaptor that I plugged into the machine. This rainbow palette was impressive but not lifelike—blues in particular displayed poorly. If my budget had stretched £50 (or about $85 at the time) more, I might have bought a graphics card that painted sixteen million colors, so many that the human eye could barely discern between some of the hues.
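Those color counts come straight from the number of bits a graphics card stores per pixel. A quick sketch of the arithmetic (my own illustration; the 15-bit and 24-bit depths are inferred from the color counts, not stated in the text):

```python
# Illustrative arithmetic (not from the book): n bits per pixel can encode
# 2**n distinct colors. 15 bits gives the 32,768-color palette described
# above; 24 bits gives the "sixteen million" colors of the pricier card.
for bits in (15, 24):
    print(f"{bits}-bit color: {2 ** bits:,} distinct colors")
```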
The ten-year journey from the ZX81 to the PC clone reflected a period of exponential technological change. The PC clone’s processor was thousands of times more powerful than the ZX81’s, and the computer of 1991 was millions of times more capable than that of 1981. That transformation was a result of swift progress in the nascent computing industry, which approximately translated to a doubling of the speed of computers every couple of years.
To understand this transformation, we need to examine how computers work. Writing in the nineteenth century, the English mathematician and philosopher George Boole set out to represent logic as a series of binaries. These binary digits—known as “bits”—can be represented by anything, really. You could represent them mechanically by the positions of a lever, one up and one down. You could, theoretically, represent bits with M&Ms—some blues, some reds. (This is certainly tasty, but not practical.) Scientists eventually settled on 1 and 0 as the best binary to use.
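To make Boole’s scheme concrete, here is a minimal sketch (mine, not the book’s) that treats bits as 1 and 0 and tabulates the elementary Boolean operations:

```python
# A minimal sketch (not from the book): bits as 1 and 0, and the three
# elementary operations Boole's logic is built from.
def AND(a, b):  # 1 only when both inputs are 1
    return a & b

def OR(a, b):   # 1 when at least one input is 1
    return a | b

def NOT(a):     # flips a bit
    return 1 - a

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```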
In the earliest days of computing, getting a machine to execute Boolean logic was difficult and cumbersome. And so a computer—basically any device that could conduct operations using Boolean logic—required dozens of clumsy mechanical parts. But a key breakthrough came in 1938, when Claude Shannon, then a master’s student at the Massachusetts Institute of Technology, realized electronic circuits could be built to utilize Boolean logic—with on and off representing 1 and 0. It was a transformative discovery, which paved the way for computers built using electronic components. The first programmable, electronic, digital computer would famously be used by a team of Allied codebreakers, including Alan Turing, during World War II.
Two years after the end of the war, scientists at Bell Labs developed the transistor—a type of semiconductor device, made from a material that conducts electricity under some conditions and blocks it under others. You could build useful switches out of semiconductors. These in turn could be used to build “logic gates”—devices that could do elementary logic calculations. Many of these logic gates could be stacked together to form a useful computing device.
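To illustrate what stacking gates means (a sketch of my own, not the book’s), here is a half-adder, the elementary circuit that adds two bits, wired from a single gate type:

```python
# A sketch (not from the book): stacking one gate type, NAND, into a
# half-adder -- the elementary circuit that adds two bits.
def NAND(a, b):
    return 0 if (a and b) else 1

def XOR(a, b):          # built purely from NAND gates
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

def AND(a, b):          # NAND followed by NOT (itself a NAND)
    return NAND(NAND(a, b), NAND(a, b))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```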
This may sound technical, but the implications were simple: the new transistors were smaller and more reliable than the valves that were used in the earliest electronic components, and they paved the way for more sophisticated computers. In December 1947, when scientists built the first transistor, it was clunky and patched together from a large number of components, including a paper clip. But it worked. Over the years, transistors would become less ad hoc and more consistently engineered.
From the 1940s onwards, the goal became to make transistors smaller. In 1960, Robert Noyce at Fairchild Semiconductor developed the world’s first “integrated circuit,” which combined several transistors into a single component. These transistors were tiny and could not be handled individually by man or machine. They were made through an elaborate process a little like chemical photography, called photolithography. Engineers would shine ultraviolet light through a film with a circuit design on it, much like a child’s stencil. This imprinted a circuit onto a silicon wafer, and the process could be repeated several times on a single wafer—until there were several transistors layered on top of one another. Each wafer might contain several identical copies of circuits, laid out in a grid. Slice off one copy and you have a silicon “chip.”
One of the first people to understand the power of this technology was Gordon Moore, a researcher working for Noyce. Five years after his boss’s invention, Moore realized that the physical area of integrated circuits was shrinking by about 50 percent every year, without any decrease in the number of transistors. The films—or “masks”—used in photolithography were getting more detailed; the transistors and connections smaller; the components themselves more intricate. This reduced costs and improved performance. Newer chips, with their smaller components and tighter packing, were faster than older ones.
Moore looked at these advances, and in 1965, he came up with a hypothesis. He postulated that these developments would double the effective speed of a chip for the same cost over a certain period of time.11 He eventually settled on the estimate that, every 18–24 months, chips would get twice as powerful for the same cost. Moore went on to cofound Intel, the greatest chip manufacturer of the twentieth century. But he is probably more famous for his theory, which became known as “Moore’s Law.”
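A back-of-the-envelope sketch of how that doubling rule compounds (my arithmetic, not Moore’s or the book’s):

```python
# Rough compounding of Moore's estimate (my arithmetic, not the book's):
# doubling every `doubling_years` years gives 2 ** (years / doubling_years)
# total growth. Over 1971-2015 that lands between roughly 4 million-fold
# (24-month doublings) and 700 million-fold (18-month doublings), a range
# that brackets the ~10-million-fold rise in transistor counts cited later
# in the chapter.
def moores_law_factor(years, doubling_years):
    return 2 ** (years / doubling_years)

for months in (18, 24):
    factor = moores_law_factor(2015 - 1971, months / 12)
    print(f"doubling every {months} months, 1971-2015: ~{factor:,.0f}x")
```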
This “law” is easy to misunderstand; it is not like a law of physics. Laws of physics, based on robust observation, have a predictive quality. Newton’s Laws of Motion cannot be refuted by everyday human behavior. Newton told us that force equals mass times acceleration—and this is almost always true.12 It doesn’t matter what you do or don’t do, what time of day it is, or whether you have a profit target to hit.
Moore’s Law, on the other hand, is not predictive; it is descriptive. Once Moore outlined his law, the computer industry—from chipmakers to the myriad suppliers who supported them—came to see it as an objective. And so it became a “social fact”: not something inherent to the technology itself, but something wished into existence by the computer industry. The materials firms, the electronic designers, the laser manufacturers—they all wanted Moore’s Law to hold true. And so it did.13
But that did not make Moore’s Law any less powerful. It has been a pretty good guide to computers’ progress since Moore first articulated it. Chips did get more transistors. And they followed an exponential curve: at first getting imperceptibly faster, and then racing away at rates that are hard to comprehend.
Take the graphs below. The top one shows the growth of transistors per microchip from 1971 to 2017. That this graph looks moribund until 2005 reflects the power of exponential growth. On the second graph, which shows the same data on a logarithmic scale—one that renders an exponential increase as a straight line—we see that, between 1971 and 2015, the number of transistors per chip multiplied nearly ten million times.
[Figure: Transistor counts per microchip, 1971–2017, shown on a linear scale (top) and a logarithmic scale (bottom). Source: Our World In Data]
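One way to see why the logarithmic view straightens the curve (a sketch using illustrative numbers, assuming a starting point of roughly 2,300 transistors in 1971 and a two-year doubling, rather than the chart’s underlying data):

```python
import math

# Illustrative sketch (assumed start of ~2,300 transistors in 1971 and a
# two-year doubling; not the chart's dataset). On a linear axis the early
# values look flat; their base-10 logarithms climb by the same amount each
# step -- a straight line.
for year in range(1971, 2016, 4):
    count = 2300 * 2 ** ((year - 1971) / 2)
    print(f"{year}: {count:>16,.0f} transistors   log10 = {math.log10(count):.2f}")
```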
The magnitude of this shift is almost impossible to conceptualize, but we can try to grasp it by focusing on the price of a single transistor. In 1958, Fairchild Semiconductor sold one hundred transistors to IBM for $150 apiece.14 By the 1960s, the price had fallen to $8 or so per transistor. By 1972, the year of my birth, the average cost of a transistor had fallen to fifteen cents,15 and the semiconductor industry was churning out between one hundred billion and one trillion transistors a year. By 2014, humanity produced 250 billion billion transistors annually. Each second, the world’s “fabs”—the specialized factories that turn out transistors—spewed out eight trillion transistors: roughly twenty-five times the number of stars in the Milky Way.16 The cost of a transistor had dropped to a few billionths of a dollar.
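Those annual and per-second figures are consistent with one another, as a quick check shows (my arithmetic, using the chapter’s own numbers):

```python
# Quick consistency check (my arithmetic, not the book's): eight trillion
# transistors per second, sustained for a year, is on the order of
# 250 billion billion (2.5e20) transistors.
per_second = 8e12                       # eight trillion per second
seconds_per_year = 365 * 24 * 60 * 60   # about 31.5 million seconds
print(f"{per_second * seconds_per_year:.2e} transistors per year")  # ~2.52e+20
```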
Why does this matter? Because it led computers to improve at an astonishing rate. The speed at which a computer can process information is roughly proportional to the number of transistors that make up its processing unit. As chips gained transistors, they got faster. Much faster. At the same time, the chips themselves were getting cheaper.
This extraordinary drop in price is what drove the computing revolution of my teenage years, making my BBC Master so much better than my ZX81. And since then, it has transformed all of our lives again. When you pick up your smartphone, you hold a device with several chips and billions of transistors. Computers—once limited to the realms of the military or scientific research—have become quotidian. Think of the first electronic computer, executing Alan Turing’s codebreaking algorithms in Bletchley Park in 1945. A decade later, there were still only 264 computers in the world, many costing tens of thousands of dollars a month to rent.17 Six decades on, there are more than five billion computers in use—including smartphones, the supercomputers in our pockets. Our kitchen cupboards, storage boxes, and attics are littered with computing devices—at only a few years old, already too outdated for any conceivable use.
Moore’s Law is the most famous distillation of the exponential development of digital technology. Over the last half-century, computers have got inexorably faster—bringing with them untold technological, economic, and social transformations. The goal of this chapter is to explain how this shift has come about, and why it looks set to continue for the foreseeable future. It will also serve as an introduction to the defining force of our age: the rise of exponential technologies.
‱
Put in the simplest terms, an exponential increase is anything that goes up by a constant proportion in each fixed period of time. A linear process is what happens...

Table of Contents

  1. Cover Page
  2. The Exponential Age
  3. Title Page
  4. Copyright
  5. Contents
  6. Preface - The Great Transition
  7. Chapter One - The Harbinger
  8. Chapter Two - The Exponential Age
  9. Chapter Three - The Exponential Gap
  10. Chapter Four - The Unlimited Company
  11. Chapter Five - Labor’s Loves Lost
  12. Chapter Six - The World is Spiky
  13. Chapter Seven - The New World Disorder
  14. Chapter Eight - Exponential Citizens
  15. Conclusion - Abundance and Equity
  16. Acknowledgments
  17. Notes
  18. Select Bibliography
  19. About the Author
Citation Styles for The Exponential Age

APA 6 Citation

Azhar, A. (2021). The Exponential Age ([edition unavailable]). Diversion Books. Retrieved from https://www.perlego.com/book/2088500/the-exponential-age-how-the-next-digital-revolution-will-rewire-life-on-earth-pdf (Original work published 2021)

Chicago Citation

Azhar, Azeem. (2021) 2021. The Exponential Age. [Edition unavailable]. Diversion Books. https://www.perlego.com/book/2088500/the-exponential-age-how-the-next-digital-revolution-will-rewire-life-on-earth-pdf.

Harvard Citation

Azhar, A. (2021) The Exponential Age. [edition unavailable]. Diversion Books. Available at: https://www.perlego.com/book/2088500/the-exponential-age-how-the-next-digital-revolution-will-rewire-life-on-earth-pdf (Accessed: 15 October 2022).

MLA 7 Citation

Azhar, Azeem. The Exponential Age. [edition unavailable]. Diversion Books, 2021. Web. 15 Oct. 2022.