PART I
WHAT WORK IS WORTH
CHAPTER 1
AS AMERICAN AS ECONOMIC PIE
The abandonment of the American worker began in the middle of the last century. No particular date marks the moment; the process unfolded gradually, pushed along by evolving economic theories and the misguided public policies based on them. But it would be a mistake to call the abandonment accidental. The approach to economic policy that emerged after the Great Depression and the Second World War discounted the interests of the typical worker and the stability of his social environment in favor of faster overall national growth and greater consumption, including by redistributing money to those left behind. Policy makers understood the implications of the ideas they embraced and the actions they took, and they largely accomplished their goals. Even today, mainstream politicians struggle to comprehend the popular disgruntlement about what they perceive as clear achievements.
The worker’s dilemma can be linked with two major developments in postwar economic thinking, which combined to produce the central metaphor of modern American politics: the economic pie. The first was the overwhelming importance assigned to measurement of the economy’s total size. This had been critical to the federal government’s Keynesian response to the Great Depression, which relied on public spending to boost demand and thus production. Such management of the economy required accurate knowledge of production levels and trends, so the U.S. Department of Commerce developed the system of national accounting that became the GDP, a Herculean effort whose leader, Simon Kuznets, would win the Nobel Prize in Economic Sciences for his work on economic growth.1 When the Depression gave way to a global military conflict, the outcome of which would turn on the industrial capacity of the Allied and Axis economies, GDP became an existential concern.
As the economy regained its peacetime footing, national accounts recorded fewer M4 Sherman tanks headed to the front and more Chevrolet Bel Air convertibles destined for the suburbs. Notwithstanding Kuznets’s warning to Congress that “the welfare of a nation can … scarcely be inferred from a measurement of national income,”2 GDP transitioned smoothly into the primary measure of prosperity, and GDP growth became the primary goal of economic policy. Long after saturation bombing ended, and even after national economies had revived, cross-country comparisons of GDP remained the means for assessing national power; GDP per capita defined a citizenry’s well-being.
The second key mid-century development in economic thinking was the ascent of consumers and the priority given to their interests at the expense of producers. Although this observation conjures a vision of two constituencies vying for the same resources, here the dynamic is more complex. Every individual is both a producer and a consumer, the economy an engine of both production and consumption. If unions drive wages higher and prices rise, households might benefit in their paychecks and suffer in the checkout line simultaneously. If cheap imports drive domestic manufacturers out of business, the reverse might be true. The choice of which identity gets preference has substantial consequences for how we define prosperity; a goal of rising productivity for all workers leads toward a very different policy agenda from one that aims to maximize what each household can consume.
For most of history, drawing a distinction between the roles of consumer and producer would have meant little. While individuals within a family or other close-knit social group have always specialized in certain functions, as a unit, they once relied almost exclusively on their collective output to sustain themselves. Increases in consumption were increases in production, and vice versa. But the story of economic development since at least the start of the Industrial Revolution has been in large part a story of disaggregating these activities. Increased specialization has driven the productivity gains and innovation responsible for the stunning improvement of material living standards around the world.
Households began to specialize in particular outputs and trade within their communities to meet their needs. Trade between communities stitched together national economies that shared a common language, currency, legal system, and physical infrastructure. Topeka supplied wheat; Detroit, cars; Louisville, baseball bats. In the era of globalization, entire nations produce surpluses of certain goods and services that they trade for the surpluses of others.
Meanwhile, the creation of various financial products allowed economic actors, whether individuals or nations, not only to consume different things than they produce but also to do so at different times. When we say that someone is saving money, we mean that she is converting current production into future consumption; a borrower, by contrast, funds consumption now through a promise to produce later. Government influences the roles of producer and consumer too, using its taxing and spending powers to translate the production of some into the consumption of others.
* * *
As the activities of production and consumption drifted further apart, policy makers increasingly adopted the consumption lens. This had long been a tenet of classical liberalism: “consumption is the sole end and purpose of all production; and the interest of the producer ought to be attended to, only so far as it may be necessary for promoting that of the consumer,” wrote Adam Smith in The Wealth of Nations. “The maxim is so perfectly self-evident, that it would be absurd to attempt to prove it.”3 But only with the enormous influence of Keynesian economics did the principle entrench itself. Although GDP does refer to gross domestic production, the initial premise of its measurement was to ensure sufficient demand during the Depression. In the consumer-driven boom of the postwar years, it was only natural to view GDP as a measure of what people were consuming, and the primary goal of society as growth in consumption.4
The broader 1960s cultural shift toward individualism and the priority placed on fulfilling desires also moved the consumer toward the economy’s center. In modern America, efforts to promote the virtue of production over the vice of consumption are often regarded as archaic curiosities. “There is almost nothing more important we can do for our young than convince them that production is more satisfying than consumption,” wrote Republican senator Ben Sasse in his best-selling 2017 book The Vanishing American Adult.5 In its review, The Atlantic characterized this view as “stoicism” and “self-denial.”6
These trends helped bring about a dramatic expansion of the welfare state. Trillions of dollars poured into low-income households as the welfare system sought to guarantee an individual’s right to consumption, while doing nothing about (if not actively retarding) his ability to become more productive. Today, a welfare benefit like the Supplemental Nutrition Assistance Program (SNAP, or “food stamps”) gets credit for “lifting people out of poverty” merely because the benefit’s cash value raises the recipients’ income above the poverty threshold, even though it does nothing to help them gain a foothold in the economy and provide for themselves.
That GDP offers a reliable proxy for prosperity and that each individual’s satisfaction depends on the share of GDP she can consume are the key components in the concept of the economic pie, which was born in the postwar years as well. When serving a pie, each portion’s size depends on both the size of the dish and the share allocated to each slice. Likewise, the thinking went, each person’s consumption depends on the size of the overall economy and the share he receives. Fighting over shares is a zero-sum game, but if we concentrate on baking an ever-larger pie, then everyone’s slice can grow. And who doesn’t like pie?
The tenets of this “economic piety” were quickly embraced and remain widely accepted today. The phrase economic pie first appeared in the presidential lexicon in 1952, when Harry Truman quoted from a Business Week article that used the term. John F. Kennedy used it when addressing the U.S. Chamber of Commerce. Presidents Lyndon Johnson, Gerald Ford, Ronald Reagan, George H. W. Bush, Bill Clinton, and Barack Obama used it too.7 The media and think tanks across the political spectrum bandy it about with ease.
Republicans tend to promote free markets that will grow the pie rapidly, while grudgingly accepting a role for government in apportionment. Democrats focus more on the role of government in guaranteeing big enough portions for all but generally recognize that more growth will mean more to go around. On its own terms, this approach has delivered. The overall economy has grown enormously: from 1975 to 2015, the nation’s GDP increased threefold.8 Redistribution has widened the smaller slices: during the same period, spending on programs targeting lower-income households increased fourfold.9 Federal regulators’ budgets expanded faster still,10 yet the American economy remained the dynamic and innovative envy of the world. For Americans of all socioeconomic strata, material living standards, access to technology, and consumer variety all marched steadily higher.
* * *
Tempering these impressive gains, however, were a variety of costs: the other side of the trade-offs made in pursuit of growth. Cheap goods and plentiful transfer payments ensured that nearly all Americans could afford cable television and air conditioning11 but not that they could build fulfilling lives around productive work, strong families, and healthy communities. To the contrary, cheap goods and plentiful transfer payments tended to undermine those other priorities. Consistently, segments of society that were thriving saw their fortunes improved, while struggling segments faced further distress.
The prevailing policy approach acknowledges the existence of economic losers but holds that any losses are exceeded by the gains to winners, which means that with careful redistribution, everyone can emerge ahead. But what if people’s ability to produce matters more than how much they can consume? That ability cannot be redistributed. And what if smaller losses for those at the bottom of the economic ladder are much more consequential to them than the larger gains for those already on top? Under those conditions, rising GDP will not necessarily translate into rising prosperity.
Such considerations have implications as well for society’s longer-term trajectory. Even if gains exceed the costs initially, what happens if the losses undermine stable families, decimate entire communities, foster government dependence, and perhaps contribute to skyrocketing substance abuse and suicide rates? What if the next generation, raised in this environment, suffers as well, perhaps reaching adulthood with even lower productive capacity? What if, in the meantime, cheap capital from foreign savings has fueled enormous increases in government and consumer debt, while the industrial policies of foreign governments have left the American economy with fewer opportunities to create well-paying jobs for less-skilled workers? Such costs show up nowhere in GDP, at least initially. Sadly, they appear to have been much more than hypothetical and much costlier than anyone imagined.
While the Great Recession of 2007–9 is often understood as the catalyst for the economic frustration of the next decade, not since January 2004 has a majority of Americans told Gallup that they are “in general, satisfied with the way things are going in the United States at this time.”12 In the quarter-century prior to the Great Recession, median weekly earnings for full-time workers rose only 1 percent in real terms (not 1 percent per year, 1 percent total), and that increase was confined to women and to those with college degrees. Among all men, and among all people with less than a bachelor’s degree, full-time earnings declined.13 While in 1979, the typical man with a high school degree could support a family of four at more than twice the poverty line, by 2007, his earnings cleared the threshold by less than 50 percent.14
And those are the figures for people who were working. After peaking at 84.5 percent of the population in 1997, the share of prime-age Americans (twenty-five to fifty-four years old) either working or looking for work began an unprecedented decline, falling by 2015 to 80.8 percent. A three- or four-point decline seems small, but it represents more than 4 million people missing from the workforce, which exceeded the total number of unemployed prime-age workers still in the market. Count the “unworking,” who are excluded from standard statistics, and the unemployment rate doubled.15
Furthermore, these data count any work as employment. Social thinker Nicholas Eberstadt has shown that total paid hours increased only 4 percent during 2000–2015, despite an 18 percent rise in population; work per adult civilian fell 12 percent.16 At the same time, the share of employment in “alternative work arrangements” (temps, independent contractors, and freelancers) climbed from 11 percent in 2005 to 16 percent in 2015. During the decade, such jobs were the source of the nation’s entire employment gain across all age groups.17
Widening the lens beyond economic metrics reveals an even more devastating collapse of social health. Maladies once thought the province of the very poorest communities have been ravaging the working class for decades and begun making inroads even higher up the socioeconomic ladder. Readers often think of Hillbilly Elegy, J. D. Vance’s memoir of Appalachian dysfunction, as depicting the social circumstances that fed Donald Trump’s rise. But Vance was not describing post–financial crisis America; the backdrop for his troubled upbringing was the go-go 1990s.
In Coming Apart, a study of demographic and cultural trends during the period from 1960 to 2010, Charles Murray described the fate of the 30 percent of Americans with no more than a high school degree working in a “blue-collar job, mid- or low-level service job, or a low-level white-collar job.” To control for any race-related factors, Murray focused specifically on whites.18 Within that group, he found that the married share of thirty- to forty-nine-year-olds declined from 84 percent in the 1960s and 1970s to 48 percent by 2010. Fully 95 percent of children were living with both biological parents when the mother turned forty in the 1960s, but by the 2000s, the figure was plunging toward 30 percent. Likewise, between the 1970s and the 2000s, the share of thirty- to forty-nine-year-olds not involved in any secular or religious organization tripled to more than 30 percent. By 2010, only 20 percent said that, generally, “people can be trusted”; fewer than half believed that others “try to be fair.” Those figures were declining too. In barely half of households was a full-time worker present.19
Murray’s focus on whites for purposes of analytical clarity does not imply that they are uniquely affected by these trends. To the contrary, his objective was to show that alarming conditions once associated with minority communities in America were now persistent across all races. What’s new is not the challenge of social decay but rather the way it has...