1
Science on Trial
ON OCTOBER 22, 2012, in the small Italian town of L'Aquila, seven earthquake experts were convicted of manslaughter and sentenced to six years in prison. The prosecutor claimed that they were responsible for the death of 309 residents in a major earthquake in 2009 due to their failure to adequately assess and communicate seismic risks ahead of time. In the three months preceding the earthquake, the city had experienced an event that experts call a seismic swarm: two or three low-level tremors daily. An additional fifty-seven tremors took place in the five days before. Residents were unnerved and turned to scientists for guidance on whether these tremors signaled a major earthquake, and if so, whether they should evacuate the city. Their worries were exacerbated when a local lab technician named Giampaolo Giuliani began to predict a major earthquake on the basis of his measurement of radon gas levels. The scientific community had repeatedly rejected the reliability of radon measurements for short-term predictions of earthquakes, and Giuliani had been denied funding for his research several times because his work was insufficiently scientific. But this did not stop him from setting up a website to post daily radon readings and sharing his predictions with the locals. A few days before the earthquake, the mayor issued a gag order on Giuliani for fear that his website would provoke panic.
It was in this context that the Italian Civil Protection Department and local officials decided to hold a meeting with seven seismologists to comment on the probability that the seismic swarm in L'Aquila might precede a major earthquake. The scientific opinion was that this was quite rare. According to the meeting minutes, one of the participating scientists said, "It is unlikely that an earthquake like the one in 1703 could occur in the short term, but the possibility cannot be totally excluded." The meeting was short and followed by a press conference in which Bernardo De Bernardinis, vice director of the Civil Protection Department, announced that the situation was "certainly normal," adding, "The scientific community tells me there is no danger because there is an ongoing discharge of energy." This press conference was the grounds for the charges that led to the scientists' conviction. The charge was not a failure to predict the earthquake, which the prosecutor recognized was not possible, but rather the misleading assurance by a group of respected experts that there was no danger. He claimed that this message had led residents, especially the younger and more educated ones, to change their plans and stay in L'Aquila, with disastrous consequences.
This small but dramatic episode illustrates some of the key features of the use and misuse of scientific advice in public policy. On the one hand, it shows the dependence of citizens and public officials on scientific expertise on a matter literally of life and death. The residents of L'Aquila turned to science for an explanation in the face of an unusual and frightening natural event. The science was crucial on this issue. Attempting to see the problem merely as a conflict over values, such as whether the residents were the sorts of people who would leave their city when faced with an existential threat, would be to miss the point. Factual questions mattered: What was the likelihood of a major earthquake, and what was the risk of harm to the residents in the event of an earthquake?
On the other hand, the incident exposes the limits of decision-making on the basis of scientific knowledge. Like many other areas of science, though more so than most, earthquake science is uncertain and inexact. Scientists have become increasingly capable of predicting the likelihood that an earthquake will strike a given area within a given time period, but there is still no accepted scientific method for reliable short-term prediction. The seismologists who were consulted had some data on the likelihood of a major earthquake in the days following a seismic swarm, but these findings were far from conclusive. Given the uncertainty and limits of reliable knowledge, residents' attitudes toward risk were critical to determining the appropriate earthquake response. Yet ironically, only the lab technician Giuliani seemed to appreciate the power of public fear, while local officials appealed to the authority of science in an ill-conceived attempt to reassure the public.
After the highly publicized trial, scientists and scientific associations around the world protested the conviction on the grounds that it penalized scientists for making a prediction that turned out to be incorrect. The president of the American Association for the Advancement of Science wrote a letter to the president of Italy, arguing that this kind of treatment would have a chilling effect and discourage scientists from public engagement. While the scapegoating of scientists through the criminal system may not have been an appropriate response to what had taken place, it was clearly a reaction to the mishandling of expert advice before the earthquake. The officials had denied the public a chance to understand the content and uncertainty of the science, instead delivering an authoritative judgment with an appeal to the views of "the scientific community." This created a false sense of security and deprived citizens of the ability to evaluate the information for themselves and make up their own minds about how to respond to an unknown and unquantified danger.
The L'Aquila case was a particularly dramatic example of a community's dependence on scientific advice and the disastrous results of bad advice, but it is hardly unique. The COVID-19 pandemic, which started in Wuhan, China, in late 2019, and killed nearly two million people globally within a year, exposed both the dependence of governments on scientific advice and cracks in this relationship on a much greater scale. In the face of a new and catastrophic risk, the lives of billions depended on scientists' ability to study the behavior of the novel coronavirus, provide policy advice to governments, and produce safe and effective vaccines. Governments turned to scientists for help, and scientists delivered remarkable amounts of new knowledge in a short period of time. At the same time, this episode showed the difficulties of using scientific knowledge under conditions of uncertainty and disagreement, and the severe costs of failure. Many governments claimed to be following the science while pursuing wildly different policies. Scientists publicly disagreed among themselves as well as with government policies. The science itself was evolving rapidly. Key aspects of the disease, from transmission and fatality rates to the duration of immunity, were unknown. Scientists and public health officials who appeared at regular press conferences focused on short-term health objectives, while disregarding the economic and social impacts of policies as well as broader conceptions of health. Their assumptions were not always disclosed or scrutinized. As appeals to the authority of scientific models and findings dominated public discourse, rejections and dismissals of scientific authority from politicians and the public also intensified.
The COVID-19 response of many countries involved serious mistakes with disastrous results. Social scientists, public health experts, and physicians are studying the effects of these policies and trying to explain why some nations fared better than others. It is difficult to diagnose the failure, however, without relying on an account that articulates the sources of tension in the relationship between science and democracy, and examines better and worse ways to mitigate them. This book seeks to offer such an account.
What are the dilemmas of scientific advisory committees and their proper role within broader democratic decision-making procedures? How should the certainty, reliability, and completeness of available scientific knowledge affect the procedures for its use? Is it appropriate to expect citizens to engage with the technicalities of science? How are questions about the use of science in a democratic society influenced by broader decisions about the funding, design, and conduct of scientific research? These are the questions I set out to answer. The answers, in turn, will help us identify the structural tensions in the science-democracy relationship, and distinguish them from contingent problems due to the moral failings or incompetence of individuals occupying prominent political or scientific positions at a particular time.
Our ability to act on some of the biggest problems of our times, such as pandemics, climate change, biotechnology, nuclear weapons, or environmental issues, requires relying on knowledge provided by scientists and other experts. The modern state has struck an unprecedented partnership with science, taking scientific inquiry as its authoritative source of knowledge and the means for bringing about better policy outcomes. New scientific research determines what we see as our problems and the range of options we have for solving them. Meanwhile, contemporary political life is increasingly characterized by pathological treatments of expertise, with denials of science and distrust of scientists, on the one hand, and appeals to the authority of experts and complaints about the ignorance of the citizenry, on the other. These attitudes are intensified in reaction to one another: frustration with denial and pseudoscience leads to increased appeals to the authority of scientists, which in turn generates resentment, and more denial. It is a vicious cycle.
The partnership between democracy and expertise is intrinsically unstable. Democracy, rule by the people, holds out the promise that the people can shape their collective life by making decisions together, either directly or through elected representatives. Expert knowledge threatens to alter or limit the possibilities for democratic decision-making. It presents a rival source of authority in the public sphere, based on truth rather than agreement. This creates the danger that the authority of experts and their claims to objective knowledge will crowd out the space for democratic judgment about how to shape a collective existence. At the same time, scientific experts have no direct access to political power. The truth of scientific claims may not depend on the number of people who believe in them, but their uptake in politics inevitably requires persuading the many. In the realm of politics, scientists must appeal to people who do not share and may not understand the scientific community's methods for settling the truth. Citizens and their representatives ultimately retain the right to reject scientific knowledge, which is a right that they exercise quite often.
Efforts to eliminate this inherent tension would be problematic for both science and politics. Determining scientific truth democratically would be irrational and dangerous, while justifying democratic decisions by appeal to standards of scientific correctness would set a standard both impossibly high and inappropriate for politics. The legitimacy of democratic decisions derives not from their scientific credentials but instead from the fact that those who are subjected to them have had a say in the decision process. The challenge, then, is to devise ways for expertise and democracy to coexist productively. Expert knowledge could be used to expand the power of democracy or lead to the alienation of citizens from a politics that seems to defy their control. The success of the relationship between democracy and expertise depends on whether democracies can find ways to use expertise to further their own ends and produce good outcomes. Recent failures in the use of science for political decisions, not only on COVID-19 but also on climate change, vaccines, genetically modified organisms (GMOs), and earthquake warnings, suggest that it is necessary to rethink how the relationship between science and democracy should be structured. These are not just failures of political practice; they are also failures of political theory.
The tension between expertise and democracy is not a new problem, but it is important to distinguish between two different forms that the problem has taken historically. The first challenges the justification for democratic rule given the alternative of rule by experts. If there are experts who possess superior knowledge about what is best, the argument goes, then having them rule would be in everyone's interest. Participation by those who know less would simply result in worse outcomes for all. This was one of Plato's arguments for philosopher kings, and it is the main claim in recent arguments for epistocracy. The relevant expertise in this case is knowledge of the good or what would be best for the community; it is a form of moral and political knowledge, rather than scientific or technical. This line of reasoning is usually countered by questioning whether such knowledge exists, whether we can identify or agree on those who possess it, whether a small elite or the demos as a whole is more likely to possess it, and if a small elite, whether it can be trusted to rule incorruptibly. These arguments are about the relationship between knowledge and the legitimacy of democratic...