"The devil's in the details" is a common saying about policy proposals. Perhaps we need a similar saying about policy research: for example, "the devil's in the caveats." No simple list of caveats can bridge the enormous gap in understanding between the specialists who conduct policy research and the citizens and policymakers who are responsible for policy but often have overblown expectations about what policy research can do.
Consider this headline from MIT Technology Review, December 2016, "In 2017, We Will Find Out If a Basic Income Makes Sense."1 At the time, several countries were preparing to conduct experiments on the Universal Basic Income (UBI), a policy to put a floor under everyone's income. But none of the experiments had plans to release any findings at all in 2017 (nor did they). The more important inaccuracy of this article was that it reflected the common but naïve belief that UBI experiments are capable of determining whether UBI "makes sense." Social science experiments can produce useful information, but they cannot answer the big questions that most interest policymakers and voters, such as does UBI work or should we introduce it.
The limited contribution that social science experiments can make to big policy questions like these would not be a problem if everyone understood it, but unfortunately, the article in MIT Technology Review is no anomaly. It's a good example of the misreporting on UBI and related experiments that has gone on for decades.2 MIT Technology Review was founded at the Massachusetts Institute of Technology in 1899. Its website promises "intelligent, lucid, and authoritative … journalism … by a knowledgeable editorial staff, governed by a policy of accuracy and independence."3 Although the Review's expertise is in technology rather than in scientific research, it is the kind of publication nonspecialists expect can help them understand the limits and usefulness of scientific research.
Policy discussion, policy research, and policymaking involve diverse groups of people with widely differing backgrounds: citizens, journalists, academics, elected officials, and appointed public servants (call these last two "policymakers"). Although some people fit more than one group, the groups as a whole don't have enough shared background knowledge to achieve mutual understanding of what research implies about policy. Researchers often do not understand what citizens and policymakers expect from research, while citizens and policymakers often do not understand the inherent difficulties of policy research or the difference between what research shows and what they want to know.
Specialists usually include a list of caveats covering the limitations of their research, but caveats are incapable of doing the work researchers often rely on them to do. A dense, dull, and lengthy list of caveats cannot provide nonspecialists with a firm grasp of what research does and does not imply about the policy at issue. Therefore, even the best scientific policy research can leave nonspecialists with an oversimplified, or simply wrong, impression of its implications for policy. People who do not understand the limits of experiments also cannot understand the value that experiments do have.
Better-written, longer, or clearer caveats won't solve the problem either. The communication problem, coupled with the inherent limitations of social science experimentation, calls for a different approach to bridge the gap in understanding.
This book considers how these sorts of problems might affect future UBI experiments and suggests ways to avoid them. As later chapters explain, UBI has many complex economic, political, social, and cultural effects that cannot be observed in any small-scale, controlled experiment. Even the best UBI experiment makes only a small contribution to the body of knowledge on the issue. It addresses questions only partially and indirectly while leaving many others unanswered.
Citizens and policymakers considering introducing UBI are understandably interested in larger issues. They want answers to the big questions, such as does UBI work as intended; is it cost-effective; should we introduce it on a national level? The gap between what an experiment can show and the answers to these big questions is enormous. Within one field, specialists can often achieve mutual understanding of this gap with no more than a simple list of caveats, many of which are self-evident and need not be mentioned. Across different fields, mutual understanding quickly gets more difficult, and it becomes extremely difficult between groups as diverse as the people involved in the discussion of UBI and those involved in the discussion of UBI experiments.
The process that brought about the experiments in most countries is not likely to produce research focused on bridging that gap in understanding. The demand for the current round of experiments seems to be driven more by the desire to have a UBI experiment than by the desire to learn anything specific about UBI from an experiment. An unfocused demand for a test puts researchers in a position to learn whatever an experiment can show, whether or not it is closely connected to what citizens and policymakers most want to know.
The vast majority of research specialists who conduct experiments are not fools or fakers. They will look for evidence that makes a positive and useful contribution to the body of knowledge about UBI. But the effort to translate that contribution into a better public understanding of the body of evidence about UBI is far more difficult than often recognized. This communication problem badly affected many past experiments and is in danger of recurring.
To understand the difficulty of the task, imagine a puzzle strewn across the floor of a large, dark, locked room. A map of the entire puzzle, assembled together, provides answers to the big questions: does it work, and should we implement it? An experiment shines a light through a window, lighting up some of the puzzle pieces, so that researchers can attempt to map how they might fit together. They can easily map the pieces near the window, but further away, their view gets dimmer, the accuracy of their map decreases, and in dark corners of the room many pieces remain unobservable.
Although scientists like to solve entire puzzles when possible, under normal circumstances, they have to settle for something less ambitious. That's why the basic goal of scientific research is to increase the sum of knowledge available to the scientific community, even if that increase is very small. In terms of the example, a research project can achieve the basic goal by mapping even one new piece, even if the puzzle as a whole remains unsolved and the map is only readable to other scientists.
As the MIT Technology Review article illustrates, nonspecialists tend to expect something far more definitive, as if a social science experiment had the same goal as a high school science test: to determine whether the subject passes or fails. People often expect research to produce an estimate of whether UBI works or whether the country should introduce it. In terms of the metaphor, they expect researchers to provide their best estimate of the solution to the entire puzzle.
If researchers present their findings in the normal way for social scientists, they present something fundamentally different from what citizens and policymakers are looking for and possibly expecting. The potential for misunderstanding is enormous when research reports say something to the effect of here are the parts of the puzzle we were able to map to an audience looking for something to the effect of here is our best estimate of the solution to the entire puzzle. Caveats do not and cannot draw the necessary connection, which requires something more to the effect of here is how the parts we were able to map can be used toward a larger effort to find the solution to the entire puzzle and how close or far we remain from it.
In research reports, caveats typically focus not on the connection between the two goals, but on trying to help people understand research on its own terms. In the analogy, caveats tend to focus on the areas that experiments were able to map: how did they map this area; what does it mean to map this area; how accurate is the map of this area, and so on. The relationship between the areas mapped and the solution to the whole puzzle is often covered by one big caveat so seemingly simple that it often goes unstated: the areas we mapped are far from a solution to the entire puzzle. In other words, the information gathered about UBI in an experiment is far from a definitive, overall evaluation of UBI as a policy. As obvious as that caveat might be to researchers, it is not at all obvious to many nonspecialists.
Of course, non...