A Critical Analysis of Basic Income Experiments for Researchers, Policymakers, and Citizens

About This Book

At least six different Universal Basic Income (UBI) experiments are underway or planned right now in the United States, Canada, the United Kingdom, Finland, and Kenya. Several more countries are considering conducting experiments. Yet, there seems to be more interest simply in having UBI experiments than in exactly what we want to learn from them. Although experiments can produce a lot of relevant data about UBI, they are crucially limited in their ability to enlighten our understanding of the big questions that bear on the discussion of whether to implement UBI as a national or regional policy. And past experience shows that the results of UBI experiments are particularly vulnerable to misunderstanding, sensationalism, and spin. This book examines the difficulties of conducting a UBI experiment and reporting the results in ways that successfully improve public understanding of the probable effects of a national UBI. The book makes recommendations about how researchers, reporters, citizens, and policymakers can avoid these problems and get the most out of UBI experiments.


Information

Year: 2018
ISBN: 9783030038496
© The Author(s) 2018
Karl Widerquist, A Critical Analysis of Basic Income Experiments for Researchers, Policymakers, and Citizens, Exploring the Basic Income Guarantee, https://doi.org/10.1007/978-3-030-03849-6_1

1. Introduction

Karl Widerquist
Georgetown University in Qatar, Doha, Qatar

Abstract

This chapter introduces and previews the book with a broad overview of the problems involved in conducting Universal Basic Income (UBI) experiments and in reporting the results in ways that successfully increase public understanding of the issue. It argues that experimenters should work backward from the big “bottom-line questions” that are most important to the public discussion of UBI to the variables that tests can actually address, and then forward again, closely explaining the relationship between experimental findings and the things people discussing UBI as a potential national policy really want to know.

Keywords

Basic income experiments · Negative Income Tax experiments · Social science experiments · Basic income · Universal Basic Income · Inequality · Poverty
“The devil’s in the details” is a common saying about policy proposals. Perhaps we need a similar saying about policy research—for example, “the devil’s in the caveats.” No simple list of caveats can bridge the enormous gap in understanding between the specialists who conduct policy research and the citizens and policymakers who are responsible for policy but often have overblown expectations about what policy research can do.
Consider this headline from MIT Technology Review, December 2016, “In 2017, We Will Find Out If a Basic Income Makes Sense.”1 At the time, several countries were preparing to conduct experiments on the Universal Basic Income (UBI)—a policy to put a floor under everyone’s income. But none of the experiments had plans to release any findings at all in 2017 (nor did they). The more important inaccuracy of this article was that it reflected the common but naïve belief that UBI experiments are capable of determining whether UBI “makes sense.” Social science experiments can produce useful information, but they cannot answer the big questions that most interest policymakers and voters, such as does UBI work or should we introduce it.
The limited contribution that social science experiments can make to big policy questions like these would not be a problem if everyone understood it, but unfortunately, the article in MIT Technology Review is no anomaly. It’s a good example of the misreporting on UBI and related experiments that has gone on for decades.2 MIT Technology Review was founded at the Massachusetts Institute of Technology in 1899. Its website promises “intelligent, lucid, and authoritative journalism by a knowledgeable editorial staff, governed by a policy of accuracy and independence.”3 Although the Review’s expertise is in technology rather than in scientific research, it is the kind of publication that nonspecialists expect can help them understand the limits and usefulness of scientific research.
Policy discussion, policy research, and policymaking involve diverse groups of people with widely differing backgrounds: citizens, journalists, academics, elected officials, and appointed public servants (call these last two “policymakers”). Although some people fit more than one group, the groups as a whole don’t have enough shared background knowledge to achieve mutual understanding of what research implies about policy. Researchers often do not understand what citizens and policymakers expect from research, while citizens and policymakers often do not understand the inherent difficulties of policy research or the difference between what research shows and what they want to know.
Specialists usually include a list of caveats covering the limitations of their research, but caveats are incapable of doing the work researchers often rely on them to do. A dense, dull, and lengthy list of caveats cannot provide nonspecialists with a firm grasp of what research does and does not imply about the policy at issue. Therefore, even the best scientific policy research can leave nonspecialists with an oversimplified, or simply wrong, impression of its implications for policy. People who do not understand the limits of experiments also cannot understand the value that experiments do have.
Better-written, longer, or clearer caveats won’t solve the problem either. The communication problem, coupled with the inherent limitations of social science experimentation, calls for a different approach to bridge the gap in understanding.
This book considers how these sorts of problems might affect future UBI experiments and suggests ways to avoid them. As later chapters explain, UBI has many complex economic, political, social, and cultural effects that cannot be observed in any small-scale, controlled experiment. Even the best UBI experiment makes only a small contribution to the body of knowledge on the issue. It addresses questions only partially and indirectly while leaving many others unanswered.
Citizens and policymakers considering introducing UBI are understandably interested in larger issues. They want answers to the big questions, such as does UBI work as intended; is it cost-effective; should we introduce it on a national level? The gap between what an experiment can show and the answers to these big questions is enormous. Within one field, specialists can often achieve mutual understanding of this gap with no more than a simple list of caveats, many of which are self-evident and need not be mentioned. Across different fields, mutual understanding quickly gets more difficult, and it becomes extremely difficult between groups as diverse as the people involved in the discussion of UBI and those involved in the discussion of UBI experiments.
The process that brought about the experiments in most countries is not likely to produce research focused on bridging that gap in understanding. The demand for the current round of experiments seems to be driven more by the desire to have a UBI experiment than by the desire to learn anything specific about UBI from an experiment. An unfocused demand for a test puts researchers in a position to learn whatever an experiment can show, whether or not it is closely connected to what citizens and policymakers most want to know.
The vast majority of research specialists who conduct experiments are not fools or fakers. They will look for evidence that makes a positive and useful contribution to the body of knowledge about UBI. But the effort to translate that contribution into a better public understanding of the body of evidence about UBI is far more difficult than often recognized. This communication problem badly affected many past experiments and is in danger of recurring.
To understand the difficulty of the task, imagine a puzzle strewn out over the floor of a large, dark, locked room. A map of the entire puzzle, assembled together, provides answers to the big questions—does it work, and should we implement it? An experiment shines a light through a window, lighting up some of the puzzle pieces, so that researchers can attempt to map how they might fit together. They can easily map the pieces near the window, but further away, their view gets dimmer, the accuracy of their map decreases, and in dark corners of the room many pieces remain unobservable.
Although scientists like to solve entire puzzles when possible, under normal circumstances, they have to settle for something less ambitious. That’s why the basic goal of scientific research is to increase the sum of knowledge available to the scientific community—even if that increase is very small. In terms of the example, a research project can achieve the basic goal by mapping even one new piece, even if the puzzle as a whole remains unsolved and the map is only readable to other scientists.
As the MIT Technology Review article illustrates, nonspecialists tend to expect something far more definitive, as if a social science experiment had the same goal as a high school science test: to determine whether the subject passes or fails. People often expect research to produce an estimate of whether UBI works or whether the country should introduce it. In terms of the metaphor, they expect researchers to provide their best estimate of the solution to the entire puzzle.
If researchers present their findings in the normal way for social scientists, they present something fundamentally different from what citizens and policymakers are looking for and possibly expecting. The potential for misunderstanding is enormous when research reports say something to the effect of “here are the parts of the puzzle we were able to map” to an audience looking for something to the effect of “here is our best estimate of the solution to the entire puzzle.” Caveats do not and cannot draw the necessary connection, which requires something more to the effect of “here is how the parts we were able to map can be used toward a larger effort to find the solution to the entire puzzle, and how close or far we remain from it.”
In research reports, caveats typically focus not on the connection between the two goals, but on trying to help people understand research on its own terms. In the analogy, caveats tend to focus on the areas that experiments were able to map: how did they map this area; what does it mean to map this area; how accurate is the map of this area, and so on. The relationship between the areas mapped and the solution to the whole puzzle is often covered by one big caveat so seemingly simple that it often goes unstated: the areas we mapped are far from a solution to the entire puzzle. In other words, the information gathered about UBI in an experiment is far from a definitive, overall evaluation of UBI as a policy. As obvious as that caveat might be to researchers, it is not at all obvious to many nonspecialists.
Of course, non...

Table of contents

  1. Cover
  2. Front Matter
  3. 1. Introduction
  4. 2. Universal Basic Income and Its More Testable Sibling, the Negative Income Tax
  5. 3. Available Testing Techniques
  6. 4. Testing Difficulties
  7. 5. The Practical Impossibility of Testing UBI
  8. 6. BIG Experiments of the 1970s and the Public Reaction to Them
  9. 7. New Experimental Findings 2008–2013
  10. 8. Current, Planned, and Proposed Experiments, 2014–Present
  11. 9. The Political Economy of the Decision to Have a UBI Experiment
  12. 10. The Vulnerability of Experimental Findings to Misunderstanding, Misuse, Spin, and the Streetlight Effect
  13. 11. Why UBI Experiments Cannot Resolve Much of the Public Disagreement About UBI
  14. 12. The Bottom Line
  15. 13. Identifying Important Empirical Claims in the UBI Debate
  16. 14. Claims That Don’t Need a Test
  17. 15. Claims That Can’t Be Tested with Available Techniques
  18. 16. Claims That Can Be Tested but Only Partially, Indirectly, or Inconclusively
  19. 17. From the Dream Test to Good Tests Within Feasible Budgets
  20. 18. Why Have an Experiment at All?
  21. 19. Overcoming Spin, Sensationalism, Misunderstanding, and the Streetlight Effect
  22. Back Matter