Behind Closed Doors

IRBs and the Making of Ethical Research


About This Book

Although the subject of federally mandated Institutional Review Boards (IRBs) has been extensively debated, we actually do not know much about what takes place when they convene. The story of how IRBs work today is a story about their past as well as their present, and Behind Closed Doors is the first book to meld firsthand observations of IRB meetings with the history of how rules for the treatment of human subjects were formalized in the United States in the decades after World War II. Drawing on extensive archival sources, Laura Stark reconstructs the daily lives of scientists, lawyers, administrators, and research subjects working—and "warring"—on the campus of the National Institutes of Health, where they first wrote the rules for the treatment of human subjects. Stark argues that the model of group deliberation that gradually crystallized during this period reflected contemporary legal and medical conceptions of what it meant to be human, what political rights human subjects deserved, and which stakeholders were best suited to decide. She then explains how the historical contingencies that shaped rules for the treatment of human subjects in the postwar era guide decision making today—within hospitals, universities, health departments, and other institutions in the United States and across the globe. Meticulously researched and gracefully argued, Behind Closed Doors will be essential reading for sociologists and historians of science and medicine, as well as policy makers and IRB administrators.


Information

Year: 2011
ISBN: 9780226770888
Topic: History
PART I
IRBs in Action
There are things IRB members are supposed to do: assess risks and safeguard participants’ rights. Board members say that they do these things because they are moral imperatives and also because they are the law.
Then there are things IRB members do by virtue of the social arrangement created through these laws. It is hard for a group of people to assess the risks of research and safeguard the rights of participants while seated at a conference table, particularly when two crucial people are absent from the meeting rooms: namely, the researcher and the research participant. Researchers are absent from most IRB meetings by custom. Research participants are missing by definition: people cannot be recruited for research until after the IRB has approved recruiting them. Without the researcher and the subject, how do these decisions get made?
The chapters in part 1 take up this question and answer it in different but complementary ways. The aim of this introduction is to give a sense of what IRB meetings look like from the inside before showing how IRB members make decisions. My view is not the same one you will get from reading ethics training manuals. This is not to say that the IRB members at Adams, Greenly, and Sander State University were deviant or that their meetings went off script.1 The point, rather, is that IRB members have shared understandings about how to do their work, which they do not need to articulate. Their tacit understandings are in the background of the discussions that are on display in the subsequent chapters.
The Balancing Act
IRBs deliberate over a particular subset of studies during what is called full-board review. These were the types of reviews I observed in the meetings I attended.2 Generally speaking, studies that are regarded as especially sensitive—potentially dangerous or possibly coercive ones, for example—are the studies that all the board members evaluate together, regardless of the research method. In addition to studies that receive full-board review, IRB administrators can place submissions in two other categories: “exempt” and “expedited.” The administrator or a few members can evaluate these studies independently.3
According to federal regulation, the people who carry out full-board review have to fit a general mold. In terms of membership, IRBs must include at least five people. Some may have as many as twenty. The boards I observed averaged ten. The bulk of the members are supposed to represent “expertise” in the areas in which investigators propose to conduct their research, although no board member who is actually involved in the study under review may vote on it. There must be at least one board member who is not affiliated with the institution, and one member must have “nonscientific” concerns. Operationally, these requirements are often taken to mean that these members should not hold an advanced degree. In some cases, however, these members (sometimes referred to as “community members” or “community representatives”) do have formal research training, as was the case for doctors in private practice and theologians who served on the boards I observed.4 The National Bioethics Advisory Commission has recommended that one-quarter of IRB members should be community representatives and one-quarter should not be affiliated with the institution, but most IRBs fall short of this ideal.5 The federal government strongly urges that boards include at least one man and one woman, and it more gently encourages that members of racial minority groups be included on the boards, though the regulations do not, strictly speaking, set quotas for the race and gender composition of boards.6 As of 2002, one-quarter of IRBs had memberships that were exclusively white, and seven out of ten IRBs had male majorities.7 Within these flexible guidelines, local IRBs have the latitude to develop different standards and practices.
All members of the IRBs I observed had been appointed by a university administrator (e.g., the vice president for research); many saw themselves as fulfilling a service requirement; and a few had negotiated stipends or, for faculty members, course release.8 Most IRB members with whom I spoke reported that they served because they personally enjoyed it or because they wanted to serve as a “proresearch” presence on what is commonly considered a restrictive committee at universities. Perhaps institutions make the most of the resonance of “ethics work” with other kinds of activities that are thought to be altruistic but become morally suspect if money changes hands: for example, donating blood, organs, or ova. By referring to board members’ work as a gift—in their case, a gift of time—their choices can be made to seem more ethical and good. In any event, the board members whom I observed overwhelmingly described their group as very “compatible” and “collegial,” despite differences in training and personal background. This is no coincidence: members of declarative bodies tend to be selected not only for what they know and whom they represent, but also for a capacity that anthropologist Donald Brenneis calls “amiable mutual deference,” which smooths the deliberative process.9 The group members I observed worked to be accommodating of each other, in part because of this selection bias and also because board members get authority from federal regulations insofar as they act in unison as “the IRB.”
There are three moral principles—respect for persons, beneficence, and justice—that, in theory, guide IRB members’ interpretation of the nuts and bolts of regulations. When the Congress passed the National Research Act in 1974, it required that a national commission be created to review and ideally improve the laws for human-subjects protections that had just been enacted. That body, formally called the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, worked hard on many thorny issues (e.g., prison research) for many long years. One of their final acts, required by the 1974 law, was to articulate guiding principles for research on human subjects. In 1979 they published the Belmont Report (named for the conference center where the group met) to outline the overarching spirit in which the regulations should be interpreted. The commissioners decided that the three principles would come into play in three corollary practices: making sure that the people being studied were not chosen in discriminatory ways; ensuring that participants had adequate information when they agreed to be studied; and ensuring that the risks to participants (whether physical, social, or legal) were appropriate in light of the potential benefits of the study, either for the participants or for others. The shorthand for this final task, according to the Belmont Report, was to weigh risks and benefits. For IRB members, weighing risks and benefits of a study is a daunting task, in part because the commissioners themselves regarded it as an aspiration that could never fully be achieved.10
Why three principles, you might ask, and not four or four hundred? Sociologist John Evans has explained that this was a somewhat arbitrary choice but that it is a brand of arbitrariness that works well for modern liberal governments, in which the most seemingly impersonal decisions are taken to be the most legitimate.11 One way to make decisions seem impersonal is to use rational—that is to say, highly rule-bound—decision-making techniques. Since the 1930s, rational decision making has taken the form of “balancing”—or cost-benefit analysis in one incarnation.12 It was not inevitable that cost-benefit analysis would become the main tool of rational regulatory decision making, but it has nonetheless become “the dominant logic of government.”13 The aim is to encourage people acting on behalf of the government, whether they are rule experts or knowledge experts, to make decisions that either are based on numbers or seem to be.
As a result, the language of weighing risks against benefits is pervasive among IRB members, but board members rarely do—or are even able to—use quantitative balancing in any practical sense. In the 1970s, even the members of the National Commission recognized the “metaphorical character of these terms.” Given that IRB members have (and use) a good deal of discretion when they evaluate researchers’ studies, it may be reassuring to know that the commissioners themselves felt that “only on rare occasions” would “quantitative techniques be available for the scrutiny of research protocols.” That said, the commissioners nonetheless felt that “the idea of systematic, nonarbitrary analysis of risks and benefits should be emulated insofar as possible.” The point of emulating a quantitative technique in what they acknowledged was an inherently qualitative assessment was to make the review “more rigorous and precise, while making communication between review board members and investigators less subject to misinterpretation, misinformation and conflicting judgments.” The commission’s advice to IRB members—that they metaphorically weigh risks and benefits—may have had an effect precisely opposite to the one they intended.14
The metaphor of weighing risks and benefits has come to serve many purposes in IRB meetings. At a Greenly IRB meeting, for example, the rhetoric of risk-benefit analysis oriented board members’ thinking in the broadest sense and also added humor, irony, and reflexivity to reviews. In one case, a researcher recently hired by the university submitted a protocol for a study funded through the American Heart Association that involved enrolling older men at risk of a heart attack. Board members felt the researcher’s protocol was too short (“a prĂ©cis”), his information for participants too technical (“is it too much for a consent to be understandable?”), and his general demeanor “cavalier” (“He’s from [another research university]. Maybe they’re less demanding there.”)15 More broadly, this lax kind of researcher was, in the board chair’s estimation, an example of how the university was “getting new faculty, who are also very oriented to this type of protocol.” One board member, Nathan, a reluctant bioethicist who preferred to avoid the tinge of righteousness by describing himself as just a member of the philosophy department, was concerned that these new researchers were not restrictive enough in their exclusion requirements for heart patients. A change to the research design would be entirely fair for board members to request, they decided, trying to shoehorn their good sense into the balancing metaphor. Here is an excerpt of the meeting that shows how IRB members use the rhetoric of weighing when deciding issues that do not register on scales. This excerpt also introduces some of the conventions that ethnographers use to designate in texts how people’s conversations sound in real time. (For a list of transcription conventions and their meanings, please see the appendix.)
CHAIR (UNIVERSITY RESEARCH ADMINISTRATION): Nathan raises an interesting point because one of the things we are charged with doing is to determine whether the risk [is worth the—
EDWARD (FACULTY, ARCHITECTURE): Risk is worth the benefit.]
CHAIR: Whether the uh =
DR. MORRIS (OUTSIDE PHYSICIAN): Benefit is worth the risk.
NATHAN (FACULTY, PHILOSOPHY-BIOETHICS): h Something like that.16
The language of weighing risks and benefits is more rhetorically useful, perhaps, than actual attempts to compare incommensurable things. Above all, it serves the purpose of demonstrating that a decision maker knows the rules of the game. Members of declarative bodies invoke regulatory language to show that they are aware of what they are doing, especially when they use their discretion to interpret the rules creatively.
Seeing Like a Subject
The people who do the day-to-day work of governing—who decide where to build a dam, a dump, or a sidewalk, for example—have a tendency, within some systems of government, to see only physical resources and the interests of power holders when they are preparing to make a decision. They tend not to see—not to imagine—the tangible consequences of their decisions for individual people on the ground who may be affected. Historian James Scott has called this phenomenon “seeing like a state.”17 Within the legal systems of modern liberal democracies, the people who do the day-to-day work of governing are encouraged also to “see like a subject.” In other words, rule experts and knowledge experts enact regulations that require them to imagine the perspectives of people whom the law is controlling or safeguarding. From the vantage point of a research institution, seeing like a subject is a way to reduce the chances that subjects will have reason to sue, which they are empowered to do, based on laws such as human-subjects regulations.
One example of how regulations encourage IRB members to see like a subject is in locating the crucial threshold between studies that present “no more than minimal risk” to participants and more risky studies. The distinction hinges on whether the study presents greater social, legal, or physical risks than a participant would experience in her everyday life. The question for board members, then, is what the participant’s everyday life is like, and whether the research involves experiences and procedures that are much different. If so, then the burden is on the researcher to show greater benefit.
What are the experiences of would-be research participants in their everyday lives, and how can you know? Previous studies on courts and on state administration would suggest that IRB members might think of research participants in the aggregate and of individual participants as microcosms of the broader population to which they belong.18 This was not the case among board members, though. To make decisions in IRB meetings, board members imagined the people who featured in their own lives as stand-ins for research participants. During an IRB meeting at Greenly, for example, an anthropologist on the board argued that children answering one researcher’s proposed questionnaire would experience it as “invasive” because of the racial experiences of one of his own students. Earlier in the day, the student had told the anthropologist about how “when [the student] was two years old he was given a real hard time because he was sort of dark skinned but not quite clear. And he was being told, ‘Are you black or white?’ That is a good bit of concern. [Participants] should know that this is what kind of questions [they’re] going to get.”19 At another meeting, a sports coach on the board insisted that a researcher change the information that he was planning to give participants in a bone-density study. At issue was how best to express the risk of radiation exposure so that participants could consent with full information. That board member disliked the researcher’s plan to express the radiation dosage relative to the amount from getting an X-ray: “When you say it’s going to be / that it’s the same as a normal X-ray—I mean, I can see my father going, ‘How much radiation is that?’ ”20
The people whom board members called to mind when they imagined a research subject—a relative or a student, for example—reinforced the race, class, and gender biases of the board membership. This often took place, paradoxically, during the review of studies that aimed to question conventional wisdom about health, social groups, and human experience.
Ambiguities over the boundaries of the groups that IRB members saw themselves as protecting and representing—that is, the slipperiness of terms like community and population in IRB meetings—created situations in which all IRB members could usefully explain how their own life experiences might help the board more fully imagine the pers...

Table of contents

  1. Cover
  2. Copyright
  3. Title Page
  4. Series page
  5. Dedication
  6. Contents
  7. INTRODUCTION
  8. PART I : IRBS IN ACTION
  9. PART II : SETTING IRBS IN MOTION IN COLD WAR AMERICA
  10. Acknowledgments
  11. Appendix: Ethnographic Methods
  12. Abbreviations
  13. Notes
  14. Bibliography
  15. Index