PART I
IRBs in Action
There are things IRB members are supposed to do: assess risks and safeguard participants' rights. Board members say that they do these things because they are moral imperatives and also because they are the law.
Then there are things IRB members do by virtue of the social arrangement created through these laws. It is hard for a group of people to assess the risks of research and safeguard the rights of participants while seated at a conference table, particularly when two crucial people are absent from the meeting room: namely, the researcher and the research participant. Researchers are absent from most IRB meetings by custom. Research participants are missing by definition: people cannot be recruited for research until after the IRB has approved recruiting them. Without the researcher and the subject, how do these decisions get made?
The chapters in part 1 take up this question and answer it in different but complementary ways. The aim of this introduction is to give a sense of what IRB meetings look like from the inside before showing how IRB members make decisions. My view is not the same one you will get from reading ethics training manuals. This is not to say that the IRB members at Adams, Greenly, and Sander State University were deviant or that their meetings went off script.1 The point, rather, is that IRB members have shared understandings about how to do their work, which they do not need to articulate. Their tacit understandings are in the background of the discussions that are on display in the subsequent chapters.
The Balancing Act
IRBs deliberate over a particular subset of studies during what is called full-board review. These were the types of reviews I observed in the meetings I attended.2 Generally speaking, studies that are regarded as especially sensitive (potentially dangerous or possibly coercive ones, for example) are the studies that all the board members evaluate together, regardless of the research method. In addition to studies that receive full-board review, IRB administrators can place submissions in two other categories: "exempt" and "expedited." The administrator or a few members can evaluate these studies independently.3
According to federal regulation, the people who carry out full-board review have to fit a general mold. In terms of membership, IRBs must include at least five people. Some may have as many as twenty. The boards I observed averaged ten. The bulk of the members are supposed to represent "expertise" in the areas in which investigators propose to conduct their research, although no board member who is actually involved in the study under review may vote on it. There must be at least one board member who is not affiliated with the institution, and one member must have "nonscientific" concerns. Operationally, these requirements are often taken to mean that these members should not hold an advanced degree. In some cases, however, these members (sometimes referred to as "community members" or "community representatives") do have formal research training, as was the case for doctors in private practice and theologians who served on the boards I observed.4 The National Bioethics Advisory Commission has recommended that one-quarter of IRB members should be community representatives and one-quarter should not be affiliated with the institution, but most IRBs fall short of this ideal.5 The federal government strongly urges that boards include at least one man and one woman, and it more gently encourages that members of racial minority groups be included on the boards, though the regulations do not, strictly speaking, set quotas for the race and gender composition of boards.6 As of 2002, one-quarter of IRBs had memberships that were exclusively white, and seven out of ten IRBs had male majorities.7 Within these flexible guidelines, local IRBs have the latitude to develop different standards and practices.
All members of the IRBs I observed had been appointed by a university administrator (e.g., the vice president for research); many saw themselves as fulfilling a service requirement; and a few had negotiated stipends or, for faculty members, course release.8 Most IRB members with whom I spoke reported that they served because they personally enjoyed it or because they wanted to serve as a "proresearch" presence on what is commonly considered a restrictive committee at universities. Perhaps institutions make the most of the resonance of "ethics work" with other kinds of activities that are thought to be altruistic but become morally suspect if money changes hands: for example, donating blood, organs, or ova. Referring to board members' work as a gift (in their case, a gift of time) can make their choices seem more ethical and good. In any event, the board members whom I observed overwhelmingly described their group as very "compatible" and "collegial," despite differences in training and personal background. This is no coincidence: members of declarative bodies tend to be selected not only for what they know and whom they represent, but also for a capacity that anthropologist Donald Brenneis calls "amiable mutual deference," which smooths the deliberative process.9 The group members I observed worked to be accommodating of each other, in part because of this selection bias and also because board members get authority from federal regulations insofar as they act in unison as "the IRB."
Three moral principles (respect for persons, beneficence, and justice) in theory guide IRB members' interpretation of the nuts and bolts of regulations. When Congress passed the National Research Act in 1974, it required that a national commission be created to review and ideally improve the laws for human-subjects protections that had just been enacted. That body, formally called the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, worked for many long years on many thorny issues (e.g., prison research). One of its final acts, required by the 1974 law, was to articulate guiding principles for research on human subjects. In 1979 the commission published the Belmont Report (named for the conference center where the group met) to outline the overarching spirit in which the regulations should be interpreted. The commissioners decided that the three principles would come into play in three corollary practices: making sure that the people being studied were not chosen in discriminatory ways; ensuring that participants had adequate information when they agreed to be studied; and ensuring that the risks to participants (whether physical, social, or legal) were appropriate in light of the potential benefits of the study, either for the participants or for others. The shorthand for this final task, according to the Belmont Report, was to weigh risks and benefits. For IRB members, weighing the risks and benefits of a study is a daunting task, in part because the commissioners themselves regarded it as an aspiration that could never fully be achieved.10
Why three principles, you might ask, and not four or four hundred? Sociologist John Evans has explained that this was a somewhat arbitrary choice but that it is a brand of arbitrariness that works well for modern liberal governments, in which the most seemingly impersonal decisions are taken to be the most legitimate.11 One way to make decisions seem impersonal is to use rational (that is to say, highly rule-bound) decision-making techniques. Since the 1930s, rational decision making has taken the form of "balancing," or cost-benefit analysis in one incarnation.12 It was not inevitable that cost-benefit analysis would become the main tool of rational regulatory decision making, but it has nonetheless become "the dominant logic of government."13 The aim is to encourage people acting on behalf of the government, whether they are rule experts or knowledge experts, to make decisions that either are based on numbers or seem to be.
As a result, the language of weighing risks against benefits is pervasive among IRB members, but board members rarely use (and rarely are even able to use) quantitative balancing in any practical sense. In the 1970s, even the members of the National Commission recognized the "metaphorical character of these terms." Given that IRB members have (and use) a good deal of discretion when they evaluate researchers' studies, it may be reassuring to know that the commissioners themselves felt that "only on rare occasions" would "quantitative techniques be available for the scrutiny of research protocols." That said, the commissioners nonetheless felt that "the idea of systematic, nonarbitrary analysis of risks and benefits should be emulated insofar as possible." The point of emulating a quantitative technique in what they acknowledged was an inherently qualitative assessment was to make the review "more rigorous and precise, while making communication between review board members and investigators less subject to misinterpretation, misinformation and conflicting judgments." The commission's advice to IRB members, that they metaphorically weigh risks and benefits, may have had an effect precisely opposite to the one they intended.14
The metaphor of weighing risks and benefits has come to serve many purposes in IRB meetings. At a Greenly IRB meeting, for example, the rhetoric of risk-benefit analysis oriented board members' thinking in the broadest sense and also added humor, irony, and reflexivity to reviews. In one case, a researcher recently hired by the university submitted a protocol for a study funded through the American Heart Association that involved enrolling older men at risk of a heart attack. Board members felt the researcher's protocol was too short ("a précis"), his information for participants too technical ("is it too much for a consent to be understandable?"), and his general demeanor "cavalier" ("He's from [another research university]. Maybe they're less demanding there.")15 More broadly, this lax kind of researcher was, in the board chair's estimation, an example of how the university was "getting new faculty, who are also very oriented to this type of protocol." One board member, Nathan, a reluctant bioethicist who preferred to avoid the tinge of righteousness by describing himself as just a member of the philosophy department, was concerned that these new researchers were not restrictive enough in their exclusion requirements for heart patients. A change to the research design would be entirely fair for board members to request, they decided, trying to shoehorn their good sense into the balancing metaphor. Here is an excerpt of the meeting that shows how IRB members use the rhetoric of weighing when deciding issues that do not register on scales. This excerpt also introduces some of the conventions that ethnographers use to designate in texts how people's conversations sound in real time. (For a list of transcription conventions and their meanings, please see the appendix.)
CHAIR (UNIVERSITY RESEARCH ADMINISTRATION): Nathan raises an interesting point because one of the things we are charged with doing is to determine whether the risk [is worth the-
EDWARD (FACULTY, ARCHITECTURE): Risk is worth the benefit.]
CHAIR: Whether the uh =
DR. MORRIS (OUTSIDE PHYSICIAN): Benefit is worth the risk.
NATHAN (FACULTY, PHILOSOPHY-BIOETHICS): h Something like that.16
The language of weighing risks and benefits is more rhetorically useful, perhaps, than actual attempts to compare incommensurable things. Above all, it serves the purpose of demonstrating that a decision maker knows the rules of the game. Members of declarative bodies invoke regulatory language to show that they are aware of what they are doing, especially when they use their discretion to interpret the rules creatively.
Seeing Like a Subject
The people who do the day-to-day work of governing (who decide where to build a dam, a dump, or a sidewalk, for example) have a tendency, within some systems of government, to see only physical resources and the interests of power holders when they are preparing to make a decision. They tend not to see, not to imagine, the tangible consequences of their decisions for individual people on the ground who may be affected. Historian James Scott has called this phenomenon "seeing like a state."17 Within the legal systems of modern liberal democracies, the people who do the day-to-day work of governing are encouraged also to "see like a subject." In other words, rule experts and knowledge experts enact regulations that require them to imagine the perspectives of people whom the law is controlling or safeguarding. From the vantage point of a research institution, seeing like a subject is a way to reduce the chances that subjects will have reason to sue, which they are empowered to do, based on laws such as human-subjects regulations.
One example of how regulations encourage IRB members to see like a subject is in locating the crucial threshold between studies that present "no more than minimal risk" to participants and more risky studies. The distinction hinges on whether the study presents greater social, legal, or physical risks than a participant would experience in her everyday life. The question for board members, then, is what the participant's everyday life is like, and whether the research involves experiences and procedures that are much different. If so, then the burden is on the researcher to show greater benefit.
What are the experiences of would-be research participants in their everyday lives, and how can you know? Previous studies on courts and on state administration would suggest that IRB members might think of research participants in the aggregate and of individual participants as microcosms of the broader population to which they belong.18 This was not the case among board members, though. To make decisions in IRB meetings, board members imagined the people who featured in their own lives as stand-ins for research participants. During an IRB meeting at Greenly, for example, an anthropologist on the board argued that children answering one researcher's proposed questionnaire would experience it as "invasive" because of the racial experiences of one of his own students. Earlier in the day, the student had told the anthropologist about how "when [the student] was two years old he was given a real hard time because he was sort of dark skinned but not quite clear. And he was being told, 'Are you black or white?' That is a good bit of concern. [Participants] should know that this is what kind of questions [they're] going to get."19 At another meeting, a sports coach on the board insisted that a researcher change the information that he was planning to give participants in a bone-density study. At issue was how best to express the risk of radiation exposure so that participants could consent with full information. That board member disliked the researcher's plan to express the radiation dosage relative to the amount from getting an X-ray: "When you say it's going to be / that it's the same as a normal X-ray- I mean, I can see my father going, 'How much radiation is that?'"20
The people whom board members called to mind when they imagined a research subject (a relative or a student, for example) reinforced the race, class, and gender biases of the board membership. This often took place, paradoxically, during the review of studies that aimed to question conventional wisdom about health, social groups, and human experience.
Ambiguities over the boundaries of the groups that IRB members saw themselves as protecting and representing (that is, the slipperiness of terms like community and population in IRB meetings) created situations in which all IRB members could usefully explain how their own life experiences might help the board more fully imagine the pers...