Chapter 1
Surveillance and transparency as sociotechnical systems of accountability
Deborah G. Johnson and Kent A. Wayland
In this paper we argue that surveillance can be illuminated by framing surveillance regimes as sociotechnical systems of accountability, and then comparing surveillance to transparency regimes also framed as sociotechnical systems of accountability. We begin by grounding our understanding of accountability in the relationship between technology and democracy. We next explore surveillance and transparency regimes as traditionally and separately conceived and then show how they both function as mechanisms of accountability in democratic societies. The framing allows us to compare the systems and to ask a set of questions about how each kind of system constructs and positions individuals, what role information technology (IT) plays in constituting the system, and how relationships of power are arranged and maintained.
STS, IT, and the technology-democracy connection
Research and scholarship in the field of science and technology studies (STS) has, in the last several decades, converged on a thesis generally referred to as "co-construction" or "mutual constitution." The thesis holds that technology both constitutes and is constituted by society. It grew out of early work (Bijker et al., 1987; Bijker and Law, 1992), and has been used to explore a myriad of topics, such as gender (Wajcman, 2004), the role of users in constituting technology (Oudshoorn and Pinch, 2003), issues in medical and biomedical technologies (Oudshoorn, 2003), and the role of technical expertise in decision-making (Jasanoff, 2005). This literature gives a rich and nuanced view of the mutual creation of technology and culture.
One implication of STS theory and the co-construction thesis in particular is that the unit of analysis in technology studies should be sociotechnical systems (rather than artifacts). Technology and society are so intertwined that technology should be understood to be the combination of artifacts, social practices, social arrangements, and meanings. Society or particular social practices must, as well, be understood to be combinations of artifacts, cultural meanings, and social arrangements. In other words, the co-construction thesis goes hand-in-hand with understanding that systems are almost never merely technological or merely social.
Another implication of the co-construction thesis is that values, whether social, moral, or political, are part of this mutual constitution. Early in the development of the field of STS, Winner's seminal work, "Do artifacts have politics?" (1986), drew attention to the connection between values and technology, but the connection was subsequently neglected in the field (Winner, 1993; but see Bijker, 1993). In recent years, however, interest in the connection has returned (Hackett et al., 2008) and more attention is being given, broadly, to the normative implications of STS scholarship.
Scholars in the field of computer ethics and, more broadly, those who study the social implications of IT have readily taken up normative issues and brought attention to values being promoted, undermined, or simply transformed through information technology. However, while normative scholarship in these fields has been robust, only recently have these scholars begun to draw on work in STS. For recent work in computer ethics drawing on STS see, for example, Verbeek (2006), Keulartz et al. (2004), and Introna (2007).
The value that most concerns us here is democracy. Scholarship in STS, computer ethics, and the social implications of IT has, in varying ways, addressed the link between technology and democracy, though more often than not the link is presumed or hinted at rather than explicitly addressed. The most prominent technology–democracy themes found in the STS literature are those focused on the role of expertise and of public/citizen participation in democratic decision-making, or, more broadly, how to understand citizenship in technocratic societies. Thorpe (2008), for example, suggests that STS scholarship has tended to focus on democratizing expertise and on the implications of STS scholarship for democratic/public participation (see also Kleinman, 2000; Hamlett, 2003). Another related stream of analysis in STS is focused on social movements and activism (Hess et al., 2008). Significant progress has been made in these areas and the scholarship continues to reveal new possibilities for technological decision-making in democratic societies.
Scholarship in the fields of computer ethics and the social implications of computers has taken the technology–democracy connection up specifically with regard to IT. Bimber (2003) examines the effects of the internet on democratic politics while Johnson (2007) addresses technological literacy. Here there is also important work on values in design that has not explicitly addressed democracy but points in that direction by noting how technological decisions can be value decisions and value decisions can be technological decisions (Friedman and Nissenbaum, 1996; Brigham and Introna, 2007; Flanagan et al., 2007). Perhaps the most explicit discussion of the technology–democracy connection is that about the internet and whether it is a "democratic technology" (Johnson, 1997, 2000; Best and Wade, 2007).
The work that most directly examines technology and democracy as a matter of co-construction is Sclove's Democracy and Technology (1995). Sclove's analysis begins with the idea that technology is social structure, provides an analysis of technology as multivalenced, and then goes on to focus on democratic procedures for decisions about technology. Our analysis expands upon this and the previous STS literatures by addressing how sociotechnical systems of accountability function to constitute and maintain democracy.
Accountability and democracy
"Democracy" is at once a simple and a complex idea. Perhaps the most straightforward expression of the idea is the principle that individuals should have a say in decisions that affect their lives. The simplicity disappears quickly when we realize that this principle can be manifested in many different forms: at different places, in different times, with differing institutions, in different cultures. Democracy has been, and continues to be, interpreted and reinterpreted, invented and reinvented as the world changes, in relationship with new technology, new ideas, new circumstances, and many other kinds of change.
Nevertheless, even given this complexity, systems of accountability are essential components of democratic societies, especially representative democracies. Here "having a say in decisions that affect one's life" entails that representative decision-makers be accountable to those for whom they make decisions, and that institutions set up to achieve social goals are accountable to the public they aim to serve. This means that citizens must be informed about the activities and decisions of their representatives and that they must have opportunities to provide input to these representatives. The ultimate consequence of accountability in political systems is, of course, that citizens vote for or against re-electing a representative, or call for the resignation of an untrustworthy bureaucrat. Institutional accountability is more complex but also involves information flow from institutions to citizens and processes by which public input can be received. All democracies devise systems of accountability involving information flows between citizens, representatives, and institutions. Thus, a first claim of our analysis is that democratic societies are constituted in part by systems of accountability, systems in which individuals and institutions are held to standards of behavior and expected to explain failures to conform to those standards.
Increasingly, these systems of accountability are mediated through information technologies (IT). Government agencies use websites to disclose their practices; law-enforcement officials hold citizens accountable by means of various IT tracking systems; corporations monitor their employees and provide information to auditors through IT systems. In other words, the accounts created and provided in these systems are either constituted in or translated into data which can be accessed, processed, and/or searched. Our second claim is, then, that systems of accountability are sociotechnical systems. Information technology is used to gather, use, and display information in these systems, and hence these accountability systems are constituted by (and in turn constitute) IT.
Our third claim is that surveillance and transparency practices are both systems of accountability. This allows us to frame surveillance and transparency as parallel systems. Such an approach has not, to our knowledge, been undertaken before. Indeed, although accountability is implicit in most discussions of transparency and some discussions of surveillance, rarely is accountability used as the dominant framework or lens through which to describe and evaluate such systems.
Typically "accountability" refers to practices in which individual actors (elected officials, government bureaucrats, professionals, judges) and institutions (government agencies, corporations, organizations of various kinds) are expected to behave in specific ways (i.e., according to certain formal or informal standards) and to "answer to" particular constituents. Government officials and institutions are expected to function in the public interest; corporations are supposed to abide by the law; professionals and professions are supposed to be worthy of the trust that clients must put in them. Citizens (as individuals or related groups) are expected to play a role in the process if in no other way than by responding (or not responding) to the accounts given by institutional actors. Generally "accounts" must be given when actors fail to meet given standards, but often actors are also expected to "give accounts" to demonstrate that they have adhered to standards or fulfilled their responsibilities even when there is no failure.
Transparency readily fits the notion of accountability: in transparency regimes, institutions provide accounts of how they operate to demonstrate that they are fulfilling social or legal standards (expectations). The idea that surveillance regimes are systems of accountability might, however, seem somewhat odd, especially when it comes to marketing research and other forms of non-state surveillance. Yet, in these surveillance regimes an account of an individual is developed and then decisions are made on the basis of that account. The account (a digital profile) is put together by linking data from any number of sources to a specific person/identity. The decision may be as significant as making an arrest or as trivial as sending marketing information. In either case, the profile amounts to an account of the person, based on their behavior, and usually predicts future behavior. In this respect, and despite the fact that the person is not personally providing the account, surveillance practices fit the accountability framework.
We propose, then, to read the notion of "giving accounts" broadly here, to include any kind of data collection that seeks to characterize or categorize an actor and use that collection (a profile) as the basis for treatment. Thus, our framework of accountability includes the following key elements: an account is given or created, based on one's behavior or characteristics; the account is compared with some set of norms or categories; and the comparison is then used as the basis for some kind of treatment, such as the granting or denial of a loan, the imposition of a fine for violating an environmental standard, further scrutiny of auditing reports, or the offering of special marketing deals.
With this framework in hand, we can now begin to examine surveillance and transparency as sociotechnical systems of accountability.
Surveillance
From the early days of computing to the present, concern has been expressed about the threat particular uses of information technology can pose to personal privacy. However, the first computers were large mainframes and public concern was based on the expectation that computers would be used primarily by large bureaucratic organizations, especially government agencies, to amass huge quantities of data on citizens. Worries about the threat to personal privacy waned somewhat with the development of personal computers, for small computers were thought to put information in the hands of many (and not just large and already powerful organizations). As the technology has developed further, however, a much broader and more powerful set of computer-based tools has been created for tracking and sorting individuals and regulating their behavior, including data-mining tools, biometrics, facial recognition, intelligent highways, cookies (to track web browsing), and RFID (radio-frequency identification). In the end, then, although the development of personal computers complicated the issues, public concern and scholarship on privacy have persisted, with attention turning from one new technological development to another.
The theoretical literature on privacy has focused on conceptualizing privacy and understanding its value and its variations (Marx, 2001). Broadly, the literature might be described as moving through stages, from understanding privacy as an individual good to understanding it as a social or public good. Regan (1995) argued for this move in Legislating Privacy. More recently, scholarship seems to have shifted...