Law, Human Agency and Autonomic Computing

The Philosophy of Law Meets the Philosophy of Technology

Mireille Hildebrandt and Antoinette Rouvroy (eds)

248 pages · English · ePub (mobile friendly)

About This Book

Law, Human Agency and Autonomic Computing interrogates the legal implications of the notion and experience of human agency implied by the emerging paradigm of autonomic computing, and the socio-technical infrastructures it supports. The development of autonomic computing and ambient intelligence – self-governing systems – challenges traditional philosophical conceptions of human self-constitution and agency, with significant consequences for the theory and practice of constitutional self-government. Ideas of identity, subjectivity, agency, personhood, intentionality and embodiment are all central to the functioning of modern legal systems. But once artificial entities become more autonomic, and less dependent on deliberate human intervention, criteria like agency, intentionality and self-determination become too fragile to serve as defining criteria for human subjectivity, personality or identity, and for characterizing the processes through which individual citizens become moral and legal subjects. Are autonomic – yet artificial – systems shrinking the distance between (acting) subjects and (acted-upon) objects? How 'distinctively human' will agency be in a world of autonomic computing? Or, alternatively, does autonomic computing merely disclose that we were never, in this sense, 'human' anyway? A dialogue between philosophers of technology and philosophers of law, this book addresses these questions as it takes up the unprecedented opportunity that autonomic computing and ambient intelligence offer for a reassessment of the most basic concepts of law.



Chapter 1

Smart? Amsterdam urinals
and autonomic computing

Don Ihde
When I first learned that the famous fly images in Amsterdam Airport urinals were examples of social-technological “nudging,” I was delighted. The example occurs in the book Nudge, by the behavioral economist Richard Thaler and the law professor Cass Sunstein (now head of Obama's Office of Information and Regulatory Affairs) (New York Times 2009).1 That was because I had earlier frequently used the term “nudging” to indicate a non-deterministic mode of shifting directions or trajectories in human-technology relations. The other notion I had frequently advanced concerning the role of philosophers of technology in social-technological processes was that of placing philosophers in “R & D” – research and development – positions, in order to have a “place” for “nudging” at the design and development stages, rather than after technologies are already developed and “in place” (Ihde 2002: 103–12). Thus, hypothetically, I should be happy to deal with the theme of “autonomic computing,” a focal theme for this volume.
“Autonomic computing,” however, becomes highly problematic in precisely this context, because autonomic computing simply does not (yet . . . and may never?) exist! At best, it is a dreamed-of Cervantes windmill, which can be tilted against if one is critical, or bypassed in favor of a more empirical or concrete approach. The implied comparison – Amsterdam urinals as an actual social-technological nudge, and autonomic computing as a current technofantasy – is simply not commensurate. By posing my introduction in this way, I clearly now must explain the problem.

Nudging and the Amsterdam urinals

I begin with the actually existent social-technological example: Amsterdam Airport urinals. These urinals have a fly-image etched onto the porcelain, just a bit above the drain outlet. The claim is that once the image was so installed, the spillage on the floor of the men's toilets was reduced by 80 per cent! It is thus claimed that this engineered design manages to “attract people's attention and alter their behavior in a positive way . . . Men evidently like to aim at targets” (New York Times 2009). Now, assuming the claims related to cleaning are correct, and taking note that Amsterdam Airport is a hub with a very high number of users, and also a place where men of many cultures are in transit, then this is a highly successful social-technology. And although on the surface this may appear to be a lo-tech and low-cost technology, its “profiling” and behavior-modifying strategy parallels much of what is desired with hi-tech social-IT strategies, especially with respect to marketing.
A second, deeper glance, however, shows that the situation is somewhat more complex. Although a fly-image etched onto the porcelain remains a simple design, it actually fits into a complex and far-reaching system of waste technologies. For instance, far more important even than the labor saved in clean-up – especially for a country such as the Netherlands, where water tables are of essential importance – is the amount of water used to keep the system working. Low-level flushes, coupled to automatic sensors, begin to make the urinal much more hi-tech than might first appear. And these design effects are more “autonomic” than the aim-at-target phenomenon, since the user does not directly relate to the flush amount or sensor. Nor is the urinal example as complex as the bi-gender problem with which we New Yorkers are familiar. In recent years, in theatres, sports complexes and other mass public sites, the phenomenon of women invading men's toilets has produced an amount of amusing commentary equal to that surrounding the Amsterdam Airport urinals. The lines outside women's toilets are usually much, much longer than those for men, and thus in the rush between acts, “invasions” of the men's toilets are known to happen. Today, of course, with the building of new baseball stadiums, designers have become aware of the need for greater concern, not only with gender ratios but with time-at-station, and thus the new facilities have proportionately many more female toilet stations than older ones. (The unisex solutions which exist in many European contexts would be unlikely to work in the US, for obvious cultural reasons.) In short, the Amsterdam Airport urinals are but one instance of a modern sanitation and waste social-technological problem. And this problem encompasses not only the technical and engineering dimensions but the cultural, social and gender dimensions as well.
Even deeper, behind this complex of problems lies the need for critically informed engineering-design education, sensitive to the cultural-social as well as the technical demands of the design process. And there remains an issue I shall call “ontological” as well: what shapes the human-social and even “existential” relations to technologies? Now, while I shall return to the issues raised by social-technological “nudging,” since it contains provocative ideas about choices and human-technology interactions, I turn first to the problems facing autonomic computing.

Autonomic computing

The idea behind autonomic computing is a totally “autonomous” network system, described by Matt Villano (2009) as:
“a network administrator's dream, a system that identifies, isolates and repairs glitches all by itself. As thresholds are reached, servers reallocate resources automatically. When a virus or hacker intrudes, the network responds without involving humans at all, eliminating threats and learning from the incidents so they don't occur again. As a result, the technology reduces overall IT cost of ownership by as much as 50 percent.”
Here, then, is the dream of a totally autonomous technological system – but more lies behind this dream, since such a system is called “autonomic.” Turning now to a more serious and technical source, Manish Parashar and Salim Hariri call for a new paradigm to guide the development of autonomic computing:
“The increasing scale, complexity, heterogeneity and dynamism of networks, systems and applications have made our computational and information infrastructure brittle, unmanageable and insecure. This has necessitated the investigation of an alternate paradigm for system and application design, which is based on strategies used by biological systems to deal with similar challenges – a vision that has been referred to as autonomic computing.”
(Parashar and Hariri 2005: 447; emphasis mine)
But which biological system? The answer is the “autonomic nervous system.” “The human nervous system is, to the best of our knowledge, the most sophisticated example of autonomic behavior existing in the human body. It is the body's master controller that monitors changes inside and outside the body, integrates sensory inputs, and effects appropriate response. . .the nervous system is able to constantly regulate and maintain homeostasis” (Parashar and Hariri 2005: 248). So, once again in the recent history of computational technologies, the dream of what I shall call animal autonomy returns. Before looking at its predecessor, however, I need to expand upon what such a new nervous system paradigm is thought to entail for the designers of its IT analog. I shall here skip to the characteristics deemed desirable for this self-correcting, autonomous system as described by Parashar and Hariri (2005: 255) (in abbreviated form):
Self-awareness: An autonomic application/system “knows itself” and is aware of its state and its behaviors.
Self-configuring: . . .able to configure and reconfigure itself under varying and unpredictable conditions.
Self-optimizing: . . .able to detect suboptimal behaviors and optimize itself to improve its execution.
Self-healing: . . .able to detect and recover from potential problems and. . .function smoothly.
Self-protecting: . . .capable of detecting and protecting its resources from both internal and external attack. . .maintaining. . .security and integrity.
Context aware: An autonomic application/system should be aware of its execution environment and be able to react to changes in the environment.
Open: . . .must function in a heterogeneous world. . .across multiple hardware and software architectures. . .
Anticipatory: . . .able to anticipate, to the extent possible, its needs and behaviors and those of its context, and be able to manage itself proactively.
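In the autonomic computing literature these self-* properties are standardly realized by a “MAPE-K” control loop: monitor, analyze, plan and execute over a shared knowledge base, as set out in IBM's architectural blueprint for autonomic computing. The following is a purely illustrative sketch of that loop; all names, thresholds and the toy load model are my own hypothetical constructions, not any real system.

```python
# Illustrative MAPE-K sketch: an autonomic manager watches a mock managed
# resource and applies hand-written self-healing / self-optimizing /
# self-configuring policies. Everything here is a toy assumption.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ManagedServer:
    """Mock managed resource: 'sensors' (load, healthy), 'effector' (capacity)."""
    load: float = 0.5      # sensor: utilization in [0, 1]
    capacity: int = 2      # effector: allocated workers
    healthy: bool = True


class AutonomicManager:
    def __init__(self, server: ManagedServer):
        self.server = server
        # Knowledge: policy thresholds the loop consults.
        self.knowledge = {"high_load": 0.8, "low_load": 0.2}

    def monitor(self) -> dict:
        return {"load": self.server.load, "healthy": self.server.healthy}

    def analyze(self, symptoms: dict) -> Optional[str]:
        if not symptoms["healthy"]:
            return "heal"                                   # self-healing
        if symptoms["load"] > self.knowledge["high_load"]:
            return "scale_up"                               # self-optimizing
        if symptoms["load"] < self.knowledge["low_load"] and self.server.capacity > 1:
            return "scale_down"                             # self-configuring
        return None

    def execute(self, plan: Optional[str]) -> None:
        if plan == "heal":
            self.server.healthy = True     # e.g. restart the failed unit
        elif plan == "scale_up":
            self.server.capacity += 1
            self.server.load /= 2          # toy model: load spreads out
        elif plan == "scale_down":
            self.server.capacity -= 1
            self.server.load *= 2

    def step(self) -> Optional[str]:
        plan = self.analyze(self.monitor())
        self.execute(plan)
        return plan


server = ManagedServer(load=0.9, healthy=False)
mgr = AutonomicManager(server)
print(mgr.step())  # "heal": the unhealthy state takes priority
print(mgr.step())  # "scale_up": load 0.9 exceeds the 0.8 threshold
```

Even this toy makes clear why the list is such a tall order: here each self-* property reduces to a hand-written rule, whereas the autonomic vision demands a system that discovers, revises and anticipates such rules itself.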
As anyone can see, this is a very tall order! But as I read this list and description of the new biological paradigm, I also recognized that much of it recalled a similar set of hopes and claims made in a preceding utopic battle some decades ago: I refer to the mid-twentieth-century contestations surrounding the then idealized notions of artificial intelligence, in which the philosophical critic Hubert Dreyfus played a central role. This earlier contestation produced a “library” of books and articles and spanned arguments over several decades. The entire history is too complex to trace here in detail, but I shall look at two of its dimensions which closely parallel the current state of autonomic computing. The first dimension may be characterized as the role of techno-hype: the designer-corporate claims, usually originating from the supporting engineering-corporate developers and then amplified by enthusiastic media-hype in the press and now on the internet, which have accompanied technological development throughout modernity. Here the issue is one for a critical hermeneutics, which must address the mythologization/demythologization issues that surround such technological developments. The second dimension is substantive and relates to the appropriateness of the models and paradigms which are used to frame the development of hoped-for technologies. In the earlier artificial intelligence debates Dreyfus highlighted both these dimensions.
I begin with techno-hype: Dreyfus characterized the early period of artificial intelligence (AI) as the days of heady predictions (1957–67), such as those made by Herbert Simon. Simon predicted in 1957,
That within ten years a digital computer will be the world's chess champion [I take preliminary note here of the much later – 1957/1997, or forty years later – Deep Blue vs. Kasparov match. I will return to this event, since I contend it has been terribly misinterpreted and misunderstood!]. . . .within ten years a digital computer will discover and prove an important new mathematical theorem. . .[and] within ten years most theories in psychology will take the form of computer programs, or of qualitative statements about the characteristics of computer programs.
(Dreyfus 1993: 82)
To Simon's predictions were added others, which included claims that AI processes would produce translation programs, produce a general problem solver, be able to write news stories and the like, all predicated on a belief that AI would exceed human intelligence. Much of this research was then being funded by RAND, MIT, and similar early computational centers. Ironically, by the end of the prediction-decade none of the claims had been fulfilled, and it was RAND, worried that the predictions were not coming true, which commissioned Dreyfus to examine the situation – an examination which he and his brother Stuart began, and whose first result was Dreyfus's famous “Alchemy and Artificial Intelligence” (1967). What I should like to draw attention to here is that these hyper-claims belong to a now very long tradition of modernist hopes and projections stemming from initial and often utopian projections concerning new technologies. Moreover, this myth-making, I contend, is part of the deep culture of modernist technoscience overall. Its roots go back to the earliest glimmers of modernity, for example in Roger Bacon's fantasies about underwater boats, flying machines and machines of war (1270), which were later visualized in Leonardo's technical drawings of diving suits, flying machines and human- or horse-powered tanks (1450s). Leaping to the twentieth century, recall similar dreams of the elimination of insect-borne diseases with DDT, of almost infinitely cheap electric power with nuclear energy, and of solutions to human hunger with the green revolution. All these overly utopian extrapolations overlooked side effects (insect resistance through mutational change), complications (the need for multiple-redundancy safety and waste-storage systems), and unforeseen hidden costs (transfers of agricultural technologies calling for fertilizer, pesticide and modern equipment too expensive for developing countries).
Accompanying, but not foregrounded here, were also the parallel dystopian predictions and mythologies which, in contrarian fashion, project disasters and the destruction of traditional cultural values. These are, to my mind, equally exaggerated and absurd. Rather, what I am pointing to is the prediction pattern which accompanies each new social-technological development. And what I am advocating is a more “empirical” and historically critical look at developing technologies. Such a perspective would reveal that even while under development, technologies are constantly changing and undergoing modification, often such that by the time a technology finally reaches some level of stabilization it will rarely look like what the original design or “intent” claimed. Bruno Latour's descriptive history of the Diesel engine is a good example of this phenomenon: although framed by Latour's complex theory concerning the constructions of facts and things, the process by which Diesel's engine becomes a “diesel engine” is suggestive of late modern technology development.
Rudolf Diesel both experimented – one of his predecessor engines, using steam, exploded and nearly killed him – and theorized. He was enamored of the then-current darling of science theory, Carnot's thermodynamic theory. Seeking a more efficient thermodynamic engine, Diesel had “. . .an idea of a perfect engine working according to Carnot's thermodynamic principles. . .an engine where ignition could occur without an increase in temperature, a paradox that Diesel solved by inventing new ways of injecting and burning fuel. . . .we have a book he published and a patent he took out [patent issued in 1887]” (Latour 1987: 105). Thus, paralleling our autonomic computing story, we have an ideal plan. But, as Latour in his inimitable style points out, the process of materializing an engine turns out to be more complex and difficult than outlined in the ideal description. Entering into agreements with MAN, Krupp et al., a bevy of engineers, gathered now into a workshop, began to try to produce a diesel engine. “The question of fuel combustion soon turned out to be more problematic, since air and fuel have to be mixed in a fraction of a second. A solution entailing compressed air injection was found, but this required huge pumps and new cylinders for the air; the engine became large and expensive. . .” (Latour 1987: 105). In short, even at this early developmental stage Diesel had already “. . .drifted away from the original patent and from the principles presented in his book” (Latour 1987: 105).
In any case, ten years later, in 1897, an actual engine had been developed, hopefully to be replicated and run by those who bought the license rights to make engines – but then it turned out the engines were not unproblematic: “the engine kept faltering, stalling, breaking apart. . . .One after another, the licensees returned the prototypes to Diesel and asked for their money back. . .Diesel went bankrupt and had a nervous breakdown [1899]” (Latour 1987: 106). Despite these setbacks, engineers kept tinkering and redesigning, producing engines which could run all day but which still needed to be overhauled at night, until 1908, when Diesel's original patent expired. Then, once in the “. . .public domain, MAN is able to offer a diesel engine for sale after yet more tinkering, which can be bought as an unproblematic, albeit new, item of equipment. . .[this is now twenty-plus years after the original patent]” (Latour 1987: 106). Although this is a highly foreshortened history, it is quite typical of technological development as discerned by historians of technology. The end result turns out to be very different from the initial conception; the way from the initial plan to the (in this case successful) result is difficult, filled with twists and turns, and often the end result does not look anything like its conceptual beginning. And even here I am disregarding the still vaster history of technologies which fail or do not end up successful at all.2 My point is that a historically and empirically critical analysis must contain skepticism about techno-hype, which remains embedded in technofantasy. Nor is it accidental that today such hype reflexively points back to the supporters of these vastly expensive projects. Deep Blue was an IBM project, and IBM is again behind the autonomic computing project, with Cisco Systems in a formal partnership, announced in 2004, to develop an end-to-end autonomic system (Villano 2009).
Turning briefly back to the substantive goals of autonomic computing, however, also reveals much about some lessons learned ...

Table of contents

  1. Front Cover
  2. Title Page
  3. Copyright
  4. Contents
  5. Acknowledgements
  6. On the contributors
  7. Foreword by Ian Kerr
  8. Introduction: a multifocal view of human agency in the era of autonomic computing
  9. 1 Smart? Amsterdam urinals and autonomic computing
  10. 2 Subject to technology: on autonomic computing and human autonomy
  11. 3 Remote control: human autonomy in the age of computer-mediated agency
  12. 4 Autonomy, delegation, and responsibility: agents in autonomic computing environments
  13. 5 Rethinking human identity in the age of autonomic computing: the philosophical idea of trace
  14. 6 Autonomic computing, genomic data and human agency: the case for embodiment
  15. 7 Technology, virtuality and utopia: governmentality in an age of autonomic computing
  16. 8 Autonomic and autonomous ‘thinking’: preconditions for criminal accountability
  17. 9 Technology and accountability: autonomic computing and human agency
  18. 10 Of machines and men: the road to identity. Scenes for a discussion
  19. 11 ‘The BPI Nexus’: a philosophical echo to Stefano Rodotà’s ‘Of Machines and Men’
  20. Epilogue: technological mediation, and human agency as recalcitrance
  21. Index