Judgement-Proof Robots and Artificial Intelligence

A Comparative Law and Economics Approach
About This Book

This book addresses the role of public policy in regulating autonomous artificial intelligence and the related civil liability for damage caused by robots (and any other form of artificial intelligence). It is a timely book, focusing on the consequences of the judgement-proofness of autonomous decision-making for tort law, risk and safety regulation, and the incentives stemming from these. The book is particularly important because regulatory endeavours concerning AI are still in their infancy, while the industry continues to develop apace. It is a scientific contribution that brings objectivity to what has so far been a rather one-sided academic treatment of AI in legal scholarship.

Information

Year: 2020
ISBN: 9783030536442
© The Author(s) 2020
M. Kovač, Judgement-Proof Robots and Artificial Intelligence, https://doi.org/10.1007/978-3-030-53644-2_1

1. Introduction

Mitja Kovač
School of Economics and Business, University of Ljubljana, Ljubljana, Slovenia

Abstract

The introduction summarizes the outline of the book, introduces the individual chapters, and discusses some of the main concepts used.
Keywords
Law and economics · Regulation · Autonomous artificial systems · Judgement-proof problem
Artificial intelligence and its recent breakthroughs in machine-human interaction and machine-learning technology are increasingly affecting almost every sphere of our lives. It is on an exponential curve, and some of its materializations represent an increased privacy threat (Kosinski and Yilun 2018), might be ethically questionable (e.g. child-sex bots), and may even be potentially dangerous and harmful (e.g. accidents caused by autonomous self-driving vehicles, ships, and planes, or autonomous decisions by machines to kill). Big data economies, robotization, autonomous artificial intelligence, and their impact on societies have recently received increasing scholarly attention in economics, law, sociology, philosophy, and the natural sciences.
Superfast economic changes spurred by globally integrated markets, the creation of artificial intelligence, and the related explosive gathering and processing of unimaginably large amounts of data (big data) by artificial intelligence represent one of the most pressing questions of the modern world, one that can even rival the fateful issue of global climate change. Artificial intelligence is undoubtedly unleashing a new industrial revolution and, in order to govern its currently uncontemplated hazards, it is of vital importance for lawmakers around the globe to address its systemic challenges and regulate its economic and social effects without stifling innovation.
The founding father of modern computer science and artificial intelligence, Alan Turing, envisaged such a trajectory and, in a lecture given in Manchester in 1951, considered the subjugation of humankind:
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon. (Turing 1951)
More recently, Russell (2019) argues that the technical community has suffered from a failure of imagination when discussing the nature and impact of super-intelligent AI. Russell (2019) suggests that we often see "discussions of reduced medical errors, safer cars or other advances of an incremental nature." He also advances that:
…robots are imagined as individual entities carrying their brains with them, whereas in fact they are likely to be wirelessly connected into a single, global entity that draws on vast stationary computing resources. It is as if researchers are afraid of examining the real consequences of AI. A general-purpose intelligent system can, by assumption, do what any human can do. (Russell 2019)
Current trends towards developing autonomous machines with the capacity to interact, learn, and take autonomous decisions indeed raise a variety of concerns regarding their direct and indirect effects that call for a substantive law and economics treatment. Super-intelligent artificial intelligence will, as I argue throughout this book, also immensely change the entire institutional and conceptual structure of the law. The super-influencer and industrial visionary Elon Musk, for example, advocates urgent legislative action that would regulate artificial intelligence globally before it is too late. At the U.S. National Governors Association 2017 summer meeting in Providence, Musk famously stated that "the US government's current framework for regulation would be dangerous with artificial intelligence because of the existential risk it poses to humanity" (Gibbs 2017). Moreover, Musk sees artificial intelligence as the "most serious threat to the survival of the human race" (Gibbs 2017). Policymakers around the world have indeed been urged to address the growing legal vacuum in virtually every domain affected by technological advancement.
However, regulations are normally set up only after a series of bad things has happened: there is a public outcry, and after many years a regulatory agency is established to regulate that industry. This book seeks to address this problem of reflective regulatory action, where bad things need to happen before a regulatory response is triggered, and urges a pre-emptive, ex ante regulatory approach in which action is taken before harm occurs and before there is a public outcry. There is a simple reason for such an approach: as Musk suggests, the absence of such pre-emptive regulatory action might indeed prove fatal.
Meanwhile, Europeans, lagging slightly behind the artificial intelligence breakthroughs of the United States and China, have not yet come to grips with what is ethical, let alone with what the law should be, and the result is a growing legal vacuum in almost every domain affected by this unprecedented technological development. For example, European lawyers are currently passionately discussing what happens when a self-driving car has a software failure and hits a pedestrian, when a drone's camera happens to catch someone skinny-dipping in a pool or taking a shower, or when a robot kills a human in self-defence. Is the manufacturer, the maker of the software, the owner, the user, or even the autonomous artificial intelligence itself responsible if something goes wrong?
Having regard to these developments, the European Parliament already in 2017 adopted a Resolution on Civil Law Rules on Robotics (P8-TA(2017)0051) and requested that the EU Commission submit, on the basis of Article 114 TFEU, a proposal for a directive on civil law rules and consider the designation of a European Agency for Robotics and Artificial Intelligence to provide technical, ethical, and regulatory expertise. The EU Parliament also proposed a code of ethical conduct for robotics engineers, a code for research ethics committees, a licence for designers, and a licence for users.
Moreover, lawmakers around the world, and particularly the EU Commission, also consider that civil liability for damage caused by robots (and any form of artificial intelligence) is a crucial issue which needs to be analysed and addressed at the Union level in order to ensure efficiency, transparency, and consistency in the implementation of legal certainty throughout the EU. In other words, lawmakers wonder whether strict liability or a risk-management approach (obligatory insurance or a special compensation fund) should be applied in instances where artificial intelligence causes damage. Furthermore, stakeholders also debate whether autonomous artificial intelligence should be characterized within existing legal categories or whether, for example, a new category with specific rules should be created. If lawmakers were indeed to embark on such a journey and proceed with the establishment of such a separate legal entity, then the triggering question is what kind of category it should be.
As an illustration, consider the current rules on the European continent, where autonomous artificial intelligence cannot be held liable per se for acts or omissions that cause damage, since it may not be possible to identify the party responsible for providing compensation and to require that party to make good the damage it has caused (Erdelyi and Goldsmith 2018; Breland 2017; Wadhwa 2014). The current Directive 85/374/EEC, adopted more than thirty years ago, covers merely damage caused by an artificial intelligence's manufacturing defects, and only on condition that the injured person is able to prove the actual damage, the defect in the product, and the causal relationship between damage and defect. Therefore, strict liability, or liability without fault, may not be sufficient to induce optimal precaution and internalization of risks. Namely, the new generation of super-intelligent artificial intelligence will sooner or later be capable of autonomously learning from its own variable experience and will interact with its environment in a unique and unforeseeable manner. Such autonomous, self-learning, decision-making super-intelligence might then present a substantive limitation to the deterrence and prevention effects, and the related incentive streams, of the current regulatory framework.
Regarding the question of strict liability, law and economics scholarship has witnessed the transformation of product liability from simple negligence to the far more complex concept of strict product liability (Schäfer and Ott 2004; Kraakman 2000). This change has b...

Table of contents

  1. Cover
  2. Front Matter
  3. 1. Introduction
  4. Part I. Conceptual Framework
  5. Part II. Judgement-Proof Superintelligent and Superhuman AI
  6. Back Matter