Regulating Artificial Intelligence in Industry
  1. 256 pages
  2. English
  3. ePUB (mobile friendly)

About This Book

Artificial Intelligence (AI) has augmented human activities and unlocked opportunities across many sectors of the economy, from data management and analysis to decision-making. As with most rapidly advancing technologies, the law often plays a catch-up role, so the study of how law interacts with AI is more critical now than ever before. This book provides a detailed qualitative exploration of the regulatory aspects of AI in industry.

Offering a unique focus on current practice and existing trends across a wide range of industries in which AI plays an increasingly important role, the book contains legal and technical analysis by 15 researchers and practitioners from different institutions around the world. It provides an overview of how AI is being used and regulated across many sectors, including aviation, energy, government, healthcare, legal services, maritime, military and music, and addresses a broad range of issues, including privacy, liability, transparency and justice, from the perspective of different jurisdictions.

Including a discussion of the role of AI in industry during the Covid-19 pandemic, the chapters also offer a set of recommendations for optimal regulatory intervention. The book will therefore be of interest to academics, students and practitioners interested in the technological and regulatory aspects of AI.

Regulating Artificial Intelligence in Industry by Damian M. Bielicki is available in PDF and ePUB format, in Law & Law Theory & Practice.

Information

Publisher: Routledge
Year: 2021
ISBN: 9781000509823
Edition: 1
Topic: Law
Index: Law

PART I
Horizontal AI applications

1 Artificial intelligence and its regulation in the European Union

Gauri Sinha and Rupert Dunbar
DOI: 10.4324/9781003246503-2

Background

There is an acknowledgement that the European Union (EU) is behind key competitors in the race to attract, create and nurture AI companies and investment. 1 On this basis, the European Commission has published a White Paper, and a public consultation on it has now been undertaken. 2 Legislative proposals are expected imminently. 3
1 White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM (2020) 65 Final (Brussels, 19 February 2020), 4.
2 Public Consultation on AI White Paper: Final Report, European Commission, DG for Communications Networks, Content and Technology (November 2020).
3 European Parliament (Press Releases), ‘Parliament leads the way on first set of EU rules for Artificial Intelligence’ (20 October 2020) <https://www.europarl.europa.eu/news/en/press-room/20201016IPR89544/parliament-leads-the-way-on-first-set-of-eu-rules-for-artificial-intelligence> accessed 27 May 2021.
With some uncertainty concerning the precise future regulation of AI in the EU, this chapter seeks nonetheless to establish from the White Paper and consultation process what indication there is for companies on key points of concern: transparency, explainability and accountability. It contextualises these concepts and highlights the challenges inherent in them. These, of course, are challenges common to all jurisdictions.
Problems more unique to the EU also remain, not least an underlying scepticism about AI among not just EU citizens but also public authorities and businesses, with 90% of consultation respondents expressing concern that AI may breach fundamental rights. 4 There are also notable developments in the European Parliament publishing its own positions on AI, and in the emergence of nationalism both in the consultation and following the COVID-19 pandemic, providing a far from unified picture.
4 Public Consultation on AI (n 2), 7.
The reality is that regulation on the ground is likely to be years away.
Given that action through legislation (‘positive harmonisation’) is far from immediate, it is important to draw attention to the Court of Justice of the European Union’s probable role in developing this area through its case law (so-called ‘negative harmonisation’). Overall, companies will be encouraged by the Court’s mood music, even if the specific tune is difficult to identify at this stage.
Ultimately, concerning the future of regulation for AI in the EU, it must be said that whilst there is ambition to achieve much in the field, the collective vision and philosophy for what AI can do, and how it can benefit society, remains fragmented.

Introduction

The White Paper defines AI as ‘a collection of technologies that combine data, algorithms and computing power’. 5 Its applications are multiple and the Commission recognises that, in a positive light, it could improve healthcare, help combat climate change, increase the efficiency of production and improve security (amongst others). 6 But in order to achieve these ambitions it needs to overcome certain challenges, including the fact that citizens are anxious concerning AI’s capacity to do them harm, both intentionally and unintentionally, and that businesses are seeking legal certainty. A unified way forward is sought.
5 White Paper (n 1), 2. See also A Definition of AI: Main Capabilities and Disciplines, High-Level Expert Group on Artificial Intelligence (Independent), European Commission B-1049 (Brussels, 8 April 2019).
6 White Paper (n 1), 1.
How can this be achieved? The answer is not straightforward and depends on a number of subjective factors that have their roots in trust and ethics. For the EU there is an increasing realisation that making decisions quickly or more efficiently is not what defines the success of AI. What is more important is how the benefits are realised. Are we ready to leave decisions that impact human lives in the hands of machines? Perhaps the answer to this ‘readiness’ lies somewhere in the corridors of trust and ethics, and it is to these concepts that the chapter now turns.
In April 2019, the European Commission’s High-Level Expert Group on AI adopted the Ethics Guidelines for Trustworthy AI, stressing that human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology. 7 The broad principle that AI initiatives should not be pursued if they entail compromising ethics is abundantly clear. 8 Compliance with ethics, however, is a complexity of a different order and merits discussion. ‘Creative’ interpretation of ethical principles should not be used as a shield for ‘box-ticking’ compliance, in which corporations merely appear to comply. 9
7 Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence (Independent), European Commission B-1049 (Brussels, 8 April 2019).
8 S. Tsakiridi, ‘AI ethics in the post-GDPR world: Part 1’ (2020) P. & D.P. 20(6), 13–15.
9 Compliance in several other areas, for example financial crime laws, has been reduced to ‘box-ticking’, where organisations appear to comply in order to avoid regulatory action.
The relationship between ethics and law adds a further layer of intricacy. As highlighted by the European Data Protection Supervisor, ethics in the EU is not conceived as an alternative to compliance with the law, but as the underpinning values that protect human dignity and freedom. 10
10 European Data Protection Supervisor, ‘Ethics’ <https://edps.europa.eu/data-protection/our-work/ethics_en> accessed 27 May 2021.
What follows in this chapter is an attempt to unravel the complexities behind three ethical principles that form the cornerstone of an effective AI model—transparency, explainability and accountability.

Transparency

In the context of AI, transparency denotes the capability to describe, inspect and reproduce the mechanisms through which AI systems make decisions. Transparency is closely linked to trust, which also requires being explicit and open about choices and decisions concerning data sources, development processes and stakeholders. 11 In essence, it means having a complete view of the system on three levels. 12 First, at the implementation level, the AI model acts on the data it is fed to produce a known output, including the technical principles of the model and its associated parameters. The Commission considers this the standard ‘white-box’ model, in contrast to the ‘black-box’ model, where these principles are unknown. Second, at the specification level, all the information that led to the implementation, such as objectives, tasks and relevant datasets, should be open and known. The third level is interpretability: an understanding of the underlying mechanisms of the model. For example, what are the logical principles behind the processing of data, and what is the rationale behind a given output? The Commission regards these as the hardest questions to answer and considers that this third level of transparency is not achieved in current AI systems. 13 It is particularly complex because this third level is closely linked to fairness in decisions that often affect human lives, and fairness itself is a subjective, contextual concept, influenced by multiple social, cultural and legal factors.
11 V. Dignum, Responsible Artificial Intelligence, in Artificial Intelligence: Foundations, Theory, and Algorithms (Springer 2019), 54.
12 R. Harmon, H. Junklewitz, I. Sanchez, ‘Robustness and Explainability of Artificial Intelligence’ EUR 30040 EN (2020) EU Science Hub, 11.
13 ibid., 12.
Given this level of subjectivity, it is crucial to remember that AI models are not designed by machines: they can reflect the biases and prejudices of the designers who choose a model’s features, metrics and structures. 14 The concept of trust also needs further clarification if the effectiveness of AI is to be assessed. Trust is primarily spoken of in relation to the individuals and institutions responsible for developing, deploying or using AI, but it can also be directed at those responsible for regulating AI, overseeing its use or providing the conditions for its development. Involving all of these stakeholders in decisions that affect human beings is perhaps the fairest way to build trust, but this may not be possible given the number of stakeholders or the complexity of the AI systems. The outcome then becomes almost as important as the process, but are we able to ensure that AI systems are as transparent as they can be?
14 Tsakiridi (n 8), 13–15.
Opacity in machine learning, the so-called ‘black-box’ algorithm, is often mentioned as one of the primary hurdles to achieving transparency in AI. 15 The primary aim of AI algorithms is to identify patterns or similarities in a particular dataset. As such, the correlations that a particular dataset presents (for example, the correlation between geographic area and race) are also learnt by the algorithm. There is a risk that these correlations will lead to bias and unfair outcomes due to preconceived human notions. For example, there have been many issues regarding facial recognition models trained on limited datasets reflecting the bias of model developers, and unfair outcomes of loan decision models that have historically used biased datasets to determine availability of credit. 16
15 Dignum (n 11), 59.
16 R. Schmelzer, ‘Towards a More Transparent AI’ (Forbes, 23 May 2020) <https://www.forbes.com/sites/cognitiveworld/2020/05/23/towards-a-more-transparent-ai> accessed 27 May 2021.
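The proxy effect described above can be illustrated with a toy, entirely synthetic dataset (every value below is invented for illustration): even when the protected attribute is excluded from the model's features, a correlated feature such as postcode lets the system reconstruct the biased historical outcome.

```python
from collections import Counter

# Synthetic records in which 'postcode_area' is a proxy for a protected
# attribute. Historical approvals were biased, so any model that learns
# the postcode/approval correlation reproduces that bias even though the
# protected attribute is never supplied as a feature.
records = [
    # (postcode_area, protected_group, historically_approved)
    ("A", "group1", True), ("A", "group1", True),
    ("A", "group1", True), ("A", "group2", True),
    ("B", "group2", False), ("B", "group2", False),
    ("B", "group2", False), ("B", "group1", False),
]

def approval_rate(area: str) -> float:
    """Share of historical approvals for one postcode area."""
    rows = [r for r in records if r[0] == area]
    return sum(r[2] for r in rows) / len(rows)

# Postcode alone perfectly predicts the biased historical outcome...
print(approval_rate("A"), approval_rate("B"))  # 1.0 0.0

# ...because it is strongly correlated with the protected group:
print(Counter((area, group) for area, group, _ in records))
```

The point of the sketch is that removing the protected attribute from the feature set is not enough; the correlation survives in the proxy, which is exactly the risk the 'black-box' discussion identifies.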
It follows that bias-free decision-making should be the outcome of a transparent AI model. The decision-making process should be open and fair: a rejected loan application, for example, should be accompanied by the objective reasons for the rejection, such as yearly income or the amount of savings, and never by subjective factors such as race, gender or postcode. Transparent obj...
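The reason-giving requirement described above can be sketched as a decision function that reports only objective criteria. This is a minimal illustration, not a real credit model; the feature names and thresholds are hypothetical.

```python
# Hypothetical transparent loan check: every rejection is tied to an
# objective, stated criterion, and no subjective features (race, gender,
# postcode) are consulted at all. Thresholds are invented for illustration.
MIN_INCOME = 25_000
MIN_SAVINGS = 5_000

def assess_loan(yearly_income: float, savings: float):
    """Return (approved, reasons); reasons list only objective criteria."""
    reasons = []
    if yearly_income < MIN_INCOME:
        reasons.append(f"yearly income below {MIN_INCOME}")
    if savings < MIN_SAVINGS:
        reasons.append(f"savings below {MIN_SAVINGS}")
    return (not reasons, reasons)

print(assess_loan(yearly_income=20_000, savings=8_000))
# (False, ['yearly income below 25000'])
```

Because every rejection carries an explicit, inspectable reason, the decision satisfies the implementation and specification levels of transparency discussed earlier; real machine-learned models are harder precisely because no such reason list falls out of the model for free.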

Table of contents

  1. Cover
  2. Half-Title
  3. Series
  4. Title
  5. Copyright
  6. Contents
  7. Contributors
  8. Preface
  9. List of acronyms and abbreviations
  10. PART I Horizontal AI applications
  11. PART II Vertical AI applications
  12. Summary
  13. Bibliography
  14. Index