Predictive Policing and Artificial Intelligence

John McDaniel, Ken Pease

  1. 312 pages
  2. English
  3. ePUB
About This Book

This edited text draws together the insights of eminent academics worldwide to evaluate the condition of predictive policing and artificial intelligence (AI) as interlocked policy areas. Predictive and AI technologies are growing in prominence at an unprecedented rate. Powerful digital crime mapping tools are being used to identify crime hotspots in real time, as pattern-matching and search algorithms sort through huge police databases populated by growing volumes of data in an effort to identify people liable to experience (or commit) crime, places likely to host it, and variables associated with its solvability. Facial and vehicle recognition cameras are locating criminals as they move, while police services develop strategies informed by machine learning and other kinds of predictive analytics. Many of these innovations are features of modern policing in the UK, the US and Australia, among other jurisdictions.

AI promises to reduce unnecessary labour, speed up various forms of police work, encourage police forces to apportion their resources more efficiently, and enable police officers to prevent crime and protect people from a variety of future harms. However, the promises of predictive and AI technologies and innovations do not always match reality. They often have significant weaknesses, come at a considerable cost and require challenging trade-offs to be made.

Focusing on the UK, the US and Australia, this book explores themes of choice architecture, decision-making, human rights, accountability and the rule of law, as well as future uses of AI and predictive technologies in various policing contexts. The text contributes to ongoing debates on the benefits and biases of predictive algorithms, big data sets, machine learning systems, and broader policing strategies and challenges.

Written in a clear and direct style, this book will appeal to students and scholars of policing, criminology, crime science, sociology, computer science, cognitive psychology and all those interested in the emergence of AI as a feature of contemporary policing.


Part I
Bias and Big Data

Chapter 1

The future of AI in policing

Exploring the sociotechnical imaginaries

Janet Chan

Introduction

Artificial intelligence (AI) is a generic term covering a variety of related computational techniques, such as machine learning, speech recognition, machine vision, natural language processing, expert systems and various tools for planning and optimisation, used for problem solving and the performance of tasks that normally require human intelligence (Walsh et al., 2019). Through advances in modern technology, the use of AI in the service of human society is no longer a distant prospect confined to science fiction. According to a majority of AI experts, there is a 50 per cent chance that by 2062 we will have created machines as intelligent as ourselves (Walsh, 2018). The promise of AI in making human decisions smarter, more efficient and more rational is extremely attractive in an environment where information has become increasingly complex, voluminous and fast changing. What, then, is the future of AI in policing? There is, unfortunately, no clear answer to this question that we can look up in the literature or in government publications. This is partly because AI is still a relatively new technology, even though it is being developed rapidly and its applications are growing exponentially. It is safe to say, though, that the future of AI depends on how societies see AI technology and its relevance to society in general, and to policing in particular. This is what this chapter sets out to examine.
The chapter is structured as follows. Section 1 introduces the concept of “sociotechnical imaginaries” which underpins the analysis presented in this chapter. Section 2 discusses in general the benefits and risks of AI, while Section 3 examines more closely four co-existing sociotechnical imaginaries connected with the use of AI in policing, focusing on the case of predictive policing. The final section suggests how society should approach the advent of AI.

1. Sociotechnical imaginaries

Throughout this chapter, I will be using the concept of “sociotechnical imaginaries” as popularised by the Science and Technology Studies scholar Sheila Jasanoff (2015a). Jasanoff defines sociotechnical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures, animated by shared understandings of forms of social life and social order attainable through, and supportive of, advances in science and technology” (2015a: 4). Even though this definition seems to privilege visions that are “desirable”, the concept is not confined to utopian visions of technology; in fact, as with most technologies, there are inevitably dystopian visions foreseen by segments, sometimes substantial segments, of the population. The existence of “resistant imaginaries” is part of the framework. It should also be pointed out that this definition does not assume that there is only one dominant vision that is “collectively held”; there are likely to be multiple visions co-existing in societies. Nor is the definition premised on visions being unchanging in time or space; rather, it allows for a process of transformation in which new ideas are introduced, then embedded in social practice or resisted, leading to their expansion or removal. In this formulation, “[i]maginaries operate as both glue and solvent, able—when widely disseminated and effectively performed—to preserve continuity across the sharpest ruptures of innovation or, in reverse, to upend firm worlds and make them anew” (2015a: 29).
The concept of social imaginaries has been in existence for some time. Jasanoff (2015a: 5–8) has traced the genealogies of “imagination as a social practice” to the works of Durkheim and Weber, anthropologists such as Evans-Pritchard and Mary Douglas, Benedict Anderson’s work on imagined communities, and Charles Taylor’s use of the term “social imaginaries”. She notes a “startling, almost inexplicable omission” from the classic accounts of social imaginaries—“a detailed investigation of modernity’s two most salient forces: science and technology” (2015a: 8). Jasanoff sees sociotechnical imaginaries as occupying “the theoretically underdeveloped space between the idealistic collective imaginations identified by social and political theorists and the hybrid but politically neutered networks or assemblages with which STS scholars often describe reality”:
Our definition pulls together the normativity of the imagination with the materiality of networks … Unlike mere ideas and fashions, sociotechnical imaginaries are collective, durable, capable of being performed; yet they are also temporally situated and culturally particular. …[T]hese imaginaries are at once products of and instruments of the co-production of science, technology, and society in modernity.
(2015a: 19)
To recognise or identify sociotechnical imaginaries about AI in policing, I have drawn on published academic literature, available government documents and other online resources in order to examine the language used to frame and visualise the use of AI in society and in policing.

2. The benefits and risks of AI for society

The report by Walsh et al. (2019) is drawn on heavily in this section, as it provides the most recent survey of the applications of AI to society and represents the sociotechnical imaginaries of top Australian researchers in the humanities, social sciences, science, technology and engineering.1 The report appears to maintain a relatively “balanced” sociotechnical imaginary of AI; in other words, it lays out the benefits and the risks of AI for society in equal measure.

Definition of AI

Walsh et al. (2019: 14) are careful to define what AI is and is not, noting, however, that there is no consensus among AI researchers on a universal definition. AI is “not a specific technology” but “a collection of interrelated technologies used to solve problems and perform tasks that, when humans do them, requires thinking”. The “components of AI” include machine learning, as well as a range of techniques including natural language processing, speech recognition, computer vision and automated reasoning (see Figure 1.1). Walsh et al. (2019: 15) claim that AI is superior to simpler technologies “in its ability to handle problems involving complex features such as ambiguity, multiple and sometimes conflicting objectives, and uncertainty” and, in many cases, in its “ability to learn and improve over time”.
Figure 1.1 Components of AI, reproduced with permission from ACOLA (2019).
Source: Walsh, T., Levy, N., Bell, G., Elliott, A., Maclaurin, J., Mareels, I.M.Y., Wood, F.M. (2019) The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing. Report for the Australian Council of Learned Academies, www.acola.org.

The benefits of AI

Walsh et al. give numerous examples of the potential benefits of AI for a variety of sectors in society, including manufacturing, health, communication, transportation and financial applications:
New techniques of machine learning are spurring unprecedented developments in AI applications. Next-generation robotics promise to transform our manufacturing, infrastructure and agriculture sectors; advances in natural language processing are revolutionising the way clinicians interpret the results of diagnostic tests and treat patients; chatbots and automated assistants are ushering in a new world of communication, analytics and customer service; unmanned autonomous vehicles are changing our capacities for defence, security and emergency response; intelligent financial technologies are establishing a more accountable, transparent and risk-aware financial sector; and autonomous vehicles will revolutionise transport.
(2019: 4)
Apart from these transformative innovations, the sociotechnical imaginary mentions a more practical benefit of AI: “the capacity to free workers from dirty, dull, difficult and dangerous tasks” (2019: 31). Moreover, according to this imaginary, the prospects of AI increasing its reach and influence are excellent. The factors favouring its growth include various advances in computing, data availability and the Internet of Things. Since AI is dependent on high volumes of data, the availability of large volumes of diverse data, at high velocity and arriving in “real time”, is crucial to realising its potential. Already, we are taking advantage of algorithms that help us translate text, navigate roads and look for accommodation. The potential benefits of AI are touted both in terms of technical achievements (“the ability to outperform humans in forming inferences from large, complex datasets to solve problems such as classification, continuous estimation, clustering, anomaly detection, data generation and ranking”) and economic transformations (“the potential to transform economies and societies, in terms of innovation, effectiveness, process efficiency and resilience”) (2019: 22).
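To make one of these inference tasks concrete, the following is a minimal sketch of supervised classification in Python using scikit-learn. The synthetic dataset and the choice of a random forest are illustrative assumptions of mine, not drawn from the report.

```python
# Minimal sketch of supervised classification on a synthetic dataset.
# The dataset, model and split sizes are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A synthetic dataset standing in for a "large, complex dataset"
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit a standard classifier and measure generalisation on held-out data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The same pattern, fitting a model on labelled examples and measuring how well it generalises to data it has not seen, underlies many of the applications the report describes.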

The risks of AI

Walsh et al.’s (2019) sociotechnical imaginary of AI is not totally rosy. The risks of AI are not papered over but discussed in great detail. In this way, the “resistant imaginary” coexists with the utopian one:
It is well known, for example, that smart facial recognition technologies have often been inaccurate and can replicate the underlying biases of the human-encoded data they rely upon; that AI relies on data that can and has been exploited for ethically dubious purposes, leading to social injustice and inequality; and that while the impact of AI is often described as ‘revolutionary’ and ‘impending’, there is no guarantee that AI technologies such as autonomous vehicles will have their intended effects, or even that their uptake in society will be inevitable or seamless.
(2019: 4)
This recognition of potential problems is quickly followed by a sociotechnical imaginary that normalises these issues variously as temporary “teething problems of a new technology” or a “risk associated with all technological developments”, adding that “AI technologies could in fact be applied to oppose this misuse” (2019: 4).
The risks or downsides of AI are discussed in terms of its current technical limitations, including the risk of error, the risk of data-driven bias and the problem of trust. These risks are examined in more detail below.

Technical limitations

The report points out that AI, in spite of its widely circulated achievements, is not without problems. For example, there are risks of error in current facial recognition systems. The report cites the case of a Chinese businesswoman incorrectly identified as having jaywalked when her face, appearing on an advertisement on the side of a bus, was captured by a facial recognition system as the bus passed through an intersection (Shen, 2018, cited in Walsh et al., 2019: 23). Machine learning (ML) algorithms are also singled out for their limited capability:
ML systems will often break in strange ways, do not provide meaningful explanations, and struggle to transfer to a new domain.
(Walsh et al., 2019: 15–16)
A good example is the apparently superhuman achievement of AlphaZero when trained to play games such as Go and chess; this skill, however, does not transfer easily to playing poker or reading X-rays (2019: 15–16). The report suggests that the “narrow focus” of machine learning systems will “likely be the case for many years” (2019: 15–16). Machine learning also suffers from the problem of intelligibility: “It can be difficult – even for an expert – to understand how a ML system produces its results (the so-called ‘black box’ problem)” (2019: 34–35).
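The transfer problem is easy to demonstrate. The sketch below, my illustration rather than the report's, trains a simple classifier on one synthetic data distribution and evaluates it on a uniformly shifted copy; the data and the size of the shift are assumptions chosen purely for illustration.

```python
# Sketch of the domain-transfer problem: a model that performs well on its
# training distribution can collapse on a shifted one, even though the
# underlying task is conceptually unchanged. Data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source domain: two well-separated classes
X_src = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y_src = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_src, y_src)
print("Source-domain accuracy:", model.score(X_src, y_src))   # high

# Target domain: the same classes, uniformly shifted. Every point now falls
# on one side of the learned boundary, and accuracy drops to chance.
X_tgt = X_src + np.array([4.0, 4.0])
print("Shifted-domain accuracy:", model.score(X_tgt, y_src))  # ~0.5
```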
Natural language processing (NLP), in spite of its impressive achievements and improvements over the years, is, according to the report, still a work in progress:
NLP still has limitations as demonstrated by the Winograd Schema Challenge, a test of machine intelligence. The Winograd Schema tasks computer programs with answering carefully tailored questions that require common sense reasoning to solve. The results from the first annual Winograd Schema Challenge ranged from the low 30th percentile in answering correctly to the high 50s, suggesting that further research is required to develop systems that can handle such tests. Notably, human subjects were asked the same questions and scored much higher, with an overall average of approximately 90 percent.
(2019: 34–35)
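To make the Winograd Schema Challenge concrete, the sketch below encodes the best-known schema, the “councilmen/demonstrators” example usually attributed to Terry Winograd. Changing a single verb flips the referent of the pronoun “they”, and resolving it requires common-sense knowledge rather than surface text statistics.

```python
# The classic Winograd schema: swapping one word ("feared" vs "advocated")
# flips the referent of "they". Trivial for humans, hard for statistical NLP.
schema = {
    "sentence": "The city councilmen refused the demonstrators a permit "
                "because they {verb} violence.",
    "candidates": ["the city councilmen", "the demonstrators"],
    "variants": {
        "feared": "the city councilmen",    # they = the councilmen
        "advocated": "the demonstrators",   # they = the demonstrators
    },
}

for verb, answer in schema["variants"].items():
    print(schema["sentence"].format(verb=verb))
    print("  'they' refers to:", answer)
```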

Data-driven biases

The report devotes a great deal of space to what might be called a “resistant imaginary” of AI. Most of this discussion relates to the “risk of amplifying discrimination and bias, and problems of fairness” that stem from the use of aggregated data (Walsh et al., 2019: 175). The report distinguishes between algorithmic bias and bias in the input data. In relation to the latter, the report cites research on biases in human decisions resulting from “various failures of reasoning” and negative emotions such as fear (2019: 176). However, algorithms designed to reduce bias may not be effective because of the risks of intrinsic as well as extrinsic bias:
Intrinsic bias is built-in in the development of the AI system or results from inputs causing permanent change in the system’s structure and rules of operation. … Extrinsic bias derives from a system’s inputs in a way that does not effect a permanent change in the system’s internal structure and rules of operation. The output of such systems might be inaccurate or unfair but the system remains ‘rational’ in that new evidence is capable of correcting the fault.
(2019: 178–9)
While intrinsic biases can come from a range of sources, including biased developers, technological constraints, programming errors or historical biases, extrinsic biases can originate from unrepresentative, skewed or erroneous data, hence perpetuating historical biases. This problem of biased data can limit the usefulness of NLP algorithms:
Most of the advances in NLP over the past decade have been achieved with specific tasks and datasets, which are driven by ever larger datasets. However, NLP is only as good as the data set underlying it. If not appropriately trained, NLP models can accentuate biases …
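The report's intrinsic/extrinsic distinction can be illustrated with a small sketch of extrinsic bias, my own construction on synthetic data. A classifier trained on historically biased labels learns to penalise membership of one group; because the fault lies in the inputs rather than in the system's structure, retraining on corrected labels, the “new evidence” the report mentions, removes the learned penalty.

```python
# Sketch of extrinsic bias: the model class is sound, but historically
# biased labels teach it to penalise one group. Retraining on corrected
# labels removes the penalty, which is what marks the bias as extrinsic
# rather than intrinsic. All data are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000

skill = rng.uniform(0, 1, n)           # the only legitimate signal
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
X = np.column_stack([skill, group])
y_true = (skill > 0.5).astype(int)     # the same rule applies to everyone

# Historical bias in the inputs: qualified members of group B were often
# recorded as unqualified
y_hist = y_true.copy()
flipped = (group == 1) & (y_true == 1) & (rng.uniform(size=n) < 0.5)
y_hist[flipped] = 0

biased = LogisticRegression(max_iter=1000).fit(X, y_hist)
print("Coefficient on group, biased labels:", biased.coef_[0][1])  # negative

# New evidence corrects the fault: retrained on accurate labels, the
# same model no longer penalises group membership
fixed = LogisticRegression(max_iter=1000).fit(X, y_true)
print("Coefficient on group, corrected labels:", fixed.coef_[0][1])  # ~0
```

An intrinsic bias, by contrast, would persist under retraining because it is built into the system's structure or rules of operation, which is why the report treats the two cases so differently.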

Table of contents

  1. Cover
  2. Half Title
  3. Series Information
  4. Title Page
  5. Copyright Page
  6. Dedication
  7. Contents
  8. List of illustrations
  9. Foreword
  10. List of contributors
  11. Introduction
  12. Part I Bias and Big Data
  13. Part II Police accountability and human rights
  14. Index