Human-Machine Shared Contexts
eBook - ePub

  1. 438 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About This Book

Human-Machine Shared Contexts considers the foundations, metrics, and applications of human-machine systems. Editors and authors debate whether machines, humans, and systems should speak only to each other, only to humans, or to both and how. The book establishes the meaning and operation of "shared contexts" between humans and machines; it also explores how human-machine systems affect targeted audiences (researchers, machines, robots, users) and society, as well as future ecosystems composed of humans and machines.

This book explores how user interventions may improve the context for autonomous machines operating in unfamiliar environments or when experiencing unanticipated events; how autonomous machines can be taught to explain contexts by reasoning, inferences, or causality, and decisions to humans relying on intuition; and for mutual context, how these machines may interdependently affect human awareness, teams and society, and how these "machines" may be affected in turn. In short, can context be mutually constructed and shared between machines and humans? The editors are interested in whether shared context follows when machines begin to think, or, like humans, develop subjective states that allow them to monitor and report on their interpretations of reality, forcing scientists to rethink the general model of human social behavior. If dependence on machine learning continues or grows, the public will also be interested in what happens to context shared by users, teams of humans and machines, or society when these machines malfunction. As scientists and engineers "think through this change in human terms," the ultimate goal is for AI to advance the performance of autonomous machines and teams of humans and machines for the betterment of society wherever these machines interact with humans or other machines.

This book will be essential reading for professional, industrial, and military computer scientists and engineers; machine learning (ML) and artificial intelligence (AI) scientists and engineers, especially those engaged in research on autonomy, computational context, and human-machine shared contexts; advanced robotics scientists and engineers; scientists working with or interested in data issues for autonomous systems, such as the use of scarce data for training and operations with and without user interventions; social psychologists, scientists, and physical research scientists pursuing models of shared context; modelers of the Internet of Things (IoT); systems-of-systems scientists, engineers, and economists; scientists and engineers working with agent-based models (ABMs); policy specialists concerned with the impact of AI and ML on society and civilization; network scientists and engineers; applied mathematicians (e.g., holon theory, information theory); computational linguists; and blockchain scientists and engineers.

  • Discusses the foundations, metrics, and applications of human-machine systems
  • Considers advances and challenges in the performance of autonomous machines and teams of humans
  • Debates theoretical human-machine ecosystem models and what happens when machines malfunction

Human-Machine Shared Contexts, by William Lawless, Ranjeev Mittu, and Donald Sofge, is available in PDF and ePUB formats under Design & UI/UX Design.

Information

Year: 2020
ISBN: 9780128223796
Topic: Design
Subtopic: UI/UX Design
Chapter 1

Introduction: Artificial intelligence (AI), autonomous machines, and constructing context: User interventions, social awareness, and interdependence

William F. Lawless (a); Ranjeev Mittu (b); Donald A. Sofge (c)
(a) Department of Mathematics, Sciences and Technology, and Department of Social Sciences, School of Arts and Sciences, Paine College, Augusta, GA, United States
(b) Information Management & Decision Architectures Branch, Information Technology Division, US Naval Research Laboratory, Washington, DC, United States
(c) Navy Center for Applied Research in Artificial Intelligence, US Naval Research Laboratory, Washington, DC, United States

Abstract

With the prospect of even larger disruptions to come, the present economic impact of machine learning (ML), a subset of artificial intelligence (AI), is estimated in the trillions of dollars. Applications of ML and other AI algorithms are propelling unprecedented economic impacts across industry, the military, medicine, finance, and more. But as autonomous machines become ubiquitous, recent problems with ML have surfaced. Early on, Pearl warned AI scientists that they must “build machines that make sense of what goes on in their environment,” a warning still unheeded that may impede their further development. For example, self-driving vehicles often rely on sparse data; by early 2019 self-driving cars had already been involved in three fatalities, including a pedestrian, and yet, ML is unable to explain the contexts within which it operates. We propose that these three seemingly unrelated problems require an interdisciplinary approach to solve. For example, for the symposium in 2019 and now for this book, we asked that authors address how user interventions may improve the context for autonomous machines operating in unfamiliar environments or when experiencing unanticipated events; how autonomous machines can be taught to explain contexts by reasoning, inferences, or causality and decisions to humans relying on intuition; for mutual context, how these machines may interdependently affect human awareness, teams, and society; and how these “machines” may be affected in turn. In short, can context be mutually constructed and shared between machines and humans? By extension, we are interested in whether shared context follows when machines begin to think or, like humans, develop subjective states that allow them to monitor and report on their interpretations of reality, forcing scientists to rethink the general model of human social behavior. 
If dependence on ML continues or grows, we and the public are also interested in what happens to context shared by users, teams of humans and machines, or society when these machines malfunction. If we “think through this change in human terms,” (From George Shultz, former director of the Office of Management and Budget and former secretary of the treasury, secretary of labor, and secretary of state; he is a distinguished fellow at Stanford University's Hoover Institution.) our ultimate goal is for AI to advance the performance of autonomous machines and teams of humans and machines for the betterment of society wherever these machines interact with humans or other machines.
Note: The Abstract is derived from the AAAI Blurb (AAAI website: http://www.aaai.org/Symposia/Spring/sss19symposia.php#ss01).

Keywords

Autonomy; Teams; Machine explanations of context; Explainable AI

1.1 Introduction

1.1.1 Overview

Pearl (2002) warned almost two decades ago that, for the applications of artificial intelligence (AI) to be successful and accepted by the scientific community and the public at large, machines had to be able to explain what they were doing. This warning, largely unheeded (Pearl & Mackenzie, 2018), must be addressed if AI is to continue unimpeded. AI must be able to explain to ordinary humans, relying mostly on intuition (Kambhampati, 2018), why it is making the decisions it makes in human terms (Shultz, 2018). Society and scientists are worried about what AI means to social well-being, a concern that will amplify once machines begin to think on their own (Gershenfeld, 1999). These issues and concerns became the focus of our symposium at Stanford in March 2019, now adopted and studied further in this book.
In this Introduction, we review the background literature for this book that led to our symposium on "AI, autonomous machines and constructing context." After the literature review, and to set the stage for the chapters that follow, we introduce each chapter, its author(s), and how that chapter contributes to the book's theme of advancing the science of human-machine teams. If this book is successful, not only will it advance the science of human-machine teams, but we hope it will also contribute to a renewal of social science for humans (e.g., Lawless, Mittu, & Sofge, 2018).
The application of machine learning (ML) has led to significant advances across widely divergent fields such as astronomy, medicine, government, industry, and the military, with particular applications such as self-driving cars and drones, both of which have recently caused human deaths (e.g., an Uber car killed a pedestrian, in NTSB, 2018; a swarm attack killed Russians on a base in Syria, in Grove, 2018). The rapid advances and applications with ML, however, have recently exposed several problems associated with data and context that urgently need to be addressed to allow further advances and applications to continue. This symposium was planned to clarify the strengths of ML and its applications and any newly discovered problems, and to consider solutions. By addressing these problems, our hope is that AI will continue to advance for the betterment of science, social theory, and society.
The overall goal for AI reasoning systems is to determine knowledge; an operational goal is to discriminate the data in new samples that fit the learned patterns from those that do not. But the data and context problems ML now faces raise several questions. First, while some of the processes of ML are established, more data may yet improve ML models for existing applications, and while new solutions to existing task applications can be disseminated worldwide almost instantly (Brynjolfsson & Mitchell, 2017), applications of ML are often based on proprietary data that do not generalize well beyond the learned data, if at all. As possible remediations, first, in an unlearned or uncertain context, such as an emergency with a disabled human, can a human help a machine by intervening or instructing the machine about what should be done, and having the machine learn from this new experience? Second, and just as immediate if not more so, users want a causal, stepwise explanation of what a machine has planned before and after it acts (Pearl & Mackenzie, 2018); that is, can the machine explain its actions sufficiently well for the human to trust the machine? Third, once machines have been trained as part of a team, they should, inversely, be aware of the human's responsibilities as a member of the team. In sum, these three problems are as follows: First, can new data be generated beyond training data (e.g., synthetic data), and can humans instruct machines to learn causal relations on the fly? Second, can a machine articulate what it has learned (e.g., what causal inferences can it draw after its learning has been satisfactorily "completed," which ML algorithms cannot do presently; in Pearl & Mackenzie, 2018)? And third, can an understanding of context be mutual, that is, the contexts faced by a human who is trusting a machine at the same time that the machine is trusting the human (e.g., Lawless et al., 2018)?
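The operational goal above, separating new samples that fit the learned patterns from those that do not, can be illustrated with a minimal novelty check. This is a toy sketch in plain Python; the `fit`/`fits_pattern` names and the 3-sigma threshold are illustrative assumptions, not a method specified in this chapter:

```python
import statistics

def fit(train):
    """Learn a simple mean/std 'pattern' from one-dimensional training data."""
    mu = statistics.fmean(train)
    sigma = statistics.stdev(train)
    return mu, sigma

def fits_pattern(x, mu, sigma, k=3.0):
    """A new sample 'fits' the learned pattern if it lies within k standard deviations."""
    return abs(x - mu) <= k * sigma

train = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
mu, sigma = fit(train)
print(fits_pattern(10.1, mu, sigma))  # in-distribution sample -> True
print(fits_pattern(25.0, mu, sigma))  # far outside the pattern -> False
```

Real ML systems use far richer models, but the asymmetry is the same: the detector can only flag what falls outside the data it was trained on, which is exactly why sparse or proprietary training data limits generalization.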
The uses of augmented intelligence and assists are wide and spreading. For example, DoD's "Unmanned Systems Integrated Roadmap" noted that "DoD envisions unmanned systems seamlessly operating with manned systems while gradually reducing the degree of human control and decision making required for the unmanned portion of the force structure." For these trends to continue, the situation we are in today requires not only leveraging AI to make better decisions more quickly but also building a mutual understanding shared by both humans and machines.

1.1.2 Problem 1: The generation of new data

Many if not most machines are being trained on proprietary data with algorithms selecting the data, often randomly, making reproducibility a problem even in astronomy, especially when data and solutions are unique and have not been archived (e.g., Wild, 2018). Data libraries with standard problems, procedures, and acceptable solutions are being developed but are not yet common (see the report by Hutson, 2018). Once updated by training with new or revised data, improved solutions to previously trained tasks can be disseminated almost instantaneously (Brynjolfsson & Mitchell, 2017). But ML does not easily transfer what it has learned, if at all, from one application to a new one (Tan, 2017). The result is that machines are often trained with sparse data; for a self-driving car, this result may suffice in relatively simple environments like driving on expressways amidst light traffic.
Limited (proprietary) and sparse sensory (visual, Lidar, and radar) data produce poor training data, often insufficient to determine the context or to operate in complex environments (e.g., inside large cities occupied by numerous pedestrians, poor weather conditions, and bridges). There are statistical techniques to address sparse data (e.g., Najafi & Salam, 2016). But sparse databases may be expanded with user-induced intervention data generated, say, when a driver intervenes to take control of a self-driving or autonomous vehicle; instead of disconnecting the system, the intervention itself becomes a source of new data (a similar earlier model is the AI apprentice; see Mitchell et al., 1990). By extension, queries about context raised by a human user of an ML system may also serve as a lever to provide new data; further, if the autonomous agent behaves as preferred by the user, trust should be enhanced (Hutson, 2017). Inversely, if an industry protocol has been standardized, nearby ML systems may be able to self-report shared information individually or collectively within a distributed system in a way that leverages the shared information and expands the data available to retrain each participating ML unit.
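The intervention-as-data idea above can be sketched in a few lines: rather than discarding a driver's takeover, each (observation, human action) pair is logged as a new labeled example for later retraining, loosely in the spirit of dataset-aggregation schemes such as DAgger. All names here (`InterventionLog`, `record`, `as_training_pairs`) are hypothetical, not from any production system:

```python
from dataclasses import dataclass, field

@dataclass
class Intervention:
    observation: tuple   # sensor snapshot at the moment of takeover
    human_action: str    # what the driver actually did

@dataclass
class InterventionLog:
    """Collects human takeovers as labeled examples for retraining."""
    examples: list = field(default_factory=list)

    def record(self, observation, human_action):
        # Instead of disconnecting the system, keep the override as new data.
        self.examples.append(Intervention(observation, human_action))

    def as_training_pairs(self):
        """Export (observation, action) pairs in a trainer-friendly shape."""
        return [(e.observation, e.human_action) for e in self.examples]

log = InterventionLog()
log.record((0.4, 0.9), "brake")       # driver overrides near a crosswalk
log.record((0.1, 0.2), "steer_left")  # driver corrects a lane drift
print(log.as_training_pairs())
```

The design point is that the override is treated as supervision in precisely the states where the learned policy was weakest, which is where sparse training data hurts most.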
As another approach to sparse data, activity-based intelligence (ABI) rapidly integrates data from multiple sources to determine the relevant "patterns-of-life" data for a targeted individual, to identify when change is occurring, and to characterize those patterns that drive data collection for ML to create advantages for decision-makers (Biltgen, Bacastow, Kaye, & Young, 2017). The traditional intelligence cycle decomposes multidisciplinary collection requirements from a description of individual target signatures or persistent behaviors. For ABI, practitioners use advanced large-scale data filtering of events, entities, and transactions to develop an understanding (context) through spatial and temporal correlations across multiple data sets (e.g., Bowman et al., 2017). Many aspects of ABI, anomaly detection, and the Internet of Things (IoT) are benefitting from the application of AI and ML techniques (Barlow, 2017).
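A toy version of the ABI-style temporal correlation described above might pair events from two sources whose timestamps fall within a fixed window; the `correlate` function and the event schema are illustrative assumptions only, not from the ABI literature cited:

```python
from itertools import product

def correlate(events_a, events_b, window=5.0):
    """Pair events from two sources whose timestamps fall within `window`
    seconds of each other: a toy temporal-correlation step."""
    return [(a, b) for a, b in product(events_a, events_b)
            if abs(a["t"] - b["t"]) <= window]

# Two hypothetical event streams, each tagged with a source and a timestamp.
source_a = [{"src": "A", "t": 10.0}, {"src": "A", "t": 50.0}]
source_b = [{"src": "B", "t": 12.0}, {"src": "B", "t": 100.0}]

print(correlate(source_a, source_b))  # only the t=10.0 / t=12.0 pair correlates
```

Real ABI pipelines filter events at scale and correlate in space as well as time, but the principle is the same: co-occurrence across independent data sets supplies context that no single sparse source contains.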

1.1.3 Problem 2: Explanations by machines of what they have learned or can share (building context)

New regulations are motivating engineers to become aware of how their algorithms are making decisions. For example, from Kean (2018):
the European Union recently implemented a regulation requiring all algorithms to be ‘explainable’ by human engineers.
To prevent unintended actions by machines (e.g., Scharping, 2018), Kissinger (2018) asked the following:
To what extent is it possible to enable AI to comprehend the context that informs its instructions?…Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its...

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Contributors
  6. Preface
  7. Chapter 1: Introduction: Artificial intelligence (AI), autonomous machines, and constructing context: User interventions, social awareness, and interdependence
  8. Chapter 2: Analogy and metareasoning: Cognitive strategies for robot learning
  9. Chapter 3: Adding command knowledge “At the Human Edge”
  10. Chapter 4: Context: Separating the forest and the trees—Wavelet contextual conditioning for AI
  11. Chapter 5: A narrative modeling platform: Representing the comprehension of novelty in open-world systems
  12. Chapter 6: Deciding Machines: Moral-Scene Assessment for Intelligent Systems
  13. Chapter 7: The criticality of social and behavioral science in the development and execution of autonomous systems
  14. Chapter 8: Virtual health and artificial intelligence: Using technology to improve healthcare delivery
  15. Chapter 9: An information geometric look at the valuing of information
  16. Chapter 10: AI, autonomous machines and human awareness: Towards shared machine-human contexts in medicine
  17. Chapter 11: Problems of autonomous agents following informal, open-textured rules
  18. Chapter 12: Engineering for emergence in information fusion systems: A review of some challenges
  19. Chapter 13: Integrating expert human decision-making in artificial intelligence applications
  20. Chapter 14: A communication paradigm for human-robot interaction during robot failure scenarios
  21. Chapter 15: On neural-network training algorithms
  22. Chapter 16: Identifying distributed incompetence in an organization
  23. Chapter 17: Begin with the human: Designing for safety and trustworthiness in cyber-physical systems
  24. Chapter 18: Digital humanities and the digital economy
  25. Chapter 19: Human-machine sense making in context-based computational decision
  26. Chapter 20: Constructing mutual context in human-robot collaborative problem solving with multimodal input
  27. Index