Explainable Artificial Intelligence for Smart Cities

About This Book

Thanks to rapid developments in Computational Intelligence, smart tools now play active roles in daily life. The 21st century has brought many advantages in using high-level computation and communication solutions to deal with real-world problems; however, more technology brings more change to society. In this sense, smart cities have become a widely discussed topic in both societal and Artificial Intelligence-oriented research. The rise of smart cities transforms both communities and technology-use habits, and many different research orientations aim to shape a better future.

The objective of this book is to focus on Explainable Artificial Intelligence (XAI) in smart city development. Because recently designed advanced smart systems rely heavily on complex computational solutions (e.g., Deep Learning, Big Data, IoT architectures), their mechanisms become 'black boxes' to users. Since this leaves no clear clue about what is going on within these systems, anxieties about ensuring trustworthy tools also rise. In recent years, attempts have been made to solve this issue by additionally applying XAI methods to improve transparency. This book provides a timely, global reference source on cutting-edge research efforts to ensure the XAI factor in smart-city-oriented developments. The book covers both positive and negative outcomes, as well as future insights and the societal and technical aspects of XAI-based smart city research efforts.

This book contains nineteen contributions beginning with a presentation of the background of XAI techniques and sustainable smart-city applications. It then continues with chapters discussing XAI for Smart Healthcare, Smart Education, Smart Transportation, Smart Environment, Smart Urbanization and Governance, and Cyber Security for Smart Cities.


1 An Overview of Explainable Artificial Intelligence (XAI) from a Modern Perspective

Ana Carolina Borges Monteiro1, Reinaldo Padilha França1, Rangel Arthur2, and Yuzo Iano1
1School of Electrical Engineering and Computing (FEEC) – State University of Campinas (UNICAMP), Campinas, São Paulo, Brazil
2Faculty of Technology (FT) – State University of Campinas (UNICAMP), Limeira, São Paulo, Brazil
Contents
1.1 Introduction
1.2 Explainable Artificial Intelligence (XAI) Concept
1.3 Need for XAI in Neural Network-Oriented Applications
1.4 Discussion
1.5 Trends
1.6 Conclusions
References
DOI: 10.1201/9781003172772-1

1.1 Introduction

Like other disruptive general-purpose technologies, Artificial Intelligence (AI) will move from its current experimental state to become incorporated into the fabric of most modern businesses. Smart technology already does many things, and does them well: smartphone assistants, the autopilot that controls a plane for most of a flight, insurance companies that use machine learning to automate and personalize customer support, commercial companies optimizing their business with neural networks (Figure 1.1), and even AI for automated medical diagnosis. In parts of society, intelligent robots have already taken over manual labour, and as AI grows more capable over time it increasingly takes on analytical tasks as well (Došilović et al., 2018).
Figure 1.1 Neural Networks.
In a smart city, it is possible to interact and connect with citizens through safer, innovative, and even resilient technological solutions that are inclusive and accessible. Inclusive, sustainable urban planning that deeply involves residents, companies, and governments improves collaboration, transparency, and sustainability. The transformation that leads a city to be identified as 'smart' takes into account the characteristics and demands of its citizens, improving government services through data analysis so that city leaders and employees can make informed, actionable decisions. It also means providing better digital services to residents, incorporating intelligent solutions that answer specific questions, and offering technology applications and services safely, helping to disseminate digital technologies that benefit people and the cities where they operate (França et al., 2021).
Due to advances in computational power, which has become more accessible, and the large volume of data generated daily, Machine Learning (ML) models have become more complex, reaching increasingly impressive levels of performance. However, the increase in performance of ML models over time has not always been accompanied by an advance in their transparency; that is, intelligent models often cannot provide an exact explanation of how they reached their conclusions (França et al., 2021).
The field of AI has developed computational systems that can drive cars, fold proteins, synthesize chemical compounds, and even detect and identify high-energy particles at a superhuman level, that is, with a precision and efficiency that specialist professionals could not match. However, these AI algorithms cannot explain the reasoning behind their digital decisions. A computer that can master protein folding (the chemical process in which a protein's structure assumes its functional configuration, with importance for cellular metabolism and related problems) and can also explain the biological rules and properties involved is more useful, advantageous, and convenient than one that merely folds proteins without explanation (Haenlein & Kaplan, 2019; Holzinger et al., 2019).
The lack of explainability becomes a problem in, for example, the case of a teacher who is well respected and recognized by her students and colleagues, motivates her students, and shares her techniques with other teachers, yet is rated poorly by an AI algorithm designed to improve teaching quality by evaluating teacher performance, with no guidance or explanation of what led the algorithm to that rating. This illustrates the problem of not being able to explain the decisions made by an AI algorithm when it evaluates a person's performance: it is important that employees and workers evaluated by intelligent technology have access to the factors that lead to such digital judgements (AI-oriented decision making) (Putnik et al., 2018).
With such access, that person could challenge the decision made by the algorithm or work to improve those factors. It is in this context that the importance of XAI emerges, addressing the need to interpret an ML model. The need arises because the formulation of problems addressed by ML is often incomplete: a single insight is usually not enough to deal with a problem, since what matters is not only the 'what' but also the 'why' and even the 'how', that is, the reason an improvement can be achieved (Emmert-Streib et al., 2020).
In this sense, XAI is the field of research dedicated to methods by which AI applications produce solutions that can be explained to human beings, as a counterpoint to the development of completely black-box models, that is, opaque models in which not even the developers know how decisions are made. One of the great challenges of XAI, however, is the difficulty of reaching a concise definition of what a sufficiently explainable model is (Gunning, 2017).
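To make the contrast concrete, the following minimal sketch (assuming scikit-learn is available; the dataset and model choices are illustrative, not drawn from this chapter) trains an accurate but opaque ensemble model and then applies permutation importance, one simple post-hoc XAI technique, to expose which inputs actually drive its decisions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a tabular dataset and train an accurate but opaque ensemble model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure how much the
# model's held-out score drops; large drops mark influential features.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```

The ensemble itself stays opaque; the explanation is produced by probing it from outside, which is the model-agnostic strategy many XAI methods share.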
Machine learning itself has been transforming over time; in the recent past, the goal was to obtain an intelligent model of a digital decision system, its digital behaviour, and its responses to the problems presented. These results in ML have led to rapid growth in the elaboration and realization of AI. The conception of Deep Learning (DL) (Figure 1.2) is grounded in this past, and many techniques, such as convolutional neural networks (CNN), recurrent neural networks (RNN), reinforcement learning, and even adversarial networks, have demonstrated remarkable success. Despite these successes, it is difficult to clarify and elucidate the digital decisions and actions of these intelligent systems to their users (França, Monteiro, et al., 2020; Yan et al., 2015).
Figure 1.2 Deep Learning.
These DL models, built from layered artificial neural networks (ANN) with hundreds of millions of parameters, are not foolproof: they can be duped, as in the case of a pixel attack. A pixel attack is similar to an inference attack in which images are extracted from a facial recognition system given only API access to the ML model. It can be performed as a series of progressive queries starting from a base digital image; if anything is known about the target (such as age, sex, or race), an image closer to that person's likeness can be chosen as a starting point. A series of queries then flips pixels so as to increase the accuracy or confidence reported by the ML system, as sketched below. At some point, high confidence is achieved, producing a digital image similar to the original, which, although not perfect, is very close to the person's appearance. The complexity of this type of advanced AI application thus grows together with the difficulty of explaining its successes. XAI aims to explain the reasoning of new ML/DL systems and to understand and determine how an algorithm will behave in the future, producing better-defined intelligent models (França, Monteiro, et al., 2020; Yan et al., 2015).
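The query loop behind such an attack can be sketched as follows. This is a simplified illustration, not the exact procedure from the cited works; `query_confidence` is a hypothetical stand-in for the target system's scoring API, through which only scores, never model internals, are observable.

```python
import random

import numpy as np


def greedy_pixel_attack(image, query_confidence, n_rounds=1000, step=0.1):
    """Greedy query loop: perturb one pixel at a time and keep any change
    that raises the confidence score returned by the remote model.

    `query_confidence` is a hypothetical stand-in for the target system's
    API; `image` is a float array with values in [0, 1].
    """
    best = np.asarray(image, dtype=float).copy()
    best_score = query_confidence(best)
    for _ in range(n_rounds):
        candidate = best.copy()
        i = random.randrange(candidate.size)          # pick one pixel
        candidate.flat[i] = np.clip(
            candidate.flat[i] + random.choice([-step, step]), 0.0, 1.0)
        score = query_confidence(candidate)
        if score > best_score:                        # keep helpful flips
            best, best_score = candidate, score
    return best, best_score
```

Starting from a base image that already resembles the target narrows the search space, which is why knowing attributes such as the target's age or sex speeds the attack up.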
XAI-oriented models are intended to be paired with an interactive human-computer interface, that is, HMI techniques, which convert the AI model into a useful and understandable explanation dialogue for the end user. Three basic expectations apply: explaining the purpose behind how the parts (algorithm modules) that design and employ the system exert influence; explaining how data sources (datasets) and results (inferences) are used; and explaining how the inputs of an AI model lead to its outputs (outcomes) (Gunning et al., 2019).
It is worthwhile to exemplify XAI from medical practice: after examining patient data, the physician must understand and be able to explain to the patient the treatment or therapy proposed on the basis of the recommendation of an AI-oriented medical decision support system. Here, which medical data are assessed is a fundamental criterion; it is also essential to identify what medical data are required and what needs to be done for a suitable medical assessment (Samek & MĂźller, 2019).
XAI thus emerges as a new field of research that seeks more transparent approaches, especially for deep neural networks. This chapter therefore aims to provide an updated overview of XAI and its technologies, presenting the fundamentals of this disruptive technology and a landscape view of its applied aspects, as well as key concerns and challenges, with a concise bibliographic background featuring the potential of these technologies (Islam et al., 2021).

1.2 Explainable Artificial Intelligence (XAI) Concept

Explainable Artificial Intelligence (XAI) is artificial intelligence programmed to explain its own purpose and rationalize its decision process in a way that can be understood by the average user. XAI offers important information about how an Artificial Intelligence program makes decisions: the program's strengths and weaknesses; the specific criteria it uses to achieve a result; the appropriate level of confidence for different types of decisions; why a decision was made at the expense of other options; what errors the intelligent system is vulnerable to; and even how those errors can be corrected (Longo et al., 2020).
An important objective of XAI is to offer open algorithms, breaking the paradigm of 'black boxes' in which the algorithms used to reach a given decision are not understood, thereby countering skepticism toward inexplicable responses given by AI and machine learning mechanisms. As the use of artificial intelligence spreads, it becomes increasingly important to pay attention to issues of digital trust and data protection (Arrieta et al., 2020; Lin et al., 2020).
The term 'black box' refers to the underlying AI process: data are fed into an AI-oriented process to train the AI model. The model then learns patterns from the training data (through machine learning) and uses them to predict patterns (insights) in new data, or to make recommendations (inferences) based on what it has learned. Thus, underlying digital thinki...
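A minimal, self-contained sketch of this train-then-predict pipeline follows, along with one common way to peek inside it: training an interpretable surrogate model that mimics the black box. This assumes scikit-learn; the dataset and model choices are illustrative, not prescribed by the chapter.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Step 1 (the 'black box'): data goes in, an opaque model learns patterns.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Step 2 (one way to look inside): fit a shallow, readable decision tree
# to the black box's *predictions* rather than the true labels, so the
# tree's if-then rules approximate the opaque model's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed rules explain the black box only approximately, which is the usual trade-off of global surrogate methods: fidelity to the opaque model versus readability for the end user.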

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Contents
  6. Contributors
  7. 1 An Overview of Explainable Artificial Intelligence (XAI) from a Modern Perspective
  8. 2 Explainable Artificial Intelligence for Services Exchange in Smart Cities
  9. 3 IoT- and XAI-Based Smart Medical Waste Management
  10. 4 The Impact and Usage of Smartphone among Generation Z: A Study Based on Data Mining Techniques
  11. 5 Explainable Artificial Intelligence: Guardian for Cancer Care
  12. 6 ANN-Based Brain Tumor Classification: Performance Analysis Using K-Means and FCM Clustering With Various Training Functions
  13. 7 Recognition of the Most Common Trisomies through Automated Identification of Abnormal Metaphase Chromosome Cells
  14. 8 Smart Learning Solutions for Combating COVID-19
  15. 9 An Analysis of Machine Learning for Smart Transportation System (STS)
  16. 10 Classification of Kinematic Data Using Explainable Artificial Intelligence (XAI) for Smart Motion
  17. 11 Smart Urban Traffic Management for an Efficient Smart City
  18. 12 Systematic Comparison of Feature Selection Methods for Solar Energy Forecasting
  19. 13 Indoor Environment Assistance Navigation System Using Deep Convolutional Neural Networks
  20. 14 Pixel-Based Classification of Land Use/Land Cover Built-Up and Non-Built-Up Areas Using Google Earth Engine in an Urban Region (Delhi, India)
  21. 15 Emergence of Smart Home Systems Using IoT: Challenges and Limitations
  22. 16 Acceptance of Blockchain in Smart City Governance from the User Perspective
  23. 17 Explainable AI in Machine/Deep Learning for Intrusion Detection in Intelligent Transportation Systems for Smart Cities
  24. 18 Real-Time Identity Censorship of Videos to Enable Live Telecast Using NVIDIA Jetson Nano
  25. 19 Smart Cities' Information Security: Deep Learning-Based Risk Management
  26. Index