Hands-On Explainable AI (XAI) with Python
eBook - ePub


Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps

Denis Rothman

  • 454 pages
  • English
  • ePub format

About the Book

Open up the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools needed to deploy Explainable AI (XAI) in your apps and reporting interfaces.

Key Features

  • Learn explainable AI tools and techniques to produce trustworthy AI results
  • Understand how to detect, handle, and avoid common issues with AI ethics and bias
  • Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools

Book Description

Effectively translating AI insights for business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and their findings are often subtle, surprising, and technically complex to describe.

Hands-On Explainable AI (XAI) with Python will see you work through specific hands-on machine learning Python projects that are strategically arranged to enhance your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions using Python, while supporting the visualization of machine learning models in user-explainable interfaces.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

What you will learn

  • Plan for XAI through the different stages of the machine learning life cycle
  • Estimate the strengths and weaknesses of popular open-source XAI applications
  • Examine how to detect and handle bias issues in machine learning data
  • Review ethics considerations and tools to address common problems in machine learning data
  • Share XAI design and visualization best practices
  • Integrate explainable AI results using Python models
  • Use XAI toolkits for Python in machine learning life cycles to solve business problems

Who this book is for

This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.

Some of the potential readers of this book include:

  • Professionals who already use Python for data science, machine learning, research, and analysis
  • Data analysts and data scientists who want an introduction into explainable AI tools and techniques
  • AI project managers who must meet the contractual and legal obligations of AI explainability during the acceptance phase of their applications


Information

Year
2020
Publisher
Packt Publishing
ISBN
9781800202764

Chapter 4

Microsoft Azure Machine Learning Model Interpretability with SHAP

Sentiment analysis will become one of the key services AI will provide. Social media, as we know it today, forms a seed, not the full-blown social model. Our opinions, consumer habits, browsing data, and location history constitute a formidable source of data for AI models.
The sum of all of the information about our daily activities is challenging to analyze. In this chapter, we will focus on data we voluntarily publish on cloud platforms: reviews.
We publish reviews everywhere. We write reviews about books, movies, equipment, smartphones, cars, and sports—everything that exists in our daily lives. In this chapter, we will analyze IMDb reviews of films. IMDb offers datasets of review information for commercial and non-commercial use.
As AI specialists, we need to start running AI models on the reviews as quickly as possible. After all, the data is available, so let's use it! Then, the harsh reality of prediction accuracy changes our pleasant endeavor into a nightmare. If the model is simple, its interpretability poses little to no problem. However, complex datasets such as the IMDb review dataset contain heterogeneous data that make it challenging to make accurate predictions.
If the model is complex, even when the accuracy seems correct, we cannot easily explain the predictions. We need a tool to detect the relationship between local specific features and a model's global output. We do not have the resources to write an explainable AI (XAI) tool for each model and project we implement. We need a model-agnostic algorithm to apply to any model to detect the contribution of each feature to a prediction.
In this chapter, we will focus on SHapley Additive exPlanations (SHAP), which is part of the Microsoft Azure Machine Learning model interpretability solution. We will use the words "interpret" and "explain" interchangeably for explainable AI; both mean that we are providing an explanation or an interpretation of a model.
SHAP can explain the output of any machine learning model. In this chapter, we will analyze and interpret the output of a linear model that's been applied to sentiment analysis with SHAP. We will use the algorithms and visualizations that come mainly from Su-In Lee's lab at the University of Washington and Microsoft Research.
We will start by understanding the mathematical foundations of Shapley values. We will then get started with SHAP in a Python Jupyter Notebook on Google Colaboratory.
The IMDb dataset contains vast amounts of information. We will write a data interception function to create a unit test that targets the behavior of the AI model using SHAP.
Finally, we will explain reviews from the IMDb dataset with SHAP algorithms and visualizations.
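The chapter's actual interception function is not reproduced in this excerpt; as a rough sketch of the idea, assuming a function name (`intercept_dataset`) and a simple seeded random-sampling strategy, it could look like this:

```python
import random

def intercept_dataset(reviews, labels, sample_size=100, seed=42):
    """Return a small, reproducible sample of the dataset so that a
    unit test can target the model's behavior on just those records."""
    rng = random.Random(seed)
    indices = rng.sample(range(len(reviews)), min(sample_size, len(reviews)))
    return [reviews[i] for i in indices], [labels[i] for i in indices]

# Toy stand-in for the IMDb reviews and their sentiment labels.
reviews = [f"review {i}" for i in range(1000)]
labels = [i % 2 for i in range(1000)]
small_X, small_y = intercept_dataset(reviews, labels, sample_size=20)
print(len(small_X))  # 20
```

Fixing the seed makes the intercepted sample deterministic, which is what turns it into a usable unit test target.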
This chapter covers the following topics:
  • Game theory basics
  • Model-agnostic explainable AI
  • Installing and running SHAP
  • Importing and splitting sentiment analysis datasets
  • Vectorizing the datasets
  • Creating a dataset interception function to target small samples of data
  • Linear models and logistic regression
  • Interpreting sentiment analysis with SHAP
  • Exploring SHAP explainable AI graphs
Our first step will be to understand SHAP from a mathematical point of view.

Introduction to SHAP

SHAP was derived from game theory. Lloyd Stowell Shapley gave his name to this game theory model in the 1950s. In game theory, each player decides to contribute to a coalition of players to produce a total value that will be greater than the sum of their individual values.
The Shapley value is the marginal contribution of a given player. The goal is to find and explain the marginal contribution of each participant in a coalition of players.
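In standard game-theory notation (not shown in this excerpt), the Shapley value of player $i$ in a game with player set $N$, $n = |N|$, and characteristic function $v$ is:

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \left( v(S \cup \{i\}) - v(S) \right)
```

Each term weighs player $i$'s marginal contribution $v(S \cup \{i\}) - v(S)$ by the probability that coalition $S$ precedes $i$ in a uniformly random ordering of the players.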
For example, each player on a football team often receives a different bonus based on their performance over a few games. The Shapley value provides a fair way to distribute the bonus among the players according to each one's contribution to the games.
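A minimal brute-force sketch of this bonus example in Python (the three players and bonus values below are invented for illustration; real SHAP implementations use far more efficient approximations):

```python
from itertools import permutations

# Hypothetical characteristic function for a 200-unit team bonus:
# v[S] is the bonus coalition S would earn on its own.
v = {
    frozenset(): 0,
    frozenset("A"): 80,
    frozenset("B"): 40,
    frozenset("C"): 20,
    frozenset("AB"): 140,
    frozenset("AC"): 120,
    frozenset("BC"): 80,
    frozenset("ABC"): 200,
}

def shapley_values(players, value):
    """Average each player's marginal contribution over all orderings."""
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            # Marginal contribution of p when joining the coalition.
            phi[p] += value[coalition | {p}] - value[coalition]
            coalition = coalition | {p}
    return {p: phi[p] / len(orderings) for p in players}

print(shapley_values("ABC", v))  # {'A': 100.0, 'B': 60.0, 'C': 40.0}
```

Note that the three shares sum to exactly 200, the value of the full coalition: the Shapley value distributes the whole bonus, no more and no less.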
In this section, we will first explore SHAP intuitively. Then, we will go through the mathematical explanation of the Shapley value. Finally, we will apply the mathematical model of the Shapley value to a sentiment analysis of movie reviews.
We will start with an intuitive explanation of the Shapley value.

Key SHAP principles

In this section, we will learn about Shapley values through the principles of symmetry, null players, and additivity. We will explore these concepts step by step with intuitive examples.
The first principle we will explore is symmetry.

Symmetry

If all of the players in a game have the same contribution, their contribution will be symmetrical. Suppose that, for a flight, the plane cannot take off without a pilot and a copilot. They both have the same contribution.
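This symmetry can be checked numerically with a tiny sketch of the pilot/copilot game (the characteristic-function values below are assumptions for illustration): the two players are interchangeable, so they must receive equal Shapley values.

```python
from itertools import permutations

# Pilot/copilot game: the flight has value 1 only when both are present.
v = {
    frozenset(): 0.0,
    frozenset({"pilot"}): 0.0,
    frozenset({"copilot"}): 0.0,
    frozenset({"pilot", "copilot"}): 1.0,
}

players = ["pilot", "copilot"]
orders = list(permutations(players))
shares = {p: 0.0 for p in players}
for order in orders:
    coalition = frozenset()
    for p in order:
        # Marginal contribution of p when joining the current coalition.
        shares[p] += v[coalition | {p}] - v[coalition]
        coalition = coalition | {p}
shares = {p: s / len(orders) for p, s in shares.items()}
print(shares)  # {'pilot': 0.5, 'copilot': 0.5}
```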
However, in a basketball team, if one player scores 25 points and another just a few points, the situation is asymmetrical. The Shapley value provides a way to find a fair distribution...

Table of Contents

  1. Preface
  2. Explaining Artificial Intelligence with Python
  3. White Box XAI for AI Bias and Ethics
  4. Explaining Machine Learning with Facets
  5. Microsoft Azure Machine Learning Model Interpretability with SHAP
  6. Building an Explainable AI Solution from Scratch
  7. AI Fairness with Google's What-If Tool (WIT)
  8. A Python Client for Explainable AI Chatbots
  9. Local Interpretable Model-Agnostic Explanations (LIME)
  10. The Counterfactual Explanations Method
  11. Contrastive XAI
  12. Anchors XAI
  13. Cognitive XAI
  14. Answers to the Questions
  15. Other Books You May Enjoy
  16. Index