Hands-On Explainable AI (XAI) with Python
eBook - ePub

Hands-On Explainable AI (XAI) with Python

Interpret, visualize, explain, and integrate reliable AI for fair, secure, and trustworthy AI apps

Denis Rothman

  • 454 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS and Android


About this book

Resolve the black box models in your AI applications to make them fair, trustworthy, and secure. Familiarize yourself with the basic principles and tools to deploy Explainable AI (XAI) into your apps and reporting interfaces.

Key Features

  • Learn explainable AI tools and techniques to process trustworthy AI results
  • Understand how to detect, handle, and avoid common issues with AI ethics and bias
  • Integrate fair AI into popular apps and reporting tools to deliver business value using Python and associated tools

Book Description

Effectively translating AI insights for business stakeholders requires careful planning, design, and visualization choices. The problem, the model, and the relationships among variables and findings are often subtle, surprising, and technically complex to describe.

Hands-On Explainable AI (XAI) with Python will see you work through hands-on machine learning Python projects that are strategically arranged to strengthen your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud's XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and use Python to integrate predictions and machine learning model visualizations into user-explainable interfaces.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

What you will learn

  • Plan for XAI through the different stages of the machine learning life cycle
  • Estimate the strengths and weaknesses of popular open-source XAI applications
  • Examine how to detect and handle bias issues in machine learning data
  • Review ethics considerations and tools to address common problems in machine learning data
  • Share XAI design and visualization best practices
  • Integrate explainable AI results using Python models
  • Use XAI toolkits for Python in machine learning life cycles to solve business problems

Who this book is for

This book is not an introduction to Python programming or machine learning concepts. You must have some foundational knowledge and/or experience with machine learning libraries such as scikit-learn to make the most out of this book.

Some of the potential readers of this book include:

  • Professionals who already use Python for data science, machine learning, research, and analysis
  • Data analysts and data scientists who want an introduction into explainable AI tools and techniques
  • AI project managers who must meet the contractual and legal obligations of AI explainability during the acceptance phase of their applications


Information

Année
2020
ISBN
9781800202764

Chapter 4

Microsoft Azure Machine Learning Model Interpretability with SHAP

Sentiment analysis will become one of the key services AI provides. Social media, as we know it today, is only the seed, not the full-blown social model. Our opinions, consumer habits, browsing data, and location history constitute a formidable source of data for AI models.
The sum of all of the information about our daily activities is challenging to analyze. In this chapter, we will focus on data we voluntarily publish on cloud platforms: reviews.
We publish reviews everywhere. We write reviews about books, movies, equipment, smartphones, cars, and sports—everything that exists in our daily lives. In this chapter, we will analyze IMDb reviews of films. IMDb offers datasets of review information for commercial and non-commercial use.
As AI specialists, we need to start running AI models on the reviews as quickly as possible. After all, the data is available, so let's use it! Then, the harsh reality of prediction accuracy changes our pleasant endeavor into a nightmare. If the model is simple, its interpretability poses little to no problem. However, complex datasets such as the IMDb review dataset contain heterogeneous data that make it challenging to make accurate predictions.
If the model is complex, even when the accuracy seems correct, we cannot easily explain the predictions. We need a tool to detect the relationship between local specific features and a model's global output. We do not have the resources to write an explainable AI (XAI) tool for each model and project we implement. We need a model-agnostic algorithm to apply to any model to detect the contribution of each feature to a prediction.
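To make the idea of detecting each feature's contribution concrete, here is a minimal, hypothetical sketch of a model-agnostic approach: it treats the model as a black-box predict function and measures how the prediction moves when each feature is replaced by a baseline value. SHAP refines this idea by averaging marginal effects over all feature subsets; one-at-a-time ablation is only the simplest approximation.

```python
# Minimal, model-agnostic feature-contribution sketch (hypothetical helper,
# not a library API): the model is just a black-box predict() function.

def feature_contributions(predict, x, baseline):
    """For each feature, replace its value with the baseline value and
    measure how the prediction changes. SHAP generalizes this by
    averaging marginal effects over all feature subsets."""
    full_pred = predict(x)
    contributions = {}
    for i in range(len(x)):
        ablated = list(x)
        ablated[i] = baseline[i]       # knock one feature back to baseline
        contributions[i] = full_pred - predict(ablated)
    return contributions

# A toy linear "model" standing in for any black box.
weights = [0.5, -1.0, 2.0]
predict = lambda features: sum(w * f for w, f in zip(weights, features))

x = [4.0, 2.0, 1.0]            # one sample
baseline = [0.0, 0.0, 0.0]     # reference point
contrib = feature_contributions(predict, x, baseline)
# For a linear model this recovers w_i * (x_i - baseline_i).
```

For the toy linear model the contributions recover the weighted differences exactly; for a complex model they are only a first approximation, which is why SHAP's coalition-based averaging is needed.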
In this chapter, we will focus on SHapley Additive exPlanations (SHAP), which is part of the Microsoft Azure Machine Learning model interpretability solution. We will use the words "interpret" and "explain" interchangeably for explainable AI: both mean that we are providing an explanation or an interpretation of a model.
SHAP can explain the output of any machine learning model. In this chapter, we will analyze and interpret the output of a linear model that's been applied to sentiment analysis with SHAP. We will use the algorithms and visualizations that come mainly from Su-In Lee's lab at the University of Washington and Microsoft Research.
We will start by understanding the mathematical foundations of Shapley values. We will then get started with SHAP in a Python Jupyter Notebook on Google Colaboratory.
The IMDb dataset contains vast amounts of information. We will write a data interception function to create a unit test that targets the behavior of the AI model using SHAP.
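The interception function itself can be as simple as slicing a fixed window out of the dataset. The helper below is a hypothetical illustration of the idea, not the book's actual implementation; the stand-in data merely mimics the shape of a review dataset.

```python
def intercept_dataset(X, y, start=0, size=100):
    """Hypothetical interception helper: carve a small, fixed slice out
    of a large dataset so a unit test can target the model's behavior
    on just those samples."""
    return X[start:start + size], y[start:start + size]

# Stand-in data; the chapter itself uses the IMDb review dataset.
reviews = [f"review {i}" for i in range(1000)]
labels = [i % 2 for i in range(1000)]

sample_X, sample_y = intercept_dataset(reviews, labels, start=10, size=5)
```

Because the slice is deterministic, a unit test run against `sample_X` and `sample_y` exercises exactly the same samples every time.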
Finally, we will explain reviews from the IMDb dataset with SHAP algorithms and visualizations.
This chapter covers the following topics:
  • Game theory basics
  • Model-agnostic explainable AI
  • Installing and running SHAP
  • Importing and splitting sentiment analysis datasets
  • Vectorizing the datasets
  • Creating a dataset interception function to target small samples of data
  • Linear models and logistic regression
  • Interpreting sentiment analysis with SHAP
  • Exploring SHAP explainable AI graphs
Our first step will be to understand SHAP from a mathematical point of view.

Introduction to SHAP

SHAP was derived from game theory. Lloyd Stowell Shapley gave his name to this game theory model in the 1950s. In game theory, each player decides to contribute to a coalition of players to produce a total value greater than the sum of their individual values.
The Shapley value is the marginal contribution of a given player. The goal is to find and explain the marginal contribution of each participant in a coalition of players.
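In formula form (standard game-theory notation, not specific to this book): for a set N of n players and a characteristic function v that assigns a value to every coalition, the Shapley value of player i averages i's marginal contribution over all coalitions S that do not contain i:

```latex
\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

The factorial weight is the probability that, in a uniformly random ordering of the players, exactly the players in S arrive before player i.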
For example, the players on a football team often receive different bonuses based on each player's performance over a few games. The Shapley value provides a fair way to distribute a bonus to each player based on their contribution to the games.
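The football-bonus idea can be sketched in a few lines of Python. This is a brute-force computation over all player orderings (exact, but practical only for small games), with a hypothetical bonus pool made up for the example:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values by brute force: average each player's
    marginal contribution over every ordering of the players.
    Only practical for small games (n! orderings)."""
    orderings = list(permutations(players))
    totals = dict.fromkeys(players, 0.0)
    for order in orderings:
        coalition = frozenset()
        for p in order:
            joined = coalition | {p}
            totals[p] += v(joined) - v(coalition)  # marginal contribution
            coalition = joined
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical bonus pool for three players: v(coalition) is the bonus
# that coalition earns together.
bonus = {
    frozenset(): 0.0,
    frozenset({"A"}): 10.0, frozenset({"B"}): 20.0, frozenset({"C"}): 30.0,
    frozenset({"A", "B"}): 40.0, frozenset({"A", "C"}): 50.0,
    frozenset({"B", "C"}): 60.0, frozenset({"A", "B", "C"}): 90.0,
}
phi = shapley_values(["A", "B", "C"], lambda s: bonus[frozenset(s)])
# phi == {"A": 20.0, "B": 30.0, "C": 40.0}; the shares sum to v(N) = 90.
```

Note the efficiency property in action: the individual shares always add up to the value of the full coalition, which is exactly what makes the split feel fair.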
In this section, we will first explore SHAP intuitively. Then, we will go through the mathematical explanation of the Shapley value. Finally, we will apply the mathematical model of the Shapley value to a sentiment analysis of movie reviews.
We will start with an intuitive explanation of the Shapley value.

Key SHAP principles

In this section, we will learn about Shapley values through the principles of symmetry, null players, and additivity. We will explore these concepts step by step with intuitive examples.
The first principle we will explore is symmetry.

Symmetry

If all of the players in a game have the same contribution, their contribution will be symmetrical. Suppose that, for a flight, the plane cannot take off without a pilot and a copilot. They both have the same contribution.
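The pilot/copilot case can be checked numerically with a toy game (made up for this illustration, not from the book): neither crew member produces value alone, and the flight is worth 1.0 only when both are present, so their Shapley values come out equal.

```python
from itertools import permutations

# Toy "flight" game: the plane flies (value 1.0) only when BOTH the
# pilot and the copilot are in the coalition; neither has value alone.
def v(coalition):
    return 1.0 if {"pilot", "copilot"} <= set(coalition) else 0.0

players = ["pilot", "copilot"]
orderings = list(permutations(players))
shapley = dict.fromkeys(players, 0.0)
for order in orderings:
    coalition = set()
    for p in order:
        shapley[p] += v(coalition | {p}) - v(coalition)
        coalition.add(p)
shapley = {p: s / len(orderings) for p, s in shapley.items()}
# Symmetric players receive identical shares: 0.5 each.
```

Symmetry guarantees that players with identical marginal contributions in every coalition always receive identical Shapley values.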
However, in a basketball team, if one player scores 25 points and another just a few points, the situation is asymmetrical. The Shapley value provides a way to find a fair distributio...

Table of Contents

  1. Preface
  2. Explaining Artificial Intelligence with Python
  3. White Box XAI for AI Bias and Ethics
  4. Explaining Machine Learning with Facets
  5. Microsoft Azure Machine Learning Model Interpretability with SHAP
  6. Building an Explainable AI Solution from Scratch
  7. AI Fairness with Google's What-If Tool (WIT)
  8. A Python Client for Explainable AI Chatbots
  9. Local Interpretable Model-Agnostic Explanations (LIME)
  10. The Counterfactual Explanations Method
  11. Contrastive XAI
  12. Anchors XAI
  13. Cognitive XAI
  14. Answers to the Questions
  15. Other Books You May Enjoy
  16. Index
Citation standards for Hands-On Explainable AI (XAI) with Python

APA 6 Citation

Rothman, D. (2020). Hands-On Explainable AI (XAI) with Python (1st ed.). Packt Publishing. Retrieved from https://www.perlego.com/book/1694602/handson-explainable-ai-xai-with-python-interpret-visualize-explain-and-integrate-reliable-ai-for-fair-secure-and-trustworthy-ai-apps-pdf (Original work published 2020)

Chicago Citation

Rothman, Denis. (2020) 2020. Hands-On Explainable AI (XAI) with Python. 1st ed. Packt Publishing. https://www.perlego.com/book/1694602/handson-explainable-ai-xai-with-python-interpret-visualize-explain-and-integrate-reliable-ai-for-fair-secure-and-trustworthy-ai-apps-pdf.

Harvard Citation

Rothman, D. (2020) Hands-On Explainable AI (XAI) with Python. 1st edn. Packt Publishing. Available at: https://www.perlego.com/book/1694602/handson-explainable-ai-xai-with-python-interpret-visualize-explain-and-integrate-reliable-ai-for-fair-secure-and-trustworthy-ai-apps-pdf (Accessed: 14 October 2022).

MLA 7 Citation

Rothman, Denis. Hands-On Explainable AI (XAI) with Python. 1st ed. Packt Publishing, 2020. Web. 14 Oct. 2022.