CSCM23 L10 Introduction To XAI
DESIGNING-IN TRUST,
UNDERSTANDING, AND
NEGOTIATION
Towards Explainable AI
CSCM23
Designing-In Trust, Understanding, and Negotiation
XAI
A Brief
DEFINITIONS
Tim Miller, "Explanation in artificial intelligence: Insights from the social sciences." Artificial Intelligence, 2019.
Kim, Been, Rajiv Khanna, and Oluwasanmi O. Koyejo. "Examples are not enough, learn to criticize! Criticism for interpretability." Advances in Neural Information Processing Systems (2016).
XAI AS A FIELD
▪︎ Data Driven
Zoom Poll
→ reliable
→ responsible
→ reliable
→ resilient
WHY XAI?
▪︎ Human curiosity and learning
▪︎ Goal of science
▪︎ Safety measures
WHY XAI?
▪︎ Detecting bias
▪︎ …
SCOPE OF INTERPRETABILITY
▪︎ Algorithm Transparency
▪︎ How does the algorithm create the model? (algorithmic perspective)
▪︎ Global Model Interpretability
Model-Agnostic Methods
INTERPRETABLE MODELS
Just to drop some names …
▪︎ Standard simple / weak models …
▪︎ Linear Regression
▪︎ Logistic Regression
▪︎ Decision Trees
▪︎ Naïve Bayes
▪︎ K-nearest Neighbours
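A defining property of these models is that they can be read directly. A minimal sketch (scikit-learn assumed available, illustrative feature names): a shallow decision tree on the Iris dataset, printed as human-readable rules — the model itself is the explanation.

```python
# Train a shallow decision tree and print its learned rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned splits as nested if/else rules.
rules = export_text(
    tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
)
print(rules)
```

Capping the depth trades a little accuracy for rules short enough for a human to audit.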
MODEL-AGNOSTIC METHODS: PARTIAL DEPENDENCE PLOT (PDP)
Friedman, Jerome H. "Greedy function approximation: A gradient boosting machine." Annals of Statistics (2001): 1189-1232.
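Friedman's partial dependence can be computed against any fitted model: sweep one feature over a grid, clamp it for every row, and average the predictions. A minimal sketch on synthetic data (all names and the quadratic toy relationship are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.1, 500)  # feature 0: quadratic effect
model = GradientBoostingRegressor().fit(X, y)

def partial_dependence(model, X, feature, grid):
    """PD(v) = average prediction when `feature` is forced to v for every row."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v          # clamp the feature of interest
        pd_values.append(model.predict(X_mod).mean())
    return np.array(pd_values)

grid = np.linspace(-2, 2, 9)
pd_curve = partial_dependence(model, X, feature=0, grid=grid)
```

Plotting `pd_curve` against `grid` should recover the U-shape of feature 0's quadratic effect, regardless of what kind of model was fitted — that is the model-agnostic part.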
MODEL-AGNOSTIC METHODS: INDIVIDUAL CONDITIONAL EXPECTATION (ICE)
▪︎ Instance version of PDP
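The instance version of PDP draws one curve per row instead of one averaged curve, which exposes interactions that the average hides. A sketch under the same assumptions as before (synthetic data, illustrative names) — note that averaging the per-instance curves recovers the PDP:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = X[:, 0] * (X[:, 1] > 0) + rng.normal(0, 0.05, 300)  # x0 matters only when x1 > 0
model = RandomForestRegressor(random_state=0).fit(X, y)

def ice_curves(model, X, feature, grid):
    """One curve per instance: predictions as `feature` is swept,
    all other features held at their observed values."""
    curves = np.empty((len(X), len(grid)))
    for j, v in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = v
        curves[:, j] = model.predict(X_mod)
    return curves  # row i = curve of instance i; column mean = the PDP

grid = np.linspace(-2, 2, 5)
curves = ice_curves(model, X, feature=0, grid=grid)
pdp = curves.mean(axis=0)  # the averaged curve is flat-ish and hides the interaction
```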
MODEL-AGNOSTIC METHODS: PERMUTATION FEATURE IMPORTANCE
▪︎ Global Method
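The idea can be sketched in a few lines: shuffle one feature's column to break its association with the target, and measure how much a held-out score drops. A minimal illustration on synthetic data (scikit-learn also ships a built-in `sklearn.inspection.permutation_importance`; this hand-rolled version is just to show the mechanics):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# shuffle=False keeps the two informative features in columns 0 and 1.
X, y = make_classification(n_samples=600, n_features=5, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    """Importance of feature j = mean drop in accuracy after shuffling column j."""
    rng = np.random.default_rng(seed)
    base = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])   # break the feature/target association
            drops.append(base - model.score(X_perm, y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model, X_te, y_te)
```

It is global because one number summarises each feature over the whole dataset, and model-agnostic because only `predict`/`score` is ever called.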
MODEL-AGNOSTIC METHODS: GLOBAL SURROGATE
▪︎ Global Method
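A global surrogate is an interpretable model trained to mimic the black box's predictions (not the true labels). A minimal sketch, assuming scikit-learn and using a random forest as the stand-in black box:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate learns the black box's *outputs*, not the ground truth.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
```

Fidelity, not accuracy, is the figure of merit here: a surrogate is only a trustworthy explanation insofar as it actually reproduces the black box's behaviour.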
MODEL-AGNOSTIC METHODS: LOCAL SURROGATE (LIME)
▪︎ Instance Method
▪︎ Local surrogate models are interpretable models that are used to explain individual predictions of black box machine learning models.
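The `lime` package implements this in full; the mechanics can be sketched by hand in the LIME spirit (perturb around the instance, weight samples by proximity, fit a weighted linear model whose coefficients are the local explanation). Data, kernel width, and function names below are illustrative assumptions, not the LIME library's API:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + np.sin(3 * X[:, 2])  # feature 3 is irrelevant
black_box = RandomForestRegressor(random_state=0).fit(X, y)

def lime_like_explain(model, x, n_samples=500, width=0.75):
    """Fit a proximity-weighted linear model on perturbations around x;
    its coefficients approximate the black box locally."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, len(x)))      # perturb around x
    weights = np.exp(-((Z - x) ** 2).sum(axis=1) / width ** 2)   # proximity kernel
    local = Ridge(alpha=1.0).fit(Z, model.predict(Z), sample_weight=weights)
    return local.coef_

coefs = lime_like_explain(black_box, X[0])
# Each coefficient is read as the local effect of that feature on this prediction.
```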
MODEL-AGNOSTIC METHODS: SHAPLEY VALUES
▪︎ Instance Method
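The Shapley value of a feature is its weighted average marginal contribution over all coalitions of the other features. Exact enumeration is exponential, so libraries like SHAP approximate it; for a 3-feature toy model it can be computed exactly. The "absent features take the dataset mean" value function below is one common simplification, not the only choice:

```python
import numpy as np
from itertools import combinations
from math import factorial
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2 * X[:, 0] + 1 * X[:, 1] + 0 * X[:, 2]   # feature 2 contributes nothing
model = LinearRegression().fit(X, y)

x, baseline = X[0], X.mean(axis=0)
n = len(x)

def value(S):
    """Prediction with features in coalition S set to x, the rest at the mean."""
    z = baseline.copy()
    z[list(S)] = x[list(S)]
    return model.predict(z.reshape(1, -1))[0]

phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi[i] += w * (value(S + (i,)) - value(S))

# Efficiency property: contributions sum to f(x) minus the baseline prediction.
assert np.isclose(phi.sum(), model.predict(x.reshape(1, -1))[0] - value(()))
```

For a linear model this reduces to coefficient times deviation from the baseline, which makes the toy result easy to check by hand.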
EXAMPLE-BASED EXPLANATIONS
▪︎ Counterfactual Explanations
▪︎ "If X had not occurred, Y would not have occurred"
▪︎ Adversarial Examples
▪︎ Adversarial examples are counterfactual examples with the aim to deceive the model
▪︎ …
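A minimal sketch of the counterfactual idea, assuming a linear classifier on synthetic data: walk the instance toward the decision boundary until the predicted class flips, and read the difference as "had these features been different, the prediction would differ". Real methods (e.g. Wachter et al.) additionally optimise for proximity and sparsity; an adversarial example is the same construction aimed at deceiving the model rather than informing the user.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, target, step=0.05, max_iter=2000):
    """Move x along the (normalised) coefficient direction of a linear model
    until the predicted class flips to `target`."""
    x_cf = x.copy()
    direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
    sign = 1.0 if target == 1 else -1.0
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        x_cf = x_cf + sign * step * direction   # step toward the decision boundary
    return None

x = X[0]
original = model.predict(x.reshape(1, -1))[0]
x_cf = counterfactual(model, x, target=1 - original)
delta = x_cf - x   # the explanation: what had to change to flip the outcome
```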
VISUALISING NEURAL NETWORKS
RECENT WORK
On Understanding the Influence of Controllable Factors with a Feature Attribution Algorithm: a Medical Case Study (IEEE, 2022)
Veera Raghava Reddy Kovvuri (Swansea University), Siyuan Liu (Nanyang Technological University), Monika Seisenberger (Swansea University)
v.r.r.kovvuri@swansea.ac.uk, syliu@ntu.edu.sg, m.seisenberger@swansea.ac.uk

From the abstract: Feature attribution XAI algorithms enable their users to gain insight into the underlying patterns of large datasets through their feature importance calculation. Existing feature attribution algorithms (see e.g., [2], [19], [24] for overviews) treat all features homogeneously when computing their relative importances; such homogeneity may not always give desirable results. A naive approach would be to build another predictive model which only considers controllable features, and apply feature attribution algorithms to that model. However, as explained in [8] and [26], dropping features from models can negatively impact model performance. Thus, instead of building models with fewer features, we suggest creating algorithms that are able to treat controllable factors differently from uncontrollable ones.

Applications cited: feature attribution algorithms to understand important factors affecting cancer patient survivability; [7] employs feature attribution algorithms to study factors affecting the transmission of SARS-CoV-2; and [16] uses feature attribution to analyse factors affecting foreign exchange markets.

Experimental setup (COVID-19 control-measures study; regions include … and Humber, Northern Ireland, Scotland and Wales): to remove noise and achieve a more accurate Rt estimation, data points with cumulative cases less than 20 are dropped for each region, keeping 3,936 instances; a sliding-window mean filter of size 3 is used to filter noise in daily cases. The dataset is split 70% for training and 30% for testing with a random forest classifier, achieving a high prediction accuracy of 94.4%.

Findings: SHAP considers High Restriction on Cafes and Restaurants Access (CR H), High Restriction on Pubs and Bars Access (PB H), Number of Daily Infections (Cases), Medium Restriction on Pubs and Bars Access (PB M), and High Restriction on Sport and Leisure Facilities (SL H) as the top five effective control measures; whereas CAFA considers CR H, PB H, PB M, Medium Restriction on Hospital and Nursing Home Visits (HV M), and … — results that are agreeable with the ones given by SHAP.

[Figures: (a) A lung cancer instance randomly selected from the Simulacrum dataset; uncontrollable features are Age, Ethnicity, Sex, and Height. (b) Global views of lung cancer cases in the Simulacrum (left: SHAP; right: CAFA). (c) Global views of the UCI Breast Cancer dataset (left: SHAP; right: CAFA); uncontrollable features are Age and Menopause.]
Using Explainable AI to Understand Impacts of Non-pharmaceutical Control Measures on COVID-19 Transmission for Evidence-based Policy
An Example
Funded by Welsh Government
2020-2021
"This project will gain deep insights into the effects of measures used to control the spread of COVID-19. … We will further explore XAI techniques to reveal dynamics between control measures and disease transmission."
Project Question: Which control measures are more effective?
Two Objectives:
Multiple Information Sources (e.g., Wikipedia, Government Websites, Kaggle, etc.) → Data in Tabular Format → Machine Learning Models → XAI Results
Current Work
Important Features
ANTI-XAI
▪︎ Cynthia Rudin
"Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead"
IN SHORT …