Nikita · Towards Data Science · Apr 12, 2020 · 4 min read
Interpretability and TCAV
Make sure expert knowledge is reflected in your deep learning model

While working on my master's thesis and trying to get a hold of the research, I paused for a minute and wanted to truly appreciate the breakthrough discoveries in the field of machine learning. The ideas are based on common sense and simple math.

I am currently working on the interpretability of model predictions. I would like to share my understanding of what interpretability is and go one step further by explaining concept activation vectors and their importance. As always, I’ll try my best to explain it in as simple terms as possible. Here are a few concepts to get you going.

Interpretability/Explanation: Suppose there is a model trained to classify a set of images as cat or not cat. Interpretability would be explaining why a particular image was classified as a cat. This is important to confirm whether domain expertise has been reflected in the neural network model. It also helps the user develop the trust needed to use the model.

Local explanation: If we consider a single data point, say a single image of a cat, and explain why it is classified the way it is, that would be a local explanation. For example, it could be the pixels of the face and body of that particular cat.

Global explanation: explaining which features or concepts give rise to a classification in the model as a whole. In the cat-or-not example, the global explanation could be the whiskers or the ears of a cat. You can see that I have stressed the word model, because these features help in understanding the overall behavior of the model and not just a single image.

Okay, I think you have got the basics now.

Let’s now talk about a local explanation method, namely saliency maps. For those of you who don’t know much about saliency maps, a saliency map assigns to each input feature a score for how important it was for the prediction. Concretely, it is the derivative of the class probability with respect to each pixel: will a small change of a pixel change the probability of that particular class? If yes, then by how much?
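As a rough sketch of the idea (assuming a PyTorch image classifier that takes a channels × height × width tensor and outputs logits; the function name and details here are only for illustration), a vanilla gradient saliency map can be computed like this:

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient of the target-class probability w.r.t. each input pixel."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)  # track gradients on the input
    logits = model(image.unsqueeze(0))                    # add a batch dimension
    prob = torch.softmax(logits, dim=1)[0, target_class]
    prob.backward()                                       # d p(class) / d pixel
    # Pixel importance: gradient magnitude, maximised over colour channels
    return image.grad.abs().max(dim=0).values
```

Pixels with a large gradient magnitude are the ones whose small changes move the class probability the most.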

The problem with saliency maps is confirmation bias: we see only what we already believe to be true. If, instead of a human’s subjective judgment, we had a quantitative measure of which concept mattered more, it would be a better measure of quality. Also, since humans don’t think in terms of pixels, we should consider high-level, human-understandable concepts. Further, understanding why a model works as a whole is often more important than a local explanation, so we would like to focus on global explanations. These desires gave rise to Testing with Concept Activation Vectors (TCAV).

TCAV provides a quantitative measure of how important a concept that we come up with after training is for a prediction. But how would you represent a concept? We do it using a vector, namely a Concept Activation Vector (CAV).
(a) Images of the concept and random images (b) class images (c) activations collected from the model for these images (d) a linear classifier whose decision boundary defines the CAV (the vector orthogonal to the boundary) (e) TCAV score from the directional derivative of the class prediction with respect to a change in the concept. Source: TCAV Paper [1]

For a CAV, we take images of the concept and a few other random images, then we collect activations of the network we are investigating for those images. Now that we have the activations for the concept images and the random images, we train a linear classifier to separate the two. The vector orthogonal to the decision boundary gives us the CAV: a vector that points towards the concept images and away from the random images.
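As a minimal sketch (assuming the layer activations have already been extracted as NumPy arrays; the paper trains a linear classifier, and scikit-learn’s LogisticRegression is used here as one possible stand-in):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Return a unit CAV from activations of concept vs. random images.

    concept_acts, random_acts: arrays of shape (n_examples, n_units),
    activations of the layer under investigation.
    """
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]                  # normal to the decision boundary
    return cav / np.linalg.norm(cav)    # unit vector pointing towards the concept
```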

Now, for the TCAV score, we do something similar to what we did with saliency maps. The TCAV score tells us how important each concept was for the prediction. This time we take the directional derivative of the class probability with respect to the concept, i.e. along the CAV: will a small change towards the concept change the probability of that particular class?
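In the paper, the score for a class and concept is the fraction of that class’s images whose prediction would increase if their activations moved slightly in the CAV direction. A sketch, assuming we already have the gradients of the class score with respect to the same layer’s activations:

```python
import numpy as np

def tcav_score(class_grads, cav):
    """Fraction of class examples whose class score increases along the concept.

    class_grads: array (n_examples, n_units) of gradients of the class score
                 w.r.t. the layer activations, one row per image of the class.
    cav:         unit concept activation vector for the same layer.
    """
    directional_derivs = class_grads @ cav   # sensitivity of each example to the concept
    return float(np.mean(directional_derivs > 0))
```

A score close to 1 means almost every image of the class is pushed towards the class by the concept; comparing against scores computed with random “concepts” gives a simple significance check.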

In the paper, TCAV was applied to the real-world problem of predicting diabetic retinopathy (DR), a diabetes complication that affects the eyes. The model was trained to predict the DR level on a 5-point grading scale based on complex criteria, where level 0 corresponds to no DR and level 4 to proliferative DR. Doctors diagnose DR using diagnostic concepts such as microaneurysms (MA) or pan-retinal laser scars (PRP), with different concepts being more prominent at different DR levels. [1]
TCAV results for DR level 4 and level 1. Relevant concepts are in green and those
that are not relevant are in red. Source: TCAV Paper

The importance scores of these concepts to the model were tested using TCAV. For some DR levels, TCAV identified the correct diagnostic concepts as being important. However, the model often over-predicted level 1 (mild) as level 2 (moderate). Given this, the doctor said she would like to tell the model to de-emphasize the importance of the HMA concept for level 1. Hence, TCAV may be useful for helping experts interpret and fix model errors when they disagree with model predictions, thus making sure that domain expertise is reflected in the model. [1]

TCAV is great, but it requires humans to collect examples of the relevant concepts. There is further research that discusses how to do this without human supervision.

References

[1] Kim et al., “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”, https://arxiv.org/pdf/1711.11279.pdf
