Summary of IML (Interpretable Machine Learning)
Interpretable Machine Learning (IML) is currently, and will remain over the next few years, an active field of machine learning research. Its purpose is twofold: ① to solve the interpretability problem of machine learning models, so that they can be more widely applied across industries; ② to allow us to understand, trust, and use models more effectively.
Challenges of machine learning
High-accuracy models commonly have very complicated structures. Their operating mechanism is like a black box: it is difficult to describe in human-understandable language, and the model's outputs are likewise difficult to interpret. This poses a huge challenge in areas involving life safety or important decision-making.
For example, real-world uses of ML now face challenges like:
• The Finance Act requires banks to explain to customers why their loans cannot be approved.
• Amazon's artificial intelligence recruitment tools are biased towards men.
• The COMPAS algorithm, which is widely used in the US, predicts that black defendants' risk of reoffending is much higher than that of white defendants.
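The black-box problem described above can be made concrete with a minimal sketch (purely illustrative; the weights and features are hypothetical): an interpretable linear model's prediction decomposes into per-feature contributions that can be explained to a customer, whereas a black-box model offers no such breakdown.

```python
def linear_predict(weights, features):
    """Interpretable: each feature's contribution is simply weight * value,
    so the final score can be explained term by term."""
    contributions = {name: w * features[name] for name, w in weights.items()}
    return sum(contributions.values()), contributions

# Hypothetical loan-scoring weights and one applicant's features.
weights = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 5.0, "debt": 2.0, "years_employed": 4.0}

score, why = linear_predict(weights, applicant)
# `why` shows exactly how each feature moved the score:
# {"income": 3.0, "debt": -1.6, "years_employed": 1.2}
```

A deep network computing the same score gives only the final number; recovering a faithful per-feature explanation from it is exactly the problem IML studies.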
Problems with the black box model
1. Black-box models cannot uncover causal relationships, and may make causal misjudgments. If a model cannot give a reasonable causal explanation, its results will be hard to convince people of.
2. Black-box models raise security problems: it is difficult to detect when a model is being attacked from outside.
3. Black-box models can be biased, and the resulting decisions may discriminate against samples of different ages, genders, and regions.
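One common first step toward detecting the bias described in point 3 is to compare a model's positive-decision rate across demographic groups. The following is a minimal sketch with hypothetical group labels and decisions, not data from any real system:

```python
from collections import defaultdict

# Hypothetical (group, decision) pairs from a model's output:
# decision 1 = approved, 0 = rejected. Purely illustrative data.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 0), ("group_b", 0), ("group_b", 1),
]

def approval_rates(pairs):
    """Approval rate per group; a large gap between groups is a
    warning sign that the model's decisions may be biased."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, decision in pairs:
        totals[group] += 1
        approved[group] += decision
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Here group_a is approved twice as often as group_b (2/3 vs 1/3).
```

A rate gap alone does not prove unfairness, but it is the kind of simple audit that black-box opacity makes necessary.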