EPQ Presentation
This was very easy to understand, as a human could look at the
reply the machine had given, see which key words it was meant
to look for, and so see how it had come to its decision.
Molnar, C. (2023, August 21). Interpretable Machine Learning - A Guide for Making Black Box Models Explainable.
Retrieved from https://christophm.github.io/interpretable-ml-book/feature-importance.html
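The chapter cited above describes permutation feature importance, one way of checking which inputs a model relied on. A minimal sketch of the idea using scikit-learn is shown below; the dataset and model here are illustrative assumptions, not part of the presentation.

```python
# Sketch of permutation feature importance: shuffle each feature in turn
# and measure how much the model's accuracy drops. A large drop means the
# model relied heavily on that feature to make its decision.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depended on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

This gives a human-readable ranking of which inputs mattered most, which is exactly the kind of check the presentation describes a person doing by hand.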
How to understand an AI’s
decisions continued…
There is, however, a new way of understanding these algorithms. We
can now, with modern developments, use one AI to find out how another AI
has made its decision. This is called Explainable AI (XAI), and it is
becoming more and more important for verifying an AI's decisions in
modern situations such as medical diagnosis (Hassija, et al., 2023).
Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., . . . Hussain, A. (2023, August 24). Interpreting
Black-Box Models: A Review on Explainable Artificial Intelligence. Retrieved from Springer:
https://link.springer.com/article/10.1007/s12559-023-10179-8#Sec22
Why must we understand an AI’s
decisions?
AI is being used more and more in sectors that need reasons why a choice has
been made. As I stated previously, the NHS is using Explainable AI
to see why an AI has made a decision when looking at MRI and CT
scans (NHS, 2023). By understanding how an AI has made its
decision here, not only can we confirm its thought process, but we
could also learn additional features of what tumours and other
issues look like.
NHS. (2023, February 10). Artificial Intelligence - NHS Transformation Directorate. Retrieved from NHS:
https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence
Why must we understand
an AI’s decisions? continued...
Another reason that we need to understand an AI's decisions is that we
can set an AI to do a job twenty-four hours a day, seven days a week,
without the need for human supervision.