EPQ Presentation

Will we ever be able to fully understand an AI's decisions?
William Dodd - EPQ
Aims and Objectives
In this project, I am aiming to answer whether we will ever be able to
fully understand an AI's decisions.

I am going to focus on three sections in this presentation.


• How did AI develop, and how understandable was it during its
early development?
• Why do we need to be able to understand an AI’s decisions?
• What techniques can we use to understand an AI’s decisions?
Previous AI Systems
One of the first AI systems was called ELIZA.
It was designed to act as a therapist: it looked for key words like “sad”
or “anxious” in a human's prompt and replied with something
relevant (Tarnoff, 2023).

This was very easy to understand: a human could look at the reply the
machine had given, see which key words it had matched, and follow
exactly how it had come to its decision.

Tarnoff, B. (2023, July 25). The Guardian. Retrieved from https://www.theguardian.com/technology/2023/jul/25/joseph-weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai
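To illustrate why ELIZA-style systems were so easy to interpret, here is a minimal keyword-matching sketch in Python. The keywords and replies are invented for illustration; ELIZA's real script was more elaborate, but the principle is the same.

# A minimal sketch of the keyword-matching idea behind ELIZA-style chatbots.
# The keywords and canned replies below are invented examples, not ELIZA's real script.
KEYWORD_REPLIES = {
    "sad": "I am sorry to hear you are feeling sad. Why do you think that is?",
    "anxious": "What is making you feel anxious?",
}
DEFAULT_REPLY = "Please, go on."

def reply(prompt: str) -> str:
    words = prompt.lower().split()
    for keyword, response in KEYWORD_REPLIES.items():
        if keyword in words:   # first matching keyword wins
            return response
    return DEFAULT_REPLY       # no keyword matched, so fall back to a stock phrase

print(reply("I have been feeling sad all week"))

Every reply maps directly onto a visible keyword, so a human can trace the decision simply by reading the prompt.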
Previous AI Systems continued…
The biggest limitation of algorithms like ELIZA was that they could
not understand human language very well, so they would often not reply
with something appropriate.

Getting a computer to genuinely interpret human language is the goal of
Natural Language Processing. To understand language, a computer
must be able to do more than just pick key words out of a prompt.
This is why virtual assistants like Alexa often reply with “Sorry, I
don't know that” or something similar.

It was when natural language processors were first being developed,
after ELIZA, that people first realised that understanding how an AI
came to its decisions might not be so easy.
How to understand an AI's decisions
Modern, hard-to-understand AIs like ChatGPT use a variety of
methods to learn. For example, ChatGPT uses the Transformer
architecture, which looks at each sentence it is given and weighs up
thousands, sometimes millions, of different ways of relating the words
in it to one another (Singh, 2019).

This is similar to how humans ignore the unimportant parts of
sentences while skim reading. It is essentially a more advanced, cleverer
way of picking key words out of sentences, except far more accurate:
if there are multiple key words, the AI has a better understanding of
how to weigh them against each other.

Singh, A. (2019, September 6). Retrieved from Medium: https://towardsdatascience.com/attention-networks-c735befb5e9f
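As a rough illustration of the attention mechanism the Singh article describes, here is a minimal sketch of scaled dot-product self-attention, the core operation of the Transformer. The embeddings are toy values; real models like ChatGPT use many attention heads and layers on top of this.

# A minimal sketch of scaled dot-product self-attention (toy values, single head).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how relevant each word is to every other word
    weights = softmax(scores, axis=-1)  # one weighting over the sentence per word
    return weights @ V, weights         # weighted mix of the values, plus the weights

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))             # three "words", each a toy 4-dimensional embedding
output, weights = attention(x, x, x)    # self-attention: the sentence attends to itself
print(weights.round(2))                 # each row sums to 1: how much each word attends to the others

The attention weights are one reason text models are at least partly inspectable: you can see which words the model weighted most heavily when producing its output.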


How to understand an AI's decisions continued…
However, more advanced AI systems that don't necessarily use text (e.g.
DeepMind's AlphaZero, which plays chess) are a lot harder to understand.

Some of these learn by playing against themselves millions of times to find
the quickest and most reliable way to win. This means they do not have
human-written training data to trace their decisions back to, unlike ChatGPT
and other text-based models whose replies are built from the text they were
trained on. Systems like this, whose reasoning cannot easily be inspected,
are called black box algorithms.

We can understand these by using techniques like “Permutation Importance”
(Molnar, 2023). This sounds complicated, but it simply means scrambling one
input to the machine at a time and seeing how the output varies, as sketched
below. For example, if you scrambled one input and the machine then started
playing a certain move every game, you would know that input had driven
that decision.

Molnar, C. (2023, August 21). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Retrieved from https://christophm.github.io/interpretable-ml-book/feature-importance.html
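Here is a minimal sketch of the permutation importance idea, under simple assumptions: a trained model with a predict() method, a feature matrix X with labels y, and accuracy as the score. The names are placeholders for illustration rather than Molnar's exact implementation.

# A minimal sketch of permutation importance: scramble one input feature at a time
# and measure how much the model's accuracy drops.
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)              # accuracy on untouched inputs
    importances = []
    for col in range(X.shape[1]):                          # take each input feature in turn
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            X_shuffled[:, col] = rng.permutation(X[:, col])  # scramble this feature only
            score = np.mean(model.predict(X_shuffled) == y)
            drops.append(baseline - score)                  # how much worse the model got
        importances.append(float(np.mean(drops)))
    return importances                                      # a large drop means the feature mattered

A feature whose scrambling barely changes the score clearly had little influence on the decision; one that causes a large drop was doing most of the work.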
How to understand an AI's decisions continued…
There is, however, a newer way of understanding these algorithms. With
modern developments, we can now use AI to find out how another AI
has made its decision. This is called Explainable AI (XAI), and it is
becoming more and more important for verifying an AI's decisions in
modern situations like medical diagnosis (Hassija, et al., 2023).

IBM, Google and Intel have been pushing their combined developments
in XAI to places that need confirmation that an AI knows what it is
doing. The NHS and other countries' equivalents have bought into this
so that medical diagnoses by AI can be verified.

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., . . . Hussain, A. (2023, August 24). Interpreting
Black-Box Models: A Review on Explainable Artificial Intelligence. Retrieved from Springer:
https://link.springer.com/article/10.1007/s12559-023-10179-8#Sec22
Why must we understand an AI's decisions?
AI is being used more in sectors that need reasons why a choice has
been made. As I stated previously, the NHS are using Explainable AI
to see why an AI has made a decision when looking at MRI and CT
scans (NHS, 2023). By understanding how an AI has made its
decision here, not only can we confirm its thought process, but we
could also learn additional features of what tumours and other
issues look like.

We could also train doctors with these discoveries made by AI, and if
doctors are faster with this new information, waiting times could be
reduced.

NHS. (2023, February 10). Artificial Intelligence - NHS Transformation Directorate. Retrieved from NHS: https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence
Why must we understand an AI's decisions? continued...
Another reason that we need to understand an AI's decisions is that we
can set an AI to do a job twenty-four hours a day, seven days a week,
without the needs of a human worker.

We can also group computers together to work on the same task. This
means we could set an AI to work on something like trial and error on
medicines, and then, when it thinks it has found a solution, a human
could see how it made that decision and enhance human understanding
of diseases.
Conclusion
In conclusion, I think that we will be able to understand an AI's
decisions, but it may take a lot more computational or human power
than we currently have to do so.

With more developments in newer AI, we may be able to get an AI
to explain how it produced its own decision as well as perform its
original task, which would save time.

By investing in Explainable AI, I think that humans will be able to
understand currently unexplainable things, and with more
computational power, we will be able to do this faster and more
efficiently.
