
ETHICS IN AI

● With the rise of AI applications replacing the human workforce, do you consider it ethical to incorporate AI into various jobs?
● How do you think income would be shared if AI is used in place of the human workforce?
● How can AI-based health benefits be made accessible and available to everyone in society?
● AI is a powerful tool in various fields. However, depending on how it is used, it can be either a boon or a bane.
Ethical concerns of AI
● Taking over human jobs, leading towards an era of unemployment.
● Making humans less efficient, for example by asking an artificially intelligent machine for everything instead of making decisions on their own.
● Wealth inequality, where wealthy people become richer through the influence of machines, while poor people become poorer because of the lack of jobs.
● Machines will affect humanity, making people interact less with one another.
● Machines do not know the desired output that humans want.
● Facing discrimination from machines, which is also called AI bias.


It depends on humans how they use technology and for what purpose; technology can become a blessing or a curse depending on how it is used.
Bias means unfairly favouring someone or something. Avoiding AI bias means training machines with unbiased data: when biased data is fed to an AI machine while creating the model, the machine will also be biased.

● We have seen that most artificial assistants are given a female voice rather than a male voice.
● Cameras have missed the mark on racial sensitivity, and software used to predict future criminals has shown bias against Black people.
How to avoid AI bias?

AI bias can be reduced by training models on diverse, representative, unbiased data and by checking the model's outputs for unfair treatment of different groups.
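One practical first step, before any training, is to audit the training data itself. The sketch below is a minimal, hypothetical example using pandas: the column names and values are invented for illustration and are not taken from any real dataset. It checks whether every group is represented fairly and whether the training labels already encode a gap between groups.

```python
import pandas as pd

# Hypothetical training data; the column names and values are illustrative only
data = pd.DataFrame({
    "gender":   ["male", "male", "male", "male", "female", "female"],
    "approved": [1,      1,      0,      1,      0,        0],
})

# 1. Is every group represented in roughly fair proportions?
print(data["gender"].value_counts(normalize=True))

# 2. Does the outcome rate differ sharply between groups?
#    A large gap in the labels will be learned and reproduced by the model.
print(data.groupby("gender")["approved"].mean())
```

If such checks reveal a skew, the data can be rebalanced or relabelled before the model is created, rather than after the bias has already been learned.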

Data privacy means being able to control how our digital data is stored, modified, and exchanged between different parties.
“The important thing to remember is to consider the consequences of your actions while applying AI.”
Did you know that AI can even predict text, and that you can complete stories using AI?
OpenAI’s GPT-2 language model is a machine learning model trained to predict the next words of a text.
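As a rough illustration, the snippet below loads the publicly released GPT-2 model through the Hugging Face transformers library (assumed to be installed) and asks it to continue the opening line of a story; the prompt and generation settings are just examples.

```python
from transformers import pipeline

# Load GPT-2 through the text-generation pipeline (downloads the model on first use)
generator = pipeline("text-generation", model="gpt2")

# Give the model the start of a story and let it predict a continuation
prompt = "Once upon a time, a small robot woke up in a quiet library and"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

Depending on the generation settings, each run may produce a different continuation, because the model samples from the probabilities it assigns to possible next words.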
AI systems have also been developed to analyze images, for example for face recognition and object recognition.
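As one small, hedged example of image analysis, the sketch below uses OpenCV's bundled Haar-cascade face detector to count faces in a photo; the file name is a placeholder, and real face-recognition systems go further by matching the detected faces against known identities.

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal-face detection
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# Read an image (placeholder path) and convert it to grayscale for detection
image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces; the result is a list of (x, y, width, height) rectangles
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```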

This means AI can make decisions on its own.

Consider the case of a self-driving car.

So, if AI and machines are to take over in the future, how can we make their decision-making reliable?
How can we make their decisions predictable and convincing under all circumstances?
What will make these autonomous, self-improving, independent machines trustworthy?
This is where ethics come into the picture.

Ethics are a loosely defined set of moral principles about right and wrong that guide the actions of an individual or a group.

Since technology itself does not possess moral or ethical qualities, it needs to be fed with human ethics.
But here there are two challenges.

First, how does the team of developers determine what is a good or right outcome, and for whom?

The second challenge is that AI is an autonomous, self-learning and self-improving technology, so its behaviour may change in ways its developers did not anticipate.
Basics of AI Ethics:

● Bias, Prejudice and Fairness
● Accountability
● Transparency, Interpretability and Explainability of the Actions of AI
● Safety
● Human-AI Interaction
● Cybersecurity and intentional misuse
● Data Privacy and Control
Curve fitting
When building a machine learning model, it is important to make sure that your model is not over-fitting or under-fitting. While under-fitting is usually the result of a model not having enough data, over-fitting can be the result of a range of different scenarios. The objective in machine learning is to build a model that performs well with both the training data and the new data that is added to make predictions.
● Under-fitting — when a statistical model does not adequately capture the underlying structure of the data.
● Over-fitting — when a statistical model contains more parameters than can be justified by the data.
● Good fit — when a statistical model adequately learns the training dataset and generalizes well to new data.

In order to understand over-fitting better, we should look at the problems that cause under-fitting. Under-fitting occurs when a model is too simple (not enough observations or features) and therefore does not learn well from the data that it is given.
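The toy sketch below, written with scikit-learn on synthetic data, makes the three cases concrete: a degree-1 polynomial under-fits the curve, a degree-15 polynomial over-fits the noise, and a moderate degree gives a good fit. The data and the chosen degrees are purely illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data: noisy samples of a smooth sine curve
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 30)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Degree 1 is too simple (under-fit), degree 15 memorizes the noise (over-fit),
# degree 4 is flexible enough without chasing the noise (good fit)
for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}  train MSE {train_err:.3f}  test MSE {test_err:.3f}")
```

The over-fit model typically shows a very low training error but a much higher testing error, while the under-fit model does poorly on both.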
How to Tell if a Model is Over-fitting:

With machine learning, it is difficult to determine how well a model will perform on new data until it is actually tested. To avoid this issue, it is important to split the data that is used to train the model into training data and testing data. As a general rule of thumb, the split should fall somewhere between 80% training data / 20% testing data and 70% training data / 30% testing data.
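A minimal sketch of such a split, using scikit-learn's train_test_split, is shown below; the feature matrix and labels are random placeholders, and test_size controls where in the 20-30% range the testing share falls.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder dataset: 100 samples, 3 features, binary labels
X = np.random.rand(100, 3)
y = np.random.randint(0, 2, size=100)

# 80% training / 20% testing; use test_size=0.3 for a 70/30 split instead
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

print(len(X_train), "training samples,", len(X_test), "testing samples")
```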
Ethical concerns of AI

Can AI give assurance on the following?

● Unemployment: What will happen after the end of jobs?
● Inequality: How are we going to distribute the wealth created by machines?
● Humanity: How do machines affect our behaviour and interaction?
● Artificial stupidity: How can we guard against machine mistakes?
