Ethics in AI
● Making humans less efficient, for example by asking an artificial intelligence for everything.
● Wealth inequality, where the wealthy become richer through the influence of machines.
● Machines will affect humanity by making people interact less with one another.
● Most artificial assistants have a female voice rather than a male voice.
● A camera has missed the mark on racial sensitivity, and software used to predict future criminals has shown bias against Black people.
How to avoid AI bias?
Data privacy is the ability to control how our digital data is stored, modified, and exchanged between different parties.
“The important thing is to remember the consequences of
your actions while applying AI”
Did you know that AI can predict text,
and that you can even complete stories using AI?
Did you know that OpenAI’s GPT-2, a machine learning
language model, is trained to predict the next word in a text?
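The idea of predicting the next word can be illustrated with a toy bigram model: count which word most often follows each word in a corpus, then predict the most frequent follower. This is only a minimal sketch of the prediction task; GPT-2 itself is a large neural network trained on vastly more text.

```python
from collections import Counter, defaultdict

# Toy corpus. GPT-2 uses a neural network trained on far more text,
# but the task is the same: given context, predict the next token.
corpus = "the cat sat on the mat and the cat ran".split()

# For each word, count which word follows it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scaling this idea up, with tokens instead of whole words and a neural network instead of counts, is the essence of how language models complete stories.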
AI systems have also been developed to analyze
images, for tasks such as face recognition and object
recognition.
So, if AI and machines are to take over in the future, how can we make their
decision-making reliable?
How can we make their decisions predictable and convincing under all
circumstances?
What will make these autonomous, self-improving, independent machines
trustworthy?
This is where ethics come into the picture.
Ethics is a loosely defined set of moral principles about right and wrong that
guides the actions of an individual or a group.
Since technology itself does not possess moral or ethical qualities, it needs to be
fed with human ethics.
But here are two challenges.
First, how does the team of developers determine what is a good or right
outcome, and for whom?
● Safety
● Human AI Interaction
● Cybersecurity and intentional misuse
● Data Privacy and control
Curve fitting
When building a machine learning model, it is important to make sure that your model is
not over-fitting or under-fitting. While under-fitting is usually the result of a model not
having enough data, over-fitting can be the result of a range of different scenarios. The
objective in machine learning is to build a model that performs well with both the
training data and the new data that is added to make predictions.
Under-fitting — when a statistical model does not adequately capture the underlying
structure of the data.
Over-fitting — when a statistical model contains more parameters
than can be justified by the data.
Good fit — when a statistical model adequately learns the underlying structure of the data.
To understand over-fitting better, we should look at the problems that cause it. An
under-fitted model, by contrast, has too little to learn from (too few
observations or features), and therefore does not learn well from the data that it is
given.
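To see over-fitting concretely, compare a model that simply memorizes every training point with a straight line fitted by least squares. This is a minimal pure-Python sketch on made-up data (real workflows would use a library such as scikit-learn); the memorizing model is perfect on the training set but useless on new inputs, while the simpler line performs similarly on both.

```python
import random

random.seed(0)
# Toy dataset: y = 2*x plus noise. Test points fall between the training x values.
train = [(x, 2 * x + random.gauss(0, 1)) for x in range(10)]
test = [(x + 0.5, 2 * (x + 0.5) + random.gauss(0, 1)) for x in range(10)]

# Over-fitting model: memorizes every training point exactly.
memory = dict(train)
def overfit(x):
    return memory.get(x, 0.0)  # unseen inputs fall back to a meaningless default

# Simpler model: a straight line fitted by closed-form least squares.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x
def linear(x):
    return slope * x + intercept

def mse(model, data):
    """Mean squared error of a model over a list of (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

print(mse(overfit, train), mse(overfit, test))  # zero on train, large on test
print(mse(linear, train), mse(linear, test))    # similar on both
```

The gap between training error and test error is the telltale sign of over-fitting, which is exactly why the data must be split before training, as described below.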
How to Tell if Model is Over-fitting:
With machine learning, it is difficult to determine how well a model will perform on new data until it is
actually tested. To avoid this issue, it is important to split the data used to train the model into
training and testing sets. As a general rule of thumb, the split should fall anywhere between
80% training / 20% testing and 70% training / 30% testing.
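The split described above can be sketched in a few lines of plain Python (scikit-learn provides a ready-made `train_test_split` with more options; this is only a minimal illustration):

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle the data and split it into training and testing sets."""
    rng = random.Random(seed)
    shuffled = data[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)       # shuffle first so the split is random
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]  # train, test

samples = list(range(100))
train, test = train_test_split(samples, test_ratio=0.2)
print(len(train), len(test))  # 80 20
```

Shuffling before splitting matters: if the data is ordered (for example, by date or by class), a plain slice would give the model a training set that does not represent the test set.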
Ethical concerns of AI