
Progressive Education Society’s

MODERN COLLEGE OF ENGINEERING Pune -05


“Fundamentals of Artificial Intelligence
and Machine Learning”
2022-23 (Semester II)

By,
Prof. GAURI V. MATHAD
Assistant Professor, Department of AI & ML
PES’s MCOE, Pune.
Unit I
Introduction to AI
Foundation and History of AI
Maturation of Artificial Intelligence (1943-1952)

● Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts
in 1943. They proposed a model of artificial neurons.
● Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between
neurons. His rule is now called Hebbian learning.
● Year 1950: Alan Turing, an English mathematician and pioneer of machine learning, published
"Computing Machinery and Intelligence" in 1950. In it he proposed a test, now called the Turing
test, which checks a machine's ability to exhibit intelligent behaviour equivalent to human
intelligence.
The birth of Artificial Intelligence (1952-1956)

● Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program",
named "Logic Theorist". The program proved 38 of 52 mathematical theorems and found new, more
elegant proofs for some of them.
● Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John
McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.

At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were being invented, and
enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)

● Year 1966: Researchers emphasized developing algorithms that could solve mathematical
problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA.
● Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.

The first AI winter (1974-1980)

● The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period in
which computer scientists faced a severe shortage of government funding for AI research.
● During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)

● Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the
decision-making ability of a human expert.
● In 1980, the first national conference of the American Association of Artificial Intelligence (AAAI) was held at Stanford
University.

The second AI winter (1987-1993)

● The period between 1987 and 1993 was the second AI winter.
● Investors and governments again stopped funding AI research because of high costs and inefficient results. Expert
systems such as XCON proved very expensive to maintain.
The emergence of intelligent agents (1993-2011)

● Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the
first computer to beat a reigning world chess champion.
● Year 2002: For the first time, AI entered the home, in the form of Roomba, a robotic vacuum cleaner.
● Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started
using AI.
Deep learning, big data and artificial general intelligence (2011-present)

● Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex
questions as well as riddles. Watson proved that it could understand natural language and solve tricky
questions quickly.
● Year 2012: Google launched the Android feature "Google Now", which could provide information to
the user as predictions.
● Year 2014: The chatbot "Eugene Goostman" won a competition based on the famous "Turing test".
● Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and
performed extremely well.
● Google demonstrated an AI program, "Duplex", a virtual assistant that took a hairdresser
appointment over the phone, and the woman on the other end did not notice that she was talking to a machine.
Types of AI
Weak AI or Narrow AI

● Narrow AI is a type of AI that performs a dedicated task with intelligence. It is the most common
and currently available form of AI in the world of artificial intelligence.
● Narrow AI cannot perform beyond its field or limitations, as it is trained for only one specific task;
hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.
● Apple's Siri is a good example of narrow AI: it operates within a limited, pre-defined range of functions.
● IBM's Watson supercomputer also comes under narrow AI, as it uses an expert-system approach
combined with machine learning and natural language processing.
● Other examples of narrow AI are chess playing, purchase suggestions on e-commerce sites, self-
driving cars, speech recognition, and image recognition.
General AI

● General AI is a type of intelligence that could perform any intellectual task as efficiently as a human.
● The idea behind general AI is to build a system that is smarter and thinks like a human on its own.
● Currently, no system exists that comes under general AI and can perform any task as well as a
human.
● Researchers worldwide are now focused on developing machines with general AI.
● Since systems with general AI are still under research, it will take great effort and time to develop
such systems.
Super AI

● Super AI is a level of system intelligence at which machines surpass human intelligence and can
perform any task better than a human, with cognitive properties. It is an outcome of general AI.
● Key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments,
plan, learn, and communicate on its own.
● Super AI is still a hypothetical concept of artificial intelligence. Developing such systems in reality
remains a world-changing task.
Artificial Intelligence type-2: Based on functionality

1. Reactive Machines

2. Limited Memory

3. Theory of Mind

4. Self-Awareness
1. Reactive Machines

● Purely reactive machines are the most basic types of Artificial Intelligence.
● Such AI systems do not store memories or past experiences for future actions.
● These machines focus only on the current scenario and react to it with the best possible action.
● IBM's Deep Blue system is an example of reactive machines.
● Google's AlphaGo is also an example of reactive machines.
2. Limited Memory

● Limited memory machines can store past experiences or some data for a short period of time.
● These machines can use stored data for a limited time period only.
● Self-driving cars are among the best examples of limited memory systems. These cars can store the recent
speed of nearby cars, the distance to other cars, the speed limit, and other information needed to navigate the road.
3. Theory of Mind

● Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact
socially like humans.
● This type of AI machine has not yet been developed, but researchers are making many efforts and
improvements toward building such machines.
4. Self-Awareness

● Self-aware AI is the future of artificial intelligence. These machines will be super intelligent and will
have their own consciousness, sentiments, and self-awareness.
● These machines will be smarter than the human mind.
● Self-aware AI does not yet exist in reality; it is a hypothetical concept.
Artificial Intelligence vs Machine learning
What is artificial intelligence (AI)?
Artificial intelligence is the capability of a computer system to mimic human
cognitive functions such as learning and problem-solving. Through AI, a computer
system uses maths and logic to simulate the reasoning that people use to learn
from new information and make decisions.

What is Machine Learning?


Machine learning is an application of AI. It’s the process of using mathematical
models of data to help a computer learn without direct instruction. This enables a
computer system to continue learning and improving on its own, based on
experience.
Artificial Intelligence vs Machine learning
Are AI and machine learning the same?
While AI and machine learning are very closely connected, they are not the same. Machine learning is
considered a subset of AI.

How are AI and machine learning connected?


An “intelligent” computer uses AI to think like a human and perform tasks on its own. Machine learning is how a
computer system develops its intelligence.
Types of AI Agents

● Simple reflex agent
● Model-based reflex agent
● Goal-based agent
● Utility-based agent
● Learning agent
Simple Reflex agent

● Simple reflex agents are the simplest agents. They take decisions on the
basis of the current percept and ignore the rest of the percept history.
● These agents succeed only in a fully observable environment.
● The simple reflex agent does not consider any part of the percept history during its
decision and action process.
● The simple reflex agent works on the condition-action rule, which maps the current
state to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
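The condition-action rule can be sketched as a tiny room-cleaner agent in a two-square vacuum world. The percept format and rule set below are illustrative assumptions, not part of any standard library:

```python
# Simple reflex agent for a two-square vacuum world (squares "A" and "B").
# A percept is (location, status); the agent ignores all percept history.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # condition-action rule 1: dirt -> clean it
        return "Suck"
    if location == "A":          # rule 2: square A is clean -> move right
        return "Right"
    return "Left"                # rule 3: square B is clean -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```

Note that the agent's whole behaviour is a fixed set of rules over the current percept, which is exactly why it needs a fully observable environment.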
Simple Reflex agent

● Problems with the simple reflex agent design approach:
○ They have very limited intelligence.
○ They have no knowledge of non-perceptual parts of the current state.
○ Their condition-action rules are mostly too big to generate and store.
○ They are not adaptive to changes in the environment.
Simple Reflex agent
Model-based reflex agent

● The model-based agent can work in a partially observable environment and track the situation.
● A model-based agent has two important factors:
a. Model: knowledge about "how things happen in the world"; this is why it is called a model-based
agent.
b. Internal state: a representation of the current state based on the percept history.
● These agents have a model, "which is knowledge of the world", and perform actions based on the model.
● Updating the agent's state requires information about:
a. how the world evolves, and
b. how the agent's actions affect the world.
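A minimal sketch of the same vacuum world handled with an internal state, assuming the agent can only sense the square it currently occupies (partial observability); the two-square model and names are illustrative:

```python
# Model-based reflex agent: remembers what it has learned about each square,
# so it can decide even though it only perceives its current location.

class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}  # internal state

    def act(self, percept):
        location, status = percept
        self.state[location] = status       # how the world evolves: record the percept
        if status == "Dirty":
            self.state[location] = "Clean"  # how actions affect the world: Suck cleans
            return "Suck"
        if all(v == "Clean" for v in self.state.values()):
            return "NoOp"                   # the model says everything is clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))  # -> Suck
print(agent.act(("B", "Clean")))  # -> NoOp (model remembers A was already cleaned)
```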
Model-based reflex agent
Goal-based agents

● Knowledge of the current state of the environment is not always sufficient for an agent to decide what
to do.
● The agent needs to know its goal, which describes desirable situations.
● Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
● They choose an action so as to achieve the goal.
● These agents may have to consider a long sequence of possible actions before deciding whether the
goal is achieved. Such consideration of different scenarios is called searching and planning, and it
makes an agent proactive.
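The searching idea can be sketched as a goal-based agent that looks ahead with breadth-first search over a world model and commits to the first action of a plan that reaches the goal. The map, state names, and actions here are made-up illustrations:

```python
from collections import deque

# Hypothetical world model: state -> {action: resulting state}
WORLD = {
    "Home":   {"north": "Park", "east": "Shop"},
    "Park":   {"east": "Office"},
    "Shop":   {"north": "Office"},
    "Office": {},
}

def goal_based_agent(state, goal):
    """Breadth-first search for a plan, then commit to its first action."""
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, plan = frontier.popleft()
        if current == goal:
            return plan[0] if plan else None  # None: already at the goal
        for action, nxt in WORLD[current].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # goal unreachable

print(goal_based_agent("Home", "Office"))  # -> north (first step of a shortest plan)
```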
Goal-based agents
Utility-based agents

● These agents are similar to goal-based agents, but an extra component of
utility measurement sets them apart by providing a measure of success in a
given state.
● Utility-based agents act based not only on goals but also on the best way to achieve them.
● The utility-based agent is useful when there are multiple possible alternatives and the
agent has to choose the best action.
● The utility function maps each state to a real number, indicating how efficiently each action
achieves the goals.
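A sketch of the utility-function idea, with made-up routes and an arbitrary trade-off between travel time and toll cost:

```python
# Utility-based agent: several routes all reach the goal, so the agent scores
# each outcome with a utility function and picks the best one.

ROUTES = {  # hypothetical outcome of each action
    "highway":  {"time": 25, "toll": 120},
    "city":     {"time": 45, "toll": 0},
    "shortcut": {"time": 30, "toll": 40},
}

def utility(outcome):
    """Map a state to a real number: less time and lower tolls are better."""
    return -outcome["time"] - 0.1 * outcome["toll"]

def utility_based_agent(routes):
    return max(routes, key=lambda route: utility(routes[route]))

print(utility_based_agent(ROUTES))  # -> shortcut (utility -34 beats -37 and -45)
```

All three routes satisfy the goal of arriving; only the utility function distinguishes the better way of achieving it.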
Utility-based agents
Learning Agents

● A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
● It starts acting with basic knowledge and then adapts automatically through learning.
● A learning agent has four main conceptual components:
a. Learning element: responsible for making improvements by learning from the environment.
b. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with
respect to a fixed performance standard.
c. Performance element: responsible for selecting external actions.
d. Problem generator: responsible for suggesting actions that will lead to new and
informative experiences.
● Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
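The four components can be wired together in a small sketch. The toy environment (fixed rewards for actions "A" and "B") and the averaging update rule are illustrative assumptions, not a standard implementation:

```python
# Learning agent with the four conceptual components: the performance element
# exploits what has been learned, the problem generator suggests new actions to
# try, the critic supplies a reward, and the learning element updates estimates.

class LearningAgent:
    def __init__(self, actions):
        self.actions = list(actions)
        self.values = {a: 0.0 for a in self.actions}  # learned action values
        self.counts = {a: 0 for a in self.actions}
        self._explore = 0

    def performance_element(self):
        return max(self.values, key=self.values.get)  # best known action

    def problem_generator(self):
        self._explore += 1                            # cycle through actions
        return self.actions[self._explore % len(self.actions)]

    def learning_element(self, action, reward):
        # incremental average of the critic's feedback for this action
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Usage: a toy environment (the critic's standard) where "B" is the better action.
agent = LearningAgent(["A", "B"])
for step in range(20):
    action = agent.problem_generator() if step % 4 == 0 else agent.performance_element()
    reward = 1.0 if action == "B" else 0.2            # critic's feedback
    agent.learning_element(action, reward)
print(agent.performance_element())  # -> B
```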
Learning Agents
Rationality

1. Rationality is the state of being reasonable and sensible, with a good sense of judgment.
2. Rationality implies the conformity of one's beliefs with one's reasons to
believe, or of one's actions with one's reasons for action.
3. It is concerned with expected actions and results depending on what the
agent has perceived.
4. Performing actions with the aim of obtaining useful information is an important
part of rationality.
Rational Agent

1. A rational agent is an agent that has clear preferences and models
uncertainty via expected values.
2. A rational agent can be anything that makes decisions, typically a
person, firm, machine, or piece of software.
3. A rational agent always performs the right action, where the right action
means the action that causes the agent to be most successful for the
given percept sequence.
4. A rational agent is capable of taking the best possible action in any
situation.
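Point 1 ("models uncertainty via expected values") can be illustrated with a small made-up decision: each action has a probability distribution over outcome values, and the rational agent maximizes the expected value:

```python
# Rational choice under uncertainty: pick the action whose outcomes have the
# highest expected value. Actions, probabilities, and values are hypothetical.

ACTIONS = {
    # action: list of (probability, value) pairs
    "take_highway": [(0.9, 10), (0.1, -20)],  # usually fast, small risk of a jam
    "take_city":    [(1.0, 4)],               # slower but certain
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def rational_choice(actions):
    return max(actions, key=lambda a: expected_value(actions[a]))

print(expected_value(ACTIONS["take_highway"]))  # -> 7.0 (0.9*10 + 0.1*-20)
print(rational_choice(ACTIONS))                 # -> take_highway
```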
Example of rational action performed by any intelligent agent

Automated Taxi Driver:


Performance Measure: Safe, fast, legal, comfortable trip, maximize profits.
Environment: Roads, other traffic, customers.
Actuators: Steering wheel, accelerator, brake, signal, horn.
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors,
keyboard.
Nature of environment
● An environment in artificial intelligence is the surrounding of the agent.

● The agent takes input from the environment through sensors and delivers the output to the
environment through actuators.

● There are several types of environments:
● Fully Observable vs Partially Observable
● Deterministic vs Stochastic
● Competitive vs Collaborative
● Single-agent vs Multi-agent
● Static vs Dynamic
● Discrete vs Continuous
● Episodic vs Sequential
● Known vs Unknown
Fully Observable vs Partially Observable
● When an agent's sensors can sense or access the complete state of the environment at each
point in time, the environment is said to be fully observable; otherwise it is partially observable.
● Maintaining a fully observable environment is easy, as there is no need to keep track of the
history of the surroundings.
● An environment is called unobservable when the agent has no sensors at all.
● Examples:
○ Chess – the board is fully observable, and so are the opponent’s moves.
○ Driving – the environment is partially observable because what’s around the corner
is not known.
Deterministic vs Stochastic
● When the agent's current state completely determines the next state of the environment, the
environment is said to be deterministic.
● A stochastic environment is random in nature; its next state is not unique and cannot be
completely determined by the agent.
● Examples:
○ Chess – there are only a few possible moves for a piece in the current state,
and these moves can be determined.
○ Self-driving cars – the actions of a self-driving car are not unique; they vary from
time to time.
Competitive vs Collaborative
● An agent is said to be in a competitive environment when it competes against another agent
to optimize the output.
● The game of chess is competitive as the agents compete with each other to win the game
which is the output.
● An agent is said to be in a collaborative environment when multiple agents cooperate to
produce the desired output.
● When multiple self-driving cars are found on the roads, they cooperate with each other to
avoid collisions and reach their destination which is the output desired.
Single-agent vs Multi-agent

● An environment consisting of only one agent is said to be a single-agent environment.
● A person left alone in a maze is an example of a single-agent system.
● An environment involving more than one agent is a multi-agent environment.
● The game of football is multi-agent, as it involves 11 players in each team.
Dynamic vs Static
● An environment that keeps changing while the agent is acting is said to be dynamic.
● A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.
● An idle environment with no change in its state is called a static environment.
● An empty house is static as there’s no change in the surroundings when an agent enters.
Discrete vs Continuous
● If an environment consists of a finite number of actions that can be deliberated in the
environment to obtain the output, it is said to be a discrete environment.
● The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.
● An environment in which the actions performed cannot be numbered, i.e., is not discrete,
is said to be continuous.
● Self-driving cars are an example of continuous environments as their actions are driving,
parking, etc. which cannot be numbered.
Episodic vs Sequential
● In an episodic task environment, the agent's experience is divided into atomic incidents or
episodes. There is no dependency between current and previous incidents. In each episode, the agent
receives input from the environment and then performs the corresponding action.
● Example: Consider a pick-and-place robot used to detect defective parts on conveyor belts.
Every time, the robot (agent) makes its decision on the current part alone, i.e., there is no
dependency between current and previous decisions.
● In a sequential environment, previous decisions can affect all future decisions. The agent's next
action depends on what action it has taken previously and what action it is supposed to take
in the future.
● Example:
○ Checkers – the previous move can affect all the following moves.
Known vs Unknown

● In a known environment, the outcomes of all probable actions are given.
● Obviously, in the case of an unknown environment, the agent has to gain knowledge of how the
environment works in order to make a decision.
Structure of agents
● Artificial intelligence is defined as the study of rational agents.
● A rational agent could be anything that makes decisions, such as a person, firm, machine, or
software.
● It carries out an action with the best outcome after considering past and current percepts (the agent's
perceptual inputs at a given instance).
● An AI system is composed of an agent and its environment.
● The agents act in their environment.
● The environment may contain other agents.

An agent is anything that can be viewed as:

● perceiving its environment through sensors, and
● acting upon that environment through actuators.
Structure of agents
Structure of agents

● To understand the structure of intelligent agents, we should be familiar
with architecture and agent programs.
● Architecture is the machinery that the agent executes on.
● It is a device with sensors and actuators, for example, a robotic car, a
camera, or a PC.
● An agent program is an implementation of an agent function.
● An agent function is a map from the percept sequence (the history of all that
the agent has perceived to date) to an action.
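The distinction between agent function and agent program can be made concrete with a table-driven agent: the lookup table is a (partial) agent function, and the program looks up the full percept sequence in it. The vacuum-world percepts in the table are illustrative assumptions:

```python
# Table-driven agent program: implements the agent function literally as a
# lookup table from percept *sequences* (not single percepts) to actions.

TABLE = {
    (("A", "Clean"),):                "Right",
    (("A", "Dirty"),):                "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table
        self.percepts = []        # history of everything perceived so far

    def agent_program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

agent = TableDrivenAgent(TABLE)
print(agent.agent_program(("A", "Dirty")))  # -> Suck
print(agent.agent_program(("A", "Clean")))  # -> Right
```

Because the table must cover every possible percept sequence, it grows explosively with history length, which is why table-driven agents are impractical beyond toy examples.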
Examples of Agent
● A software agent has keystrokes, file contents, and received network packets acting as
sensors, and displays on the screen, files, and sent network packets acting as actuators.
● A human agent has eyes, ears, and other organs acting as sensors, and hands, legs,
mouth, and other body parts acting as actuators.
● A robotic agent has cameras and infrared range finders acting as sensors, and various
motors acting as actuators.
Turing Test in AI

● In 1950, Alan Turing introduced a test to check whether a machine can think like a human; this test
is known as the Turing Test. In this test, Turing proposed that a computer can be said to be
intelligent if it can mimic human responses under specific conditions.
● The Turing Test was introduced in Turing's 1950 paper, "Computing Machinery and Intelligence", which
considered the question "Can machines think?"
● The Turing test is based on a party game, the "imitation game", with some modifications.
● The game involves three players: one player is a computer, another is a human responder,
and the third is a human interrogator, who is isolated from the other two players and whose job is
to find which of the two is the machine.
Turing Test in AI

● Consider that Player A is a computer, Player B is a human, and Player C is an interrogator.
● The interrogator is aware that one of them is a machine, but he needs to identify which one
on the basis of questions and their responses.
● The conversation between all players is via keyboard and screen, so the result
does not depend on the machine's ability to convert words into speech.
● The test result does not depend on each answer being correct, but only on how closely
the responses resemble human answers.
● The computer is permitted to do everything possible to force a wrong
identification by the interrogator.
Turing Test in AI

The questions and answers can be like:

Interrogator: Are you a computer?

Player A (Computer): No.

Interrogator: Multiply two large numbers, such as 256896489 * 456725896.

Player A: (long pause, then gives a wrong answer)

In this game, if the interrogator cannot identify which is the machine and which is the human, the
computer passes the test successfully, and the machine is said to be intelligent and able to think like a human.
Thank You
