
MCSE601L - ARTIFICIAL INTELLIGENCE
COURSE OBJECTIVES
COURSE OUTCOMES
On completion of this course, students should be able to:
MODULE 1

TOPICS
INTRODUCTION

Artificial + Intelligence → Artificial Intelligence

• A branch of computer science
• Imitates the role of the human brain
INTRODUCTION
• What is Artificial?
• Made or produced by human beings rather than occurring naturally, especially
as a copy of something natural.

• What is Intelligence?
• The ability to acquire and apply knowledge and skills.
ARTIFICIAL INTELLIGENCE
• Artificial Intelligence is the ability of a computer to act like a human being.
What is Machine Learning?

• Algorithms that incorporate intelligence into machines by automatically learning from data.
• Used for classification, regression, or clustering.
What is Deep Learning?

• DL is a subset of ML that employs artificial neural networks for complex tasks.
History of AI
The gestation of artificial intelligence (1943-1955)
• Alan Turing first articulated a complete vision of AI in his 1950 article "Computing Machinery and Intelligence." In it, he introduced the Turing test, machine learning, genetic algorithms, and reinforcement learning.

The birth of artificial intelligence (1956)


• U.S. researchers developed automata theory, neural nets, and the study of intelligence.

Early enthusiasm, great expectations (1952-1969)


• The early years of AI were full of successes, in a limited way. The General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell as a universal problem-solving machine.
History of AI
Knowledge-based systems: The key to power? (1969-1979)
• DENDRAL was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the computer
software expert system that it produced. Its primary aim was to help organic chemists in identifying unknown
organic molecules, by analyzing their mass spectra and using knowledge of chemistry.
• MYCIN: to diagnose patients based on reported symptoms and medical test results.
• DART: computer fault diagnosis.
AI becomes an industry (1980-present)

•In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.

•Format: relation(entity1, entity2, ..., entityK).

Example:
•friends(raju, mahesh).
•singer(sonu).
•odd_number(5).

•Explanation:
•These facts can be interpreted as:
•raju and mahesh are friends.
•sonu is a singer.
•5 is an odd number.
History of AI
The return of neural networks (1986-present)
Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of
memory.
Neural networks are a class of machine learning models that emulate the human brain and are used to solve common problems in AI.

AI becomes a science (1987-present)

•In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area.
Speech technology and the related field of handwritten character recognition are already making the
transition to widespread industrial and consumer applications.

•The Bayesian network formalism was invented to allow efficient representation of, and rigorous
reasoning with, uncertain knowledge.

The emergence of intelligent agents (1995-present)

•One of the most important environments for intelligent agents is the Internet.
4 Categories of Definition for AI
• Systems that act like humans
• Systems that think like humans
• Systems that think rationally
• Systems that act rationally

Systems that think like humans  |  Systems that think rationally
Systems that act like humans    |  Systems that act rationally
1. Acting Humanly
• Also called Turing Test Approach
• The art of creating machines that perform functions that require
intelligence when performed by people
• i.e., making computers act like humans.
• Example : Turing Test
Turing Test Approach
• Provides a satisfactory operational definition of intelligence.

• Intelligence is assessed by conducting the Turing Test.

• The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, who proposed it in 1950.
The Turing Test Approach

• Setup: a human, a machine, and an interrogator (judge).
• Pass the test? The machine passes if the interrogator cannot tell whether a computer or a human is at the other end.
[Figure: the interrogator questions two hidden respondents, A and B, and must decide which one is the computer.]
Qualities required to pass Turing Test
• The system requires these abilities to pass the test

• Natural language processing


• to communicate with humans
• Knowledge representation
• to store information effectively & efficiently
• Automated reasoning
• to retrieve & answer questions using the stored information
• Machine learning
• to adapt to new circumstances
Total Turing Test

• Can test the subject’s perceptual abilities

• To pass the total Turing Test, the computer will need


• Computer Vision to perceive objects

• Robotics to manipulate objects and move about


2. Thinking Humanly
• Making computers think like humans
• Goal is to build systems that function internally in some way similar to the human mind
• Also called the cognitive modeling approach
Cognitive Modeling Approach
• If we are going to say that a given program thinks like a human, we must have some way of determining how humans think;
• We need to get inside the actual workings of human minds.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program.

• Example: GPS (General Problem Solver)
• compares the reasoning steps of the program with those of a human solving the same problems.
Cognitive Science
• Combines computer models of AI and experimental techniques of psychology.

• Tries to construct precise and testable theories of the workings of the human mind.

• Cognitive Science

• Is an interdisciplinary science

• draws on many fields (such as psychology, artificial intelligence, linguistics, and philosophy) in developing theories about human perception, thinking, and learning
3. Thinking Rationally
• Also called Laws of Thought approach
• Making computers think the “Right Thing”
• Relies on logic (to make inferences) rather than humans to measure correctness.
• Logic: provides a precise notation for statements about all kinds of things in the world and the relations between them.
• Syllogism: provides patterns for argument structures
• Always yields a correct conclusion given correct premises.
• For example,
• Premise: John is a human and all humans are mortal
• Conclusion: John is mortal
• Can be implemented using logic, e.g., propositional and predicate logic, as sketched below.
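The syllogism above can be formalized in predicate logic; a minimal sketch (the notation is standard, not taken from the slides):

  Human(John)                        (premise)
  ∀x. Human(x) → Mortal(x)           (premise)
  ∴ Mortal(John)                     (by universal instantiation and modus ponens)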
Two obstacles: Laws of Thought Approach
• It’s not easy to take informal knowledge and state it in the formal
terms required by logical notation, particularly when the knowledge
is less than 100% certain.
• Being able to solve a problem “in principle” and doing so “in
practice” are very different.
• i.e., 1. Informal knowledge is not precise.
  2. It is difficult to model uncertainty.
  3. Theory and practice are hard to bring together.
4. Acting Rationally
• Also called Rational Agent Approach.
• Doing “Right Thing”
• Rational Agent – acts to achieve the best outcome.
• An agent acts rationally if it selects the action that maximizes its performance measure (formalized below).
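As a sketch in standard notation (assumed here, not given in the slides), the rational choice in state s is the action maximizing expected performance:

  a* = argmax over a in Actions of  Σ_s' P(s' | s, a) · U(s')

where P(s' | s, a) is the probability that action a leads to state s', and U scores states against the performance measure.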
Rational Agent Approach
• Design of rational agent
• Advantages
• More general than laws of thought approach
• Concentrates on scientific development

Limited Rationality
• acting appropriately when there is not enough time to do all the computations
Applications of AI
• Some of the applications are given below:
• Business : Financial strategies, give advice
• Engineering: check design, offer suggestions to create new
product
• Manufacturing: Assembly, inspection & maintenance
• Mining: used when conditions are dangerous
• Hospital : monitoring, diagnosing & prescribing
• Education : In teaching e-tutoring
• Household: advice on cooking, shopping, etc.
• Farming: prune trees & selectively harvest mixed crops.
Applications of AI
• Robots
• Chess-playing program
• Voice recognition system
• Speech recognition system
• Grammar checker
• Pattern recognition
• Medical diagnosis
• Game Playing
• Machine Translation
• Resource Scheduling
• Expert systems (diagnosis, advisory, planning, etc)
• Machine learning
What are Agent and Environment?
• An agent is anything that can perceive its environment through sensors and act upon that environment through effectors (actuators).
• A human agent has sensory organs such as eyes, ears, nose, tongue and skin
parallel to the sensors, and other organs such as hands, legs, mouth, for
effectors.
• A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
• A software agent has encoded bit strings as its programs and actions.
Agents and environments
• Percept: the agent’s perceptual inputs
• Percept sequence: the complete history of everything the agent has
perceived
• Agent function maps any given percept sequence to an action [f: P* → A]
• The agent program runs on the physical architecture to produce f
• Agent = architecture + program
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A,Dirty]


• Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
• A partial tabulation of the agent function (percept sequence → action):
  [A, Clean] → Right
  [A, Dirty] → Suck
  [B, Clean] → Left
  [B, Dirty] → Suck
  [A, Clean], [A, Clean] → Right
  [A, Clean], [A, Dirty] → Suck
  …
Concept of Rationality and Performance Measures
Concept of rationality
• A rational agent is an agent that acts so as to maximize some performance measure.
• An agent should strive to "do the right thing", based on what it
can perceive and the actions it can perform.
• The right action is the one that will cause the agent to be most
successful
Performance measures

• An objective criterion for success of an agent's behavior


• E.g., performance measure of a vacuum-cleaner agent could be
amount of dirt cleaned up, amount of time taken, amount of
electricity consumed, amount of noise generated, etc.
• The rationality of an agent depends on four
things
• the performance measure defining the agent's
degree of success
• the percept sequence, the sequence of all the
things perceived by the agent
• the agent's prior knowledge of the
environment
• the actions that the agent can perform
Definition of Rational Agents
• Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has. A rational agent should be
autonomous
Definition of omniscient agent and Information Gathering

• An omniscient agent knows the actual outcome of its


actions and can act accordingly; but omniscience is
impossible in reality.

• Performing actions in order to modify future percepts is called information gathering (exploration).

• An agent is autonomous if its behavior is determined


by its own experience (with ability to learn and adapt)
Task environment
• To design a rational agent we need to specify a task
environment
• a problem specification for which the agent is a solution

• PEAS: to specify a task environment


• Performance measure
• Environment
• Actuators
• Sensors
PEAS: Specifying an automated taxi driver

Performance measure:
•?
Environment:
•?
Actuators:
•?
Sensors:
•?
PEAS: Specifying an automated taxi driver

Performance measure:
• safe, fast, legal, comfortable, maximize profits
Environment:
• roads, other traffic, pedestrians, customers
Actuators:
• steering, accelerator, brake, signal, horn
Sensors:
• cameras, sonar, speedometer, GPS
Agent Type: Automated Taxi Driver
• Performance Measure: safe, fast, legal, comfortable trip, maximize profits
• Environment: roads, other traffic, pedestrians, customers
• Actuators: steering, accelerator, brake, signal, horn, display
• Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Agent Type: Medical Diagnosis System
• Performance Measure: healthy patient, minimize costs, lawsuits
• Environment: patient, hospital, staff
• Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
• Sensors: keyboard (entry of symptoms, findings, patient's answers)

Agent Type: Part-Picking Robot
• Performance Measure: percentage of parts in correct bins
• Environment: conveyor belt with parts, bins
• Actuators: jointed arm and hand
• Sensors: camera, joint angle sensors

Agent Type: Interactive English Tutor
• Performance Measure: maximize student’s score on test
• Environment: set of students
• Actuators: screen display (exercises)
• Sensors: keyboard
Agent Type: Robot Soccer Player
• Performance Measure: number of goals scored
• Environment: soccer match field
• Actuators: legs
• Sensors: cameras, sonar or infrared

Agent Type: Satellite Image Analysis System
• Performance Measure: correct image categorization
• Environment: downlink from satellite
• Actuators: display of scene categorization
• Sensors: color pixel arrays

Agent Type: Refinery Controller
• Performance Measure: maximum purity, safety
• Environment: refinery, operators
• Actuators: valves, pumps, heaters, displays
• Sensors: temperature, pressure, chemical sensors

Agent Type: Vacuum Agent
• Performance Measure: minimize energy consumption, maximize dirt picked up
• Environment: two squares
• Actuators: Left, Right, Suck, NoOp
• Sensors: sensors to identify dirt
Properties of task environments
1.Fully observable vs. Partially observable

Fully observable

• If an agent’s sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable. No portion of the environment is hidden from the agent.

Real-life Example 1: While driving a car on the road (environment), the driver (agent) is able to see road conditions, signboards, and pedestrians on the road at a given time and drive accordingly. So the road is a fully observable environment for a driver.

Example 2 : A chess playing system is an example of a system that


operates in a fully observable environment.
Properties of task environments
Partially observable
The agent does not have access to the complete state of the environment at a given time.

Real-life Example: Playing card games is a perfect example of a partially observable environment, where a player is not aware of the cards in the opponent’s hand.
Properties of task environments
2.Deterministic vs. stochastic

Deterministic

• If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is deterministic; otherwise, it is stochastic.

• Real-life Example: The traffic signal is a deterministic environment, where the next signal is known to a pedestrian (agent).
Properties of task environments
stochastic
The stochastic environment is the opposite of a deterministic environment: the next state is not fully predictable for the agent.

Real-life Example 1: A radio station is a stochastic environment, where the listener is not aware of the next song.

Example 2: Taxi driving is clearly stochastic in this sense, because one can never predict the behavior of traffic exactly.
Properties of task environments
3.Episodic vs. sequential

Episodic
• An episodic environment means that subsequent episodes do
not depend on what actions occurred in previous episodes.

• Real-life Example 1: A support bot (agent) answers a question, then answers another question, and so on. So each question-answer pair is a single episode.

• Example 2: A pick-and-place robot used to detect defective parts on a conveyor belt; each part it inspects is a separate episode.
Properties of task environments
Sequential

• In a sequential environment, the agent engages in a series of


connected episodes.

• In sequential environments, on the other hand, the current


decision could affect all future decisions.

• Real-life Example: Checkers, where the previous move can affect all the following moves.
• Medical diagnosis: if the diagnosis of the disease is wrong, then the later steps (giving medicine, running further tests) will also be wrong.
Properties of task environments
• 4.Static vs. dynamic
Dynamic
• A dynamic environment changes over time.
Example: the number of people on the street
Static
• A static environment does not change over time.
Example: a crossword puzzle
Properties of task environments

5.Discrete vs. continuous


Discrete
It consists of a finite number of states and agents
have a finite number of actions.

Example: A chess game is a discrete environment, as there is a finite number of moves that can be performed.
Properties of task environments

Continuous
It consists of an infinite number of states, and agents have an infinite number of possible actions.
Example: Taxi driving is a continuous-state and continuous-time problem.
Properties of task environments
• 6.Single agent vs. multiagent
Single agent:

An environment is explored by a single agent. All actions are


performed by a single agent in the environment.
Real-life Example: Brushing your teeth
Multiagent:
If two or more agents are taking actions in the environment, it is known
as a multi-agent environment.
Real-life Example :Playing Cards
Properties of task environments
7.Known vs. unknown
- In a known environment, the agent knows the outcomes for all of its actions.
example: solitaire card games
- If the environment is unknown, the agent will have to learn how it works in order to make good decisions. (example: a new video game)
Task Environment           Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle           Fully       Deterministic  Sequential  Static   Discrete    Single
Chess with a clock         Fully       Stochastic     Sequential  Semi     Discrete    Multi
Poker                      Partially   Stochastic     Sequential  Static   Discrete    Multi
Backgammon                 Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving               Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis          Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Image analysis             Fully       Deterministic  Episodic    Semi     Continuous  Single
Part-picking robot         Partially   Stochastic     Episodic    Dynamic  Continuous  Single
Refinery controller        Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Interactive English tutor  Partially   Stochastic     Sequential  Dynamic  Discrete    Multi
Structure of agents
• The job of AI is to design the agent program that implements the agent
function mapping percepts to actions.

Agent programs

Agent programs take the current percept as input from the sensors and
return an action to the actuators

Architecture

Architecture is a computing device used to run the agent program.


Two types of agent programs
• A Skeleton Agent
• A Table Lookup Agent
• Skeleton Agent
• The agent program receives only a single percept as its input.
• If the percept is a new input then the agent updates the memory
with the new percept
Table-lookup agent

• A table of actions indexed by percept sequences
• A trivial agent program: keeps track of the percept sequence and
then uses it to index into a table of actions to decide what to do
• The designers must construct the table that contains the
appropriate action for every possible percept sequence

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
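A minimal Python sketch of this pseudocode, using the vacuum world's percepts; the table entries are illustrative assumptions, not a table given in the slides:

percepts = []  # the percept sequence observed so far

# hypothetical fragment of the table, keyed by full percept sequences
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

def table_driven_agent(percept):
    # append the percept to the history, then index into the table of actions
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")  # NoOp if the sequence is not tabulated

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck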
Table-lookup agent
Drawbacks:
• Huge table (|P|^T entries, where P is the set of possible percepts and T the lifetime)
• Space to store the table
• Take a long time to build the table
• No autonomy
• Even with learning, need a long time to learn the table entries
Types of Intelligent Agent Programs
• Four types
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
• Select actions on the basis of the current percept ignoring
the rest of the percept history
• Example: simple reflex vacuum cleaner agent
Simple reflex agents
• INTERPRET-INPUT function generates an abstracted description of the
current state from the percept
• RULE-MATCH function returns the first rule in the set of rules that
matches the given state description
• RULE-ACTION: the action of the selected rule is executed for the given percept

• Example : Medical Diagnosis System


• If the patient has reddish brown spots then start the treatment for
measles.
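A minimal Python sketch of a simple reflex agent for the two-square vacuum world described earlier; the condition-action rules are written out here as an illustration, not code from the slides:

def simple_reflex_vacuum_agent(percept):
    # decide from the current percept only; no percept history is kept
    location, status = percept
    if status == "Dirty":    # condition-action rule: dirty square -> Suck
        return "Suck"
    if location == "A":      # clean at A -> move right
        return "Right"
    return "Left"            # clean at B -> move left

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left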
Model-based Reflex Agents
• An agent which combines the current percept with the
old internal state to generate updated description of
the current state.
Model-based Reflex Agents
• Suitable for Partially observable environments.
• Choose the action based on the model of the world.
• It maintains an internal state.
• Model: knowledge about “how things happen in the world.”
• Internal state: a representation of the unobserved aspects of the current state, based on percept history.
• How the world evolves: changes that occur in the environment irrespective of the agent's actions.
• What my actions do: changes that the agent's actions will make in the environment.
Model-based reflex agents

UPDATE-STATE is responsible for creating the new internal state description.

Example: Medical diagnosis system

If the patient has spots, the agent checks its internal state (i.e., what change in the environment may have caused the spots). From this internal state the description of the current state is updated, and the corresponding action is executed.
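A minimal Python sketch of a model-based reflex agent for the same vacuum world; the internal-state representation (last known status of each square) is an assumption for illustration:

state = {"A": None, "B": None}  # internal model: last known status of each square

def model_based_vacuum_agent(percept):
    # UPDATE-STATE: fold the current percept into the internal model
    location, status = percept
    state[location] = status
    if status == "Dirty":
        state[location] = "Clean"  # the model predicts the effect of Suck
        return "Suck"
    if all(s == "Clean" for s in state.values()):
        return "NoOp"              # the model says every square is clean
    return "Right" if location == "A" else "Left"

print(model_based_vacuum_agent(("A", "Clean")))  # Right (status of B still unknown)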
Goal-based agents
• The agent knows descriptions of the current state as well as the goal state. An action is selected that moves the current state toward the goal state.
Goal-based agents

• Example: Medical diagnosis system
• Once the disease is identified for the patient, treatment is given so that the patient recovers from the disease; making the patient healthy is the goal to be achieved. (See the sketch below.)
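A minimal goal-based sketch in Python: the agent uses a one-step model to predict each action's result and selects an action whose predicted result satisfies the goal. The model and goal test are illustrative assumptions:

def predict(state, action):
    # hypothetical one-step model of the vacuum world: state = (location, dirty squares)
    location, dirty = state
    if action == "Suck":
        return (location, dirty - {location})
    return ("B" if action == "Right" else "A", dirty)

def goal_based_agent(state, goal_test, actions=("Suck", "Right", "Left")):
    # pick any action whose predicted outcome satisfies the goal
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return "NoOp"  # no single action reaches the goal; a real agent would search

state = ("A", frozenset({"A"}))          # agent at A; only A is dirty
goal = lambda s: not s[1]                # goal: no dirty squares remain
print(goal_based_agent(state, goal))     # Suck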
Utility-based agents

• An agent which reaches a goal state with high-quality behavior, i.e., if more than one sequence exists to reach the goal state, then the sequence that is more reliable, safer, quicker, and cheaper than the others is selected.
• Utility is a function that maps a state onto a real number, which describes the associated degree of happiness.
Utility-based agents
• It acts based not only on goals but also on the best way to achieve them.
• If multiple alternatives are available, it chooses the best way to reach the goal state.
Utility-based agents
• Example: Medical diagnosis system
• If the patient's disease is identified, then the treatment sequence that leads to the patient's recovery with the best utility measure is selected and applied, as in the sketch below.
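A minimal utility-based sketch in Python: among candidate treatment plans, the agent picks the one whose predicted outcome has the highest utility. The outcomes and utility values are illustrative assumptions:

# hypothetical utilities (degree of happiness) of predicted outcome states
utility = {"recovered_fast": 1.0, "recovered_slow": 0.6, "unchanged": 0.0}

# hypothetical outcome predicted for each candidate treatment plan
predicted_outcome = {
    "treatment_A": "recovered_fast",
    "treatment_B": "recovered_slow",
    "no_treatment": "unchanged",
}

def utility_based_agent(actions):
    # choose the action whose predicted outcome maximizes utility
    return max(actions, key=lambda a: utility[predicted_outcome[a]])

print(utility_based_agent(["treatment_A", "treatment_B", "no_treatment"]))  # treatment_A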
Learning Agent
• It can learn from past experience or it has learning capabilities.
• It starts to act with basic knowledge and then able to act and adapt
automatically through learning.
• It has four conceptual components:
1. Learning element
2. performance element
3. Critic
4. Problem generator
Learning Agent
• Learning Element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance Element: responsible for selecting external actions.
• Problem Generator: this component is responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it. A skeletal sketch follows.
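A skeletal Python sketch of how the four components could fit together; the class structure and method names are assumptions for illustration, not from the slides:

class LearningAgent:
    def __init__(self, rules, performance_standard):
        self.rules = dict(rules)              # used by the performance element
        self.standard = performance_standard  # fixed standard the critic compares against

    def performance_element(self, percept):
        # select an external action from the current rules
        return self.rules.get(percept, "NoOp")

    def critic(self, measured_performance):
        # feedback: how well the agent did relative to the fixed standard
        return measured_performance - self.standard

    def learning_element(self, percept, feedback):
        # make improvements: drop a rule that the critic scored poorly
        if feedback < 0:
            self.rules.pop(percept, None)

    def problem_generator(self):
        # suggest an exploratory action leading to new, informative experiences
        return "explore"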
