Artificial Intelligence
By Ahmad DIN
Why AI?
• Algorithms
• An algorithm is a finite sequence of well-defined, computer-implementable
instructions, typically used to solve a class of problems or to perform a
computation. Algorithms are always unambiguous and are used as
specifications for performing calculations, data processing, automated
reasoning, and other tasks.
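As a minimal sketch (my own example, not from the slides), Euclid's algorithm illustrates these properties: a finite, unambiguous sequence of steps that solves a whole class of problems, namely finding the greatest common divisor of any two integers:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, unambiguous procedure
    that computes the greatest common divisor of a and b."""
    while b != 0:
        a, b = b, a % b   # each step strictly shrinks b, so termination is guaranteed
    return abs(a)

print(gcd(48, 18))  # -> 6
```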
Big Picture
• “We are survival machines – robot vehicles blindly programmed to preserve
the selfish molecules known as genes. This is a truth which still fills me with
astonishment.” ― Richard Dawkins, The Selfish Gene
http://www.freevectors.net/human http://www.bbc.com/earth/story/20150929-why-are-we-the-only-human-species-still-alive
Sensing
• Survival
• Uncertainties
Why don’t animals have wheels?
Reference Books
□ R. Siegwart, I. Nourbakhsh and D. Scaramuzza. Introduction to Autonomous Mobile Robots. MIT Press. 2004.
□ R. Murphy. Introduction to AI Robotics. MIT Press. 2000.
□ S. Thrun, W. Burgard and D. Fox. Probabilistic Robotics. MIT Press. 2005.
□ B. Siciliano and O. Khatib. Handbook of Robotics. Springer. 2008.
What is Intelligence?
The Rise of Humans
• 10,000 years ago there were only about 1 million humans
• Today, there are over 7 billion humans in the world
• Our combined weight is about 300 million tons
• The weight of domesticated animals is about 700 million tons
• The weight of wild animals is less than 100 million tons
• The survival of all these animals depends on us.
The Rise of Humans
• 70,000 years ago, humans were insignificant animals
• Today, humans control this planet
• How did humans get from there to here?
• How did humans turn from insignificant apes into the rulers of the
earth?
• At the individual level, humans are similar to the chimpanzee.
• The real difference between humans and other animals is at the
collective level.
The Rise of Humans
• The huge achievements of humans are due to the ability
to cooperate and coordinate.
• How do humans do it?
• Imagination (we cooperate with countless numbers of strangers)
• Animals communicate about objective reality
• But humans constructed a fictional reality on top of this
objective reality
• A reality made of fictional entities like nations, gods,
corporations, money, etc.
• Objective realities like trees and animals depend on the decisions
made within these fictional realities.
What is Intelligence?
• Problem solving
• Thinking abstractly
• Language
• Comprehending ideas
□ Intelligence is a general mental capability that
involves the ability to reason, plan, solve
problems, think abstractly, comprehend ideas
and language, and learn.
What is Intelligence?
“Everybody is a genius. But if you judge a fish by its ability to climb a
tree, it will live its whole life believing that it is stupid.”
• Anonymous
What is AI?
The science of making machines that:
act intelligently
• Rationality
• Environment Types
• Agent types
Intelligent Agents: Introduction
• Computers do as they are designed to do.
• Traditionally, we have been happy to treat computers as obedient and
unimaginative servants.
• More recently, computers represent systems that exhibit decision
making in rapidly changing, unpredictable, or open environments,
through
– Intelligent or autonomous agents
• Examples
– NASA space probes
– Searching on the Internet
Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon that
environment through actuators
• Human agent:
– Eyes, ears, and other organs for sensors
– Hands, legs, mouth, and other body parts for
actuators
• Robotic agent:
– Cameras and infrared range finders for sensors
– Various motors for actuators.
Intelligent Agents: What are Agents?
• An agent is a computer system that is situated in some
environment, and that is capable of autonomous action in this
environment: it influences the environment and is in turn
influenced by it.
• Intelligent devices
– Robots
– Heating systems
– Mobile phones
• Intelligent software
– Searchbots
– Expert systems
– Help functions ...
Agents and Environments
• The agent function maps from percept histories to
actions:
[f: P* → A]
• The agent program runs on the physical architecture
to produce f
– Agent = architecture + program
Vacuum-cleaner World
• Percepts: location and contents, e.g., [A,Dirty]
• Actions: Left, Right, Pick_Dirt, NoOp
A vacuum-cleaner agent function (partial lookup table):

Percept Sequence           Action
[A, Clean]                 Right
[A, Dirty]                 Pick_Dirt
[B, Clean]                 Left
[B, Dirty]                 Pick_Dirt
[A, Clean], [A, Clean]     Right
[A, Clean], [A, Dirty]     Pick_Dirt
…
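The lookup table above can be sketched in code; this is a minimal illustration (the name `vacuum_action` and the tuple encoding are my own, not from the slides). For this tiny world the chosen action happens to depend only on the latest percept:

```python
# Table-driven vacuum agent: maps a percept history to an action.
# A percept is a (location, status) pair such as ("A", "Dirty").
def vacuum_action(percept_history):
    location, status = percept_history[-1]   # latest percept decides here
    if status == "Dirty":
        return "Pick_Dirt"
    return "Right" if location == "A" else "Left"

print(vacuum_action([("A", "Dirty")]))                  # -> Pick_Dirt
print(vacuum_action([("A", "Clean"), ("B", "Clean")]))  # -> Left
```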
Agents & Environments
• In complex environments:
– An agent does not have complete control over its environment; it
has only partial control
– Partial control means that an agent can influence the environment
with its actions
– An action performed by an agent may fail to have the desired effect.
• Conclusion: environments are non-deterministic, and agents
must be prepared for the possibility of failure.
Agent Characterisation
An agent is responsible for satisfying specific goals. There can be
different types of goals such as achieving a specific status, keeping
certain status, maximising a given function (e.g., utility), etc.
(Diagram: an agent with beliefs, knowledge, and goals, situated in an
environment.)
History
• History represents the interaction between an
agent and its environment. A history is a
sequence:
h : e0 --α0--> e1 --α1--> e2 --α2--> … --αu-1--> eu
Where:
e0 is the initial state of the environment
αu is the u'th action that the agent chose to
perform
eu is the u'th environment state
Abstract Architecture for Agents
• Let:
– R be the set of all possible finite runs: interleaved
sequences of environment states (from E) and actions (from Ac)
– R_Ac be the subset of these that end with an
action
– R_E be the subset of these that end with an
environment state
State Transformer Functions
• A state transformer function represents the behavior of the
environment: it maps a run ending in an action to the set of
possible resulting environment states:
τ : R_Ac → ℘(E)
Agents
• An agent is a function which maps runs (ending in an
environment state) to actions:
Ag : R_E → Ac
Systems
• A system is a pair containing an agent and an environment
• Any system will have associated with it a set of possible runs;
we denote the set of runs of agent Ag in environment Env by
R(Ag, Env)
• (We assume R(Ag, Env) contains only terminated runs)
Systems
• Formally, a sequence (e0, α0, e1, α1, e2, …) represents a run of
an agent Ag in an environment Env if:
1. e0 is the initial state of Env, and α0 = Ag(e0)
2. for u > 0, eu ∈ τ((e0, α0, …, αu-1)) and αu = Ag((e0, α0, …, eu))
Purely Reactive Agents
• Some agents decide what to do without reference to their
history — they base their decision making entirely on the
present, with no reference at all to the past
• We call such agents purely reactive:
action : E → Ac
Purely reactive agents
• A purely reactive agent decides what to do
without reference to its history (no references to
the past).
• It can be represented by a function
action: E → A
• Example: thermostat
Environment states: temperature OK; too cold
(Diagram: the agent's see function observes the environment; action
acts upon it.)
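The thermostat can be sketched as a purely reactive agent (a minimal illustration of my own; the state and action names are assumptions). The action depends only on the current environment state, never on the history:

```python
# Purely reactive agent: action : E -> Ac, with no memory of past states.
def thermostat_action(state: str) -> str:
    return "heater_on" if state == "too cold" else "heater_off"

print(thermostat_action("too cold"))        # -> heater_on
print(thermostat_action("temperature OK"))  # -> heater_off
```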
Perception
• The see function is the agent’s ability to
observe its environment, whereas the action
function represents the agent’s decision
making process
• Output of the see function is a percept:
see : E → Per
which maps environment states to percepts, and
action is now a function
action : Per* → A
which maps sequences of percepts to actions
Perception ability
• Example:
x = "The room temperature is OK"
y = "There is no war at this moment"
then the environment has four states:
E = { (x, y),  (x, ¬y),  (¬x, y),  (¬x, ¬y) }
       e1       e2        e3        e4
but for the thermostat:
see(s) = p1 if s = e1 or s = e2
         p2 if s = e3 or s = e4
Agents with State
• We now consider agents that maintain state:
(Diagram: the agent's see function feeds a next function, which
updates the internal state used by action.)
Agents with State
• These agents have some internal data structure, which is typically
used to record information about the environment state and
history.
Let I be the set of all internal states of the agent.
• The perception function see for a state-based agent is unchanged:
see : E → Per
The action-selection function action is now defined as a mapping
action : I → Ac
from internal states to actions. An additional function next is
introduced, which maps an internal state and percept to an
internal state:
next : I × Per → I
Agent Control Loop
1. Agent starts in some initial internal state i0
2. Observes its environment state e, and generates a percept
see(e)
3. The internal state of the agent is then updated via the next
function, becoming next(i0, see(e))
4. The action selected by the agent is action(next(i0, see(e)))
5. Goto 2
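The control loop above can be sketched as follows (a minimal illustration; the thermostat instantiation, thresholds, and the name `next_state` for the slide's next function are my own assumptions):

```python
# State-based agent control loop, instantiated for a toy thermostat.
# see : E -> Per, next : I x Per -> I, action : I -> Ac
def see(e):                 # percept: a coarse temperature reading
    return "cold" if e < 18 else "ok"

def next_state(i, per):     # update internal state with latest percept
    return {"last_percept": per}

def action(i):              # choose an action from the internal state
    return "heater_on" if i["last_percept"] == "cold" else "heater_off"

i = {"last_percept": None}          # 1. initial internal state i0
for e in [15, 21, 17]:              # environment states observed over time
    per = see(e)                    # 2. observe, generating percept see(e)
    i = next_state(i, per)          # 3. update internal state via next
    print(action(i))                # 4. select action; 5. goto 2
```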
Tasks for Agents
• We build agents in order to carry out tasks
for us
• The task must be specified by us…
• But we want to tell agents what to do
without telling them how to do it
Utility Functions over States
• One possibility: associate utilities with
individual states — the task of the agent is
then to bring about states that maximize
utility
• A task specification is a function
u : E → ℝ
which associates a real number with every
environment state
Utility Functions over States
• But what is the value of a run…
– minimum utility of state on run?
– maximum utility of state on run?
– sum of utilities of states on run?
– average?
• Disadvantage: difficult to specify a long term
view when assigning utilities to individual
states
(One possibility: a discount for states later on.)
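These aggregation choices can be compared on a toy run (my own illustration; the state utilities and the discount factor are made-up numbers):

```python
# Utilities of the states visited along one run.
state_utils = [1.0, 0.0, 3.0, 2.0]

print(min(state_utils))                     # minimum utility on the run -> 0.0
print(max(state_utils))                     # maximum -> 3.0
print(sum(state_utils))                     # sum -> 6.0
print(sum(state_utils) / len(state_utils))  # average -> 1.5

# Discounting later states (gamma < 1) gives one way to take a
# long-term view while still weighting early states more heavily.
gamma = 0.9
discounted = sum(gamma**t * u for t, u in enumerate(state_utils))
print(round(discounted, 3))  # 1 + 0 + 0.81*3 + 0.729*2 -> 4.888
```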
Utilities over Runs
• Another possibility: assign a utility not to
individual states, but to runs themselves:
u : R → ℝ
• Such an approach takes an inherently long
term view
• Other variations: incorporate probabilities of
different states emerging
• Difficulties with utility-based approaches:
– where do the numbers come from?
– we don’t think in terms of utilities!
– hard to formulate tasks in these terms
Predicate Task Specifications
• A special case of assigning utilities to histories is
to assign 0 (false) or 1 (true) to a run
• If a run is assigned 1, then the agent succeeds
on that run, otherwise it fails
• Call these predicate task specifications
• Denote a predicate task specification by Ψ.
Thus Ψ : R → {0, 1}.
Task Environments
• A task environment is a pair ⟨Env, Ψ⟩, where
Env is an environment, and
Ψ : R → {0, 1}
is a predicate over runs.
Let TE be the set of all task environments.
• A task environment specifies:
– the properties of the system the agent will
inhabit
– the criteria by which an agent will be judged to
have either failed or succeeded
Achievement & Maintenance Tasks
• Two most common types of tasks are
achievement tasks and maintenance
tasks:
1. Achievement tasks are those of the form
“achieve state of affairs φ”
2. Maintenance tasks are those of the form
“maintain state of affairs ψ”
Achievement & Maintenance Tasks
• An achievement task is specified by a set G of “good” or
“goal” states: G ⊆ E
The agent succeeds if it is guaranteed to bring about at
least one of these states (we do not care which one —
they are all considered equally good).
• A maintenance goal is specified by a set B of “bad”
states: B ⊆ E
The agent succeeds in a particular environment if it
manages to avoid all states in B — if it never performs
actions which result in any state in B occurring
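Both task types can be written as predicates over runs (a sketch with my own names; a run is represented here simply by its sequence of environment states):

```python
# Achievement task: the run succeeds (1) if some visited state is in G.
def achieves(run_states, G):
    return int(any(s in G for s in run_states))

# Maintenance task: the run succeeds (1) if no visited state is in B.
def maintains(run_states, B):
    return int(all(s not in B for s in run_states))

run = ["e0", "e1", "e2"]
print(achieves(run, G={"e2"}))   # -> 1 (a goal state was reached)
print(maintains(run, B={"e1"}))  # -> 0 (a bad state occurred)
```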
Rationality – Performance Measure
• An agent should strive to "do the right thing", based on
what it can perceive and the actions it can perform
• The right action is the one that will cause the agent to
be most successful
Environment Types
• Episodic (vs. sequential): The agent's
experience is divided into atomic “episodes”
– Each episode consists of the agent perceiving and
then performing a single action, and the choice of
action in each episode depends only on the
episode itself, e.g., a robot whose job is to detect
faulty parts on a line in some factory
– In a sequential setting, the next episode depends
on the previous one(s), e.g., learning which chess
move to execute at each sequential step, in order
to win the game at the end
– Also called a sequential decision process.
Environment Types
• Static (vs. Dynamic): The environment is unchanged
while an agent is deliberating which action to execute
– Much simpler to deal with
– For the dynamic case, the agent needs to keep track
of the changes
– The environment is semi-dynamic if the environment
itself does not change with the passage of time but
the agent's performance score does, e.g., checkers.
Environment Types
                   Chess with   Chess without   Taxi driving
                   a clock      a clock
Fully observable   Yes          Yes             No
Deterministic      Strategic    Strategic      No
Episodic           No           No              No
Static             Semi         Yes             No
Discrete           Yes          Yes             No
Single agent       No           No              No
Simple Reflex Agents
• Applies condition-action rules based only on the current
input (reflex)
Simple Reflex agents
• Automated Taxi:
– The agent observes rain falling on the windshield:
the agent powers on the wiper
– The agent observes a red signal; the agent brakes the taxi
until it stops.
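Condition-action rules like these can be sketched as a simple lookup (a minimal illustration; the percept and action names are my own assumptions):

```python
# Simple reflex agent: condition-action rules on the current percept only.
RULES = {
    "rain_on_windshield": "power_on_wiper",
    "red_signal": "apply_brakes",
}

def reflex_taxi_action(percept: str) -> str:
    return RULES.get(percept, "no_op")   # no matching rule: do nothing

print(reflex_taxi_action("red_signal"))  # -> apply_brakes
```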
Model-based Reflex agents
Model-based Reflex agents
• Robo-Soccer Example:
– Imagine a robotic goalkeeper
– It can build a model of the dynamics of the game that
is played on the field, e.g., when the ball is kicked in
its direction, the ball will be nearer to it in the next
time step
– If this robot is not able to acquire its state at some
time step, then using the model, it knows that the ball
has come nearer
– It also knows what consequences a dive will have
– So, it can time its dive early and hence save the goal.
Goal-based agents
(Diagram: a goal-based agent with feedback, showing the agent
function/program and a random action selector.)
The End
Artificial Intelligence
Search Agents
Informed search
• Examples of strategies:
1. Greedy search
2. A* search
3. IDA*
Informed search
Heuristic!
The distance is the straight line distance. The goal is to get to Sault Ste Marie,
so all the distances are from each city to Sault Ste Marie.
Greedy search
Examples using the map
Start: Las Vegas
Goal: Calgary
Greedy search
A* search
• Combines:
– g(n): cost to reach node n
– h(n): estimated cost to get from n to the goal
– f(n) = g(n) + h(n)
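A* can be sketched on a toy graph (my own example; the graph, the heuristic values, and the function names are assumptions, not the lecture's map). The frontier is ordered by f(n) = g(n) + h(n):

```python
import heapq

# A* search: expand the frontier node with the lowest f = g + h.
def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known g per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}   # admissible: never overestimates
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # -> ['S', 'A', 'B', 'G'] 6
```

Because the heuristic here is admissible, the returned path is optimal, matching the completeness/optimality claims on the next slide.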
A*
Examples using the map
Start: Las Vegas
Goal: Calgary
A*
Admissible heuristics
• A heuristic h is admissible if h(n) ≤ h*(n) for all n, where
h*(n) is the true cost of reaching the goal from n (it never
overestimates)
• A* with an admissible heuristic is:
• Complete: Yes
• Time: exponential
• Optimal: Yes!
Heuristics
http://aima.cs.berkeley.edu/