
2013-01-23

Lecture 1: Intelligent Agents


Ning Xiong, Mälardalen University

Outline
What is the concept of an agent?
Rationality of agent behavior
Task environments for agents
Types of intelligent agents


What is an Agent?

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators to change the state of the environment.

Agent Examples
Human agent
Sensors: eyes, ears, and other organs for feeling
Actuators: hands, legs, mouth, and other body parts
Robotic agent
Sensors: cameras, sonar, and infrared range finders
Actuators: robotic arms, various motors


Agent Function and Program


The agent function maps percept histories to actions: f: P* → A
The agent's behavior is decided by the agent function
The agent program runs on the physical architecture (a computing device) to implement the agent function f
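As a sketch, the mapping f: P* → A can be written as a function over the whole percept sequence. The two-location vacuum world below (locations "A" and "B", percept = (location, status)) is a hypothetical example, not from the lecture.

```python
# A minimal sketch of the agent function f: P* -> A, using a
# hypothetical two-location vacuum world. Illustrative only.

def agent_function(percept_history):
    """Maps the full percept history to an action. Here only the
    latest percept is used, but f is defined on histories."""
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(agent_function([("A", "Dirty")]))                  # Suck
print(agent_function([("A", "Dirty"), ("A", "Clean")]))  # Right
```

An agent program implementing f on real hardware would compute this mapping incrementally rather than storing the full history.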

Rational agents
A rational agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful.


Why Not Say Optimal Agents?


The inherent uncertainty of the real world means an optimal solution may not exist or may be hard to identify:
1. Imperfect or incomplete knowledge about the world makes it impossible to judge the optimality of actions.
2. Chance factors in the real world make outcomes stochastic and uncertain.
3. The consequence of an action by an agent is affected by the actions of other agents.

Rational Choice
[Figure: decision tree with a decision node branching into acts A1 and A2; A1 leads to outcomes O11 and O12 with probabilities p11 and p12, A2 leads to outcomes O21 and O22 with probabilities p21 and p22]

EU(A1) = u11·p11 + u12·p12
EU(A2) = u21·p21 + u22·p22

A rational agent will prefer act A1 to act A2 provided that EU(A1)>EU(A2)
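The expected-utility comparison above can be sketched in code; the probabilities and utilities below are illustrative numbers, not from the slide.

```python
# Expected utility of an act: sum of utility times probability
# over its possible outcomes. Numbers are illustrative only.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one act."""
    return sum(p * u for p, u in outcomes)

A1 = [(0.7, 10), (0.3, 2)]   # (p11, u11), (p12, u12)
A2 = [(0.5, 8), (0.5, 4)]    # (p21, u21), (p22, u22)

eu1, eu2 = expected_utility(A1), expected_utility(A2)
preferred = "A1" if eu1 > eu2 else "A2"
print(preferred)   # A1
```

With these numbers EU(A1) = 7.6 and EU(A2) = 6.0, so a rational agent prefers A1.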


Agent Environment Types


Fully observable (vs. partially observable): an agent's sensors give it access to the complete state of the environment at each point in time.
Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent.

Environment types
Static (vs. dynamic): the environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): an agent operating by itself in an environment.


Environment types
                  Chess with a clock   Chess without a clock   Taxi driving
Fully observable  Yes                  Yes                     No
Deterministic     Yes                  Yes                     No
Static            Semi                 Yes                     No
Discrete          Yes                  Yes                     No
Single agent      No                   No                      No

Intrinsically, the real world is partially observable, stochastic, dynamic, continuous, and multi-agent.

Agent types
Four types of agents in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Learning agents

The course covers key techniques for designing various types of intelligent agents


Simple reflex agents

Use if-then rules to define the mapping from percepts to actions

Behavior-based intelligence without reasoning (in robotics)
Lecture 3: Fuzzy rule-based control for decision making
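A condition-action rule table of this kind might be sketched as follows; the rules and percept fields (obstacle_ahead, dirt_detected) are hypothetical, not from the lecture.

```python
# Simple reflex agent: if-then rules map the current percept directly
# to an action, with no internal state and no reasoning.
# Rules are checked in order; the first matching rule fires.

RULES = [
    (lambda p: p["obstacle_ahead"], "turn_left"),
    (lambda p: p["dirt_detected"], "clean"),
    (lambda p: True, "move_forward"),      # default rule
]

def simple_reflex_agent(percept):
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"obstacle_ahead": False, "dirt_detected": True}))
# clean
```

Because the agent consults only the current percept, it cannot cope with situations that require remembering earlier observations, which motivates the model-based design below.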

Model-based reflex agents


World model

When the world state is not fully observable, use a model of the environment to estimate the current state given the sensor observations


Model-based state estimation


Consider a process monitoring problem where we want to estimate the state of a process given sensor measurements. The available model for the process is:

x(k) = F·x(k−1) + w(k−1)
y(k) = H·x(k) + v(k)

where x denotes the process state, y the sensor measurements, w the process noise, and v the measurement noise. Bayesian filtering methods such as Kalman filtering and particle filtering can be used to update and refine state estimates.
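A minimal scalar Kalman filter for this model might look as follows; the values of F, H and the noise variances Q and R are assumed, illustrative choices.

```python
# Scalar Kalman filter for the model x(k) = F x(k-1) + w(k-1),
# y(k) = H x(k) + v(k). Q and R are the assumed variances of the
# process noise w and measurement noise v. Illustrative values only.

def kalman_step(x_est, P, y, F=1.0, H=1.0, Q=0.01, R=0.25):
    # Predict the next state and its error variance
    x_pred = F * x_est
    P_pred = F * P * F + Q
    # Correct the prediction using the measurement y
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                              # initial guess and variance
for y in [1.1, 0.9, 1.05, 0.98]:             # noisy measurements of x ~ 1
    x, P = kalman_step(x, P, y)
print(x, P)   # estimate moves toward ~1, variance shrinks
```

Each step blends the model's prediction with the new measurement in proportion to their relative uncertainties, which is the recursive Bayesian update the slide refers to.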

Multi-sensor data fusion


One lecture in the course will discuss how agents can utilize multi-sensor data and other information to better estimate the hidden states of the environment and to acquire better situation awareness


Goal-based agents

Analyze and predict the resulting outcomes of possible actions, then choose the most promising action to satisfy the goal
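This predict-then-choose loop can be sketched with a toy world model; the state transitions and goal test below are hypothetical, not from the lecture.

```python
# Goal-based agent sketch: simulate each candidate action with a
# (toy) world model and pick one whose predicted outcome satisfies
# the goal. Model and goal are illustrative only.

def predict(state, action):
    """Toy world model: an action shifts the state by its value."""
    return state + action

def goal_based_agent(state, actions, goal_test):
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return None   # no single action reaches the goal

chosen = goal_based_agent(state=3, actions=[-1, 1, 2],
                          goal_test=lambda s: s == 5)
print(chosen)   # 2
```

Unlike a reflex agent, the decision here depends on an explicit goal and on predicted futures, so changing the goal changes the behavior without rewriting any rules.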

Goal-Based Agents
Lecture 5: Decision theory and analysis for helping selection of rational actions
Lecture 6: How to make decisions by exploiting previous experiences
Lecture 8: How to find a set of interesting, non-dominated solutions in a continuous space with multiple conflicting objectives


Learning agents
[Figure: learning agent architecture — the Critic provides feedback to the Learning element, which modifies the Decision making component; Sensors perceive the Environment and Actuators act upon it]
Learning element: modifies the agent function used in decision making
Decision making: the agent function that selects external actions
Critic: evaluates how well the agent is doing
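The interplay of these components can be sketched as a loop in which the critic's feedback drives the learning element to adjust the decision-making parameters; the weights, reward values, and learning rate below are illustrative assumptions.

```python
# Learning agent loop sketch: the critic scores each chosen action,
# and the learning element modifies the decision-making component
# (here: per-action weights). Illustrative only.

weights = {"left": 1.0, "right": 1.0}       # decision-making parameters

def decide():
    return max(weights, key=weights.get)    # greedy agent function

def critic(action):
    """Feedback: 'right' happens to be the better action here."""
    return 1.0 if action == "right" else -1.0

def learning_element(action, feedback, lr=0.1):
    weights[action] += lr * feedback        # modify the agent function

for _ in range(20):                          # perceive-act-learn loop
    a = decide()
    learning_element(a, critic(a))

print(decide())   # right
```

After a few iterations the negative feedback lowers the weight of the poor action and the agent settles on the better one, illustrating how the critic and learning element reshape the agent function over time.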

Learning Mechanisms in DVA406


Fuzzy adaptive control: on-line modification of fuzzy decision rules based on performance feedback (Lecture 4)
Other agent learning approaches like reinforcement learning are given in the Learning Systems course (CDT407).



Recommended Reference on Agents


Chapter 2: Intelligent agents, in: Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig, Prentice Hall, 2002.

