Lecture 1 - Intelligent Agent
Outline
What is the concept of an agent?
Rationality of agent behavior
Task environments for agents
Types of intelligent agents
2013-01-23
What is Agent
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators to change the state of the environment
Agent Examples
Human agent
Sensors: eyes, ears, and other sensory organs
Actuators: hands, legs, mouth, and other body parts
Robotic agent
Sensors: cameras, sonar, and infrared range finders
Actuators: robotic arms, various motors
Rational agents
A rational agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The right action is the one that will cause the agent to be most successful
Rational Choice
[Figure: decision tree — a decision node branches into actions A1 and A2; A1 leads to outcomes O11 and O12 with probabilities p11 and p12, and A2 leads to outcomes O21 (probability p21) and O22]
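The decision tree above compares actions by weighting each outcome's utility with its probability. A minimal sketch of this expected-utility choice, with illustrative probabilities and utilities (the numbers are assumptions, not from the lecture):

```python
# Choose the rational action by expected utility: each action A_i leads
# to outcome O_ij with probability p_ij; utilities here are made up.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "A1": [(0.7, 10), (0.3, 4)],   # p11*U(O11) + p12*U(O12)
    "A2": [(0.5, 8),  (0.5, 6)],   # p21*U(O21) + p22*U(O22)
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # A1 (expected utility 8.2 vs 7.0)
```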
Environment types
Static (vs. dynamic): the environment is unchanged while an agent is deliberating. (The environment is semidynamic if it does not change with the passage of time but the agent's performance score does.)
Discrete (vs. continuous): a limited number of distinct, clearly defined percepts and actions.
Single agent (vs. multiagent): an agent operating by itself in an environment.
Environment types
                        Fully observable   Deterministic   Static   Discrete   Single agent
Chess with a clock      Yes                Yes             Semi     Yes        No
Chess without a clock   Yes                Yes             Yes      Yes        No
Taxi driving            No                 No              No       No         No
Intrinsically, the real world is partially observable, stochastic, dynamic, continuous, and multi-agent
Agent types
Four types of agents, in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Learning agents
The course covers key techniques for designing various types of intelligent agents
Simple reflex agents
Behavior-based intelligence without reasoning (in robotics). Lecture 3: fuzzy rule-based control for decision making
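A simple reflex agent maps the current percept directly to an action through condition-action rules, with no internal state. A minimal sketch using the classic vacuum-world example (the percept format and action names are illustrative assumptions):

```python
# Simple reflex agent: the action depends only on the current percept,
# via condition-action rules. Hypothetical vacuum-world illustration.

def vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"                   # rule: dirty square -> clean it
    return "Right" if location == "A" else "Left"  # otherwise move on

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
```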
Model-based reflex agents
When the world state is not fully observable, use a model of the environment to estimate the current state from the sensor observations
y(k) = H x(k) + v(k)

where x denotes the process state, y the sensor measurements, H the observation model, and v the measurement noise. Bayesian filtering methods such as Kalman filtering and particle filtering are used to update and refine the state estimate.
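One such update can be sketched for the scalar case of the measurement model above. This is a minimal one-dimensional Kalman filter step; the process model x(k+1) = F x(k) + w(k) and all noise variances (q, r) are illustrative assumptions:

```python
# Scalar Kalman-filter step for the measurement model y(k) = H x(k) + v(k).
# F, H, q (process-noise variance), r (measurement-noise variance) are
# made-up defaults for illustration.

def kalman_step(x, P, y, F=1.0, q=0.1, H=1.0, r=0.1):
    # Predict: propagate the state estimate x and its variance P.
    x_pred = F * x
    P_pred = F * P * F + q
    # Update: correct the prediction with the measurement y.
    K = P_pred * H / (H * P_pred * H + r)   # Kalman gain
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

# One step from prior x = 0 (variance 1) with measurement y = 1:
x, P = kalman_step(0.0, 1.0, 1.0)
```

After the update the estimate moves most of the way toward the measurement (the prior is much less certain than the sensor here), and the variance shrinks accordingly.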
Goal-based agents
Analyze and predict the outcomes of possible actions, then choose the most promising action for satisfying the goal
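This predict-then-choose behavior can be sketched as a one-step lookahead: simulate each action with a model of the environment and keep the first action whose predicted state satisfies the goal. The grid example and all names below are illustrative assumptions:

```python
# Goal-based agent via one-step lookahead: the model predicts the
# resulting state of each action; the goal test selects among them.

def choose_action(state, actions, model, goal_test):
    for a in actions:
        if goal_test(model(state, a)):  # predict outcome, check goal
            return a
    return None  # no single action reaches the goal

# Illustrative 1-D grid: from position 2, move toward position 3.
model = lambda s, a: s + (1 if a == "right" else -1)
print(choose_action(2, ["left", "right"], model, lambda s: s == 3))  # right
```

Deeper goals would replace the single-step loop with a search over action sequences, which is the usual formulation for goal-based agents.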
Lecture 5: decision theory and analysis to help select rational actions
Lecture 6: how to make decisions by exploiting previous experience
Lecture 8: how to find a set of interesting, nondominated solutions in a continuous space with multiple conflicting objectives
Learning agents
[Figure: learning-agent architecture — sensors pass percepts from the environment to the critic and the decision-making element; the critic sends feedback to the learning element, which modifies decision making; actions go out through the actuators]
Learning element: modifies the agent functions used in decision making
Decision making: the agent function that selects external actions
Critic: evaluates how well the agent is doing