The Vision of Multiagent Systems
Contents

1 Introduction
1.1 Vision of MAS
1.2 Views of the Agent Systems
2 Understanding Agents
2.1 Agent Concepts
2.2 An Abstract Architecture for Agents
2.3 Utility Functions
1 Introduction

1.1 The Vision of Multiagent Systems
This figure of the Rosetta Stone is a master example of how information was preserved across time. It represents intelligence and interaction between humans in pursuit of one goal. The important question is whether we can build programs that handle information surviving as long as the Rosetta Stone has, carrying something more than static information only.
The Vision of Multiagent Systems. Trends in Computer Science II.
Delegation: how do we trust computers and give them more control over critical tasks?
Talking about intelligent computers as a goal raises the question of how we are going to trust that computers can take good decisions. Wooldridge proposes in his book to think of a critical task such as air-traffic control at an airport. The task is a very good candidate for automation (it is hard work for humans), but how do we trust that we can delegate the responsibility, with enough confidence in the safety of the decisions taken? (The aircraft are critical, and no accident can be allowed as a consequence of badly delegated decisions.)
Human delegation: how do computers represent our best interests while interacting with other humans or systems?
The final goal is that computers automate tasks for humans and help us in our daily activities. For that, computers must understand our world: how we react, interact, and create concepts.
The Vision of Multiagent Systems. The Emergence of a New Field.
Definitions
Agent: An agent is a computer system capable of independent action on behalf of its
user or owner.
Multiagent system: a number of agents interacting over a network, each acting on behalf of its owner's goals and motivations.
Agents Abilities
Cooperate
Coordinate
Negotiate
1.2 An Abstract View of an Agent System

2 Understanding Agents

2.1 Agent Concepts

An Agent Model
An agent is a computer system situated in an environment, and capable of autonomous action in this environment in order to meet its design objectives (Wooldridge and Jennings, 1995).
[Figure: the agent receives sensor input from the environment and produces action output that affects the environment.]
In the diagram we can see how an agent interacts with its environment and affects it. Sometimes, in complex systems, the agent has only partial control over the environment. Under this model, the agent has a set of actions with which to affect the environment and reach its own goal. These actions can be defined according to the possible situations the agent faces. The important point is that an agent can be seen as a control system with decision structures, and can also be compared to a software daemon working as an autonomous process.
Kinds of Agent Environments
Accessible versus inaccessible
The agent can or cannot obtain accurate information about the environment's state.
Deterministic versus non-deterministic
Each action of the agent does or does not have a single guaranteed effect.
Static versus dynamic
A static environment is changed only by the agent's actions. A dynamic environment has other processes changing it, as in our physical world.
Discrete versus continuous
If the environment has a finite number of actions and percepts, we say it is discrete; otherwise it is continuous.
Towards Intelligent Agents
Properties of an intelligent agent
It should be clear that an agent, as an autonomous process, is capable of taking its own decisions to reach the goals for which it was designed. Under this scope, an intelligent agent should have the properties listed below.
Reactivity: perceive and respond to the environment's conditions as a function of goals. A goal-directed behavior is some kind of plan or recipe to reach the goal of an agent, and this recipe should include a perception of the environment, how it changes, and how it creates special conditions for taking decisions related to the design objectives. A simple example is a thermostat agent reacting to environment changes: if its main goal is to keep a room cool, under certain temperature conditions it must act.
2.2 Agents Perception

[Figure: a run r through environment states e0, e1, e2, e3, ..., with the value u of each state plotted on a scale from 0 to 3.]
An agent is composed of data structures and functions: in the figure, the see function captures an observation of the environment, and the action function is the decision-making process:

    see : E → Per
    action : Per* → Ac
[Figure: left, an agent whose see and action functions couple it directly to the environment; right, a state-based agent in which a next function updates an internal State from each percept before action chooses an action.]
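The abstract see/action architecture can be sketched in code. The following is a minimal illustration, not from Wooldridge's text: the thermostat-style environment and all names (Environment, Agent, run) are assumptions made here.

```python
from typing import List

Percept = float   # here: the temperature the agent senses
Action = str      # "cooling_on" or "cooling_off"

class Environment:
    """A toy room whose temperature drifts upward unless cooled."""
    def __init__(self, temperature: float = 22.0):
        self.temperature = temperature

    def state(self) -> float:
        return self.temperature

    def apply(self, action: Action) -> None:
        # A dynamic environment: it changes on its own (drift)
        # as well as through the agent's actions (cooling).
        self.temperature += 0.5
        if action == "cooling_on":
            self.temperature -= 2.0

class Agent:
    """An agent as two functions: see (perception) and action (decision)."""
    def see(self, env_state: float) -> Percept:
        return env_state                     # see : E -> Per

    def action(self, percepts: List[Percept]) -> Action:
        # action : Per* -> Ac decides on the whole percept sequence;
        # this purely reactive rule only inspects the latest percept.
        return "cooling_on" if percepts[-1] > 24.0 else "cooling_off"

def run(agent: Agent, env: Environment, steps: int) -> List[Action]:
    """One run of the agent in the environment: perceive, decide, act."""
    percepts: List[Percept] = []
    actions: List[Action] = []
    for _ in range(steps):
        percepts.append(agent.see(env.state()))
        act = agent.action(percepts)
        actions.append(act)
        env.apply(act)
    return actions
```

Starting from a warm room, the agent first cools, then idles once the temperature drops below its threshold.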
2.3 Utility functions
Statement
We want to tell an agent what to do, without telling it how to do it.
Let us define a utility function that says how good an agent state is: u : E → R.
To avoid looking only at local states, let us define the function over whole runs: u : R → R.
The measure depends upon the kind of task the agent carries out.
Assume the utility function u has some upper bound: there exists k ∈ R such that for all r ∈ R we have u(r) ≤ k. Then we can talk about optimal agents.
Let P(r | Ag, Env) denote the probability that run r occurs when agent Ag is placed in environment Env, so that:

    Σ_{r ∈ R(Ag,Env)} P(r | Ag, Env) = 1

The optimal agent is then the one that maximizes expected utility:

    Ag_opt = arg max_{Ag ∈ AG} Σ_{r ∈ R(Ag,Env)} u(r) P(r | Ag, Env)

The bounded optimal agent restricts the maximization to AG_m, the agents that can be implemented on a given machine m:

    Ag_opt = arg max_{Ag ∈ AG_m} Σ_{r ∈ R(Ag,Env)} u(r) P(r | Ag, Env)
The importance of this last equation is that we are no longer considering all the agents able to act in the environment, but only the subset of agents that can also fit into the capacity of a machine m with specified processor and memory. The utility concept serves mainly to understand agents; in practice it is more common to talk about the goals an agent achieves than about its utility.
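These definitions can be illustrated with a toy sketch. The agents, runs, and probabilities below are invented for illustration only: the optimal agent is the arg max of expected utility over its run distribution.

```python
# Each agent induces a probability distribution over runs: {run: probability}.
# A run is modelled simply as a tuple of state labels.
runs_by_agent = {
    "Ag1": {("e0", "e1"): 0.5, ("e0", "e2"): 0.5},
    "Ag2": {("e0", "e2"): 1.0},
}

def u(run):
    """Utility over whole runs (u : R -> R): here, 1 point per visit to e2."""
    return sum(1.0 for state in run if state == "e2")

def expected_utility(agent):
    # Sum over r in R(Ag, Env) of u(r) * P(r | Ag, Env).
    return sum(u(r) * p for r, p in runs_by_agent[agent].items())

def optimal_agent(agents):
    # Ag_opt = arg max over AG of expected utility.
    return max(agents, key=expected_utility)
```

Here Ag2 reaches e2 with certainty, while Ag1 only does so half of the time, so Ag2 is optimal.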
Definition 3. A predicate task specification is one where the utility function acts as a predicate over runs. The function Ψ(r), where r ∈ R, will denote a predicate specification.
Formally, the utility function u : R → R is a predicate if the range of u is the set {0, 1}; that is, u assigns each run either 1 (true) or 0 (false).
A task environment is defined as a pair ⟨Env, Ψ⟩, where Env is an environment and Ψ : R → {0, 1} is a predicate over runs.
Let TE denote the set of all task environments. A task environment specifies:
The properties of the system the agent will inhabit.
The criteria by which the agent will be judged to have either succeeded or failed at its task.
Given a task environment ⟨Env, Ψ⟩, we write R_Ψ(Ag, Env) to denote the set of runs of agent Ag in environment Env that satisfy Ψ. Formally:
Definition 4. R_Ψ(Ag, Env) = {r | r ∈ R(Ag, Env) and Ψ(r)}
The probability that Ψ is satisfied when Ag is placed in Env is then:

    P(Ψ | Ag, Env) = Σ_{r ∈ R_Ψ(Ag,Env)} P(r | Ag, Env)

where P(r | Ag, Env) is the probability that run r occurs if Ag is placed in Env.
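As a small illustration (the run distribution and predicate below are invented), P(Ψ | Ag, Env) is just the probability mass of the runs that satisfy Ψ:

```python
# Toy distribution P(r | Ag, Env) over runs, each run a tuple of states.
run_probs = {("e0", "g"): 0.7, ("e0", "e1"): 0.3}

def psi(run):
    """Predicate over runs: true iff the (assumed) goal state g occurs."""
    return "g" in run

def p_success(run_probs, psi):
    # P(Psi | Ag, Env) = sum of P(r | Ag, Env) over r in R_Psi(Ag, Env).
    return sum(p for r, p in run_probs.items() if psi(r))
```

With this distribution the agent satisfies Ψ with probability 0.7.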
The two most common kinds of tasks are:
Achievement tasks. Those of the form "achieve state of affairs φ".
Maintenance tasks. Those of the form "maintain state of affairs ψ".
An achievement task is specified by a number of goal states.
Task achievement is one of the most studied problems in AI.
Definition 8. The task environment ⟨Env, Ψ⟩ specifies an achievement task if there is some set G ⊆ E such that, for all r ∈ R(Ag, Env), Ψ(r) is true if and only if there exists e ∈ G with e occurring in r.
G is the set of goal states of the achievement task environment; ⟨Env, G⟩ denotes the achievement task environment with goals G over environment Env.
A task environment can be seen as a game in which the agent plays against the environment, a so-called game against nature.
Definition 9. The task environment ⟨Env, Ψ⟩ specifies a maintenance task if we can identify some subset B of environment states such that Ψ(r) is false if any member of B occurs in r, and true otherwise. Formally, there exists B ⊆ E such that Ψ(r) holds if and only if no e ∈ B occurs in r, for all r ∈ R(Ag, Env).
B is referred to as the failure set. A maintenance task environment is defined by ⟨Env, B⟩.
A maintenance task can be seen as a game where the agent tries to avoid every state in B, while the environment, as opponent, tries to force the agent into the B states.
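Both kinds of task predicate can be sketched directly over runs modelled as sequences of states. This is a toy illustration; the helper names are assumptions, not from the text.

```python
def achievement_psi(goal_states):
    """Psi(r) is true iff some goal state in G occurs in run r."""
    def psi(run):
        return int(any(e in goal_states for e in run))
    return psi

def maintenance_psi(failure_states):
    """Psi(r) is true iff no state of the failure set B occurs in run r."""
    def psi(run):
        return int(all(e not in failure_states for e in run))
    return psi

def successful_runs(runs, psi):
    # R_Psi(Ag, Env) = { r in R(Ag, Env) | Psi(r) }
    return [r for r in runs if psi(r)]
```

An achievement predicate succeeds as soon as a goal state is reached; a maintenance predicate succeeds only if the run never touches the failure set.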
Synthesizing Agents
How do we obtain an agent that succeeds in a given task environment?
1. We can develop an algorithm to synthesize such agents from task environment
specifications.
2. We can develop an algorithm that will directly execute agent specifications in
order to produce appropriate behavior.
Agent Synthesis
Agent synthesis is automatic programming: the goal is a program that takes a task environment as input and automatically generates an agent that succeeds in that environment.
Definition 10. SYN : TE → (AG ∪ {⊥})
Think of ⊥ as null in Java: SYN can output either an agent or ⊥.
The synthesis algorithm can be:
sound: whenever it returns an agent, that agent succeeds in the task environment passed as input. Formally, SYN(⟨Env, Ψ⟩) = Ag implies R(Ag, Env) = R_Ψ(Ag, Env).
complete: it is guaranteed to return an agent whenever there exists an agent that will succeed in the task environment given as input. Formally, ∃Ag ∈ AG such that R(Ag, Env) = R_Ψ(Ag, Env) implies SYN(⟨Env, Ψ⟩) ≠ ⊥.
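Under the assumption of a small, finite agent set whose runs can be enumerated, a sound synthesis procedure can be sketched as brute-force search. This is an illustrative sketch, not an algorithm from the text.

```python
def syn(candidate_agents, runs_of, psi):
    """SYN : TE -> (AG ∪ {None}); None plays the role of ⊥.

    candidate_agents: a finite set AG of agents (assumed enumerable).
    runs_of(ag): the runs R(ag, Env) that agent ag generates in Env.
    psi(r): the predicate Psi over runs.
    """
    for ag in candidate_agents:
        runs = runs_of(ag)
        # Soundness: return ag only if every run of ag satisfies Psi,
        # i.e. R(Ag, Env) == R_Psi(Ag, Env).
        if runs and all(psi(r) for r in runs):
            return ag
    # Completeness (over the enumerated set): if no candidate succeeds,
    # output ⊥ rather than an arbitrary agent.
    return None
```

The exhaustive check of every run makes the procedure sound, and trying every candidate makes it complete relative to the enumerated agent set.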