Module-2 Intelligent Agents


Faculty of Engineering & Technology (Co-Edu.)
B.Tech.
Department of Artificial Intelligence & Data Science

Class Notes
ESC
Introduction to AI & ML (22ESC147)

Module-II
INTELLIGENT AGENTS

Prepared by:
Asst. Prof. PRASHANT MULGE
Dept. of AI & DS, Sharnbasva University Kalaburgi
2023-24
CONTENTS

INTELLIGENT AGENTS

Rational Agents, Mapping from Sequences to Actions, Properties of Environments,
Structure of Intelligent Agents, Types of Agents: Simple Reflex Agents, Goal-Based
Agents, Utility-Based Agents.

What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon
that environment through actuators. An agent runs in a cycle of perceiving, thinking, and
acting. An agent can be:
❖ Human agent: A human agent has eyes, ears, and other organs that work as sensors,
and hands, legs, and the vocal tract that work as actuators.
❖ Robotic agent: A robotic agent can have cameras and infrared range finders as
sensors, and various motors as actuators.
❖ Software agent: A software agent can take keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.

Hence, the world around us is full of agents, such as thermostats, cell phones, and cameras;
even we ourselves are agents.

Sensor: A sensor is a device that detects changes in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of a machine that convert energy into motion.
Actuators are responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.

Effectors: Effectors are the devices that affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screens.


Intelligent Agents:

An intelligent agent is an autonomous entity that acts upon an environment using
sensors and actuators to achieve goals. An intelligent agent may learn from the environment
to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the four main rules for an AI agent:


❖ Rule 1: An AI agent must have the ability to perceive the environment.
❖ Rule 2: The observation must be used to make decisions.
❖ Rule 3: Decision should result in an action.
❖ Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:

A rational agent is any piece of software, hardware, or a combination of the two that
can interact with the environment through actuators after perceiving the environment with
sensors.

❖ A rational agent is an agent that has clear preferences, models uncertainty, and acts in
a way that maximizes its performance measure over all possible actions.
❖ A rational agent is said to do the right thing. AI is about creating rational agents that
use game theory and decision theory for various real-world scenarios.
❖ For an AI agent, rational action is most important: in reinforcement learning, the
agent gets a positive reward for each best possible action and a negative reward for
each wrong action.

Rationality:

The rationality of an agent is measured by its performance measure. Rationality can be judged
on the basis of the following points:
❖ The performance measure, which defines the success criterion.
❖ The agent's prior knowledge of its environment.
❖ The best possible actions that the agent can perform.
❖ The sequence of percepts.


Vacuum Cleaner as a Rational Agent:

For example, consider a vacuum cleaner as a rational agent. Its environment is
the floor it is trying to clean. It has sensors, such as cameras or dirt sensors, that sense the
environment. It has brushes and suction pumps as actuators that take action. A percept is the
agent's perceptual input at any given point in time. The action that the agent takes on the
basis of the perceptual input is defined by the agent function.
Hence, before an agent is put into the environment, a percept sequence and the
corresponding actions are fed into the agent. This allows it to take action on the basis of the
inputs. An example is the following table:

Percept Sequence | Action
-----------------|---------------
Area1, Dirty     | Clean
Area1, Clean     | Move to Area2
Area2, Clean     | Move to Area1
Area2, Dirty     | Clean


Based on the input (percept), the vacuum cleaner would either keep moving between Area1
and Area2 or perform a clean operation. This is a simplistic example, but more complexity
could be built in with environmental factors.
For example, depending on the amount of dirt, the cleaning could be a power clean or a
regular clean. This would in turn require a sensor that could measure the amount of dirt, and
so on; a sketch of this appears below.
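A minimal Python sketch of this vacuum agent, combining the percept table with the
dirt-amount refinement just described (the threshold value and action names are illustrative
assumptions, not part of the original notes):

DIRT_THRESHOLD = 5  # amount of dirt above which a power clean is used (assumed)

def vacuum_agent(location, dirt_amount):
    """Map the current percept (location, dirt amount) to an action."""
    if dirt_amount > DIRT_THRESHOLD:
        return "Power clean"
    if dirt_amount > 0:
        return "Clean"
    # Nothing to clean here: move to the other area, as in the table above.
    return "Move to Area2" if location == "Area1" else "Move to Area1"

print(vacuum_agent("Area1", 7))  # -> Power clean
print(vacuum_agent("Area2", 0))  # -> Move to Area1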
This percept sequence is not only fed into the agent before it starts; it can also be
learned as the agent encounters new percepts. The agent's initial configuration could reflect
some prior knowledge of the environment, but as the agent gains experience this may be
modified and augmented. This is achieved through reinforcement learning or other learning
techniques.
The idea is that the agents best suited for the AI world are the ones that have immense
computing power at their disposal and make non-trivial decisions: they need to learn, form
perceptions and correlations, and then act rationally as intelligent agents.
Rational Behavior
The rational agent defined above would clean the floor, but it would needlessly oscillate
between the two areas, so it is not the most performant agent. Perhaps, after a few checking
cycles, if both Area1 and Area2 are clean, it should go to sleep for some time. The sleep time
could increase exponentially if there is again no dirt the next time.
So the idea is that we define a performance measure that defines the criteria of success.
Success also has costs (penalties) associated with it. For example:

Action                          | Points
--------------------------------|-------
Moving from one area to another | -5
Suction noise                   | -2
Cleaning                        | 20
Others                          | X


Now, for the agent to perform effectively, it would be guided by the above penalty scores. For
example, if it moves recklessly between the areas, it loses 5 points every time, so it has to be
prudent about its movement. Whenever it cleans, apart from gaining 20 points, it loses 2 points,
so it has to make sure that it cleans only when the dirt is beyond a defined threshold. Similarly,
there could be other penalty points associated with other actions.
The performance measure itself has to be well defined. For example, in this case it might
be the average cleanliness of the areas over time. Hence, the agent would try to keep the areas
clean with the minimum penalties, as in the sketch below.
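A minimal Python sketch of this penalty-based scoring (the point values mirror the table
above; the action log passed in is an illustrative run, and bundling the suction-noise penalty
with each clean is an assumption):

POINTS = {"move": -5, "suction_noise": -2, "clean": 20}

def score(actions):
    """Total points earned by the vacuum agent over a sequence of actions."""
    total = 0
    for action in actions:
        if action == "clean":
            # Cleaning earns 20 points but also incurs the suction-noise penalty.
            total += POINTS["clean"] + POINTS["suction_noise"]
        elif action == "move":
            total += POINTS["move"]
    return total

print(score(["move", "clean", "move", "clean"]))  # -> 26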
Hence, in essence, rational behavior depends on:
❖ The performance measure, which defines the criteria of success.
❖ The agent's prior knowledge of the environment.
❖ The actions that the agent can perform.
❖ The agent's percept sequence realized and learned to date.

Mapping from Sequences to Actions

Ideal mappings describe ideal agents, since they specify the sequence of actions that
maximizes the agent's performance measure. A mapping from percept sequences to actions
specifies the action the agent takes in response to each percept sequence, as in the sketch
below.
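A minimal Python sketch of an agent function as an explicit mapping from percept sequences
to actions (the sequences and actions are illustrative assumptions; a true ideal mapping would
cover every possible sequence):

MAPPING = {
    ("Dirty",): "Clean",
    ("Dirty", "Clean"): "Move",
    ("Dirty", "Clean", "Clean"): "Sleep",
}

percepts = []  # the percept sequence realized to date

def agent(percept):
    """Record the new percept and look up the whole sequence so far."""
    percepts.append(percept)
    return MAPPING.get(tuple(percepts), "NoOp")

print(agent("Dirty"))  # -> Clean
print(agent("Clean"))  # -> Move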
• Autonomy
"A system is autonomous to the extent that its behaviour is determined by its own
experience." If actions depend entirely on built-in knowledge without considering percepts,
the agent lacks autonomy. Autonomy is achieved by giving the agent built-in
knowledge together with the ability to learn, just as in nature animals have instincts but
also learn from the environment.
• A truly autonomous intelligent agent should be able to operate successfully in a wide
variety of environments, given sufficient time to adapt.
• Reactivity
In order to define reactivity, we shall first define the notion of a logical agent and consider
what knowledge-based agents are.
Concept of a Logical Agent:
"An agent that can form representations of the world, use a process of inference to derive
new representations of the world, and use these new representations to decide what to do."


Agent Environment in AI

An environment is everything in the world that surrounds the agent but is not a
part of the agent itself. An environment can be described as the situation in which an agent is
present. The environment is where the agent lives and operates, and it provides the agent with
something to sense and act upon.

Properties of Environments (Types of Environments in AI)

1. Fully observable vs Partially observable (also called Accessible vs Inaccessible)
2. Deterministic vs Stochastic
3. Competitive vs Collaborative
4. Single-agent vs Multi-agent
5. Static vs Dynamic
6. Discrete vs Continuous
7. Episodic vs Sequential
8. Known vs Unknown

1. Fully observable vs Partially observable

An environment can be classified as fully observable or partially observable, depending on
the extent to which the agent has access to information about the current state of the
environment.

• A fully observable environment is one in which the agent has complete information
about the current state of the environment. The agent has direct access to all
environmental features that are necessary for making decisions. Examples of fully
observable environments include board games like chess or checkers.
• A partially observable environment is one in which the agent does not have complete
information about the current state of the environment. The agent can only observe a
subset of the environment, and some aspects of the environment may be hidden
or uncertain. Examples of partially observable environments include driving a car in
traffic.


2. Deterministic vs Stochastic

An environment in artificial intelligence can be classified as deterministic or stochastic,
depending on the level of predictability of the outcomes of the agent's actions.

• A deterministic environment is one in which the outcome of an action is completely
predictable and can be precisely determined. The state of the environment completely
determines the result of an agent's action. In a deterministic environment, the agent's
actions have a one-to-one correspondence with the resulting outcomes. Examples of
deterministic environments include simple mathematical equations, where the outcome
of each operation is precisely defined (see the sketch after this list).
• A stochastic environment is one in which the outcome of an action is uncertain and
involves probability. The state of the environment only partially determines the result
of an agent's action, and there is a degree of randomness or unpredictability in the
outcome. Examples of stochastic environments include games of chance like poker or
roulette, where the outcome of each action is influenced by random factors like the
shuffle of cards or the spin of a wheel.
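A minimal Python sketch contrasting the two kinds of transition (the numeric states, the
action encoding, and the 80% success probability are illustrative assumptions):

import random

def deterministic_step(state, action):
    # The (state, action) pair completely determines the outcome.
    return state + action

def stochastic_step(state, action):
    # The outcome is only partially determined: the action succeeds with
    # probability 0.8 and otherwise leaves the state unchanged.
    return state + action if random.random() < 0.8 else state

print(deterministic_step(0, 1))  # always 1
print(stochastic_step(0, 1))     # usually 1, sometimes 0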

3. Competitive vs Collaborative

An environment can also be classified as competitive or collaborative, depending on whether
the agents in it are competing against each other or working together to achieve a common
goal.

• A competitive environment is one in which multiple agents are trying to achieve
conflicting goals. Each agent's success is directly tied to the failure of others, and the
agents must compete against each other to achieve their objectives. Examples of
competitive environments include games like chess.
• A collaborative environment is one in which multiple agents are working together to
achieve a common goal. The success of each agent is directly tied to the success of the
group as a whole, and the agents must collaborate and coordinate their actions to
achieve their objectives. Examples of collaborative environments include tasks like
search and rescue.


4. Single-agent vs Multi-agent

An environment in Artificial Intelligence can be classified as a single-agent or multi-agent
environment, depending on the number of agents interacting within it.

• A single-agent environment is one in which a single agent interacts with the
environment to achieve its goals. Examples of single-agent environments include
puzzles and mazes. The agent must use search algorithms or planning techniques to
find a path to its goal state.
• A multi-agent environment is one in which multiple agents interact with each other and
the environment to achieve their individual or collective goals. Examples of multi-agent
environments include multiplayer games and traffic simulations. The agents must use
game theory or multi-agent reinforcement learning techniques to optimize their
behavior.

5. Static vs Dynamic

An environment in AI can be classified as static or dynamic, depending on whether it
changes over time.

• A static environment is one in which the environment does not change over time. The
state of the environment remains constant, and the agent's actions do not affect the
environment. Examples of static environments include mathematical problems or logic
puzzles. The agent can use techniques like search algorithms or decision trees to
optimize its behavior.
• A dynamic environment is one in which the environment changes over time. The state
of the environment evolves based on the actions of the agent and other factors, and the
agent's actions can affect the future state of the environment. Examples of dynamic
environments include video games or robotics applications. The agent must
use techniques like planning or reinforcement learning to optimize its behavior in
response to the changing environment.


6. Discrete vs Continuous

An environment in Artificial Intelligence can be classified as discrete or continuous,
depending on the nature of the state and action spaces.

• The state space refers to the set of all possible states that the environment can be in. For
example, in a game of chess, the state space would include all possible board
configurations. In a robotic control task, the state space may include information about
the position and velocity of the robot and its environment.
• The action space refers to the set of all possible actions that the agent can take in each
state of the environment. For example, in a game of chess, the action space would
include all possible moves that the player can make. In a robotic control task, the action
space may include commands for controlling the speed and direction of the robot.
• A discrete environment is one in which the state and action spaces are finite and
discrete. Examples of discrete environments include board games like chess or
checkers. The agent's decision-making process can be based on techniques like search
algorithms or decision trees.
• In contrast, a continuous environment is one in which the state and action spaces are
continuous and infinite. Examples of continuous environments include robotics or
control systems. In a continuous environment, the agent's decision-making process
must take into account the continuous nature of the state and action spaces. The agent
must use techniques like reinforcement learning or optimization to learn and optimize
its behavior.

7. Episodic vs Sequential

In AI, an environment can be classified as episodic or sequential, depending on the nature of
the task and the relationship between the agent's actions and the environment.

• An episodic environment is one in which the agent's actions do not affect future
episodes. The goal of the agent is to maximize the immediate reward obtained during
each episode. Examples of episodic environments include image classification or a
part-picking robot, where each decision is independent of the previous ones. The agent
can use techniques like Monte Carlo methods or Q-learning to learn the optimal policy
for each episode.


• In contrast, a sequential environment is one in which the agent's actions affect the future
states of the environment. The goal of the agent is to maximize the cumulative reward
obtained over multiple interactions. Examples of sequential environments include
robotics applications or video games. The agent must use techniques like dynamic
programming or reinforcement learning to learn the optimal policy over multiple
interactions.

8. Known vs Unknown

An environment in Artificial Intelligence can be classified as known or unknown,
depending on the agent's level of knowledge about the environment.

• A known environment is one in which the agent has complete knowledge of the
environment's rules, state transitions, and reward structure. The agent knows exactly
what actions are available to it, and the outcome of each action is known with certainty.
Examples of known environments include chess or tic-tac-toe games. In a known
environment, the agent can use techniques like search algorithms or decision trees to
optimize its behavior.
• In contrast, an unknown environment is one in which the agent has limited or no
knowledge about the environment's rules, state transitions, and reward structure. The
agent may not know what actions are available to it, or the outcome of each action may
be uncertain. Examples of unknown environments include exploration tasks or real-
world applications. In an unknown environment, the agent must use techniques like
reinforcement learning or exploration-exploitation trade-offs to optimize its behavior.
• It's important to note that Known vs Unknown and Fully observable vs partially
observable environments are independent of each other. For example, an environment
could be known and partially observable, or unknown and fully observable. The choice
of which characterization to use depends on the specific problem being addressed and
the capabilities of the agent.


Conclusion
Here are the main points concluding the types of environments in AI:

• Fully observable vs Partially observable environment: The agent can either
observe the entire state of the environment or only a portion of it.
• Deterministic vs Stochastic environment: The outcome of an action is either certain
or uncertain due to randomness.
• Competitive vs Collaborative environment: The agent interacts with other agents that
may be competing or collaborating.
• Single-agent vs Multi-agent environment: The agent interacts with either a single
entity or multiple entities.
• Static vs Dynamic environment: The environment can either remain constant or
change over time.
• Discrete vs Continuous environment: The state and action spaces can either be finite
and well-defined or infinite and continuous.
• Episodic vs Sequential environment: The agent's interaction with the environment
can either be divided into distinct episodes or a continuous sequence of actions.
• Known vs Unknown environment: The agent can either have complete knowledge
about the environment or have limited or no knowledge about the environment's rules
and outcomes.
• When developing AI systems, these environments must be taken into account because
they affect how decisions are made and how behavior can be optimized.


Structure of Intelligent Agents

An agent's structure can be viewed as:

❖ Agent = Architecture + Agent Program
❖ Architecture = the machinery that the agent executes on.
❖ Agent Program = an implementation of the agent function (see the sketch below).
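A minimal Python sketch of this split (the sensor stub and the Dirty/Clean percepts are
illustrative assumptions): the agent program maps a percept to an action, while the
architecture is the machinery that senses, runs the program, and acts.

def agent_program(percept):
    """An implementation of the agent function."""
    return "Clean" if percept == "Dirty" else "Move"

def architecture(program, steps=4):
    """The machinery: sense, run the agent program, act, repeat."""
    for t in range(steps):
        percept = "Dirty" if t % 2 == 0 else "Clean"  # stub sensor reading
        action = program(percept)
        print(f"percept={percept} -> action={action}")  # stub actuator

architecture(agent_program)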

Agent Terminology
❖ Performance Measure of Agent − the criteria that determine how
successful an agent is.
❖ Behavior of Agent − the action that the agent performs after any given
sequence of percepts.
❖ Percept − the agent's perceptual input at a given instant.
❖ Percept Sequence − the history of all that the agent has perceived to
date.
❖ Agent Function − a map from the percept sequence to an action.


Many AI agents use the PEAS model in their structure. PEAS is an acronym for Performance
Measure, Environment, Actuators, and Sensors.

Performance Measure – a measure that defines the success or failure of an agent. The
performance of each agent varies with respect to its percepts.

Environment – the surroundings from which the agent learns and to which it reacts, with the
help of sensors and actuators respectively.

Actuators – the parts of the agent through which it executes its output.

Sensors – the parts of the agent that help it collect information about the environment.
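A minimal Python sketch of a PEAS description as a simple record, filled in along the lines
of the vacuum-cleaner row in the table below (the field values are illustrative):

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

vacuum_cleaner = PEAS(
    performance_measure=["cleanliness", "security", "battery"],
    environment=["room", "table", "carpet", "floors"],
    actuators=["wheels", "brushes"],
    sensors=["camera", "dirt sensor"],
)
print(vacuum_cleaner)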


Agent              | Performance measure                         | Environment                                    | Actuators                                   | Sensors
-------------------|---------------------------------------------|------------------------------------------------|---------------------------------------------|----------------------------
Vacuum cleaner     | Cleanliness, security, battery              | Room, table, carpet, floors                    | Wheels, brushes                             | Camera, dirt sensors
Chatbot system     | Helpful responses, accurate responses       | Messaging platform, internet, website          | Sender mechanism, typer                     | NLP algorithms
Autonomous vehicle | Efficient navigation, safety, time, comfort | Roads, traffic, pedestrians, road signs        | Brake, accelerator, steering, horn          | Cameras, GPS, speedometer
Hospital           | Patient's health, cost                      | Doctors, patients, nurses, staff               | Prescriptions, diagnoses, tests, treatments | Symptoms
Part-picking robot | Percentage of parts in correct bins         | Conveyor belt with parts; bins                 | Jointed arms and hand                       | Camera, joint angle sensors
Subject tutoring   | Maximize scores, improvement in students    | Classroom, desk, chair, board, staff, students | Smart displays, corrections                 | Eyes, ears, notebooks


Types of Agents

Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All of these agents can improve their performance and generate better actions over
time. They are:
➢ Simple reflex agents
➢ Model-based reflex agents
➢ Goal-based agents
➢ Utility-based agents
➢ Learning agents


Simple Reflex Agents:

➢ Simple reflex agents are the simplest agents. These agents take decisions on the basis of
the current percepts and ignore the rest of the percept history.
➢ These agents only succeed in fully observable environments.
➢ A simple reflex agent does not consider any part of the percept history during its decision
and action process.
➢ The simple reflex agent works on condition-action rules, which map the current state
directly to an action. An example is a room-cleaner agent that works only if there is dirt
in the room.
➢ Problems with the simple reflex agent design approach:
❖ They have very limited intelligence.
❖ They have no knowledge of non-perceptual parts of the current state.
❖ The condition-action rule tables are mostly too big to generate and to store.
❖ They are not adaptive to changes in the environment.


Advantages of Simple Reflex Agents

• Easy to design and implement with minimal computational resources.
• Provide real-time responses to environmental changes.
• Highly reliable when sensors and rules are well designed.

Limitations of Simple Reflex Agents
• Prone to errors if input sensors or rules are faulty.
• Lack memory or state, limiting their applicability to specific tasks.
• Unable to handle partial observability or adapt to new situations.

Applications of Simple Reflex Agents
• Games
Simple reflex agents are typically used in games where all the information about the
current game state is directly observable, such as Chess, Checkers, Tic-Tac-Toe, and
Connect Four.
• Room cleaner
A room-cleaner agent works only if there is dirt in the room.
• Mars lander
A Mars lander programmed to collect a rock will collect one wherever it finds it, since
it reacts only to the current percept.
• Thermostat
A thermostat is a simple reflex agent that regulates room temperature based on a
predefined setpoint, as in the sketch below.
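A minimal Python sketch of the thermostat's condition-action rule (the setpoint and the
0.5-degree hysteresis band are illustrative assumptions):

SETPOINT = 22.0  # desired room temperature in degrees Celsius (assumed)

def thermostat(temperature):
    """React only to the current percept; no memory of past readings."""
    if temperature < SETPOINT - 0.5:
        return "heat on"
    if temperature > SETPOINT + 0.5:
        return "heat off"
    return "no-op"

print(thermostat(20.0))  # -> heat on
print(thermostat(23.0))  # -> heat off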

Goal-based agents
❖ Knowledge of the current state of the environment is not always sufficient for an agent
to decide what to do.
❖ The agent needs to know its goal, which describes desirable situations.
❖ Goal-based agents expand the capabilities of model-based agents by adding
"goal" information.
❖ They choose an action such that they can achieve the goal.
❖ These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive (see the sketch after this list).
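A minimal Python sketch of such searching: breadth-first search over a small illustrative
state graph, returning a sequence of states that reaches the goal (the graph and state names
are assumptions):

from collections import deque

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs_plan(start, goal):
    """Return a state sequence from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs_plan("A", "D"))  # -> ['A', 'B', 'D']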


Advantages of Goal-based Agents

• Simple to implement and understand.
• Efficient for achieving specific goals.
• Well suited for structured environments and various applications like robotics and
game AI.

Disadvantages of Goal-based Agents
• Limited to a specific goal and lack adaptability to changing environments.
• Ineffective for complex tasks with numerous variables.
• Require significant domain knowledge to define goals.

Applications of Goal-based Agents

Goal-based agents can be used in a variety of applications, including:
robotics, computer vision, natural language processing, content generation, game AI, and
intelligent systems.


Utility-based agents
❖ These agents are similar to goal-based agents, but they add a utility measurement that
provides a measure of success at a given state.
❖ A utility-based agent acts based not only on goals but also on the best way to achieve the
goal.
❖ The utility-based agent is useful when there are multiple possible alternatives and the agent
has to choose the best action among them.
❖ The utility function maps each state to a real number to check how efficiently each action
achieves the goals, as in the sketch below.
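A minimal Python sketch of this idea (the states, actions, and utility values are illustrative
assumptions): each state maps to a real number, and the agent picks the action whose
outcome has the highest utility.

UTILITY = {"short_route": 0.9, "scenic_route": 0.6, "blocked_route": 0.1}

OUTCOME = {  # action -> resulting state (assumed deterministic for brevity)
    "take_highway": "short_route",
    "take_backroad": "scenic_route",
    "take_closed_road": "blocked_route",
}

def choose_action(actions):
    """Pick the action leading to the state with maximum utility."""
    return max(actions, key=lambda a: UTILITY[OUTCOME[a]])

print(choose_action(list(OUTCOME)))  # -> take_highway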

Advantages of Utility-based Agents

• Handle a wide range of decision-making problems.
• Learn from experience and adjust decision-making strategies.
• Offer a consistent and objective framework for decision-making.

Disadvantages of Utility-based Agents
• Require an accurate model of the environment; decision-making errors result if the
model is not properly implemented.
• Computationally expensive and resource-intensive.
• Do not consider moral or ethical considerations.


Applications of Utility-based Agents

An example is a route recommendation system, which solves for the 'best' route to reach a
destination.
1. A real-life example is a GPS system. We set our destination as our goal. If it shows more
than one path, we are in a happy state. But there may be a traffic jam or a protest, and the
state changes to an unhappy state because we cannot reach our goal within the time. Here,
the GPS will show another path or a shortcut, so we will again be in a happy state. The
utility function measures how happy or unhappy each state is. This works in a partially
observable environment: the agent knows its own car's features but not about the other
cars or roads.
2. As another example, consider our Mars lander on the surface of Mars with an obstacle in
its way. For a goal-based agent it is uncertain which path around the obstacle to take, since
both reach the goal; a utility-based agent can compare the paths and choose the one with the
higher utility, for example the shorter or safer one.


QUESTION BANK
Q.1. What is an agent in AI? Explain briefly.

Q.2. What is an intelligent agent? Explain the concept of rationality in agents and mapping
from sequences to actions.

Q.3. What is the environment of an agent? Explain the different properties of an environment.

Q.4. List the different types of AI agents.

Q.5. With a neat diagram, explain the working of a simple reflex agent and list its
advantages, limitations, and applications.

Q.6. With a neat diagram, explain the working of a goal-based agent and list its
advantages, limitations, and applications.

Q.7. With a neat diagram, explain the working of a utility-based agent and list its
advantages, limitations, and applications.
