
ARTIFICIAL INTELLIGENCE

UNIT - IV

DEPARTMENT OF CSE (Data Science)


UNIT – IV
Classical Planning: Definition of Classical Planning,
Algorithms for Planning with State Space Search, Planning
Graphs, other Classical Planning Approaches, Analysis of
Planning approaches.

Planning and Acting in the Real World: Time, Schedules, and
Resources, Hierarchical Planning, Planning and Acting in
Nondeterministic Domains, Multi-agent Planning.
Classical Planning

 Classical planning is the problem of finding a sequence
of actions that maps a given initial state to a goal state,
where the environment and the actions are deterministic.

 The computational challenge is to devise effective
methods to obtain such action sequences, called plans.
Cont ..
 Planning is one of the classic AI problems. It has been
used as the basis for applications like controlling
robots and having conversations.

 A plan is a sequence of actions. An action is a
transformation of a state, so a plan can be thought of
as a series of transformations of some initial state.
What is a Plan?
For any planning system, we need the domain description, action
specification, and goal description.

A plan is assumed to be a sequence of actions, and each action has its
own set of preconditions to be satisfied before performing the
action, as well as some effects, which can be positive or negative.

So, we have Forward State Space Planning (FSSP) and Backward
State Space Planning (BSSP) at the basic level.
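The preconditions-and-effects view of an action described above can be made concrete with a small sketch. This is an illustrative STRIPS-style encoding, not the interface of any particular planner; the proposition names and the `pickup(A)` action are made up for the example.

```python
from dataclasses import dataclass

# A minimal STRIPS-style action: preconditions and add/delete effects
# are sets of proposition strings. All names here are illustrative.
@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state):
        """An action is applicable when all its preconditions hold."""
        return self.preconditions <= state

    def apply(self, state):
        """Applying an action removes its delete effects, then adds its add effects."""
        return (state - self.del_effects) | self.add_effects

# Example: picking up a block in a toy blocks world.
pickup_a = Action(
    name="pickup(A)",
    preconditions=frozenset({"clear(A)", "ontable(A)", "handempty"}),
    add_effects=frozenset({"holding(A)"}),
    del_effects=frozenset({"clear(A)", "ontable(A)", "handempty"}),
)

state = frozenset({"clear(A)", "ontable(A)", "handempty"})
next_state = pickup_a.apply(state)
# next_state == frozenset({"holding(A)"})
```

Both FSSP and BSSP operate over exactly this kind of action description; they differ only in the direction in which they search.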
Cont ..
Forward State Space Planning
Forward state space planning is a concept in artificial intelligence (AI) that involves
predicting and planning future actions and states based on the current state of
an environment. It is commonly used in the field of robotics and autonomous
systems to enable agents to make informed decisions and perform complex tasks.

In forward state space planning, an agent explores the possible actions it can take
from a given state and predicts the resulting states that would arise from those
actions. This process is often represented as a search or tree traversal algorithm,
where the agent considers different paths and evaluates their potential outcomes.

The agent typically uses a model or representation of the environment to simulate
the effects of different actions. This model can be a physics-based simulation or a
learned model based on historical data. By simulating the outcomes, the agent can
estimate the future states and use this information to select the most promising
actions.
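The exploration loop described above can be sketched as a breadth-first forward search. This is a minimal illustration, not a production planner (real planners add heuristics and more compact state handling); the toy domain and all action names are invented for the example.

```python
from collections import deque

def forward_search(initial, goal, actions):
    """Breadth-first forward state-space search.

    States are frozensets of propositions; each action is a tuple
    (name, preconditions, add_effects, del_effects) of frozensets.
    Returns a list of action names, or None if no plan exists.
    """
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:          # all goal propositions hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:       # action applicable in this state
                nxt = (state - delete) | add
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

# Toy domain: a robot moves between two rooms and picks up a key.
acts = [
    ("go(A,B)", frozenset({"at(A)"}), frozenset({"at(B)"}), frozenset({"at(A)"})),
    ("go(B,A)", frozenset({"at(B)"}), frozenset({"at(A)"}), frozenset({"at(B)"})),
    ("pick(key)", frozenset({"at(B)", "key(B)"}), frozenset({"have(key)"}), frozenset({"key(B)"})),
]
plan = forward_search(frozenset({"at(A)", "key(B)"}),
                      frozenset({"have(key)"}), acts)
# plan == ["go(A,B)", "pick(key)"]
```

The search literally simulates the effects of each applicable action, exactly as the paragraph above describes.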
Cont ..
There are various algorithms and techniques that can be used for forward state space
planning, such as breadth-first search, depth-first search, A* search, and others.
These algorithms differ in their exploration strategies, heuristics, and efficiency.

Forward state space planning can be applied to a wide range of AI applications,
including robotics, game playing, resource allocation, scheduling, and
decision-making problems. It allows agents to reason about the consequences of their
actions, consider long-term objectives, and make optimal or near-optimal decisions
based on the predicted future states.

It is important to note that forward state space planning is just one approach to
planning in AI. Other techniques, such as backward state space planning (e.g.,
using techniques like goal regression), reinforcement learning, and hierarchical
task networks, are also commonly used depending on the problem domain and
requirements.
Backward State Space Planning
Backward state space planning is another approach in artificial intelligence
(AI) that involves reasoning backwards from a goal state to determine the
sequence of actions needed to reach that goal. Unlike forward state space
planning, which starts from the initial state and explores future states,
backward state space planning starts from the goal state and works its
way backward to the initial state.

In backward state space planning, the agent begins with a goal state and
uses a model or representation of the environment to determine the
actions that would lead to that goal state. The agent reasons in reverse,
considering the possible predecessor states and the actions that could have
been taken to reach those states. By iteratively tracing back from the goal
state, the agent constructs a plan that consists of a sequence of actions
leading from the initial state to the goal state.
Cont ..
This process often involves searching or exploring the state space in a reverse
manner. The agent considers the possible predecessor states of the goal state,
i.e., states from which some action leads to the current subgoal, and generates
them by regressing the goal through the action definitions. It continues this
process until it reaches the initial state or until no further predecessor states
can be generated.

Backward state space planning is commonly used in problems where the goal is
well-defined and known in advance. It is particularly useful when the goal state is
more easily specified and recognizable compared to the initial state. This approach
can be beneficial in domains such as puzzle solving, path planning, and problem
solving tasks.

One common algorithm used in backward state space planning is backward
search, which is essentially a depth-first search starting from the goal state and
exploring the possible predecessor states until reaching the initial state.
Heuristics can also be incorporated to guide the search and improve efficiency.
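Goal regression, mentioned earlier, can be sketched briefly. This is an illustrative fragment, assuming the same `(name, preconditions, add_effects, del_effects)` action tuples a forward planner would use; the toy domain is invented for the example. An action is relevant to a goal set if it achieves at least one goal proposition and deletes none of them; regressing the goal through the action replaces what the action achieves with the action's preconditions.

```python
from collections import deque

def backward_search(initial, goal, actions):
    """Goal regression: breadth-first search backward from the goal set."""
    frontier = deque([(goal, [])])
    visited = {goal}
    while frontier:
        g, plan = frontier.popleft()
        if g <= initial:                 # goal fully regressed into the initial state
            return plan                  # plan is already in execution order
        for name, pre, add, delete in actions:
            # relevant: achieves part of g, deletes none of it
            if (add & g) and not (delete & g):
                regressed = (g - add) | pre
                if regressed not in visited:
                    visited.add(regressed)
                    frontier.append((regressed, [name] + plan))
    return None

acts = [
    ("go(A,B)", frozenset({"at(A)"}), frozenset({"at(B)"}), frozenset({"at(A)"})),
    ("pick(key)", frozenset({"at(B)", "key(B)"}), frozenset({"have(key)"}), frozenset({"key(B)"})),
]
plan = backward_search(frozenset({"at(A)", "key(B)"}),
                       frozenset({"have(key)"}), acts)
# plan == ["go(A,B)", "pick(key)"]
```

Note that the search works over goal sets rather than world states, which is why it can be effective when the goal is compact and the initial state is large.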
Cont ..

It is worth noting that backward state space planning is just
one approach to planning in AI. Depending on the problem
domain and requirements, other planning techniques, such as
forward state space planning, reinforcement learning, and
hierarchical task networks, may also be employed.

The choice of planning approach depends on factors such as the
problem complexity, available information, and the nature
of the goal and initial states.
Algorithms for Planning with State Space Search
In AI, planning with state space search involves the use of algorithms to find an
optimal or near-optimal sequence of actions to achieve a goal in a given
environment. There are several algorithms commonly used for this purpose.

Breadth-First Search (BFS): BFS explores all nodes at the current depth before
moving to the next depth level. It guarantees finding the shortest path (in number
of actions) when all step costs are equal, but may be inefficient if the state
space is large.

Depth-First Search (DFS): DFS explores a path until it reaches a leaf node before
backtracking. It may find a solution quickly but does not guarantee the shortest path
and can get stuck in infinite loops.

Iterative Deepening Depth-First Search (IDDFS): IDDFS is a combination of BFS
and DFS. It performs a series of DFS searches with increasing depth limits until a
solution is found. It combines the space efficiency of DFS with the completeness and
optimality of BFS.
Cont ..
Uniform-Cost Search (UCS): UCS expands nodes based on their path cost
from the initial state. It always selects the lowest-cost path to explore next,
guaranteeing an optimal solution when all step costs are positive.

A* Search: A* combines the cost function of UCS with a heuristic function
that estimates the cost from a state to the goal. It expands nodes with the
lowest estimated total cost, which is the sum of the path cost and the
heuristic cost. A* is both complete and optimal if the heuristic is admissible
(never overestimates) and consistent.
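The A* evaluation f(n) = g(n) + h(n) described above can be sketched compactly. This is an illustrative implementation; the toy graph and heuristic values are invented (and the heuristic is chosen to be admissible: it never overestimates the true remaining cost).

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).

    `neighbors(n)` yields (successor, step_cost) pairs; `h(n)` estimates
    the cost from n to the goal. With an admissible, consistent h, the
    first time the goal is popped, the path to it is optimal.
    """
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy graph with illustrative edge costs and an admissible heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
heuristic = {"S": 4, "A": 5, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda n: graph[n], lambda n: heuristic[n])
# path == ["S", "B", "G"], cost == 5
```

Here the cheaper-looking first step through A (g = 1) is correctly passed over, because its heuristic reveals a higher total estimate than the route through B.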

Greedy Best-First Search: Greedy Best-First Search expands nodes based
on their heuristic value alone. It prioritizes nodes that are estimated to be
closest to the goal. Greedy search is not guaranteed to find an optimal
solution.
Cont ..
Hill Climbing: Hill Climbing is a local search algorithm that iteratively improves
the current solution by making small changes. It evaluates neighboring states and
selects the one with the best heuristic value. Hill Climbing can get stuck in local
optima and may not find the global optimal solution.

Genetic Algorithms: Genetic Algorithms use a population-based approach
inspired by natural evolution. They maintain a population of potential solutions and
iteratively apply selection, crossover, and mutation operations to generate new
solutions. Genetic Algorithms can handle complex search spaces but may require a
large number of iterations to converge.
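As a concrete illustration of local search, the hill-climbing loop described above can be sketched in a few lines. The objective function and step size here are invented for the example; note how the loop stops as soon as no neighbor improves on the current state, which is exactly why it can halt at a local optimum.

```python
def hill_climb(start, neighbors, score, max_steps=1000):
    """Steepest-ascent hill climbing.

    Repeatedly moves to the best-scoring neighbor; stops at a state with
    no better neighbor, which may be only a local (not global) optimum.
    """
    current = start
    for _ in range(max_steps):
        candidates = list(neighbors(current))
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best) <= score(current):
            break          # no improving neighbor: local optimum reached
        current = best
    return current

# Illustrative objective: maximize -(x - 3)^2 over the integers, stepping by 1.
result = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
# result == 3
```

On this single-peaked objective, hill climbing finds the global optimum; with multiple peaks, the same loop could stop at whichever peak is nearest to the start.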

These are just a few examples of algorithms used for planning with state space
search in AI. The choice of algorithm depends on the problem domain, the
characteristics of the state space, and the specific requirements of the planning
task.
Planning graphs
Planning graphs are a representation and reasoning technique used
in Artificial Intelligence (AI) for solving planning problems. They
provide a structured way to analyze the state space and actions of a
planning problem, allowing for efficient search and decision-making.

In planning, the goal is to find a sequence of actions that transforms
an initial state into a desired goal state. Planning graphs break down
this problem into two main components: the state space and the action
space.
Cont ..
The state space represents the possible states that the planning
problem can be in. Each state is described by a set of
propositions, which are statements about the world.

For example, in a blocks world domain, propositions can
represent the locations and positions of blocks.

The action space represents the set of possible actions that can
be taken in the planning problem. Each action has preconditions,
which specify the conditions that must be true in order for the
action to be applicable, and effects, which describe the changes
that the action will bring about in the state.

A planning graph is said to have leveled off when two consecutive
levels are identical; every subsequent level will also be identical, so further
expansion is unnecessary.
Cont ..
 A planning graph is constructed by iteratively expanding the
state and action spaces.

 It starts with an initial state and then applies the available actions
to generate new states. This process continues until a fixed-point is
reached, where no new states or actions can be added.

 The planning graph has two types of layers: the state layer and the
action layer.
Cont ..

 The state layer consists of nodes representing the
propositions (i.e., the true/false values of the propositions) at
a specific level of the planning graph.

 The action layer consists of nodes representing the actions
that can be applied at a specific level of the graph.
Cont ..
 The edges in the planning graph represent relationships
between states and actions.

 For example, there can be edges between states and the actions that
have their preconditions satisfied by those states.

 The planning graph allows for efficient reasoning about the
planning problem.

 It can be used for various tasks, such as determining whether a goal
is reachable from the initial state, identifying the set of actions
required to achieve a goal, or detecting conflicts and
dependencies between actions.
Cont ..

Once the planning graph is constructed, various algorithms can be
applied to search for a plan or to perform other types of analysis.

Some common algorithms include breadth-first search, depth-first
search, and heuristic-based search algorithms like A*.

Overall, planning graphs provide a powerful tool for representing and
reasoning about planning problems in AI. They help break down
complex planning problems into manageable components, enabling
efficient search and decision-making.
Other Planning Approaches
In addition to the classical planning approach, there are several other planning
approaches in the field of artificial intelligence (AI).

Hierarchical Task Network (HTN) Planning:

HTN planning is a planning paradigm that focuses on decomposing complex tasks
into smaller, manageable subtasks. It involves defining a hierarchy of tasks and
specifying methods to decompose high-level tasks into lower-level tasks. HTN
planning is particularly useful for domains with complex actions and where the
problem-solving process can be naturally decomposed.
Cont ..
Constraint-Based Planning: Constraint-based planning focuses on representing and
reasoning about constraints that must be satisfied during the planning process.
Constraints can represent various conditions, such as resource limitations, precedence
relationships, or logical relationships between variables.

Temporal Planning: Temporal planning extends classical planning by incorporating
time into the planning process. It involves specifying temporal constraints on actions
and events, enabling the planner to reason about the timing and sequencing of actions.
Temporal planning is essential for domains where time is a crucial factor, such as
scheduling, resource allocation, and robotics.
Cont ..
Probabilistic Planning: Probabilistic planning deals with planning under uncertainty.
It takes into account the stochastic nature of actions and outcomes by assigning
probabilities to different outcomes and incorporates them into the planning process.
Markov Decision Processes (MDPs) and Partially Observable Markov Decision
Processes (POMDPs) are common formalisms used in probabilistic planning.
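The MDP formalism mentioned above can be illustrated with value iteration on a tiny example. This is a sketch under invented names: `transition(s, a)` returns (probability, next_state) pairs and `reward(s, a)` an immediate reward; no particular library API is assumed.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration for a small MDP.

    Repeatedly applies the Bellman backup
    V(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) * V(s') ]
    until the values stop changing by more than eps.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):
                continue                   # terminal state: value stays 0
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Two-state toy MDP: from "s" a risky action reaches the goal 80% of the time.
V = value_iteration(
    states=["s", "goal"],
    actions=lambda s: ["try"] if s == "s" else [],
    transition=lambda s, a: [(0.8, "goal"), (0.2, "s")],
    reward=lambda s, a: 1.0,
)
# V["s"] converges to 1 / (1 - 0.2 * 0.9) ≈ 1.2195
```

The key difference from classical planning is visible in the backup: instead of a single successor per action, the value sums over all probabilistic outcomes.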
Reactive Planning: Reactive planning emphasizes real-time decision-making in
dynamic environments. It involves continuously sensing and reacting to the
environment without explicit long-term planning. Reactive planners often use rule-
based or behavior-based approaches to handle immediate stimuli and generate
appropriate responses.
Cont ..

Multi-Agent Planning: Multi-agent planning involves coordinating the actions of
multiple agents to achieve a collective goal. It focuses on modeling the
interactions, dependencies, and cooperation between individual agents. Multi-agent
planning can be challenging due to the complexity introduced by inter-agent
dependencies and the need to consider the beliefs, goals, and actions of other agents.

Each approach has its strengths and weaknesses, making them suitable for different
problem domains and applications.
Analysis of Planning approaches
Planning methods in AI can be analyzed based on various parameters. Here are
some key parameters to consider when evaluating planning methods:
Completeness: Completeness refers to whether a planning method is guaranteed
to find a solution if one exists. Some planning methods, such as classical planning
algorithms like STRIPS (Stanford Research Institute Problem Solver), are
complete and will always find a solution if it exists within a finite search space.
On the other hand, local or greedy heuristic methods, such as hill climbing or
greedy best-first search, may not be complete but can offer improved efficiency
in certain scenarios.

Optimality: Optimality refers to the ability of a planning method to find the
best possible solution, typically measured by an optimality criterion such as the
shortest path or the minimum cost. Optimal planning methods ensure that the
solution they find is the best among all possible solutions. However, achieving
optimality often comes at the cost of increased computational complexity.
Cont ..
Efficiency: Efficiency measures the computational resources required by a
planning method, such as time and memory. Different planning methods
can have varying efficiency characteristics. Some methods may be efficient for
small problem instances but struggle with larger and more complex scenarios,
while others may be specifically designed to handle large-scale planning
problems efficiently.

Domain-specificity: Some planning methods are designed for specific
domains or problem types, leveraging the characteristics of the domain to
improve performance. Domain-specific planning methods often exploit
domain knowledge and constraints to reduce the search space or guide the
planning process effectively. These methods can outperform more
general-purpose approaches in their specific domains but may be less flexible
in other domains.
Cont ..
Heuristics: Heuristics play a crucial role in many planning methods.
Heuristics estimate the distance to the goal or the expected cost of
reaching the goal from a given state. Effective heuristics can greatly
improve the efficiency and effectiveness of planning methods. Different
heuristics can be used depending on the specific planning problem, and
designing good heuristics is often a challenging task.

Online vs. Offline Planning: Planning methods can be categorized as online
or offline. Offline planning involves computing a plan in advance before
executing it, while online planning computes the next action(s) on the fly
during execution. Offline planning can leverage more computational
resources and search algorithms, allowing for more comprehensive
exploration of the search space. In contrast, online planning focuses on
efficiency and real-time decision-making.
Cont ..
Uncertainty handling: Some planning methods explicitly consider
uncertainties in the environment, such as probabilistic outcomes or
incomplete information. Planning under uncertainty requires reasoning
about different possible outcomes and their associated probabilities. Methods
such as Markov Decision Processes (MDPs) and Partially Observable MDPs
(POMDPs) are commonly used to address uncertainty in planning.

These parameters provide a framework to analyze and compare different
planning methods based on their characteristics and suitability for specific
problem domains. It's important to note that no single planning method is
universally superior, as the choice of method depends on the problem at
hand, available resources, and desired trade-offs between completeness,
optimality, efficiency, and other factors.
Planning and Acting in the Real World
Time, Schedules, and Resources
Time, schedules, and resources play important roles in the field of AI (Artificial
Intelligence) in several ways. Let's explore each of these aspects:
Time:
In AI, time is a critical factor for various reasons, including:
Real-Time Processing: Many AI applications require real-time processing, where
systems must analyze and respond to data in a timely manner. Examples include
autonomous vehicles, robotics, and real-time fraud detection.
Training Time: Training AI models can be a time-consuming process, especially for
complex models and large datasets. Researchers and practitioners often need to invest
significant time to train and fine-tune models to achieve desired performance levels.
Inference Time: Once trained, AI models need to perform inference or make
predictions on new data. Optimizing inference time is crucial for applications like
recommendation systems, natural language processing, and image recognition.
Schedules
Schedules are essential for managing AI projects and workflows efficiently. Consider
the following aspects:
Development and Iterations: AI projects often involve iterative development
cycles, where models are trained, evaluated, and refined multiple times. Schedules
help manage these iterations, ensuring that project milestones are met, and progress
is tracked effectively.
Data Collection and Preparation: AI models heavily rely on data. Schedules help
plan and coordinate data collection, annotation, cleaning, and preprocessing tasks.
Proper scheduling ensures that the right data is available at the right time for training
and testing models.
Deployment and Maintenance: After developing an AI model, schedules are
necessary to plan its deployment, integration into existing systems, and ongoing
maintenance. Regular updates, monitoring, and performance evaluation are crucial
for ensuring the model's effectiveness over time.
Resources
AI requires various resources for development, training, and deployment. Some key
resources include:
Computational Power: AI models, especially deep learning models, require
significant computational resources for training and inference. This includes high-
performance processors (e.g., GPUs, TPUs) and large-scale computing infrastructure
(e.g., cloud-based services).
Data: High-quality and diverse datasets are crucial for training AI models
effectively. Collecting, curating, and managing data resources is a vital task in AI
projects.
Human Expertise: AI projects often require multidisciplinary teams with expertise
in areas such as data science, machine learning, software engineering, and domain
knowledge. Allocating and managing human resources is essential for project
success.
Cont ..
Funding: AI research and development can be resource-intensive. Adequate funding
is necessary to procure hardware, software, data, and hire skilled professionals to
drive AI initiatives forward.

Optimizing the allocation and utilization of these resources is essential to ensure
effective AI development, deployment, and operation.

Time, schedules, and resources are crucial aspects of AI projects. Managing time
effectively, following well-defined schedules, and optimizing the allocation of
resources contribute to the success of AI initiatives, enabling efficient development,
training, deployment, and maintenance of AI models and systems.
Hierarchical Planning

Hierarchical planning in AI is a planning approach that breaks down
complex tasks into subtasks or smaller planning problems, enabling more
efficient and manageable problem-solving. It involves organizing actions
and decisions into a hierarchy of levels, with high-level goals at the top and
low-level actions at the bottom.

Examples of hierarchical planning in different domains:

Robotics, AI Game Characters, Autonomous Vehicles, Production Planning
Robotics
Consider a robot tasked with cleaning a room. The high-level goal is to clean
the entire room, and it can be decomposed into subgoals:

Move to a particular corner of the room.
Clean the floor in that corner.
Repeat the process for each corner.

At the lowest level, the robot performs actions like moving forward,
turning, and using its cleaning mechanism. The planning system breaks
down the overall goal into subgoals and actions, allowing the robot to
efficiently navigate and clean the room.
AI Game Characters
In video games, non-player characters (NPCs) often use hierarchical planning for
decision-making. For example, consider an NPC in a role-playing game who wants
to prepare for a battle. The hierarchy might look like this:

High-level goal: Prepare for battle.
Subgoal 1: Equip appropriate weapons and armor.
Subgoal 2: Find health potions and other resources.
Each subgoal can further decompose into smaller actions, such as searching for
items, equipping them, etc. This hierarchy enables the character to prioritize tasks
efficiently and adapt to different situations.
Autonomous Vehicles
Autonomous vehicles often use hierarchical planning to handle complex driving
scenarios. For instance, when navigating an intersection, the hierarchy might look
like this:

High-level goal: Safely cross the intersection.
Subgoal 1: Detect and identify traffic signs and signals.
Subgoal 2: Plan the trajectory and speed to cross without colliding.
Subgoal 3: Execute the planned actions (accelerate, decelerate, steer).
Each subgoal involves various perception, planning, and control tasks, but
hierarchical planning allows the vehicle's AI to handle them in a structured manner.
Production Planning
In manufacturing, hierarchical planning can optimize production processes. Suppose a
company wants to produce a certain number of products:

High-level goal: Produce a specified number of products.
Subgoal 1: Allocate resources to different production lines.
Subgoal 2: Schedule tasks for each production line.
Subgoal 3: Assign workers and machines to tasks.
The high-level goal is decomposed into subgoals, each focusing on a specific aspect of
production, allowing for efficient resource allocation and task scheduling.
Hierarchical planning is beneficial in AI because it simplifies complex problems,
reduces planning complexity, and enhances the system's ability to handle large-scale
tasks effectively.
Planning and Acting in Nondeterministic Domains
In nondeterministic domains, the outcome of actions is uncertain, which
poses challenges for planning and acting. Here are a few examples of
planning and acting in such domains:
Robot Navigation: Consider a scenario where a robot needs to navigate
through a cluttered environment to reach a target location. However,
the environment is dynamic, and objects can move unpredictably. The robot
needs to plan its path while considering the uncertain movements of objects
and adapt its actions accordingly to avoid collisions.
Game Playing: In games with uncertain outcomes, such as poker or backgammon,
planning and acting involve evaluating different moves and their potential
consequences. The player needs to consider the opponent's potential moves and
strategies, and possible random events like dice rolls or card draws, to
make informed decisions. Even in deterministic games like chess, the opponent's
choices are a source of uncertainty from the player's point of view.
Cont ..
Autonomous Vehicles: Planning and acting in the context of autonomous
vehicles is another example of dealing with nondeterministic domains.
The vehicle needs to navigate through traffic, where other vehicles might
change lanes, pedestrians can appear unexpectedly, and traffic signals
may change. The autonomous vehicle's planning and action selection need
to account for these uncertainties to ensure safe and efficient navigation.

Resource Allocation: In scenarios where resources need to be allocated
to tasks or projects, there can be uncertainties regarding the availability of
resources, the completion time of tasks, or the potential risks associated
with different options. Planning and acting involve making decisions on
resource allocation based on probabilistic estimates and risk assessments.
Cont ..
Robotic Manipulation: In tasks involving robotic manipulation, the
uncertainty arises from various factors, such as noisy sensor
measurements, uncertain object poses, or variability in the object's
physical properties. The robot needs to plan its actions considering the
uncertainty to grasp and manipulate objects effectively.

In all these examples, planning and acting in nondeterministic domains
require techniques that can handle uncertainty, such as probabilistic
modeling, Monte Carlo methods, or reinforcement learning approaches.
These methods enable decision-making and action selection that take into
account the uncertain outcomes of actions in the environment.
Multi-agent planning
Multi-agent planning in AI refers to the process of coordinating the actions
of multiple agents to achieve a common goal or solve a complex problem. It
involves generating plans for each agent while considering the interactions
and dependencies among them. Here are a few examples of multi-agent
planning in AI:

 RoboCup Soccer: In the RoboCup Soccer competition, teams of
autonomous robots play soccer against each other. Each robot is an agent
with its own perception, decision-making, and action capabilities. The
team's overall objective is to outperform the opponent team by scoring
goals. Multi-agent planning is used to coordinate the movements and actions
of the robots, such as passing, shooting, and defending, to achieve a
successful team strategy.
Cont ..
Traffic Control: In a traffic control system, multiple traffic lights or autonomous
vehicles need to be coordinated to optimize traffic flow and minimize congestion.
Each traffic light or vehicle is an agent, and the goal is to maximize traffic
efficiency. Multi-agent planning algorithms can be employed to determine the
optimal timing of traffic light changes or to coordinate the actions of autonomous
vehicles at intersections, considering factors such as traffic density, vehicle
priorities, and safety.

Supply Chain Management: In a complex supply chain network involving
multiple suppliers, manufacturers, distributors, and retailers, multi-agent planning
can be used to optimize the flow of goods, minimize costs, and meet customer
demands. Each entity in the supply chain can be considered an agent, and their
actions and decisions need to be coordinated to achieve efficient inventory
management, production planning, and order fulfillment.
Cont ..
Cooperative Robotics: In scenarios where multiple robots need to work together to accomplish
a task, multi-agent planning plays a crucial role. For example, in disaster response scenarios, a team
of autonomous robots may need to explore a hazardous environment, map the area, and perform
search and rescue operations. Multi-agent planning algorithms can be used to coordinate the robots'
movements, task assignments, and information sharing to maximize the effectiveness of the team.

Multi-Agent Games: Multi-agent planning is also relevant in the context of game playing. For
instance, in games like chess, poker, or Go, where multiple agents compete against each other,
planning algorithms are used to make strategic decisions, predict opponents' moves, and choose
optimal actions. The agents need to plan their moves while considering the actions and possible
strategies of their opponents.

These examples illustrate the diverse applications of multi-agent planning in AI, ranging from
robotics and traffic control to supply chain management and game playing. The underlying
objective is to enable effective coordination and collaboration among multiple autonomous agents
to achieve desired outcomes.
