AI-UNIT-5 NOTES
PLANNING:
Everything we humans do is directed toward some goal, and our actions are
oriented towards achieving it. Planning plays the same role for Artificial
Intelligence. For example, reaching a particular destination requires planning:
finding the best route is not the only requirement; deciding which actions to
perform at a particular time, and why they are performed, is just as important.
That is why planning is considered the reasoning side of acting. In other
words, planning is about deciding the actions to be performed by the
Artificial Intelligence system so that the system can function on its own in
domain-independent situations.
What is a Plan?
For any planning system, we need the domain description, action
specification, and goal description. A plan is a sequence of actions; each
action has preconditions that must be satisfied before it is performed, and
effects, which can be positive or negative.
So, we have Forward State Space Planning (FSSP) and Backward State Space
Planning (BSSP) at the basic level.
FSSP behaves like forward state space search. Given a start state S in any
domain, we apply an applicable action and obtain a new state S' (which
includes some new conditions as well); this is called progression, and it
proceeds until we reach the goal state. At each step, only actions whose
preconditions hold in the current state are applicable.
Exhaustiveness: the search can systematically enumerate every state reachable
from the start state.
Completeness: if a solution exists, the search is guaranteed to find it.
Optimality: the search returns the best (e.g. shortest or cheapest) solution,
not merely the first one found.
To begin the search process, we set the current state to the initial state.
We then check if the current state is the goal state. If it is, we terminate
the algorithm and return the result.
If the current state is not the goal state, we generate the set of possible
successor states that can be reached from the current state.
For each successor state, we check if it has already been visited. If it has,
we skip it, else we add it to the queue of states to be visited.
Next, we set the next state in the queue as the current state and check if
it's the goal state. If it is, we return the result. If not, we repeat the
previous step until we find the goal state or explore all the states.
If all possible states have been explored and the goal state has not been
found, we return with no solution.
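The procedure above can be sketched as a breadth-first state space search. This is a minimal sketch: `is_goal` and `successors` are placeholder callbacks standing in for a concrete problem, not part of any standard library.

```python
from collections import deque

def state_space_search(initial, is_goal, successors):
    """Breadth-first search over a state space.

    initial: the start state; is_goal: predicate for goal states;
    successors: function mapping a state to its neighbor states.
    Returns a path from initial to a goal, or None if none exists.
    """
    visited = {initial}                  # states already seen
    queue = deque([[initial]])           # queue of paths to explore
    while queue:
        path = queue.popleft()
        state = path[-1]
        if is_goal(state):               # goal test on the current state
            return path
        for nxt in successors(state):    # generate successor states
            if nxt not in visited:       # skip already-visited states
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                          # all states explored, no solution
```

For example, searching over integers with successors `s+1` and `s+2` from 0 toward 5 returns a path ending at 5.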
State space search algorithms are used in various fields, such as robotics,
game playing, computer networks, operations research, bioinformatics,
cryptography, and supply chain management. In artificial intelligence,
state space search algorithms can solve problems like pathfinding,
planning, and scheduling.
They are also useful in planning robot motion and finding the best
sequence of actions to achieve a goal. In games, state space search
algorithms can help determine the best move for a player given a
particular game state.
State space search algorithms can optimize routing and resource
allocation in computer networks and operations research.
In Bioinformatics, state space search algorithms can help find patterns in
biological data and predict protein structures.
In Cryptography, state space search algorithms are used to break codes
and find cryptographic keys.
PARTIAL-ORDER PLANNING (POP):
A partial-order plan comprises actions (steps) with constraints (for ordering
and causality) on them.
The algorithm needs to start off with an initial plan. This is an unfinished plan,
which we will refine until we reach a solution plan. The initial plan comprises
two dummy steps, called Start and Finish.
Start is a step with no preconditions, only effects: the effects are the initial
state of the world. Finish is a step with no effects, only preconditions: the
preconditions are the goal.
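The initial plan with its Start and Finish dummy steps can be sketched as a small data structure. The `Step` class and the proposition strings below are illustrative assumptions, not a full POP implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step in a partial-order plan."""
    name: str
    preconditions: frozenset = frozenset()
    effects: frozenset = frozenset()

def initial_plan(initial_state, goal):
    """Build POP's initial (unfinished) plan from two dummy steps."""
    # Start has no preconditions; its effects are the initial state
    start = Step("Start", effects=frozenset(initial_state))
    # Finish has no effects; its preconditions are the goal
    finish = Step("Finish", preconditions=frozenset(goal))
    steps = [start, finish]
    orderings = {("Start", "Finish")}    # Start must precede Finish
    return steps, orderings

steps, orderings = initial_plan({"At(Home)"}, {"At(Work)"})
```

Refinement operators would then add steps, causal links, and ordering constraints to this structure until no open preconditions remain.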
This initial plan is refined using POP's plan refinement operators. As we apply
them, they take us from an unfinished plan to a less and less unfinished
plan, and ultimately to a solution plan. There are four operators, falling into
two groups: two that achieve an open precondition (by adding a new step, or by
reusing an existing step, in each case recording a causal link), and two that
resolve threats to causal links (promotion, which orders the threatening step
after the link, and demotion, which orders it before the link).
The promotion and demotion operators may be less clear. Why are these
needed? POP uses problem-decomposition: faced with a conjunctive
precondition, it uses goal achievement on each conjunct separately. But, as we
know, this brings the risk that the steps we add when achieving one part of a
precondition might interfere with the achievement of another precondition.
And the idea of promotion and demotion is to add ordering constraints so that
the step cannot interfere with the achievement of the precondition.
Note that solutions may still be partially-ordered. This retains flexibility for as
long as possible. Only immediately prior to execution will the plan need
linearisation, i.e. the imposition of arbitrary ordering constraints on steps that
are not yet ordered. (In fact, if there’s more than one agent, or if there’s a
single agent but it is capable of multitasking, then some linearisation can be
avoided: steps can be carried out in parallel.)
PLANNING GRAPH:
A Planning Graph is a data structure primarily used in automated
planning and artificial intelligence to find solutions to planning problems. It
represents a planning problem’s progression through a series of levels that
describe states of the world and the actions that can be taken. Here’s a
breakdown of its main components and how it functions:
Levels: A Planning graph has two alternating types of levels: action levels
and state levels. The first level is always a state level, representing the
initial state of the planning problem.
State Levels: These levels consist of nodes representing logical
propositions or facts about the world. Each successive state level
contains all the propositions of the previous level plus any that can be
derived by the actions of the intervening action levels.
Action Levels: These levels contain nodes representing actions. An action
node connects to a state level if the state contains all the preconditions
necessary for that action. Actions in turn can create new state
conditions, influencing the subsequent state level.
Edges: The graph has two types of edges: one connecting state nodes to
action nodes (indicating that the state meets the preconditions for the
action), and another connecting action nodes to state nodes (indicating
the effects of the action).
Mutual Exclusion (Mutex) Relationships: At each level, certain pairs of
actions or states might be mutually exclusive, meaning they cannot
coexist or occur together due to conflicting conditions or effects. These
mutex relationships are critical for reducing the complexity of the
planning problem by limiting the combinations of actions and states that
need to be considered.
Levels in Planning Graphs
Level S0: It is the initial state of the planning graph that consists of
nodes each representing the state or conditions that can be true.
Level A0: Level A0 consists of nodes that are responsible for taking all
specific actions in terms of the initial condition described in the S0.
Si: the states or conditions that could hold at time i; both P and ¬P
may appear, since either could be true depending on which actions are taken.
Ai: It contains the actions that could have their preconditions
satisfied at i.
The planning graph starts with a single proposition level that contains all
the initial conditions. The algorithm then runs in stages; each stage and its
key workings are described below:
1. Extending the Planning Graph: At stage i (the current level), the graph
plan takes the planning graph from stage i-1 (the previous stage) and
extends it by one time step. This adds the next action level representing
all possible actions given the propositions (states) in the previous level,
followed by the proposition level representing the resulting states after
actions have been performed.
2. Valid Plan Found: If the graph plan finds a valid plan, it halts the
planning process.
3. Proceeding to the Next Stage: If no valid plan is found, the algorithm
determines that the goals are not all achievable in time i and moves to
the next stage.
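The extension step in stage 1 can be sketched as follows. The `(name, preconditions, effects)` action triples and the cake-domain action are assumptions made for illustration, and negated literals are written as plain strings rather than with a real logic representation.

```python
def extend_graph(props, actions):
    """One planning-graph expansion step: from proposition level S(i),
    build action level A(i) and proposition level S(i+1).

    props: set of propositions at level S(i)
    actions: list of (name, preconditions, effects) triples
    """
    # an action is applicable if all its preconditions hold at S(i)
    level_actions = [a for a in actions if a[1] <= props]
    # persistence (no-op) actions carry every old proposition forward
    next_props = set(props)
    for _name, _pre, effects in level_actions:
        next_props |= effects            # add each applicable action's effects
    return level_actions, next_props

# hypothetical cake-domain action, echoing the Eat(Cake) example;
# "NotHave(Cake)" stands in for the negated literal ¬Have(Cake)
eat = ("Eat(Cake)", {"Have(Cake)"}, {"Eaten(Cake)", "NotHave(Cake)"})
a0, s1 = extend_graph({"Have(Cake)"}, [eat])
```

A real Graphplan implementation would also compute mutex relations at each new level; this sketch only shows the level expansion itself.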
Negation of Each Other: Two literals are mutually exclusive if one is the
negation of the other.
Achieved by Mutually Exclusive Actions: Two literals are mutually exclusive if
every pair of actions that could achieve them is itself mutex, i.e. no pair of
non-mutex actions can make both literals true at the same level.
The next state is determined by the actions taken at the action level, based
on the propositional variables. In the CAKE example, we have four
propositional variables in the next state: Have(Cake), ¬Have(Cake),
Eaten(Cake), and ¬Eaten(Cake).
At state level S1, all literals are obtained by considering any subset of
actions at A0. In simple terms, state level S1 holds all possible outcomes
after the actions in A0 are considered. In our example, since we only
have the Eat(Cake) action at A0, S1 will list all possible outcomes with
and without the action being taken.
PROPOSITIONAL LOGIC:
Propositional logic deals with propositions, statements that are either true
or false, and the connectives that combine them. For example:
P: "The sky is blue."
Q: "It is raining."
R: "The ground is wet."
P∧Q: "The sky is blue and it is raining."
P∨Q: "The sky is blue or it is raining."
¬P: "The sky is not blue."
1. Propositions:
2. Logical Connectives:
Logical connectives are used to form complex propositions from simpler ones.
The primary connectives are:
NOT (¬): Negation, true when its proposition is false.
AND (∧): Conjunction, true only when both propositions are true.
OR (∨): Disjunction, true when at least one proposition is true.
IMPLIES (→): A conditional that is true unless the first proposition is true
and the second is false.
Example: “If it rains, then the ground is wet” (It rains → The
ground is wet) is true unless it rains and the ground is not wet.
IFF (↔): A biconditional that is true if both propositions are either true
or false together.
3. Truth Tables:
Truth tables are used to determine the truth value of complex propositions
based on the truth values of their components. They exhaustively list all
possible truth value combinations for the involved propositions.
Example: “P ∨ ¬P”
Example: “P ∧ ¬P”
Example: “P ∧ Q”
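The three examples above can be enumerated by brute force. This is a small sketch; the `truth_table` helper is invented for illustration.

```python
from itertools import product

def truth_table(expr, variables):
    """Print a truth table for expr and return its result column.

    expr: a function of boolean arguments, one per variable name.
    """
    print(" | ".join(variables) + " | result")
    rows = []
    for values in product([True, False], repeat=len(variables)):
        result = expr(*values)           # evaluate one row
        rows.append(result)
        print(" | ".join(str(v) for v in values), "|", result)
    return rows

# "P ∨ ¬P" is a tautology: its result column is all True
tautology = truth_table(lambda p: p or not p, ["P"])
# "P ∧ ¬P" is a contradiction: its result column is all False
contradiction = truth_table(lambda p: p and not p, ["P"])
# "P ∧ Q" is contingent: true only in the row where both are True
conjunction = truth_table(lambda p, q: p and q, ["P", "Q"])
```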
Logical Equivalence
Two statements are logically equivalent if they have the same truth value
under every possible assignment of truth values to their propositions. For
instance, P → Q is equivalent to ¬P ∨ Q: the two agree in every row of their
truth table.
Properties of Operators
The logical operators in propositional logic have several important properties:
1. Commutativity:
P∧Q≡Q∧P
P∨Q≡Q∨P
2. Associativity:
(P ∧ Q) ∧ R ≡ P ∧ (Q ∧ R)
(P ∨ Q) ∨ R ≡ P ∨ (Q ∨ R)
3. Distributivity:
P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
P ∨ (Q ∧ R) ≡ (P ∨ Q) ∧ (P ∨ R)
4. Identity:
P ∧ true ≡ P
P ∨ false ≡ P
5. Domination:
P ∨ true ≡ true
P ∧ false ≡ false
6. Double Negation:
¬ (¬P) ≡ P
7. Idempotence:
P∧P≡P
P∨P≡P
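Each of these equivalences can be checked mechanically by comparing both sides on every truth assignment. The `equivalent` helper below is an illustrative sketch, not a standard library function.

```python
from itertools import product

def equivalent(f, g, n):
    """True if two n-variable boolean formulas agree on every assignment."""
    return all(f(*v) == g(*v)
               for v in product([True, False], repeat=n))

# Commutativity: P ∧ Q ≡ Q ∧ P
assert equivalent(lambda p, q: p and q, lambda p, q: q and p, 2)
# Distributivity: P ∧ (Q ∨ R) ≡ (P ∧ Q) ∨ (P ∧ R)
assert equivalent(lambda p, q, r: p and (q or r),
                  lambda p, q, r: (p and q) or (p and r), 3)
# Identity: P ∧ true ≡ P
assert equivalent(lambda p: p and True, lambda p: p, 1)
# Double negation: ¬(¬P) ≡ P
assert equivalent(lambda p: not (not p), lambda p: p, 1)
```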
Applications of Propositional Logic in AI
1. Knowledge Representation:
2. Automated Reasoning:
Modus Ponens: If “P → Q” and “P” are true, then “Q” must be true.
Modus Tollens: If “P → Q” and “¬Q” are true, then “¬P” must be true.
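Modus ponens can be applied repeatedly by a simple forward-chaining loop. The `(premise, conclusion)` rule representation below is an assumption of this sketch.

```python
def forward_chain(facts, rules):
    """Apply modus ponens repeatedly: from P and P → Q, conclude Q.

    facts: set of propositions known to be true
    rules: list of (premise, conclusion) pairs representing P → Q
    """
    derived = set(facts)
    changed = True
    while changed:                       # iterate until no rule fires
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)  # modus ponens fires
                changed = True
    return derived

known = forward_chain({"Rain"},
                      [("Rain", "WetGround"),
                       ("WetGround", "SlipperyRoad")])
```

Starting from "Rain", the loop derives "WetGround" and then "SlipperyRoad" by chaining the two implications.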
4. Decision Making: Propositional logic supports decision making by allowing
an agent to derive which actions are justified from the facts it currently
knows.
5. Natural Language Processing: Propositional logic is applied in NLP for
tasks like semantic parsing, where natural language sentences are converted
into logical representations. This helps in understanding and reasoning about
the meaning of sentences.
Rule-Based Planning: This type of planning is where the AI agent uses a set of
rules to make decisions. The agent will start with a problem or situation and
then look at the rules it knows in order to find a solution. This type of planning
can be quite brittle, as small changes in the problem or situation can cause the
agent to lose its way.
A best-first search algorithm expands the most promising state first, as
ranked by a heuristic, before moving on to the next best candidate. This
approach is useful when a heuristic can estimate which states are closest to
the goal. A depth-first search algorithm explores the deepest unexplored path
first. This approach is usually used when the goal state is located at the end
of a very deep path, or when memory is too limited for breadth-first search.
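A greedy best-first search can be sketched with a priority queue keyed by the heuristic. As in the earlier sketch, `is_goal`, `successors`, and the heuristic `h` are placeholder callbacks supplied by the problem.

```python
import heapq

def best_first_search(start, is_goal, successors, h):
    """Greedy best-first search: always expand the state the
    heuristic h rates as most promising (lowest value first)."""
    frontier = [(h(start), start, [start])]   # priority queue keyed by h
    visited = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# toy example on a number line: head for 7, guided by distance to 7
route = best_first_search(0, lambda s: s == 7,
                          lambda s: [s - 1, s + 1],
                          lambda s: abs(7 - s))
```

With this heuristic the search walks straight toward the goal instead of fanning out in both directions.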
Once the goals are defined, the AI can work out the steps needed to achieve
each individual goal. A third way to plan is heuristic planning, which uses a
set of rules or guidelines to help the AI find a solution to a problem. These
rules can be based on experience or past knowledge.
Consider planning how to reach a destination: this might involve getting in
your car and driving there, or taking public transportation. The same basic
principles apply to AI planning systems. The problem is what needs to be
solved, the goal is what needs to be achieved, and the actions are the steps
that must be taken in order to reach the goal. For an AI planning system to
work properly, it must understand all three components correctly. One common
challenge with AI planning systems is verifying that they are actually solving
the right problem and reaching the correct goals.
In order for an AI system's decisions and actions to be reliable, it is
important for us humans as developers and testers of these systems to be
confident in their capabilities. One way we can do this is by using validation
methods such as debugging and testing tools as well as mathematical proofs
and models.
One common type of error is when the system fails to find a plan at all. This
may be due to a lack of knowledge on the part of the system, or to incorrect
data in the initial conditions or desired outcomes. Another common type of
error is when the system generates a sequence of steps that leads to an
undesired outcome; this may be due to incorrect data in the initial conditions
or desired outcomes, or to flaws in the logic of the planning algorithm.
High-level goals: The overall objectives or tasks that the AI system aims
to achieve.
The agent decomposes the overall task into sub-tasks and generates a
plan for each sub-task, taking into account dependencies, constraints, and the
goals of the overall task. Each sub-plan is executed sequentially, with the
results of each step being used to guide subsequent steps.
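This decompose-and-execute loop can be sketched as follows. `decompose` and `execute` are placeholder callbacks, not a specific planner's API.

```python
def execute_plan(task, decompose, execute):
    """Decompose a task into ordered sub-tasks and run each sub-plan
    sequentially, feeding earlier results into later steps.

    decompose: maps a task to its ordered list of sub-tasks
    execute: runs one sub-task, given the results so far
    """
    results = []
    for sub_task in decompose(task):
        # earlier results guide subsequent steps
        results.append(execute(sub_task, results))
    return results

# illustrative callbacks: split a task into two numbered sub-tasks
out = execute_plan("clean",
                   lambda t: [t + "-1", t + "-2"],
                   lambda s, r: f"done {s} after {len(r)} steps")
```

A real hierarchical planner would also check dependencies and constraints between sub-plans; this sketch shows only the sequential execution skeleton.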
CONDITIONAL PLANNING:
Conditional planning in AI, also known as contingent planning or
conditional decision-making, is a type of planning that involves creating plans
that can adapt to different situations or contingencies. This approach is
particularly useful in dynamic and uncertain environments where an AI system
must account for various possible scenarios and make decisions based on the
actual state of the world as it evolves.
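A toy contingent plan can branch on a run-time observation. The door scenario and the `observe` callback below are invented for illustration.

```python
def conditional_plan(observe):
    """A tiny contingent plan: branch on the observed state of the world.

    observe: callback that reports whether a condition holds at run time.
    """
    plan = []
    plan.append("go to door")
    if observe("door_locked"):           # contingency: the door may be locked
        plan.append("unlock door")
    plan.append("open door")
    plan.append("enter room")
    return plan

# with a locked door, the plan includes the extra unlocking step
locked_steps = conditional_plan(lambda cond: cond == "door_locked")
# with an unlocked door, the branch is skipped
unlocked_steps = conditional_plan(lambda cond: False)
```

The key point is that the branch is resolved by observing the actual world at execution time, not fixed when the plan is built.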
Related areas include continuous planning and multi-agent planning; each comes
with its own challenges, techniques, and future directions.
NATURAL LANGUAGE PROCESSING (NLP):
Natural language processing has existed for well over fifty years, and the
technology has its origins in linguistics, the study of human language. It has
an assortment of real-world applications within a number of industries and
fields, including intelligent search engines, advanced medical research, and
business processing intelligence.
Just as we humans have various natural senses, such as eyes to see with
or ears to hear; computers support program instructions to read language text
and microphones to collect and analyze audio. Similar to how humans use their
brains to process input, computers have a program instruction set to process
their inputs and information. After processing occurs, this input is transformed
into code that only the computer system can interpret.
There are two main stages to the natural language processing process: data
preprocessing and algorithm development.
The data preprocessing stage involves preparing or 'cleaning' the text data
into a specific format for computer devices to analyze. The preprocessing
arranges the data into a workable format and highlights features within the
text. This enables a smooth transition to the next step - the algorithm
development stage - which works with that input data without any initial data
errors occurring.
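A minimal preprocessing sketch follows, assuming simple lowercasing, regex tokenization, and a tiny illustrative stop-word list; real pipelines use much richer cleaning.

```python
import re

def preprocess(text, stop_words=("the", "a", "is", "to")):
    """Clean text into a workable format: lowercase, strip punctuation,
    tokenize, and drop common stop words."""
    text = text.lower()                       # normalize case
    tokens = re.findall(r"[a-z0-9]+", text)   # keep word characters only
    return [t for t in tokens if t not in stop_words]

tokens = preprocess("The ground is Wet!")
```

The resulting token list is the kind of structured input the algorithm-development stage can consume without tripping over raw-text noise.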