AI Notes
One potential problem with expert systems is the number of comparisons that need
to be made between rules and facts in the database.
In some cases, where there are hundreds or even thousands of rules, running
comparisons against each rule can be impractical.
The Rete Algorithm is an efficient method for solving this problem and is used by
a number of expert system tools, including OPS5 and Eclipse.
The Rete is a directed, acyclic, rooted graph.
Each path from the root node to a leaf in the tree represents the left-hand side of a
rule.
Each node stores details of which facts have been matched by the rules at that point
in the path. As facts are changed, the new facts are propagated through the Rete
from the root node to the leaves, changing the information stored at nodes
appropriately.
This could mean adding a new fact, or changing information about an old fact, or
deleting an old fact. In this way, the system only needs to test each new fact
against the rules, and only against those rules to which the new fact is relevant,
instead of checking each fact against each rule.
The Rete algorithm depends on the principle that in general, when using forward
chaining in expert systems, the values of objects change relatively infrequently,
meaning that relatively few changes need to be made to the Rete.
The basic inference cycle of a production system is match, select, and execute, as indicated in Fig 8.6. These operations are performed as follows:
Match
During the match portion of the cycle, the conditions in the LHS of the rules in the
knowledge base are matched against the contents of working memory to determine
which rules have their LHS conditions satisfied with consistent bindings to
working memory terms.
Select
From the conflict set, one of the rules is selected to execute. The selection strategy
may depend on recency of usage, specificity of the rule, or other criteria.
Execute
The rule selected from the conflict set is executed by carrying out the action
or conclusion part of the rule, the RHS of the rule. This may involve an I/O
operation; adding, removing, or changing clauses in working memory; or simply
causing a halt.
The above cycle is repeated until no rules are put in the conflict set or until
a stopping condition is reached.
In most expert systems, the contents of working memory change very little from
cycle to cycle. There is persistence in the data, known as temporal redundancy. This
makes exhaustive matching on every cycle unnecessary. Instead, by saving match
information, it is only necessary to compare working memory changes on each
cycle. In RETE, additions to, removals from, and changes to working memory are
translated directly into changes to the conflict set, as shown in the figure. Then, when a rule from
the conflict set has been selected to fire, it is removed from the set and the
remaining entries are saved for the next cycle. Consequently, repetitive matching
of all rules against working memory is avoided. Furthermore, by indexing rules
with the condition terms appearing in their LHS, only those rules that could
match the working memory changes need to be examined. This greatly reduces the
number of comparisons required on each cycle.
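The match, select, execute cycle described above can be sketched as a minimal forward-chaining loop in Python (an illustrative toy, not the RETE network itself; the rule names and facts are invented):

```python
# Minimal production-system cycle: match, select, execute.
# Rules are (name, conditions, action): conditions is a set of facts that
# must all be present in working memory, and action adds a new fact.

def run_production_system(rules, working_memory, max_cycles=100):
    fired = set()  # refraction: never fire the same rule twice
    for _ in range(max_cycles):
        # MATCH: build the conflict set of satisfied, not-yet-fired rules
        conflict_set = [r for r in rules
                        if r[1] <= working_memory and r[0] not in fired]
        if not conflict_set:
            break  # no applicable rule: halt
        # SELECT: here, simply the most specific rule (most conditions)
        name, conditions, action = max(conflict_set, key=lambda r: len(r[1]))
        # EXECUTE: apply the RHS (here, add a fact to working memory)
        working_memory.add(action)
        fired.add(name)
    return working_memory

rules = [
    ("r1", {"has_fever"}, "maybe_infection"),
    ("r2", {"maybe_infection", "high_wbc"}, "prescribe_antibiotics"),
]
wm = run_production_system(rules, {"has_fever", "high_wbc"})
print(sorted(wm))
# ['has_fever', 'high_wbc', 'maybe_infection', 'prescribe_antibiotics']
```

Note how firing r1 changes working memory, which in turn makes r2 match on the next cycle; a RETE implementation would propagate only that single change rather than re-matching everything.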
Many rules in a knowledge base will have the same conditions occurring in their
LHS. This is another way in which unnecessary matching can arise. Repeated
testing of the same conditions in those rules can be avoided by grouping rules
that share the same conditions and linking them to their common terms. It is then
possible to perform a single set of tests for all the applicable rules, as shown in the
Fig below
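This grouping of rules by shared conditions can be illustrated with a simple index from each condition term to the rules that mention it (a toy sketch, not RETE's actual node-sharing network; the rule contents are invented):

```python
from collections import defaultdict

# Rules that share LHS condition terms (names are illustrative)
rules = {
    "r1": {"temperature_high", "pressure_low"},
    "r2": {"temperature_high", "valve_open"},
    "r3": {"pressure_low"},
}

# Index each condition once, linking it to every rule that mentions it
condition_index = defaultdict(set)
for name, conditions in rules.items():
    for c in conditions:
        condition_index[c].add(name)

# When a fact changes, only the rules linked to it need re-checking,
# and the condition test itself is performed once for all of them
def affected_rules(new_fact):
    return condition_index.get(new_fact, set())

print(sorted(affected_rules("temperature_high")))  # ['r1', 'r2']
```

Here the test for "temperature_high" is shared: r1 and r2 are both reached through the same index entry instead of each repeating the test.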
A* Search Algorithm in Artificial Intelligence
An Introduction to A* Search Algorithm in AI
A* (pronounced "A-star") is a powerful graph traversal and pathfinding algorithm widely used in
artificial intelligence and computer science. It is mainly used to find the shortest path between
two nodes in a graph, given the estimated cost of getting from the current node to the destination
node. The main advantage of the algorithm is its ability to provide an optimal path by exploring
the graph in a more informed way compared to traditional search algorithms such as Dijkstra's
algorithm.
Algorithm A* combines the advantages of two other search algorithms: Dijkstra's algorithm and
Greedy Best-First Search. Like Dijkstra's algorithm, A* ensures that the path found is as short as
possible but does so more efficiently by directing its search through a heuristic similar to Greedy
Best-First Search. A heuristic function, denoted h(n), estimates the cost of getting from any
given node n to the destination node.
1. g(n): the actual cost to get from the initial node to node n. It represents the sum of the
costs of the edges along the path from the start node to node n.
2. h(n): the heuristic cost (also known as "estimation cost") from node n to the destination node.
This problem-specific heuristic function must be admissible, meaning it never
overestimates the actual cost of reaching the goal. The evaluation function of node n is
defined as f(n) = g(n) + h(n).
Algorithm A* selects the nodes to be explored based on the lowest value of f(n), preferring the
nodes with the lowest estimated total cost to reach the goal. The A* algorithm works as follows:
a. Find the node with the smallest f-value (i.e., the node with the smallest g(n) + h(n)) in
the open list.
b. Move the selected node from the open list to the closed list.
c. Generate all valid successors of the selected node.
d. For each successor, calculate its g-value as the sum of the current node's g-value
and the cost of moving from the current node to the successor node. Update the
successor's g-value when a better path is found.
e. If the successor is not in the open list, add it with the calculated g-value and
calculate its h-value. If it is already in the open list, update its g-value if the new
path is better.
f. Repeat the cycle. Algorithm A* terminates when the target node is reached or
when the open list empties, indicating that there is no path from the start node to the target
node. The A* search algorithm is widely used in various fields such as robotics,
video games, network routing, and design problems because it is efficient and can
find optimal paths in graphs or networks.
However, choosing a suitable and admissible heuristic function is essential so that the algorithm
performs correctly and provides an optimal solution.
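The steps above can be sketched as a compact A* implementation over a small grid, using Manhattan distance as the admissible heuristic (a minimal illustration; the grid, walls, and unit step costs are invented):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A*: neighbors(n) yields (successor, step_cost) pairs."""
    open_heap = [(h(start), 0, start)]           # entries are (f, g, node)
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, g_n, node = heapq.heappop(open_heap)  # smallest f first
        if node == goal:                         # goal reached: rebuild path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue                             # stale entry, already expanded
        closed.add(node)
        for succ, cost in neighbors(node):
            new_g = g_n + cost
            if succ not in g or new_g < g[succ]:  # better path found
                g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_heap, (new_g + h(succ), new_g, succ))
    return None                                  # open list empty: no path

# A 4x3 grid with a wall blocking column x=1 at y=0 and y=1
walls = {(1, 0), (1, 1)}
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 3 and (nx, ny) not in walls:
            yield (nx, ny), 1

manhattan = lambda p: abs(p[0] - 3) + abs(p[1] - 0)  # admissible here
path = a_star((0, 0), (3, 0), grid_neighbors, manhattan)
print(len(path) - 1)  # 7  (cost of the shortest path around the wall)
```

Because Manhattan distance never overestimates the true cost on a unit-cost grid, the returned path is guaranteed optimal.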
History of the A* Search Algorithm in Artificial
Intelligence
The A* algorithm was developed by Peter Hart, Nils Nilsson, and Bertram Raphael at the Stanford Research
Institute (now SRI International) as an extension of Dijkstra's algorithm and other search
algorithms of the time. A* was first published in 1968 and quickly gained recognition for its
importance and effectiveness in the artificial intelligence and computer science communities.
Here is a brief overview of the most critical milestones in the history of the search algorithm A*:
1. Early search algorithms: Before the development of A*, various graph search
algorithms existed, including Depth-First Search (DFS) and Breadth-First Search (BFS).
Although these algorithms helped find paths, they did not guarantee optimality or
consider heuristics to guide the search.
2. Dijkstra's algorithm: In 1959, Dutch computer scientist Edsger W. Dijkstra introduced
Dijkstra's algorithm, which found the shortest path in a weighted graph with non-negative
edge weights. Dijkstra's algorithm was efficient, but due to its exhaustive nature, it had
limitations when used on larger graphs.
3. Informed Search: Knowledge-based search algorithms (also known as heuristic search)
have been developed to incorporate heuristic information, such as estimated costs, to
guide the search process efficiently. Greedy Best-First Search was one such algorithm,
but it did not guarantee optimality for finding the shortest path.
4. A* development: In 1968, Peter Hart, Nils Nilsson, and Bertram Raphael introduced the
A* algorithm as a combination of Dijkstra's algorithm and Greedy Best-First Search. A*
used a heuristic function to estimate the cost from the current node to the destination
node by combining it with the actual cost of reaching the current node. This allowed A*
to explore the graph more consciously, avoiding unnecessary paths and guaranteeing an
optimal solution.
5. Completeness and optimality: The authors of A* showed that the algorithm is complete
(it always finds a solution if one exists) and optimal (it finds the shortest path) under certain
conditions.
6. Wide-spread adoption and progress: A* quickly gained popularity in the AI and IT
communities due to its efficiency and optimality. Researchers and developers have extended and
applied the A* algorithm to various fields, including robotics, video games, engineering,
and network routing. Several variations and optimizations of the A* algorithm have been
proposed over the years, such as Incremental A* and Parallel A*. Today, the A* search
algorithm is still a fundamental and widely used algorithm in artificial intelligence and
graph traversal. It continues to play an essential role in various applications and research
fields. Its impact on artificial intelligence and its contribution to pathfinding and
optimization problems have made it a cornerstone algorithm in intelligent systems
research.
The algorithm starts with a priority queue to store the nodes to be explored. It also maintains
two data structures: g(n), the cost of the shortest path found so far from the starting node to node n, and
h(n), the estimated cost (heuristic) from node n to the destination node. The heuristic should be
admissible, meaning it never overestimates the actual cost of reaching the goal. Put the initial node
in the priority queue and set its g(n) to 0. While the priority queue is not empty, remove the node
with the lowest f(n) = g(n) + h(n) from the priority queue. If the removed node is the
destination node, the algorithm ends, and the path is found. Otherwise, expand the node and
generate its neighbors. For each neighbor node, calculate its tentative g(n) value, which is the sum of
the g-value of the current node and the cost of moving from the current node to the neighboring
node. If the neighbor node is not in the priority queue, or the tentative g(n) value is less than its current
g-value, update its g-value and set its parent to the current node. Calculate the f(n) value
of the neighbor node and add it to the priority queue.
If the cycle ends without finding the destination node, the graph has no path from start to finish.
The key to the efficiency of A* is its use of a heuristic function h(n) that estimates
the remaining cost of reaching the goal from any node. By combining the actual cost g(n) with the
heuristic cost h(n), the algorithm effectively explores promising paths, prioritizing nodes likely
to lead to the shortest path. It is important to note that the efficiency of the A* algorithm is
highly dependent on the choice of the heuristic function. Admissible heuristics ensure that the
algorithm always finds the shortest path, but more informed and accurate heuristics can lead to
faster convergence and a reduced search space.
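The impact of heuristic choice can be demonstrated by counting node expansions with the zero heuristic (which reduces A* to Dijkstra's algorithm) versus Manhattan distance (an illustrative experiment on an invented open grid; ties on f are broken in favour of deeper nodes):

```python
import heapq

def count_expansions(start, goal, h, size=10):
    """A* on an open size x size 4-connected grid; returns nodes expanded."""
    open_heap = [(h(start), 0, start)]    # entries are (f, -g, node)
    g = {start: 0}
    closed = set()
    while open_heap:
        _, neg_g, node = heapq.heappop(open_heap)
        if node == goal:
            return len(closed)
        if node in closed:
            continue                      # stale queue entry
        closed.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size:
                new_g = -neg_g + 1
                if nxt not in g or new_g < g[nxt]:
                    g[nxt] = new_g
                    # tie-break equal f in favour of larger g (deeper nodes)
                    heapq.heappush(open_heap, (new_g + h(nxt), -new_g, nxt))
    return len(closed)

goal = (9, 9)
zero_h = lambda p: 0                           # uninformed: Dijkstra's algorithm
manhattan = lambda p: (9 - p[0]) + (9 - p[1])  # admissible on this grid
dijkstra_n = count_expansions((0, 0), goal, zero_h)
informed_n = count_expansions((0, 0), goal, manhattan)
print(dijkstra_n, informed_n)  # the informed search expands far fewer nodes
```

Both runs find a shortest path; the informed heuristic merely prunes the frontier, which is exactly the "reduced search space" claim above.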
1. Optimal solution: A* ensures finding the optimal (shortest) path from the start node to
the destination node in a weighted graph, given an admissible heuristic function. This
optimality is a decisive advantage in many applications where finding the shortest path is
essential.
2. Completeness: If a solution exists, A* will find it, provided the graph does not contain an
infinite number of nodes. This completeness property ensures that A* will report a solution
if one exists.
3. Efficiency: A* is efficient if an effective and admissible heuristic function is used.
Heuristics guide the search toward the goal by focusing on promising paths and avoiding
unnecessary exploration, making A* more efficient than uninformed search algorithms
such as breadth-first search or depth-first search.
4. Versatility: A* is widely applicable to various problem areas, including wayfinding,
route planning, robotics, game development, and more. A* can be used to find optimal
solutions efficiently as long as a meaningful heuristic can be defined.
5. Optimized search: A* maintains a priority queue to select the node with the smallest f(n)
value (g(n) + h(n)) for expansion. This allows it to explore promising paths first, which
reduces the search space and leads to faster convergence.
6. Memory efficiency: Unlike some other search algorithms, such as breadth-first search,
A* stores only a limited number of nodes in the priority queue, which makes it memory
efficient, especially for large graphs.
7. Tunable heuristics: A*'s performance can be fine-tuned by selecting different heuristic
functions. More informed heuristics can lead to faster convergence and fewer expanded
nodes.
8. Extensively researched: A* is a well-established algorithm with decades of research and
practical applications. Many optimizations and variations have been developed, making it
a reliable and well-understood problem-solving tool.
9. Online search: A* can be used for online path search, where the algorithm continually
updates the path according to changes in the environment or the appearance of new obstacles.
It enables real-time decision-making in dynamic scenarios.
The AO* algorithm belongs to the family of informed search algorithms, meaning it utilizes heuristics or
estimated cost functions to guide the search process. It efficiently balances the trade-off between
computation time and the quality of the solution. Unlike some other algorithms that focus solely on
finding the optimal solution, AO* provides a series of progressively improving solutions, making it
adaptable to various scenarios.
1. Initialization
The algorithm begins with the initialization of critical components:
Start State:
It starts from the initial state, which represents the current state of the problem or the starting point in a
graph.
Cost Estimates:
For each state, AO* maintains an "optimistic" cost estimate, denoted as g*(s), which serves as a lower
bound on the true cost from the start state to that state. Initially, these cost estimates are set to infinity for
all states except the start state, which has a cost estimate of zero.
Priority Queue:
AO* uses a priority queue (often implemented as a binary heap) to keep track of states that need to be
expanded. States are prioritized in the queue based on their g*(s) values, with states having lower cost
estimates being higher in priority.
2. Iterative Expansion
The core of AO* is an iterative process that repeatedly selects and expands the most promising state from
the priority queue. This process continues until certain termination conditions are met. Here's how it
works:
Selecting a State:
The algorithm selects the state with the lowest g*(s) value from the priority queue. This state represents
the most promising path discovered so far.
Expanding a State:
Once a state is selected, AO* generates its successor states, which are the states reachable from the
current state by taking valid actions or moving along edges in the graph. These successor states are
generated and evaluated.
Updating Cost Estimates:
For each successor state, the algorithm updates its g*(s) value. The updated value depends on the cost of
reaching that successor state from the current state and the g*(s) value of the current state.
Adding to Priority Queue:
The newly generated states, along with their updated g*(s) values, are added to the priority queue.
3. Termination
The search process continues until certain termination conditions are met. These conditions can include reaching the goal state, exhausting the priority queue, or hitting a user-defined limit on time or node expansions.
5. Adaptation
Another essential aspect of AO* is its adaptability. It can adjust its search strategy based on available
computational resources and user requirements. If more time or computational power is available, AO*
can perform a more exhaustive search to improve the solution quality. Conversely, if resources are
limited, it can return a solution quickly without completing the entire search.
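The initialization and iterative-expansion scheme described above can be sketched roughly as follows (a simplified illustration of the described loop only, not a full AO* implementation; the graph, costs, and the max_expansions resource limit are invented for the example):

```python
import heapq

def ao_star_sketch(start, goal, edges, max_expansions=None):
    """Expand states in order of their optimistic cost estimate g*(s).
    Stops early if max_expansions is hit, keeping the best estimates so far."""
    g_star = {start: 0}   # optimistic estimates; others implicitly infinity
    parent = {start: None}
    heap = [(0, start)]   # priority queue ordered by g*(s)
    expansions = 0
    while heap:
        if max_expansions is not None and expansions >= max_expansions:
            break         # resource limit reached: return best-so-far estimates
        cost, state = heapq.heappop(heap)
        if cost > g_star.get(state, float("inf")):
            continue      # stale queue entry
        if state == goal:
            break
        expansions += 1
        # generate successors and update their optimistic estimates
        for succ, step in edges.get(state, []):
            new_cost = cost + step
            if new_cost < g_star.get(succ, float("inf")):
                g_star[succ] = new_cost
                parent[succ] = state
                heapq.heappush(heap, (new_cost, succ))
    return g_star, parent

edges = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
}
g_star, _ = ao_star_sketch("A", "D", edges)
print(g_star["D"])  # 4  (via A -> B -> C -> D)
```

The max_expansions parameter models the adaptability described above: with a small budget the function returns early with partial estimates, and with more resources it refines them toward the optimal cost.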
Example - 1: Maze Navigation
Initialization:
Start State:
The algorithm begins with the starting point as the initial state.
Cost Estimates:
Initially, all states except the starting point have cost estimates set to infinity. The starting point has a cost
estimate of zero.
Priority Queue:
A priority queue is initialized with the starting point.
Iterative Expansion:
The algorithm selects the state with the lowest g*(s) value from the priority queue, which is the starting
point initially.
It generates successor states by considering possible moves (e.g., moving up, down, left, or right) from
the current state.
The cost estimates for these successor states are updated based on the cost of moving from the current
state to the successors.
The successor states and their updated cost estimates are added to the priority queue.
Termination:
The search process continues until a termination condition is met. This condition could be finding the
optimal path or a user-requested stop.
Solution Retrieval:
At any point during the search, if the user decides to retrieve the best solution found so far, AO* can
provide an incremental path through the maze. This allows the user to start moving towards the goal
while the algorithm continues to refine the path in the background.
Example - 2: Robotic Navigation
In the field of robotics, AO* can be used for path planning and navigation in dynamic environments. Let's
consider a scenario where a robot needs to navigate through an environment with moving obstacles.
Initialization:
Start State:
The robot's current position is the initial state.
Cost Estimates:
Initially, cost estimates for states are set based on the distance from the current position to possible goal
positions.
Priority Queue:
The priority queue is initialized with the robot's current position.
Iterative Expansion:
The algorithm selects the state with the lowest g*(s) value from the priority queue, which is the robot's
current position initially.
It generates successor states by simulating possible movements of the robot.
The cost estimates for these successor states are updated based on the estimated time or energy required
to reach those states.
The successor states and their updated cost estimates are added to the priority queue.
Termination:
The search process continues until a termination condition is met. This could be when the robot reaches
the goal, when a user intervention occurs, or when a timeout occurs.
Solution Retrieval:
AO* allows the robot to start moving toward the goal based on the best path found so far, even if it's not
the optimal path. This is particularly useful in scenarios where the environment is changing, and the robot
needs to adapt its path in real time.
Advantages and Disadvantages of AO* Algorithm
Advantages
Adaptability:
AO* can adapt to changing requirements and computational resources, making it suitable for real-time
systems.
Incremental Solutions:
It provides incremental solutions, allowing users to make progress while the search continues.
Optimistic Estimates:
The use of optimistic cost estimates can guide the search efficiently.
Heuristic Guidance:
Like A*, it benefits from heuristic guidance, improving search efficiency.
Disadvantages
Quality vs. Time Trade-off:
AO* sacrifices optimality for adaptability. It may not always find the absolute best solution but provides a
good compromise between quality and time.
Complexity:
Implementing AO* can be more complex than simpler algorithms due to its adaptability and incremental nature.
Heuristic Quality:
The effectiveness of AO* heavily depends on the quality of the heuristic function. Poor heuristics can
lead to suboptimal solutions.
Real-life Applications of AO* Algorithm
AO* finds applications in various fields:
Robotics:
AO* is widely used in robotic path planning. Robots can navigate complex environments while
continuously improving their routes.
Video Games:
In video games, AO* is used for character pathfinding, ensuring that game characters move efficiently
and avoid obstacles.
Network Routing:
In computer networking, AO* helps in finding optimal routes for data packets in dynamic networks.
Autonomous Vehicles:
Autonomous vehicles use AO* for real-time route planning, taking into account traffic conditions and
obstacles.
Natural Language Processing:
AO* is used in parsing and grammar-checking algorithms to generate syntactically correct sentences
incrementally.
Resource Management:
It is used in resource allocation and scheduling, such as allocating resources in cloud computing.
What is a Rule-based System?
A system that relies on a collection of predetermined rules to decide what to do next is known as
a rule-based system in AI. These rules are based on specific conditions and actions. For
instance: if a patient has a fever, then recommend antibiotics, because the patient may
have an infection. Expert systems, decision support systems, and chatbots are examples of applications
that use rule-based systems.
The rules are written simply for humans to comprehend, making rule-based systems simple to
troubleshoot and maintain.
Given a set of inputs, rule-based systems will always create the same output, making
them predictable and dependable. This property is known as determinism.
A rule-based system in AI is transparent because the rules are explicit and open to human
inspection, which makes it simpler to understand how the system operates.
A rule-based system in AI is scalable: when scaled up, rule-based systems can handle large
quantities of data.
Rule-based systems can be modified or updated more easily because the rules can be divided
into smaller components.
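These properties (determinism, transparency, modularity) can be illustrated with a minimal rule-based sketch, where each rule is an independent condition-action pair (the rules and thresholds are invented):

```python
# Minimal rule-based system: rules are independent condition -> action pairs,
# so they can be added, removed, or modified without touching the others.

rules = [
    (lambda f: f["temperature"] > 38.0, "possible fever"),
    (lambda f: f["amount"] > 10_000, "flag transaction"),
    (lambda f: f["defects"] > 0, "reject product"),
]

def evaluate(facts):
    """Deterministic: the same facts always yield the same conclusions."""
    conclusions = []
    for condition, action in rules:
        try:
            if condition(facts):
                conclusions.append(action)
        except KeyError:
            pass  # rule refers to a fact not present: not applicable
    return conclusions

print(evaluate({"temperature": 39.2, "amount": 50}))  # ['possible fever']
```

Because each rule is self-contained, updating the fraud threshold, for example, touches one line and cannot affect the medical or quality-control rules.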
4. Explanation facilities:
The user can use the explanation facilities to question the expert system on how it came
to a particular conclusion or why a particular fact is necessary. The expert system must be
able to defend its logic, recommendations, analyses, and conclusions.
5. User Interface:
The user interface is the channel through which the user interacts with the expert system
to find a solution to an issue. The user interface should be as simple and intuitive as
possible, and the dialogue should be as helpful and friendly as possible.
Each of these five components is essential to any rule-based system in AI. These form the
basis of the rule-based structure. However, the mechanism might also include a few extra
parts. The working brain and the external interface are two examples of these parts.
6. External interface:
An expert system can interact with external data files and programs written in traditional
computer languages like C, Pascal, FORTRAN, and Basic, thanks to the external
interface.
7. Working memory:
The working memory keeps track of transient data and knowledge.
Medical Diagnosis:
Based on a patient's symptoms, medical history, and test findings, a rule-based system in AI can
make a diagnosis. The system can make a diagnosis by adhering to a series of guidelines
developed by medical professionals.
Fraud Detection:
Based on particular criteria, such as the transaction's value, location, and time of day, a rule-
based system in AI can be used to spot fraudulent transactions. The system can then flag the
transaction for additional examination.
Quality Control:
A rule-based system in AI can ensure that products satisfy particular quality standards. Based on
a set of guidelines developed by quality experts, the system can check for flaws.
Decision support systems:
They are created to aid decision-making, such as choosing which assets to invest in or which products to buy.
Rule-based System vs. Machine Learning System
Rule-Based Systems                           | Machine Learning
Rules are created by human experts           | Models are trained using data
Limited ability to adapt to new situations   | Can adapt to new situations by retraining the model
Fast decision-making                         | May require significant time for training the model
Examples: Medical diagnosis, fraud detection | Examples: Image recognition, speech recognition
Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
o As we have seen in the minimax search algorithm, the number of game states it has
to examine is exponential in the depth of the tree. We cannot eliminate the exponent,
but we can cut it roughly in half. Hence, there is a technique by which we can compute
the correct minimax decision without checking each node of the game tree, and this
technique is called pruning. It involves two threshold parameters, alpha and beta, for
future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes
not only leaves but entire sub-trees.
o The two-parameter can be defined as:
a. Alpha: The best (highest-value) choice we have found so far at any point along
the path of Maximizer. The initial value of alpha is -∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the
path of Minimizer. The initial value of beta is +∞.
o Alpha-beta pruning applied to a standard minimax algorithm returns the same move as the
standard algorithm does, but it removes all the nodes that do not really affect the
final decision and only slow the algorithm down. Hence, by pruning these nodes, it makes
the algorithm fast.
Step 1: In the first step, the algorithm generates the entire game tree and applies the
utility function to get the utility values for the terminal states. In the tree diagram below,
let A be the initial state of the tree. Suppose the maximizer takes the first turn, with a
worst-case initial value of -∞, and the minimizer takes the next turn, with a worst-case
initial value of +∞.
Step 2: Now, we first find the utility value for the Maximizer. Its initial value is -∞, so
we compare each value in the terminal states with the initial value of the Maximizer and
determine the higher node values. It will find the maximum among them all.
Step 3: In the next step, it is the minimizer's turn, so it compares all node values with
+∞ and finds the third-layer node values.
Step 4: Now it is the Maximizer's turn, and it again chooses the maximum of all node
values and finds the maximum value for the root node. In this game tree, there are only
four layers, so we reach the root node immediately, but in real games, there will be
more than four layers.
o For node A: max(4, -3) = 4
This completes the workflow of the minimax two-player game.
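The pruning described above can be implemented as a small recursive function (a minimal sketch; the example tree is invented, but its root reproduces the comparison max(4, -3) = 4 for node A):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, tree):
    """Minimax with alpha-beta pruning over a dict-of-lists game tree."""
    children = tree.get(node)
    if depth == 0 or children is None:
        return node  # leaves are utility values
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, tree))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: Minimizer will never allow this branch
        return value
    else:
        value = math.inf
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, tree))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: Maximizer already has a better option
        return value

# Root A is a Max node with two Min children; leaves are utilities
tree = {"A": ["B", "C"], "B": [4, 6], "C": [-3, 7]}
print(alphabeta("A", 2, -math.inf, math.inf, True, tree))  # 4
```

In this tree, after B returns 4, alpha at the root becomes 4; when C's first leaf yields -3, beta at C drops to -3 ≤ alpha, so C's remaining leaf (7) is pruned without being examined, yet the root still returns the same minimax value.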