
Introduction to Artificial Intelligence:

Artificial Intelligence (AI) refers to the simulation of human-like intelligence processes by
machines, particularly computer systems. It involves the development of algorithms and
computational models that enable computers to perform tasks that typically require human
intelligence, such as understanding natural language, recognizing patterns in data, making
decisions, and learning from experience.

Artificial Intelligence Techniques:

Artificial intelligence (AI) encompasses a wide range of techniques and methodologies aimed at
enabling machines to simulate human-like intelligence and behavior. Some common AI
techniques include:

1. Machine Learning:
• Definition: Machine learning is a subset of AI that focuses on developing algorithms capable of
learning from data to make predictions or decisions without being explicitly programmed.

• Detail: In machine learning, algorithms are trained on a dataset composed of input-output


pairs. Through the process of training, the algorithm adjusts its parameters to minimize the error
between its predictions and the actual outputs. This process allows the algorithm to learn
patterns and relationships within the data, enabling it to generalize to unseen examples and
make accurate predictions.

• Applications: Machine learning is used in various fields such as image and speech recognition,
natural language processing, recommendation systems, predictive analytics, autonomous
vehicles, and medical diagnosis.
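The training loop described above can be sketched in a few lines of pure Python. This is a minimal illustration, not a production implementation: it fits the single parameter of y = w * x to input-output pairs by gradient descent, with illustrative data and learning-rate choices.

```python
# Minimal sketch of supervised machine learning: fit the parameter of
# y = w * x to input-output pairs by gradient descent.

def train(pairs, lr=0.01, epochs=200):
    """Adjust w to minimise squared error between predictions and outputs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            error = w * x - y       # prediction minus actual output
            w -= lr * error * x     # step against the error gradient
    return w

data = [(1, 2), (2, 4), (3, 6)]     # underlying pattern: y = 2x
w = train(data)
print(round(w, 2))                  # learned weight, close to 2.0
```

The algorithm is never told the rule y = 2x; it recovers it from the examples alone, which is the essence of learning from data.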

2. Deep Learning:
• Definition: Deep learning is a subset of machine learning that employs artificial neural
networks with many layers (deep architectures) to learn complex patterns from large amounts
of data.

• Detail: Deep learning architectures, such as convolutional neural networks (CNNs) for
computer vision and recurrent neural networks (RNNs) for sequential data, have shown
remarkable performance in tasks like image recognition, speech recognition, and language
translation. Deep learning models learn hierarchical representations of data, where each layer
extracts increasingly abstract features from the input. The ability to automatically learn
hierarchical representations makes deep learning particularly powerful for tasks involving
large, high-dimensional datasets.

• Applications: Deep learning is extensively used in image and speech recognition, natural
language processing tasks such as language translation and sentiment analysis, autonomous
driving systems, healthcare (e.g., medical image analysis and drug discovery), and many other
domains.
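The idea of layers extracting increasingly abstract features can be illustrated with a forward pass through a tiny two-layer network in pure Python. The weights here are fixed, illustrative values; a real deep network would learn them from data.

```python
# Minimal sketch of a deep network's forward pass: each layer transforms
# its input into new features, layer by layer.

def relu(vector):
    return [max(0.0, x) for x in vector]

def dense(vector, weights, bias):
    """One fully connected layer: weighted sums plus a bias per unit."""
    return [sum(w * x for w, x in zip(row, vector)) + b
            for row, b in zip(weights, bias)]

x = [1.0, -1.0]                                            # raw input
h = relu(dense(x, [[0.5, -0.5], [1.0, 1.0]], [0.0, 0.0]))  # layer-1 features
y = dense(h, [[1.0, -1.0]], [0.0])                         # layer-2 output
print(y)   # [1.0]
```

Stacking many such layers, with learned weights, is what gives deep architectures their ability to build hierarchical representations.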

3. Evolutionary Algorithms (EAs):


• Definition: Evolutionary Algorithms (EAs) are optimization techniques inspired by
the principles of natural selection and Darwinian evolution. They are used to solve
complex optimization problems by mimicking the process of biological evolution.
• Detail: EAs operate by maintaining a population of candidate solutions to the
optimization problem. These solutions are represented as individuals in a
population, analogous to organisms in a biological population. The algorithm
iteratively evaluates, selects, combines, and mutates these individuals to evolve
better solutions over successive generations.
• Applications: Evolutionary algorithms have been applied to a wide range of optimization
problems, including engineering design, scheduling, vehicle routing, financial modeling,
machine learning optimization, and more. They excel in complex, non-linear, or high-
dimensional search spaces where traditional optimization methods may struggle.
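The evaluate-select-mutate cycle can be sketched as a tiny evolutionary algorithm in pure Python. The objective function, population size, and mutation scale below are illustrative choices, not a standard recipe.

```python
# Minimal evolutionary algorithm sketch: evolve a population of numbers
# toward the maximum of f(x) = -(x - 3)^2 using selection and mutation.
import random

def fitness(x):
    return -(x - 3) ** 2        # peaks at x = 3

def evolve(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # evaluate and rank
        parents = pop[: pop_size // 2]          # select the fitter half
        # Refill the population with mutated copies of the parents.
        pop = parents + [p + rng.gauss(0, 0.1) for p in parents]
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2))   # close to 3.0
```

No gradient information is used at any point; selection pressure alone drives the population toward the optimum, which is why EAs cope well with non-differentiable or rugged search spaces.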

4. Natural Language Processing (NLP):


• Definition: Natural language processing is a branch of AI concerned with enabling
computers to understand, interpret, and generate human language.

• Detail: NLP involves a range of tasks, including text classification, named entity recognition,
sentiment analysis, language translation, question answering, and text generation. NLP
techniques leverage machine learning and deep learning approaches to process and
analyze textual data, extracting semantic meaning and structure from text. Advanced NLP
models such as transformers have revolutionized the field by capturing contextual
relationships in language more effectively.
• Applications: NLP finds applications in virtual assistants (e.g., Siri, Alexa), language
translation services (e.g., Google Translate), sentiment analysis in social media monitoring,
chatbots for customer service, and information extraction from unstructured text sources
like news articles and documents.
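One of the NLP tasks listed above, sentiment analysis, can be sketched with a toy word lexicon. Real systems learn such word weights from data; the hand-written dictionary here is purely illustrative.

```python
# Minimal sketch of sentiment analysis: sum per-word sentiment scores
# from a toy lexicon and label the text accordingly.

LEXICON = {"good": 1, "great": 1, "love": 1, "bad": -1, "awful": -1}

def sentiment(text):
    """Label text as positive, negative or neutral by summing word scores."""
    score = sum(LEXICON.get(word, 0) for word in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great phone"))       # positive
print(sentiment("this screen is bad and awful"))  # negative
```

Even this crude approach shows the core pipeline of many NLP classifiers: tokenize the text, score the tokens, aggregate, and decide.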

5. Computer Vision:
• Definition: Computer vision involves teaching computers to interpret and
understand the visual world, enabling them to extract meaningful information from
images and videos.

• Detail: Computer vision tasks include object detection, image classification, image
segmentation, facial recognition, and pose estimation. Techniques such as
convolutional neural networks (CNNs) are commonly used in computer vision due to
their ability to automatically learn hierarchical features from image data. Pre-trained
CNN models, such as ResNet and VGGNet, provide effective feature extraction
capabilities, while techniques like transfer learning enable the adaptation of
pre-trained models to specific vision tasks with limited labeled data.

• Applications: Computer vision has diverse applications, including autonomous


vehicles, surveillance systems, medical image analysis, industrial quality control,
augmented reality, facial recognition systems for security and authentication, and
image-based search engines.
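The convolution operation at the heart of CNNs can be sketched in pure Python: slide a small kernel over an image and compute weighted sums. The tiny image and the Sobel-style kernel below are illustrative.

```python
# Minimal sketch of 2D convolution: a 3x3 kernel slides over a grayscale
# image; large responses mark vertical edges.

def convolve(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A tiny image: dark left half (0), bright right half (9).
image = [[0, 0, 9, 9]] * 3
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # vertical-edge kernel
print(convolve(image, sobel_x))   # large values mark the edge
```

A CNN applies many such kernels, but learns their weights from data instead of using hand-designed ones, and stacks the results layer upon layer.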

Advantages and limitations of AI:

Advantages
Automation and Efficiency:
AI technologies automate repetitive tasks, allowing organizations to streamline processes
and reduce the need for human intervention. This automation leads to increased
efficiency, productivity, and cost savings. Tasks that are mundane, time-consuming, or
error-prone can be delegated to AI systems, freeing up human resources for more
strategic and creative endeavors. Industries such as manufacturing, logistics, customer
service, and administrative tasks benefit greatly from AI-driven automation, enabling
smoother operations and faster task completion.
Decision Support:
AI systems analyze vast amounts of data using advanced algorithms to provide valuable
insights and recommendations for decision-making. By processing and interpreting
complex data sets, AI helps organizations identify trends, patterns, and correlations that
may not be readily apparent to humans. These insights empower decision-makers to
make informed and data-driven decisions quickly and accurately. From business strategy
formulation to operational planning and resource allocation, AI-driven decision support
systems improve outcomes, optimize processes, and enhance competitiveness across
various industries.

Personalization:
AI technologies enable personalized experiences by analyzing user preferences,
behaviors, and interactions with products or services. By leveraging data-driven
approaches, AI tailors recommendations, content, and interactions to individual users'
preferences, interests, and needs. Personalization enhances user satisfaction,
engagement, and loyalty by delivering relevant and meaningful experiences. E-commerce
platforms, streaming services, social media platforms, and recommendation engines
utilize AI to personalize product recommendations, content suggestions, and advertising
campaigns, resulting in higher conversion rates and customer retention.

Innovation:
AI fosters innovation by augmenting human capabilities and enabling the development of
novel solutions to complex problems. By leveraging advanced algorithms and computing
power, AI empowers researchers, engineers, and creatives to explore new frontiers and
tackle challenges that were previously inaccessible or impractical. From healthcare and
finance to art and scientific research, AI-driven innovations revolutionize industries, push
the boundaries of what's possible, and create new opportunities for growth and
development. Applications such as autonomous vehicles, medical diagnostics, language
translation, and virtual assistants exemplify the transformative potential of AI-driven
innovation.

Accuracy and Predictability:


AI algorithms leverage data-driven approaches to make predictions, classifications, and
decisions with high levels of accuracy and reliability. By learning from large datasets and
identifying patterns, relationships, and dependencies within data, AI systems achieve
precise and consistent results. In applications such as medical diagnosis, fraud detection,
weather forecasting, and predictive maintenance, AI enhances accuracy and
predictability, leading to better outcomes, informed decision-making, and resource
optimization.

Limitations:

Data Dependence and Bias:


AI systems are highly dependent on the quality, diversity, and representativeness of
training data. Biases present in the data can lead to biased predictions and decisions,
reinforcing existing inequalities and prejudices. Moreover, biased AI systems may
inadvertently discriminate against certain groups or perpetuate stereotypes. Ensuring
data quality, diversity, and fairness is essential to mitigate biases and improve the
reliability and equity of AI systems.
Interpretability:
Many AI models, especially deep learning models, are complex and difficult to interpret or
explain. Lack of interpretability limits transparency and accountability, making it
challenging to understand how decisions are made and why certain outcomes are
predicted. This opacity may erode trust in AI systems, particularly in critical applications
such as healthcare, finance, and criminal justice. Balancing the trade-off between model
complexity and interpretability is crucial for ensuring transparency, accountability, and
ethical use of AI technologies.

Resource Intensive:
AI algorithms, particularly deep learning models, require significant computational
resources, memory, and energy. Training and deploying complex AI models can be
computationally intensive and costly, limiting scalability and accessibility. Organizations
with limited resources or infrastructure constraints may face challenges in adopting and
deploying AI technologies. Improving the efficiency, scalability, and accessibility of AI
algorithms is essential for democratizing AI and fostering inclusive innovation.

Security Risks:
AI systems are vulnerable to adversarial attacks, manipulation, and vulnerabilities, posing
risks to security and reliability. Adversarial attacks involve maliciously crafted inputs or
perturbations designed to exploit weaknesses in AI models and cause them to make
incorrect predictions or decisions. Furthermore, AI systems may inadvertently learn and
amplify biases present in the training data, leading to unintended consequences and
ethical concerns. Ensuring the robustness, security, and safety of AI systems requires
rigorous testing, validation, and mitigation strategies to address risks and vulnerabilities.

Ethical Concerns:
AI technologies raise ethical and societal implications related to privacy, fairness,
accountability, and job displacement. Concerns about data privacy highlight the need to
protect individuals' personal information and ensure responsible data usage. Algorithmic
fairness concerns address biases and discrimination in AI systems, emphasizing the
importance of fairness, transparency, and accountability in AI development and
deployment. Additionally, AI-driven automation may lead to job displacement and
economic inequality, necessitating ethical guidelines, regulations, and responsible AI
practices to ensure that AI benefits society while minimizing unintended consequences
and risks.

Application of AI:

Here are five applications of artificial intelligence (AI):

Healthcare:
AI is revolutionizing healthcare by improving diagnosis, treatment, and patient care. AI-
powered medical imaging systems assist radiologists in interpreting medical images,
leading to faster and more accurate diagnoses. AI algorithms analyze patient data to
predict diseases, identify risk factors, and personalize treatment plans. Virtual health
assistants provide real-time support to patients, o ering personalized health advice and
monitoring vital signs remotely. Additionally, AI-driven drug discovery accelerates the
development of new treatments and therapies.
Finance:
AI is transforming the finance industry by enhancing risk management, fraud detection,
and customer service. AI algorithms analyze vast amounts of financial data to identify
patterns, trends, and anomalies, enabling more effective risk assessment and investment
strategies. AI-powered chatbots and virtual assistants provide personalized financial
advice and support to customers, improving customer service and engagement.
Moreover, AI-based fraud detection systems identify fraudulent activities in real-time,
minimizing financial losses and protecting against cyber threats.

Autonomous Vehicles:
AI is driving the development of autonomous vehicles, revolutionizing transportation and
mobility. AI algorithms process sensor data from cameras, LiDAR, radar, and GPS to
perceive the vehicle's surroundings and make real-time driving decisions. Autonomous
vehicles can navigate complex environments, interpret traffic signals, and avoid obstacles
autonomously, reducing accidents and improving road safety. AI-powered navigation
systems optimize route planning, traffic management, and vehicle coordination, leading to
more efficient transportation networks.

E-commerce:
AI is reshaping the e-commerce industry by enhancing customer experiences,
personalizing recommendations, and optimizing supply chain management. AI-powered
recommendation engines analyze customer preferences, browsing behavior, and
purchase history to offer personalized product recommendations, increasing sales and
customer satisfaction. AI algorithms optimize pricing strategies, inventory management,
and logistics operations to minimize costs and maximize efficiency. Chatbots and virtual
assistants provide real-time customer support, improving engagement and retention.

Education:
AI is transforming education by personalizing learning experiences, improving student
engagement, and enabling adaptive learning platforms. AI-powered tutoring systems
analyze student performance data to tailor lesson plans and provide targeted support to
individual students. Virtual tutors and chatbots offer real-time assistance and feedback,
enhancing students' understanding and retention of course materials. AI algorithms
analyze educational data to identify learning trends and optimize curriculum design,
leading to more effective teaching strategies and improved educational outcomes.

AGENT:
In artificial intelligence, an agent is a computer program or system that is designed to
perceive its environment, make decisions and take actions to achieve a specific goal or
set of goals. The agent operates autonomously, meaning it is not directly controlled by a
human operator.

An AI system can be defined as the study of the rational agent and its environment. The
agents sense the environment through sensors and act on their environment through
actuators. An AI agent can have mental properties such as knowledge, belief, intention,
etc. An agent can be anything that perceives its environment through sensors and acts upon
that environment through actuators. An Agent runs in the cycle of perceiving, thinking,
and acting. An agent can be:
◦ Human-Agent: A human agent has eyes, ears, and other organs which work as
sensors, and hands, legs, and the vocal tract which work as actuators.
◦ Robotic Agent: A robotic agent can have cameras, infrared rangefinders, and NLP for
sensors, and various motors for actuators.
◦ Software Agent: A software agent can have keystrokes and file contents as sensory
input, act on those inputs, and display output on the screen.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through
sensors.

Actuators: Actuators are the component of machines that converts energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator can
be an electric motor, gears, rails, etc.

Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screens.

Learning agents:
These agents employ an additional learning element to gradually improve and become
more knowledgeable over time about an environment. The learning element uses
feedback to decide how the performance elements should be gradually changed to show
improvement. A learning agent is an intelligent agent that improves its performance over
time by learning from experience or data. Learning agents employ machine learning
algorithms to acquire knowledge, adapt to new environments, and refine their behavior
through iterative interactions with the environment. These agents learn patterns,
relationships, and regularities from data to make predictions, classify inputs, or optimize
decision-making processes. Learning agents can be trained using various learning paradigms,
including supervised learning, unsupervised learning, reinforcement learning, and semi-
supervised learning. Examples of learning agents include spam filters, image recognition
systems, recommendation engines, and game-playing AI.

Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using
sensors and actuators for achieving goals. An intelligent agent may learn from the
environment to achieve its goals. A thermostat is an example of an intelligent agent. An
intelligent agent is an entity equipped with artificial intelligence (AI) capabilities to perceive
its environment, reason about it, and take actions to achieve specific goals autonomously
or with minimal human intervention. Intelligent agents exhibit behaviors such as problem-
solving, decision-making, learning, and adaptation. These agents often operate in
dynamic and uncertain environments, where they must continuously sense and respond
to changes in their surroundings. Examples of intelligent agents include virtual assistants,
autonomous robots, recommendation systems, and self-driving cars.

Following are the main four rules for an AI agent:
◦ Rule 1: An AI agent must have the ability to perceive the environment.
◦ Rule 2: The observation must be used to make decisions.
◦ Rule 3: Decision should result in an action.
◦ Rule 4: The action taken by an AI agent must be a rational action.

PEAS Representation
PEAS is a type of model on which an AI agent works. When we define an AI agent
or rational agent, then we can group its properties under PEAS representation model. It is
made up of four words:
◦ P: Performance measure
◦ E: Environment
◦ A: Actuators
◦ S: Sensors
Example
Let's suppose a self-driving car; then its PEAS representation will be:
Performance: Safety, time, legal drive, comfort
Environment: Roads, other vehicles, road signs, pedestrian
Actuators: Steering, accelerator, brake, signal, horn
Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
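The PEAS description above can be captured in a simple data structure. The class name and field names below are illustrative, not a standard API; the point is that PEAS is just a four-part specification of an agent's task environment.

```python
# The self-driving-car PEAS example as a plain data structure.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # P: performance measures
    environment: list   # E: environment elements
    actuators: list     # A: actuators
    sensors: list       # S: sensors

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "sonar"],
)
print(self_driving_car.sensors)
```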

Types of Agent:
1. Simple Reflex agent:

◦ The Simple reflex agents are the simplest agents. These agents take decisions on
the basis of the current percepts and ignore the rest of the percept history.
◦ These agents only succeed in the fully observable environment.
◦ The Simple reflex agent does not consider any part of the percept history during its
decision and action process.
◦ The Simple reflex agent works on the Condition-action rule, which means it maps the
current state to action. Such as a Room Cleaner agent, it works only if there is dirt
in the room.
◦ Problems for the simple reflex agent design approach:
◦ They have very limited intelligence
◦ They do not have knowledge of non-perceptual parts of the current state
◦ Mostly too big to generate and to store.
◦ Not adaptive to changes in the environment.
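The Room Cleaner example above can be sketched as condition-action rules in a few lines. The action depends only on the current percept (location, status); no percept history is kept. The rule set and action names are illustrative.

```python
# Simple reflex agent for a two-room vacuum world: condition-action
# rules map the current percept directly to an action.

def reflex_vacuum_agent(percept):
    location, status = percept          # only the current percept is used
    if status == "dirty":               # condition-action rule 1
        return "suck"
    if location == "A":                 # condition-action rule 2
        return "move_right"
    return "move_left"                  # condition-action rule 3

print(reflex_vacuum_agent(("A", "dirty")))   # suck
print(reflex_vacuum_agent(("B", "clean")))   # move_left
```

Note the limitation discussed above: with no memory, this agent will shuttle between rooms forever even after both are clean.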
2. Model-based reflex agent

◦ The Model-based agent can work in a partially observable environment, and track
the situation.
◦ A model-based agent has two important factors:
◦ Model: It is knowledge about "how things happen in the world," so it is called
a Model-based agent.
◦ Internal State: It is a representation of the current state based on percept
history.
◦ These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
◦ Updating the agent state requires information about:
1. How the world evolves
2. How the agent's action affects the world.
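A model-based version of the vacuum agent can be sketched by adding an internal state updated from the percept history. The world model and action names are illustrative; the point is that the internal state lets the agent act sensibly even though it only senses one room at a time (a partially observable environment).

```python
# Model-based reflex agent: an internal state tracks what the agent
# believes about each room, based on the percepts seen so far.

class ModelBasedVacuum:
    def __init__(self):
        # Internal state: belief about each room's status.
        self.world = {"A": "unknown", "B": "unknown"}

    def act(self, percept):
        location, status = percept
        self.world[location] = status            # update internal state
        if status == "dirty":
            return "suck"
        if "unknown" in self.world.values() or "dirty" in self.world.values():
            return "move"                        # model says work may remain
        return "idle"                            # model says all rooms clean

agent = ModelBasedVacuum()
print(agent.act(("A", "clean")))   # move  (B's status is still unknown)
print(agent.act(("B", "clean")))   # idle  (model now says both are clean)
```

Unlike the simple reflex agent, this one can stop once its model says the whole world is clean.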
3. Goal-based agents

◦ The knowledge of the current state of the environment is not always sufficient
for an agent to decide what to do.
◦ The agent needs to know its goal which describes desirable situations.
◦ Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
◦ They choose an action, so that they can achieve the goal.
◦ These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved or not. Such considerations of different
scenarios are called searching and planning, which makes an agent proactive.

4. Utility-based agents

◦ These agents are similar to the goal-based agent but provide an extra component
of utility measurement which makes them different by providing a measure of
success at a given state.
◦ The Utility-based agent acts based not only on goals but also on the best way to
achieve the goal.
◦ The Utility-based agent is useful when there are multiple possible alternatives, and
an agent has to choose in order to perform the best action.
◦ The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
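The utility function described above can be sketched as follows: each candidate action leads to a predicted state, and the agent chooses the action whose state scores highest. The routes and the utility weights are illustrative values.

```python
# Utility-based choice: pick the action whose resulting state maximises
# a real-valued utility function.

def choose_action(actions, result, utility):
    """Return the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(a)))

routes = {"highway": {"time": 30, "toll": 5},
          "backroad": {"time": 45, "toll": 0}}
utility = lambda s: -s["time"] - 2 * s["toll"]   # prefer fast, cheap trips
best = choose_action(routes, routes.get, utility)
print(best)   # highway
```

Both routes reach the goal; the utility function is what distinguishes the better way of reaching it, which is exactly what separates a utility-based agent from a purely goal-based one.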
5. Learning Agents

◦ A learning agent in AI is the type of agent which can learn from its past experiences,
or it has learning capabilities.
◦ It starts to act with basic knowledge and is then able to act and adapt automatically
through learning.
◦ A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning
from the environment.
2. Critic: The learning element takes feedback from the critic, which describes how
well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external actions.
4. Problem generator: This component is responsible for suggesting actions
that will lead to new and informative experiences.
◦ Hence, learning agents are able to learn, analyze performance, and look for new
ways to improve the performance.
Problem solving techniques:
Heuristics

The heuristic approach relies on experimentation and trial-and-error to understand a
problem and construct a solution. Heuristics do not always yield the optimal answer to a
particular problem, but they do provide effective means of achieving short-term
objectives. Consequently, developers turn to them when conventional techniques cannot
solve the problem efficiently. Heuristics are often employed in conjunction with
optimization algorithms to increase efficiency, trading some precision for speed.

Searching Algorithms

Searching is one of the fundamental ways in which AI solves problems. Search algorithms
are used by rational or problem-solving agents to select the most appropriate actions.
Such agents represent a problem as a set of states, and reaching a goal state is usually
the main objective. Depending on the quality of the solutions they produce, search
algorithms are evaluated on attributes such as completeness, optimality, time complexity,
and space complexity.

Evolutionary Computing

This approach to problem solving makes use of the well-established idea of evolution. The
principle of "survival of the fittest" underlies evolutionary theory: when a creature
successfully reproduces in a harsh or changing environment, its coping mechanisms are
eventually passed down to later generations, producing a variety of new species. By
combining several traits suited to that severe
environment, these mutated organisms are not simply clones of the old ones. The most
notable example of how evolution changes and extends a species is humanity, which has
developed as a consequence of accumulating advantageous mutations over countless
generations.

Genetic Algorithms

Genetic algorithms are based on evolutionary theory. These programs employ a directed
random search. To combine the two fittest candidates and produce desirable offspring,
the developers calculate a fitness factor: the population is first assembled, and each
individual is then evaluated according to how well it matches the intended objective.
Finally, a variety of selection methodologies is employed to retain the fittest participants:

1. Rank Selection
2. Tournament Selection
3. Steady Selection
4. Roulette Wheel Selection (Fitness Proportionate Selection)
5. Elitism
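Item 4 in the list above, roulette wheel selection, can be sketched as follows: each individual is picked with probability proportional to its share of the total population fitness. The population and fitness values here are illustrative.

```python
# Roulette wheel (fitness proportionate) selection: spin a wheel whose
# slice sizes are proportional to each individual's fitness.
import random

def roulette_select(population, fitness, rng):
    total = sum(fitness(p) for p in population)
    pick = rng.uniform(0, total)         # spin the wheel
    running = 0.0
    for individual in population:
        running += fitness(individual)
        if running >= pick:
            return individual
    return population[-1]                # guard against rounding error

rng = random.Random(0)
pop = ["weak", "average", "strong"]
fit = {"weak": 1, "average": 3, "strong": 6}.get
picks = [roulette_select(pop, fit, rng) for _ in range(1000)]
print(picks.count("strong"))   # roughly 600 of the 1000 draws
```

With fitnesses 1, 3, and 6, the "strong" individual owns 60% of the wheel, so it is selected about 60% of the time while weaker individuals still occasionally survive, preserving diversity.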

State space search:


A state space is a way to mathematically represent a problem by de ning all the possible
states in which the problem can be. This is used in search algorithms to represent the
initial state, goal state, and current state of the problem. Each state in the state space is
represented using a set of variables.

The efficiency of the search algorithm greatly depends on the size of the state space,
and it is important to choose an appropriate representation and search strategy to search
the state space efficiently.

One of the most well-known state space search algorithms is the A* algorithm. Other
commonly used state space search algorithms include breadth-first search
(BFS), depth-first search (DFS), hill climbing, simulated annealing, and genetic
algorithms.

Features of State Space Search


State space search has several features that make it an effective problem-solving
technique in Artificial Intelligence. These features include:
• Exhaustiveness:
State space search explores all possible states of a problem to find a solution.
• Completeness:
If a solution exists, state space search will find it.
• Optimality:
With an appropriate strategy, searching through a state space results in an optimal solution.
• Uninformed and Informed Search:
State space search in artificial intelligence can be classified as uninformed if it uses no
additional information about the problem beyond its definition.
In contrast, informed search uses additional information, such as heuristics, to guide
the search process.

Steps in State Space Search


The steps involved in state space search are as follows:

• To begin the search process, we set the current state to the initial state.
• We then check if the current state is the goal state. If it is, we terminate the
algorithm and return the result.
• If the current state is not the goal state, we generate the set of possible successor
states that can be reached from the current state.
• For each successor state, we check if it has already been visited. If it has, we skip it,
else we add it to the queue of states to be visited.
• Next, we set the next state in the queue as the current state and check if it's the
goal state. If it is, we return the result. If not, we repeat the previous step until we
find the goal state or explore all the states.
• If all possible states have been explored and the goal state still has not been found,
we return with no solution.
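The steps above can be sketched as a breadth-first state space search over a small hand-made state graph (the graph itself is illustrative).

```python
# Breadth-first state space search: maintain a queue of paths, expand
# the current state into successors, skip visited states, and stop when
# the goal state is reached or all states are explored.
from collections import deque

def bfs(initial, goal, successors):
    visited = {initial}
    queue = deque([[initial]])               # queue of paths to explore
    while queue:
        path = queue.popleft()
        state = path[-1]                     # current state
        if state == goal:                    # goal test
            return path
        for nxt in successors(state):        # generate successor states
            if nxt not in visited:           # skip already-visited states
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                              # all states explored, no solution

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
print(bfs("S", "G", graph.get))   # ['S', 'A', 'G']
```

Because the queue is first-in-first-out, states are explored level by level, so the first path that reaches the goal is also a shortest one in terms of number of steps.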

State Space Representation


State space representation involves defining an INITIAL STATE and a GOAL STATE and
then determining a sequence of states, linked by actions, that leads from one to the other.
• State:
A state can be an Initial State, a Goal State, or any other possible state that can be
generated by applying rules between them.
• Space:
In an AI problem, space refers to the exhaustive collection of all conceivable states.
• Search:
This technique moves from the beginning state to the desired state by applying
good rules while traversing the space of all possible states.
• Search Tree:
To visualize the search issue, a search tree is used, which is a tree-like structure that
represents the problem. The initial state is represented by the root node of the
search tree, which is the starting point of the tree.

Heuristic search:

Heuristic techniques are a type of problem-solving and decision-making process used in
Artificial Intelligence (AI) systems. Heuristics involve the use of an evaluation function that
determines which options should be given priority when solving a particular problem. This
is typically done by maintaining a data structure such as a priority queue, where elements
with better evaluation values take precedence over others. Heuristic search algorithms
thus make decisions based on a predetermined criterion or evaluation function, trading
guaranteed optimality for speed.
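A minimal sketch of heuristic search using the priority queue just described: this is greedy best-first search, where the state that looks closest to the goal according to a heuristic h is expanded first. The graph and heuristic values are illustrative.

```python
# Greedy best-first search: a priority queue ordered by the heuristic
# value h(state) decides which state to expand next.
import heapq

def best_first(start, goal, successors, h):
    frontier = [(h(start), start, [start])]       # priority queue by h
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)  # most promising first
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}.get          # estimated cost to goal
print(best_first("S", "G", graph.get, h))         # ['S', 'A', 'G']
```

From S, both A and B lead to the goal, but the heuristic ranks A as more promising (h=1 versus h=2), so the search expands A first; guided by estimates rather than exhaustive exploration, it may sacrifice optimality for speed.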

Production system characteristics:


A production system in AI is a framework that assists in developing computer programs
to automate a wide range of tasks. It significantly impacts the creation of AI-based
systems like computer software, mobile applications, and manufacturing tools. By
establishing rules, a production system empowers machines to demonstrate particular
behaviors and adapt to their surroundings.
In Artificial Intelligence, a production system serves as a cognitive architecture. It
encompasses rules representing declarative knowledge, allowing machines to make
decisions and act based on different conditions. Many expert systems and automation
methodologies rely on the rules defined in production systems to guide their behavior.

A production system’s architecture consists of rules structured as left-hand side (LHS)


and right-hand side (RHS) equations. The LHS specifies the condition to be evaluated,
while the RHS determines the output or action resulting from the estimated condition.
This rule-based approach forms the foundation of production systems in AI, enabling
machines to process information and respond accordingly.

Components of a Production System in AI

To build an AI-based intelligent system that performs specific tasks, we need an
architecture. The architecture of a production system in Artificial Intelligence consists of
production rules, a database, and the control system.
Let us discuss each one of them in detail.

Global Database
The global database is the central data structure of the architecture. The database
contains all the necessary data and information required for the successful completion of a
task. It can be divided into two parts: permanent and temporary. The permanent part of
the database consists of fixed actions, whereas the temporary part alters according to
circumstances.

Production Rules
Production rules in AI are the set of rules that operate on the data fetched from the global
database. These production rules are bound with preconditions and postconditions that
get checked against the database. If a condition is passed through a production rule and gets
satisfied by the global database, then the rule is successfully applied. The rules are of the
form A → B, where the right-hand side represents an outcome corresponding to the problem
state represented by the left-hand side.

Control System
The control system checks the applicability of a rule. It decides which rule should be
applied and terminates the process when the system produces the correct output. It also
resolves the conflict when multiple rules are applicable at the same time. The control
strategy specifies the sequence in which rules are compared against the global database
to reach the correct result.
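The interplay of the three components above can be sketched in a few lines of code. This is a minimal, illustrative sketch, not a standard API: the global database is a set of facts, each rule pairs an LHS condition with an RHS action, and the control loop plays the role of the control system (it picks the first applicable rule and halts when the goal fact appears or no rule can fire). The "rain/wet" facts are invented for the example.

```python
def run_production_system(database, rules, goal, max_steps=100):
    """Apply production rules to the global database until the goal fact
    appears, no rule produces new facts, or max_steps is exhausted."""
    for _ in range(max_steps):
        if goal in database:
            return database
        for condition, action in rules:
            # LHS: the rule is applicable only if its condition holds
            if condition(database):
                new_facts = action(database)          # RHS: derived facts
                if not new_facts <= database:         # skip rules that add nothing
                    database = database | new_facts   # update the global database
                    break
        else:
            break  # no applicable rule: the control system terminates
    return database

# Example: one rule deriving "wet" from "raining" and "outside".
rules = [
    (lambda db: {"raining", "outside"} <= db, lambda db: {"wet"}),
]
result = run_production_system({"raining", "outside"}, rules, goal="wet")
```

Running this yields a database that now contains the derived fact `"wet"` alongside the two initial facts.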

Characteristics of a Production System


There are mainly four characteristics of a production system in AI: simplicity,
modifiability, modularity, and knowledge intensity.

Simplicity
The production rule in AI is in the form of an ‘IF-THEN’ statement. Every rule in the
production system has a unique structure. It helps represent knowledge and reasoning in
the simplest way possible to solve real-world problems. Also, it helps improve the
readability and understanding of the production rules.

Modularity
The modularity of production rules enables incremental improvement, since each rule is a
discrete unit. A production rule is built from a collection of information and facts that
have no dependencies unless a rule connects them. Adding or deleting a single piece of
information therefore has no major effect on the output. Modularity also helps enhance
the performance of the production system by allowing the parameters of the rules to be
tuned.
Modifiability
The feature of modifiability allows the rules to be altered as requirements change.
Initially, a skeletal form of the production system is created; requirements are then
gathered and changes are made to this raw structure. This supports iterative
improvement of the production system.

Knowledge-intensive
Production systems contain knowledge in the form of a human spoken language, i.e.,
English. It is not built using any programming languages. The knowledge is represented in
plain English sentences. Production rules help make productive conclusions from these
sentences.

Classes of a Production System

There are four types of production systems that help in categorizing methodologies for
solving different varieties of problems. Let us have a look at each one of them.

Monotonic Production System


In this type of production system, rules can be applied simultaneously, as the use of
one rule does not prevent the involvement of another rule selected at the same
time.

Partially Commutative Production System


This class of production system can produce the same results even when the order in
which rules are applied is interchanged. If a set of rules transforms State A into State B,
then multiple orderings of those rules are capable of converting State A into State B.

Non-monotonic Production System


This type of production system increases efficiency in solving problems. The
implementation of these systems does not require backtracking to correct previous
incorrect moves. Non-monotonic production systems are important from an
implementation point of view for finding an efficient solution.

Commutative System
Commutative systems are helpful where the order of operations is not important, and for
problems whose changes are reversible. Partially commutative production systems, on
the other hand, help with problems where the changes are irreversible, such as a chemical
process. When dealing with partially commutative systems, the order of operations is
important for obtaining correct results.

Generate and test:

The generate-and-test strategy is the simplest of all the approaches. It consists of the
following steps:

Algorithm: Generate-and-Test
1. Generate a possible solution. For some problems, this means generating a particular
point in the problem space. For others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint
of the chosen path to the set of acceptable goal states.
3. If a solution has been found, quit. Otherwise, return to step 1.
The following diagram shows the Generate and Test Heuristic Search Algorithm

Generate-and-test, like depth- rst search, requires that complete solutions be generated
for testing.

In its most systematic form, it is simply an exhaustive search of the problem space.
Solutions can also be generated randomly, but then finding a solution is not guaranteed.
This approach is known as the British Museum algorithm: finding an object in the
British Museum by wandering randomly.
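The three steps of generate-and-test can be sketched directly. This is a minimal sketch: the generator and the goal test are supplied by the caller, and the toy "square equals 49" problem is invented purely for illustration.

```python
def generate_and_test(generator, is_solution):
    """Return the first generated candidate that passes the goal test."""
    for candidate in generator:         # step 1: generate a possible solution
        if is_solution(candidate):      # step 2: test against the goal states
            return candidate            # step 3: solution found, quit
    return None                         # generator exhausted, no solution

# Example: find the first non-negative integer whose square is 49.
answer = generate_and_test(range(100), lambda n: n * n == 49)
```

Used systematically (as above, enumerating candidates in order) this is exhaustive search; swapping the generator for a random one gives the British Museum variant, with no guarantee of success.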

Example: coloured blocks


“Arrange four 6-sided cubes in a row, with each side of each cube painted one of four
colors, such that on each of the four
sides of the row one block face of each color is showing.”

Heuristic: If there are more red faces than faces of other colours, then, when placing a
block with several red faces, use as few
of them as possible as outside faces.

Hill climbing:
◦ The hill climbing algorithm is a local search algorithm that continuously moves in the
direction of increasing elevation/value to find the peak of the mountain, or the best
solution to the problem. It terminates when it reaches a peak where no
neighbor has a higher value.
◦ Hill climbing is a technique used for optimizing mathematical
problems. One widely discussed example is the Travelling Salesman
Problem, in which we need to minimize the distance traveled by
the salesman.
◦ It is also called greedy local search, as it only looks at its good immediate neighbor
state and not beyond that.
◦ A node of hill climbing algorithm has two components which are state and value.
◦ Hill Climbing is mostly used when a good heuristic is available.
◦ In this algorithm, we don't need to maintain and handle the search tree or graph as
it only keeps a single current state.

Features of Hill Climbing:

Following are some main features of Hill Climbing Algorithm:

◦ Generate and Test variant: Hill climbing is a variant of the Generate and Test
method. The Generate and Test method produces feedback which helps decide
which direction to move in the search space.
◦ Greedy approach: Hill-climbing algorithm search moves in the direction which
optimizes the cost.
◦ No backtracking: It does not backtrack the search space, as it does not remember
the previous states.

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm,
showing a graph between the various states of the algorithm and the objective
function/cost.

On the Y-axis we take the function, which can be an objective function or a cost function,
and the state space on the X-axis. If the function on the Y-axis is cost, the goal of the
search is to find the global minimum and local minima. If the function on the Y-axis is an
objective function, then the goal of the search is to find the global maximum and local
maxima.

Different regions in the state space landscape:

Local Maximum: Local maximum is a state which is better than its neighbor states, but
there is also another state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It
has the highest value of objective function.

Current state: It is a state in a landscape diagram where an agent is currently present.

Flat local maximum: It is a flat region in the landscape where all the neighbor states of
the current state have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Problems in Hill Climbing Algorithm:

1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighboring states, but there is another state also present which is higher than
the local maximum.

Solution: The backtracking technique can be a solution to the local maximum problem in
the state-space landscape. Maintain a list of promising paths so that the algorithm can
backtrack and explore other paths as well.

2. Plateau: A plateau is a flat area of the search space in which all the neighbor states of
the current state contain the same value, so the algorithm cannot find a best
direction to move. A hill-climbing search may get lost in the plateau area.

Solution: The solution for the plateau is to take big steps or very small steps while
searching. Randomly select a state far away from the
current state, so that the algorithm may find a non-plateau region.

3. Ridges: A ridge is a special form of the local maximum. It has an area which is higher
than its surrounding areas, but itself has a slope, and cannot be reached in a single move.

Solution: With the use of bidirectional search, or by moving in different directions, we can
mitigate this problem.
Types of Hill Climbing Algorithm:

1. Simple Hill Climbing:


Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates one neighbor node state at a time and selects the first one that
improves the current cost, setting it as the current state. It checks only one successor
state and, if that state is better than the current state, moves to it; otherwise it stays in
the same state.

2. Steepest-Ascent hill climbing:


Steepest-ascent hill climbing is a variation of the simple hill climbing algorithm. This
algorithm examines all the neighboring nodes of the current state and selects the
neighbor node closest to the goal state. This algorithm consumes more time, as it
searches multiple neighbors.

3. Stochastic hill climbing:


Stochastic hill climbing does not examine all its neighbors before moving. Instead, this
search algorithm selects one neighbor node at random and decides whether to move to it
or examine another state.

◦ Less time consuming


◦ Less optimal solution and the solution is not guaranteed
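Simple hill climbing, as described above, can be sketched in a few lines. This is an illustrative sketch, not a canonical implementation: `neighbors` and `value` are caller-supplied functions, and the integer objective `-(x - 3)^2` is invented for the example. The loop takes the first improving neighbor (simple hill climbing); examining all neighbors and taking the best would give the steepest-ascent variant.

```python
def hill_climb(state, neighbors, value):
    """Move to the first better neighbor until no neighbor improves the value,
    i.e. until a (possibly local) maximum is reached."""
    while True:
        current_value = value(state)
        for candidate in neighbors(state):
            if value(candidate) > current_value:   # greedy: first improving move
                state = candidate
                break
        else:
            return state   # no neighbor is better: peak reached, no backtracking

# Example: maximize f(x) = -(x - 3)^2 over integers, stepping by ±1.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

Starting from 0, the search climbs 1, 2, 3 and stops at 3, where both neighbors score lower. Note there is no backtracking: on a function with several peaks, the same code can stop at a local maximum.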

Constraint satisfaction problem:

A Constraint Satisfaction Problem in artificial intelligence involves a set of variables, each
of which has a domain of possible values, and a set of constraints that define the
allowable combinations of values for the variables. The goal is to find a value for each
variable such that all the constraints are satisfied.

In the fields of artificial intelligence (AI) and computer science, a constraint satisfaction
problem (CSP) is described by a set of variables, a domain for each variable, and a set of
constraints that outline the allowable combinations of values for these variables. The
main objective in solving a CSP is to find an assignment of values to the variables that
meets all of the constraints, that is, to identify values for a collection of variables that
satisfy a set of limitations or rules.
More formally, a CSP is defined as a triple (X, D, C), where:

X is a set of variables {x1, x2, ..., xn}.

D is a set of domains {D1, D2, ..., Dn}, where each Di is the set of possible
values for xi.

C is a set of constraints {C1, C2, ..., Cm}, where each Ci is a constraint
that restricts the values that can be assigned to a subset of the variables.

The goal of a CSP is to find an assignment of values to the variables that satisfies all the
constraints. Such an assignment is called a solution to the CSP.

There are three basic components in a constraint satisfaction problem:
Variables: The things that need to be determined are variables. Variables in a CSP are
the objects that must have values assigned to them in order to satisfy a particular set of
constraints. Boolean, integer, and categorical variables are just a few of the possible
types. For instance, variables could stand for the cells that need to be filled with
numbers in a sudoku puzzle.
Domains: The range of potential values that a variable can take is its domain.
Depending on the problem, a domain may be finite or infinite. For instance, in
sudoku, the set of numbers from 1 to 9 can serve as the domain of a variable
representing a puzzle cell.
Constraints: The rules that govern how variables relate to one another are known as
constraints. Constraints in a CSP define the allowable combinations of values for
variables. Unary constraints, binary constraints, and higher-order constraints are a few
of the various sorts of constraints. For instance, in a sudoku puzzle, the constraints
might be that each row, column, and 3×3 box can contain only one instance of each
number from 1 to 9.
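The triple (X, D, C) maps directly onto code. The sketch below solves a tiny map-coloring CSP by backtracking: the three region names, the color domain, and the adjacency relation are invented for the example, and the binary constraint is simply that neighboring regions must receive different colors.

```python
def backtrack(assignment, variables, domains, neighbors):
    """Assign each variable a value from its domain such that no two
    neighboring variables share a value; return None if impossible."""
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint check: no already-assigned neighbor holds this value
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack({**assignment, var: value},
                               variables, domains, neighbors)
            if result is not None:
                return result
    return None                                 # dead end: backtrack

# X: three mutually adjacent regions; D: three colors each; C: adjacency.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
solution = backtrack({}, variables, domains, neighbors)
```

With three mutually adjacent regions and three colors, every region ends up with a distinct color; removing one color from the domains would make the same call return `None`.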
Types of search algorithms
Based on the search problems we can classify the search algorithms into uninformed
(Blind search) search and informed search (Heuristic search) algorithms.

Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as closeness or the
location of the goal. It operates in a brute-force way, as it only includes information about
how to traverse the tree and how to identify leaf and goal nodes. Uninformed search
explores the search tree without any information about the search space, such as the
initial state, operators, or a test for the goal, so it is also called blind search. It
examines each node of the tree until it reaches the goal node.
It can be divided into five main types:
◦ Breadth-first search
◦ Uniform cost search
◦ Depth-first search
◦ Iterative deepening depth-first search
◦ Bidirectional search

Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search. Informed search strategies can find a
solution more efficiently than an uninformed search strategy. Informed search is also
called heuristic search.
A heuristic is a technique that is not always guaranteed to find the best solution, but is
guaranteed to find a good solution in reasonable time.
Informed search can solve much more complex problems than could be solved another
way.
An example problem for informed search algorithms is the travelling salesman problem.
1. Greedy Search
2. A* Search

1. Breadth- rst Search:


◦ Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
◦ The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
◦ The breadth-first search algorithm is an example of a general graph-search
algorithm.
◦ Breadth-first search is implemented using a FIFO queue data structure.
Advantages:

◦ BFS will provide a solution if any solution exists.


◦ If there is more than one solution for a given problem, then BFS will provide the
minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:

◦ It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
◦ BFS needs lots of time if the solution is far away from the root node.
Example:

In the tree structure below, we show the traversal of the tree using the BFS algorithm
from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will
follow the path shown by the dotted arrow, and the traversed path will be:

1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm is given by the number of
nodes traversed by BFS until the shallowest solution, where d = the depth of the
shallowest solution and b = the branching factor (the number of successors at each state):

T(b) = 1 + b + b^2 + b^3 + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory
size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means that if the shallowest goal node is at some
finite depth, then BFS will find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the
node.
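BFS with its FIFO queue can be sketched directly. This is an illustrative sketch over a small hypothetical graph (not the S-to-K tree from the figure above): the queue holds whole paths, so the first path that reaches the goal is a shallowest one.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph; returns the
    shallowest path from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path                  # shallowest goal reached first
        for successor in graph.get(node, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

# Hypothetical graph: the goal K is two levels below B.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["D"], "C": [], "D": ["K"]}
path = bfs(graph, "S", "K")
```

Because the queue is FIFO, every node at depth d is expanded before any node at depth d+1, which is exactly why BFS finds the shallowest solution and why its memory grows as O(b^d).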

2. Depth- rst Search


◦ Depth-first search is a recursive algorithm for traversing a tree or graph data
structure.
◦ It is called depth-first search because it starts from the root node and follows
each path to its greatest depth before moving to the next path.
◦ DFS uses a stack data structure for its implementation.
◦ The process of the DFS algorithm is similar to the BFS algorithm.
Note: Backtracking is an algorithm technique for nding all possible solutions using
recursion.

Advantage:


◦ DFS requires very little memory, as it only needs to store a stack of the nodes on the
path from the root node to the current node.
◦ It takes less time to reach the goal node than the BFS algorithm (if it traverses
the right path).
Disadvantage:


◦ There is a possibility that many states keep recurring, and there is no
guarantee of finding a solution.
◦ The DFS algorithm goes deep down in its search, and it may sometimes go into an
infinite loop.
Example:

In the search tree below, we show the flow of depth-first search, which will follow the
order:

Root node ---> left node ---> right node.

It will start searching from root node S and traverse A, then B, then D and E. After
traversing E, it will backtrack in the tree, as E has no other successor and the goal node is still
not found. After backtracking, it will traverse node C and then G, where it will terminate,
having found the goal node.

Completeness: The DFS algorithm is complete within a finite state space, as it will
expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes
traversed by the algorithm. It is given by:

T(b) = 1 + b + b^2 + b^3 + ... + b^m = O(b^m)

where m = the maximum depth of any node, which can be much larger than d
(the depth of the shallowest solution).

Space Complexity: DFS algorithm needs to store only single path from the root node,
hence space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimality: The DFS algorithm is non-optimal, as it may take a large number of steps
or incur a high cost to reach the goal node.
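The stack-based DFS described above can be sketched as follows. The graph is a small hypothetical version of the S-to-G example: pushing successors in reverse order makes the leftmost child come off the stack first, so the search visits A, B, D, E, backtracks, then finds G under C.

```python
def dfs(graph, start, goal):
    """Depth-first search over an adjacency-list graph using an explicit
    LIFO stack of paths; returns a (not necessarily shortest) path."""
    stack = [[start]]
    visited = set()
    while stack:
        path = stack.pop()               # newest (deepest) path first
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            # push successors in reverse so the leftmost is explored first
            for successor in reversed(graph.get(node, [])):
                stack.append(path + [successor])
    return None

# Hypothetical tree: left branch S-A-B-{D,E} is a dead end, goal G is under C.
graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
path = dfs(graph, "S", "G")
```

The only stored state is the stack of paths from the root, which is why DFS uses O(bm) memory; but nothing stops it from exhausting a deep fruitless branch (here A-B-D-E) before backtracking, which is why it is not optimal.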

Informed Search Algorithms


An informed search algorithm has access to knowledge such as how far we are
from the goal, the path cost, and how to reach the goal node. This knowledge helps
agents explore less of the search space and find the goal node more efficiently.
Informed search algorithms are more useful for large search spaces. Because they use
the idea of a heuristic, they are also called heuristic search.
Heuristic function: A heuristic is a function used in informed search to identify
the most promising path. It takes the current state of the agent as input and produces
an estimate of how close the agent is to the goal. The heuristic method does not
always give the best solution, but it is guaranteed to find a good solution in reasonable
time. The heuristic function estimates how close a state is to the goal. It is represented by
h(n), and it estimates the cost of an optimal path between the pair of states. The value of
the heuristic function is always positive.
Admissibility of the heuristic function is given as:
1. h(n) <= h*(n)
Here h(n) is the heuristic cost and h*(n) is the actual (optimal) cost. For a heuristic to be
admissible, the heuristic cost must be less than or equal to the actual cost.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes
based on their heuristic value h(n). It maintains two lists, OPEN and CLOSED. In the
CLOSED list it places nodes that have already been expanded, and in the OPEN list it
places nodes that have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded, all its
successors are generated, and n is placed on the CLOSED list. The algorithm continues
until a goal state is found.
In the informed search we will discuss two main algorithms which are given below:
◦ Best First Search Algorithm(Greedy search)
◦ A* Search Algorithm

1.) Best-first Search Algorithm (Greedy Search):


The greedy best-first search algorithm always selects the path which appears best at the
moment. It combines aspects of depth-first search and breadth-first search: it uses a
heuristic function to guide the search, letting us take advantage of both algorithms. With
best-first search, at each step, we can choose the most promising node. In the best-first
search algorithm, we expand the node closest to the goal node, where closeness is
estimated by the heuristic function, i.e.

1. f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
The greedy best-first algorithm is implemented with a priority queue.
Best-first search algorithm:
◦ Step 1: Place the starting node in the OPEN list.
◦ Step 2: If the OPEN list is empty, stop and return failure.
◦ Step 3: Remove the node n from the OPEN list which has the lowest value of h(n),
and place it in the CLOSED list.
◦ Step 4: Expand node n and generate its successors.
◦ Step 5: Check each successor of node n to find whether any is a goal node.
If any successor is a goal node, return success and terminate the
search; otherwise proceed to Step 6.
◦ Step 6: For each successor node, the algorithm evaluates the heuristic function and
then checks whether the node is already in the OPEN or CLOSED list. If the node is
in neither list, add it to the OPEN list.
◦ Step 7: Return to Step 2.
Advantages:
◦ Best-first search can switch between BFS and DFS, gaining the advantages of
both algorithms.
◦ This algorithm can be more efficient than the BFS and DFS algorithms.
Disadvantages:
◦ It can behave like an unguided depth-first search in the worst-case scenario.
◦ It can get stuck in a loop, like DFS.
◦ This algorithm is not optimal.
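The OPEN/CLOSED steps above can be sketched with a priority queue ordered by h(n) alone. This is an illustrative sketch: the small graph and the h-values are invented for the example, and the priority queue is Python's `heapq`, which pops the entry with the smallest first tuple element.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the OPEN node with the
    lowest heuristic value h(n), i.e. f(n) = h(n)."""
    open_list = [(h[start], start, [start])]       # (h-value, node, path)
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # lowest h(n) first
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                           # mark node as expanded
        for successor in graph.get(node, []):
            if successor not in closed:
                heapq.heappush(open_list,
                               (h[successor], successor, path + [successor]))
    return None

# Hypothetical graph and heuristic values (h decreases toward the goal G).
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 3, "A": 1, "B": 2, "G": 0}
path = greedy_best_first(graph, h, "S", "G")
```

From S, node A (h=1) is preferred over B (h=2), so the search goes S, A, G. Because only h(n) is consulted, nothing guarantees this path is cheapest, which is the optimality weakness noted above.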
A* Search Algorithm:
A* search is the most widely known form of best-first search. It uses the heuristic function
h(n) and the cost to reach node n from the start state, g(n). It combines features of
UCS and greedy best-first search, which allows it to solve problems efficiently. The A*
search algorithm finds the shortest path through the search space using the heuristic
function, expanding a smaller search tree and providing the optimal result faster. The A*
algorithm is similar to UCS, except that it uses g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the
node. Hence we can combine both costs as follows; this sum is called the fitness
number:

f(n) = g(n) + h(n)

At each point in the search space, only the node with the lowest value of f(n) is
expanded, and the algorithm terminates when the goal node is found.
Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty; if it is, return failure and stop.
Step 3: Select the node from the OPEN list which has the smallest value of the evaluation
function (g + h). If node n is the goal node, return success and stop; otherwise:
Step 4: Expand node n, generate all of its successors, and put n into the CLOSED list.
For each successor n', check whether n' is already in the OPEN or CLOSED list; if not,
compute the evaluation function for n' and place it in the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer which
reflects the lowest g(n') value.
Step 6: Return to Step 2.
Advantages:
◦ The A* search algorithm performs better than many other search algorithms.
◦ The A* search algorithm is optimal and complete.
◦ This algorithm can solve very complex problems.
Disadvantages:
◦ It does not always produce the shortest path, as it is partly based on heuristics and
approximation.
◦ The A* search algorithm has some complexity issues.
◦ The main drawback of A* is its memory requirement: it keeps all generated nodes in
memory, so it is not practical for many large-scale problems.
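The A* steps can be sketched by reusing a priority queue, this time ordered by the fitness number f(n) = g(n) + h(n). This is an illustrative sketch under invented edge costs and heuristic values; the `best_g` map plays the role of the "lowest g(n')" bookkeeping in Step 5.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the OPEN node with the lowest f(n) = g(n) + h(n).
    graph maps node -> list of (successor, edge_cost)."""
    open_list = [(h[start], 0, start, [start])]    # (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for successor, cost in graph.get(node, []):
            g2 = g + cost
            # re-open the successor only via a strictly cheaper path (Step 5)
            if g2 < best_g.get(successor, float("inf")):
                best_g[successor] = g2
                heapq.heappush(open_list,
                               (g2 + h[successor], g2, successor,
                                path + [successor]))
    return None, float("inf")

# Hypothetical weighted graph: S-A-G costs 6, S-B-G costs 5.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
```

Although the edge S-A is cheapest locally, f-values steer the search through B, returning the optimal path S, B, G with total cost 5; an admissible h (one that never overestimates) is what makes this optimality guarantee hold.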

What is Transfer Learning?


Transfer learning is a technique in machine learning where a model trained on one task is
used as the starting point for a model on a second task. This can be useful when the
second task is similar to the first task, or when there is limited data available for the
second task. By using the learned features from the first task as a starting point, the
model can learn more quickly and effectively on the second task. This can also help
prevent overfitting, as the model will already have learned general features that are likely
to be useful in the second task.

Agent Environment in AI

An environment is everything in the world which surrounds the agent, but it is not a part of
an agent itself. An environment can be described as a situation in which an agent is
present.

The environment is where the agent lives and operates, and it provides the agent with
something to sense and act upon. An environment is often said to be non-deterministic.

Features of Environment

1. Fully observable vs Partially Observable:

◦ If an agent sensor can sense or access the complete state of an environment at


each point of time then it is a fully observable environment, else it is partially
observable.
◦ A fully observable environment is easy, as there is no need to maintain internal
state to keep track of the history of the world.
◦ If an agent has no sensors in an environment, then such an environment is called
unobservable.
2. Deterministic vs Stochastic:

◦ If an agent's current state and selected action can completely determine the next
state of the environment, then such environment is called a deterministic
environment.
◦ A stochastic environment is random in nature and cannot be determined completely
by an agent.
◦ In a deterministic, fully observable environment, the agent does not need to worry
about uncertainty.
3. Episodic vs Sequential:

◦ In an episodic environment, there is a series of one-shot actions, and only the


current percept is required for the action.
◦ However, in a sequential environment, an agent requires memory of past actions to
determine the next best action.
4. Single-agent vs Multi-agent

◦ If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
◦ However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
◦ The agent design problems in the multi-agent environment are different from single
agent environment.
5. Static vs Dynamic:

◦ If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
◦ Static environments are easy to deal with, because an agent does not need to keep
looking at the world while deciding on an action.
◦ For a dynamic environment, however, agents need to keep looking at the world
before each action.
◦ Taxi driving is an example of a dynamic environment whereas Crossword puzzles
are an example of a static environment.
6. Discrete vs Continuous:

◦ If there are a finite number of percepts and actions that can be
performed in an environment, then it is called a discrete environment; otherwise,
it is called a continuous environment.
◦ A chess game comes under a discrete environment, as there is a finite number of
moves that can be performed.
◦ A self-driving car is an example of a continuous environment.
7. Known vs Unknown

◦ Known and unknown are not actually features of an environment but describe the
agent's state of knowledge for performing actions.
◦ In a known environment, the results of all actions are known to the agent, while in
an unknown environment, the agent needs to learn how it works in order to perform
an action.
◦ It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.
8. Accessible vs Inaccessible

◦ If an agent can obtain complete and accurate information about the state of the
environment, then it is called an accessible environment; otherwise it
is called inaccessible.
◦ An empty room whose state can be defined by its temperature is an example of an
accessible environment.
◦ Information about an event on Earth is an example of an inaccessible environment.

Control Strategies:
Control strategies in AI refer to the methods and policies employed to guide the search
process through the state space. Different control strategies dictate how the search
algorithm explores potential solutions and decides which states to visit next. Two
primary control strategies are:

Breadth-First Search (BFS):

Explores the state space level by level, considering all successors of a node before
moving on to the next level. BFS guarantees the shortest path to the goal if the path
costs are non-decreasing.
Depth-First Search (DFS):
Explores the state space by going as deep as possible along one branch before
backtracking. DFS may not guarantee the optimal solution, and it can get stuck in
infinite paths, but it is memory-efficient.
Blind Search Algorithms:
Blind search algorithms, also known as uninformed search algorithms, do not use any
domain-specific information about the problem. They explore the state space
without considering the actual values of states or the likelihood of reaching the goal.
Two common blind search algorithms are:

Depth-First Search (DFS):

Starts from the initial state and explores as far as possible along each branch before
backtracking. DFS uses a stack data structure to keep track of states.
Breadth-First Search (BFS):

Explores the state space level by level, considering all successors of a node before
moving on to the next level. BFS uses a queue data structure to manage the order
of state exploration.

Characteristics and Considerations:


Completeness:
BFS is complete; it will find a solution if one exists. DFS may not be complete if the state
space is infinite or if it gets stuck in an infinite branch.
Optimality:
BFS is optimal, as it always finds the shortest path to the goal. DFS may not be optimal,
since it can find a solution at a deeper level before finding a shorter one.
Space Complexity:
BFS tends to have higher space complexity, as it stores all nodes at each level. DFS, on
the other hand, has lower space complexity but may suffer from stack overflow if
the depth is too great.
Time Complexity:
The time complexity of both BFS and DFS is O(b^d), where b is the branching factor and
d is the depth of the shallowest goal state.
Applications:
Blind search algorithms are suitable for problems where little or no information about the
structure of the state space is available.
Blind search algorithms are foundational and serve as the basis for more advanced search
strategies. While they lack efficiency in certain scenarios, understanding these
algorithms is essential for building a strong foundation in AI problem-solving
techniques.

AO* Search
The AO* algorithm performs a best-first search. It divides a given
difficult problem into a smaller group of problems that are then resolved using the
AND-OR graph concept. AND-OR graphs are specialized graphs that are used for
problems that can be divided into smaller subproblems. The AND side of the graph
represents a set of tasks that must all be completed to achieve the main goal, while the
OR side of the graph represents alternative methods for accomplishing the same main goal.
AND-OR Graph
In the figure above, which is an example of a simple AND-OR graph, buying a car is
broken down into smaller problems or tasks that can be accomplished to achieve the
main goal. One option is to steal a car, and the other is to use your own money to
purchase a car; either will accomplish the main goal. The AND symbol indicates the AND
part of the graph, meaning that all subproblems joined by the AND must be resolved
before the preceding node or problem can be finished.
The AO* algorithm is a knowledge-based search strategy in which the start state and the
target state are already known, and the best path is identified using heuristics. This
informed search technique considerably reduces the algorithm's time complexity. The AO*
algorithm is far more effective for searching AND-OR trees than the A* algorithm.
Working of AO* algorithm:
The evaluation function in AO* looks like this:
f(n) = g(n) + h(n)
f(n) = Actual cost + Estimated cost
here,
f(n) = the estimated total cost of the cheapest solution through node n.
g(n) = the actual cost from the initial node to the current node.
h(n) = the estimated cost from the current node to the goal state.
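As a rough illustration of how f(n) = g(n) + h(n) is propagated back up an AND-OR graph, the sketch below computes the best cost at the root. The graph, edge costs, and heuristic values are invented for the example, and real AO* implementations additionally mark solved nodes and revise costs incrementally rather than recursing from scratch:

```python
# Each node maps to a list of "connectors": (children, edge_cost).
# A one-child connector is an OR option; a multi-child connector is an
# AND arc, meaning all of its children must be solved.
GRAPH = {
    "A": [(["B"], 1), (["C", "D"], 1)],  # OR option via B, or AND arc {C, D}
    "B": [(["E"], 1)],
    "C": [(["F"], 1)],
    "D": [(["G"], 1)],
}
HEURISTIC = {"E": 7, "F": 0, "G": 0}     # h(n) for unexpanded leaf nodes

def ao_cost(node, graph=GRAPH, h=HEURISTIC):
    """Return (best f-cost, chosen children) for a node, where f = g + h."""
    if node not in graph:                 # unexpanded leaf: cost is just h(n)
        return h[node], []
    best_cost, best_children = float("inf"), []
    for children, edge_cost in graph[node]:
        # For an AND arc, every child must be solved, so costs are summed:
        cost = sum(edge_cost + ao_cost(c, graph, h)[0] for c in children)
        if cost < best_cost:
            best_cost, best_children = cost, children
    return best_cost, best_children

print(ao_cost("A"))  # (4, ['C', 'D']): the AND arc is cheaper than going via B
```

Here the OR option through B costs 1 + (1 + 7) = 9, while the AND arc {C, D} costs (1 + 1) + (1 + 1) = 4, so AO* would commit to the AND arc and never expand the B branch further.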

Difference between the A* algorithm and the AO* algorithm


• The A* and AO* algorithms both work on the best-first search strategy.
• Both are informed searches and work with given heuristic values.
• A* always gives the optimal solution, but AO* does not guarantee an optimal
solution.
• Once AO* finds a solution, it does not explore all possible paths, whereas A* explores
all paths.
• When compared to the A* algorithm, the AO* algorithm uses less memory.
• Unlike the A* algorithm, the AO* algorithm cannot go into an endless loop.

History of AI

Artificial Intelligence is not a new term or a new technology for researchers; it is much
older than you might imagine. There are even myths of mechanical men in ancient Greek and
Egyptian mythology. The following are some milestones in the history of AI, tracing the
journey from AI's origins to present-day developments.

Maturation of Artificial Intelligence (1943-1952)

◦ Year 1943: The first work that is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
◦ Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian learning.
◦ Year 1950: Alan Turing, an English mathematician, pioneered machine
learning in 1950. Turing published "Computing Machinery and
Intelligence," in which he proposed a test, now called the Turing test, that checks a
machine's ability to exhibit intelligent behavior equivalent to human intelligence.
The birth of Artificial Intelligence (1952-1956)

◦ Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program," which was named the "Logic Theorist". This program
proved 38 of 52 mathematical theorems and found new and more elegant proofs for
some theorems.
◦ Year 1956: The term "Artificial Intelligence" was first adopted by the American
computer scientist John McCarthy at the Dartmouth Conference. For the first time,
AI was coined as an academic field.
At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were
being invented, and enthusiasm for AI was very high.

The golden years - early enthusiasm (1956-1974)

◦ Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966,
which was named ELIZA.
◦ Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in
Japan.
The first AI winter (1974-1980)

◦ The period between 1974 and 1980 was the first AI winter. "AI winter" refers
to a time period in which computer scientists dealt with a severe shortage of
government funding for AI research.
◦ During AI winters, public interest in artificial intelligence decreased.
A boom of AI (1980-1987)

◦ Year 1980: After the AI winter, AI came back with "expert systems". Expert
systems were programs designed to emulate the decision-making ability of a human
expert.
◦ Also in 1980, the first national conference of the American Association for
Artificial Intelligence (AAAI) was held at Stanford University.
The second AI winter (1987-1993)

◦ The period between 1987 and 1993 was the second AI winter.
◦ Investors and governments again stopped funding AI research due to high costs
and inefficient results. Even expert systems such as XCON proved very costly to
maintain.
The emergence of intelligent agents (1993-2011)

◦ Year 1997: In 1997, IBM's Deep Blue beat the world chess champion, Garry
Kasparov, and became the first computer to beat a reigning world chess champion.
◦ Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
◦ Year 2006: AI had entered the business world by 2006. Companies like
Facebook, Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)

◦ Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had
to answer complex questions as well as riddles. Watson proved that it could
understand natural language and solve tricky questions quickly.
◦ Year 2012: Google launched the Android app feature "Google Now", which was
able to provide predictive information to the user.
◦ Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on
the famous "Turing test."
◦ Year 2018: IBM's "Project Debater" debated complex topics with two master
debaters and performed extremely well.
◦ Also in 2018, Google demonstrated an AI program, "Duplex", a virtual assistant
that booked a hairdresser appointment over the phone, and the person on the other
end did not notice she was talking to a machine.
Now AI has developed to a remarkable level. Concepts such as deep learning, big data, and
data science are booming. Companies like Google, Facebook, IBM, and Amazon are now
working with AI and creating amazing devices. The future of Artificial Intelligence is
inspiring and will bring high intelligence.
