
MIT Art Design and Technology University

MIT School of Computing, Pune

21BTCS003- Artificial Intelligence & Machine Learning

Class - T.Y. (SEM-II), CORE


Unit – I Introduction to Artificial Intelligence

Prof. Abhishek Das

AY 2023-2024 SEM-VI
Unit I - Syllabus
Unit I – Introduction to AI (09 hours)

• AI problems, foundations of AI and history of AI; intelligent agents: Agents and Environments,

• The concept of rationality, the nature of environments, structure of agents, problem-solving agents,

• Problem formulation. Searching - Searching for solutions, uninformed search strategies:

• Breadth First Search,

• Depth First Search.

• Search with partial information (Heuristic search): Hill Climbing, A*, AO* algorithms,

• Problem reduction, Game Playing - Adversarial search, Games,

• Mini-max algorithm, optimal decisions in multiplayer games, problems in game playing,

• Alpha-Beta pruning, evaluation functions.




INTRODUCTION TO AI

• In 2004, John McCarthy defined artificial intelligence (AI) as the science and engineering of making intelligent machines, especially intelligent computer programs.
• However, well before this definition was coined, the birth of AI was marked by Alan Turing's seminal work, Computing Machinery and Intelligence, published in 1950.
• Alan Turing, also known as the father of computer science, raised questions like "Can machines think?" in this paper.
• In the same paper, he proposed the Turing Test, in which a human interrogator tries to distinguish between a computer's and a human's text responses.
INTRODUCTION TO AI

Ques> What is AI?


Ans> The science and engineering of making intelligent machines.
or
Artificial Intelligence is the science of getting machines to think and
make decisions like humans.
INTRODUCTION TO AI

• From a layman's view: artificial intelligence (AI) simply means the intelligence demonstrated by machines that helps them mimic the actions of humans. AI simulates natural intelligence in machines that are programmed to learn from experience, adjust to new inputs, and perform human-like tasks.
• From a researcher's view: AI is a set of algorithms that generates results without being explicitly instructed to do so, thereby making machines capable of thinking and acting rationally and humanely.
INTRODUCTION TO AI

• Def: Artificial Intelligence (AI) is a multidisciplinary field of computer


science and engineering that focuses on the creation, development,
and deployment of algorithms, computational models, and systems
that demonstrate behaviors associated with human intelligence.
• AI aims to enable machines to perform tasks and solve problems that
traditionally require human cognitive abilities.
• This includes learning from data, reasoning, problem-solving,
perception, understanding natural language, and adapting to new or
changing environments.
INTRODUCTION TO AI
• Artificial Intelligence (AI) is like teaching
computers to be super smart.
• It's making them understand, learn, and do
tasks without someone telling them every
single step.
• Imagine if your pet robot could learn how to
play a new game all by itself, just by watching
you play it once. That's a bit like what AI does.
• It helps computers learn and figure things out
on their own, a bit like how you learn new
things every day.
• AI helps computers do tasks that normally
need human brains, like recognizing your
voice, playing games, or helping with
problems.
INTRODUCTION TO AI

• AI techniques encompass various subfields and methods such as


machine learning, which includes supervised learning, unsupervised
learning, and reinforcement learning. Other areas of AI include
natural language processing (NLP), computer vision, robotics, expert
systems, and neural networks, among others.
• The goal of AI is to create intelligent systems that can perform tasks
autonomously, accurately, and efficiently, potentially leading to
advancements in numerous domains and industries, impacting
society in diverse ways.
WHY AI?
DEMAND FOR AI
ARTIFICIAL INTELLIGENCE (AI): SUMMARY

1. AI is a technique that enables machines


to mimic human behavior.
2. It is the theory and development of
computer systems able to perform tasks
normally requiring human intelligence,
such as visual perception, speech
recognition, decision-making and
translation between languages.
ARTIFICIAL INTELLIGENCE (AI): EXAMPLES

1. AI is accomplished by studying how the human brain thinks, learns, decides, and works while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.
2. AI has made it possible for machines to learn from experience and grow to perform human-like tasks.
3. Flashy examples:
1. Self-driving cars
2. Chess-playing computers (based on Deep Learning)
3. Language translation (Natural Language Processing)
AREAS CONTRIBUTED BY AI
IMPACT OF AI ON HUMAN LIFE
AI AGENTS

AI systems are designed to:


• Learn/Perception: They can improve their performance over time by
analyzing and learning from patterns present in data.
• In other words, this refers to the agent's ability to gather information
from its environment using sensors.
• Sensors can range from simple devices like cameras or microphones
to more complex systems capable of interpreting data from various
sources.
AI AGENTS

AI systems are designed to:


• Reasoning/Processing: AI systems can use rules and logic to make
decisions, draw conclusions, or solve problems.
• Once an agent receives information from its environment, it
processes this information using algorithms, logic, or learning
methods.
• It might use predefined rules, statistical models, machine learning, or
other techniques to make decisions or solve problems.
AI AGENTS

AI systems are designed to:


• Adapt/Action: They can adapt their behavior or responses based on
new information, changing circumstances, or feedback received.
• After processing the information, the agent takes action based on its
observations and reasoning.
• These actions are executed through actuators, which could be
physical mechanisms (like robotic arms or motors) or digital processes
that interact with the environment.
AI ENVIRONMENTS

• The environment refers to the surroundings or context in which the


intelligent agent operates.
• It can be anything from the physical world (such as a robot navigating
a room) to virtual spaces (such as a computer program interacting
with a database).
• Environments can be simple or complex, static or dynamic, and can
contain various elements that an agent must interact with to achieve
its objectives.
AI ENVIRONMENTS

• Intelligent agents are designed to adapt and make decisions


autonomously based on the information they receive from the
environment.
• These agents can range from simple systems like automated
thermostats that adjust temperature based on room conditions to
complex AI systems like self-driving cars or chatbots that interact with
humans.
• The interaction between an intelligent agent and its environment is
crucial for understanding how AI systems perceive, reason, and act,
allowing them to perform tasks, solve problems, and achieve goals
effectively.
RULES OF AI AGENTS
ADVANTAGES OF AI

• Performs well on tasks that use detailed data
• Takes less time on tasks that need to process huge volumes of data
• Generates consistent and accurate results
• Can be used 24 x 7
• Optimizes tasks by better utilizing resources
• Automates complex processes
• Minimizes downtime by predicting maintenance needs
• Enables companies to produce new products with better quality and speed
DISADVANTAGES OF AI

• Involves high cost
• Technical expertise is required to develop and use AI applications
• Lack of trained professionals
• Incomplete or inaccurate data may lead to disastrous results
• Lacks the capability to generalize across tasks
BRIEF HISTORY OF AI
DETAILED HISTORY OF AI
TYPES OF AI

AI Type 1: Based on capabilities:


• Narrow AI (Weak AI): This type of AI is designed to perform specific
tasks or functions. It operates within a limited context and does not
possess general human intelligence.
• For example, Siri and Alexa are weak AI systems.
• These systems are already trained with appropriate responses to
classify things accordingly.
• The application of narrow AI has resulted in significant societal benefits. Google Search, image recognition software, self-driving cars, and IBM's Watson are some examples of such systems.
TYPES OF AI: NARROW AI
TYPES OF AI

AI Type 1: Based on capabilities:


• General AI: General AI is a type of intelligence that could perform any intellectual task as efficiently as a human.
• The idea behind general AI is to make a system that could be smarter and think like a human on its own.
TYPES OF AI: GENERAL AI

• Commonly known as strong AI, Artificial General Intelligence involves machines that possess the ability to perform any intellectual task that a human being can.
• Today's machines don't possess such human-like abilities; they have strong processing units that can perform high-level computations, but they're not yet capable of thinking and reasoning like a human.
TYPES OF AI

AI Type 1: Based on capabilities:


• Strong / Super AI: Also known as artificial super intelligence (ASI), this is a full attempt to match and exceed the human brain. Super AI represents a stage of system intelligence where machines have the potential to exceed human cognitive abilities, outperforming humans in various tasks. This level of intelligence would emerge from the development of general AI.
• Key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
• Super AI remains a theoretical idea within Artificial Intelligence.
• Bringing such systems to life represents a groundbreaking task with world-changing potential.
TYPES OF AI: SUPER AI

• Artificial Super Intelligence is a term


referring to the time when the
capability of computers will surpass
humans.
• ASI is presently seen as a hypothetical
situation as depicted in movies and
science fiction books, where machines
have taken over the world. However,
tech masterminds like Elon Musk
believe that ASI will take over the
world by 2040!
TYPES OF AI

Weak AI:
• Supports a narrow range of applications with a limited scope
• Good at one specific task
• Uses supervised and unsupervised learning to process data
• Examples: Siri, Alexa

Strong AI:
• Supports a wider range of applications with a wide scope
• Has incredible human-level intelligence
• Uses clustering and association techniques to process data
• Example: advanced robotics


TYPES OF AI

Artificial Intelligence type-2: Based on functionality:


• Reactive Machines: Purely reactive machines are the most basic type of Artificial Intelligence.
• Such AI systems do not store memories or past experiences for future actions.
• These machines focus only on the current scenario and react to it with the best possible action.
• IBM's Deep Blue system is an example of a reactive machine.
• Google's AlphaGo is also an example of a reactive machine.
TYPES OF AI

Artificial Intelligence type-2: Based on functionality:


• Limited Memory: Limited memory machines can store past
experiences or some data for a short period of time.
• These machines can use stored data for a limited time period only.
• Self-driving cars are among the best examples of limited memory systems. These cars can store the recent speed of nearby cars, the distances to other cars, speed limits, and other information needed to navigate the road.
TYPES OF AI

Artificial Intelligence type-2: Based on functionality:


• Theory of Mind: Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
• This type of AI machine has not yet been developed, but researchers are making significant efforts and improvements toward developing such machines.
TYPES OF AI

Artificial Intelligence type-2: Based on functionality:


• Self-Awareness: Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
• These machines will be smarter than the human mind.
• Self-aware AI does not yet exist in reality; it is a hypothetical concept.
LANGUAGES FOR AI


INTELLIGENT AGENTS

An intelligent agent is a software entity that enables artificial intelligence to take action. An intelligent agent senses the environment, uses actuators to initiate actions, and conducts operations in the place of users.

An Intelligent Agent (IA) is an entity that makes decisions.

WHAT IS AN INTELLIGENT AGENT (IA)?
1. This agent has some level of autonomy that allows it to perform specific,
predictable, and repetitive tasks for users or applications.
2. It’s also termed as ‘intelligent’ because of its ability to learn during the
process of performing tasks.
3. The two main functions of intelligent agents include perception and
action. Perception is done through sensors while actions are initiated
through actuators.
4. Intelligent agents consist of sub-agents that form a hierarchical structure.
Lower-level tasks are performed by these sub-agents.
5. The higher-level agents and lower-level agents form a complete system
that can solve difficult problems through intelligent behaviors or
responses.
CONCEPT OF RATIONALITY

Understanding the concept of rationality, the nature of environments,


the structure of agents, and problem-solving agents is crucial in the
realm of Artificial Intelligence (AI) as these concepts form the basis for
designing intelligent systems.
• Rationality in AI refers to the ability of an agent to make decisions
that optimize achieving its goals or objectives. A rational agent aims
to choose the best action based on its observations and knowledge to
maximize expected outcomes or utility. However, it's important to
note that rationality doesn't guarantee success but rather focuses on
making the best decisions given available information and resources.
NATURE OF ENVIRONMENTS

• Environments in AI can vary widely in terms of their characteristics.


They can be simple or complex, static or dynamic, deterministic or
stochastic.
• Environments encompass everything that an agent interacts with, and
understanding their nature helps in designing appropriate agents.
• Some environments might be fully observable, where agents have
complete information, while others might be partially observable or
contain uncertainty and unpredictability.
STRUCTURE OF INTELLIGENT AGENTS

Architecture: The machinery or devices, consisting of actuators and sensors, on which the intelligent agent executes. Examples include a personal computer, a car, or a camera.

Agent Function: The function by which actions are mapped from a certain percept sequence. A percept sequence is the history of everything the intelligent agent has perceived.

Agent Program: An implementation or execution of the agent function. The agent function is produced through the program's execution on the physical architecture.
HOW DO INTELLIGENT AGENTS WORK?

Sensors, actuators, and effectors are the three main components of intelligent agents.

1. Sensor: A device that detects environmental changes and sends the information to other devices. An agent observes the environment through sensors. E.g., camera, GPS, radar.

2. Actuators: Machine components that convert energy into motion. Actuators are responsible for moving and controlling a system. E.g., electric motors, gears, rails.

3. Effectors: Devices that affect the environment. E.g., wheels and display screens.
PEAS PROPERTIES

1. PEAS is a type of model on which an AI agent works.
2. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model.
3. It is made up of four words:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
PEAS FOR SELF-DRIVING CARS

1. Performance: Safety, time, legal driving, comfort
2. Environment: Roads, other vehicles, road signs, pedestrians
3. Actuators: Steering, accelerator,
brake, signal, horn
4. Sensors: Camera, GPS,
speedometer, odometer,
accelerometer, sonar.
TYPES OF AI AGENTS: SIMPLE REFLEX AGENT

There are five types of AI agents: Simple Reflex Agent, Model-based Reflex Agent, Goal-based Agent, Utility-based Agent, and Learning Agent.

Note: Reflex means immediate action. For example, sneezing is a reflex action.
Fully observable environment: the agent has complete knowledge of the environment. For example: a chess game.

1. Simplest agents
2. Take decisions on the basis of the current percepts
3. Ignore the rest of the percept history
4. Work on condition-action (if-then) rules
5. The environment should be fully observable
6. Problem: limited intelligence
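The condition-action behavior above can be sketched in a few lines of Python. This is a minimal illustrative sketch using a hypothetical two-location vacuum world; the percept format and rule set are assumptions for the example, not part of any standard library.

```python
# Hypothetical vacuum-world agent: a simple reflex agent driven purely
# by condition-action (if-then) rules on the CURRENT percept.
# Percept format (location, status) is an assumption for illustration.

def simple_reflex_agent(percept):
    """Map the current percept directly to an action, ignoring history."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":               # rule 1: dirty square -> clean it
        return "Suck"
    elif location == "A":               # rule 2: clean and at A -> go right
        return "Right"
    else:                               # rule 3: clean and at B -> go left
        return "Left"

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_agent(("B", "Clean")))  # -> Left
```

Note that the agent keeps no state at all: the same percept always produces the same action, which is exactly why such agents need a fully observable environment.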
TYPES OF AI AGENTS: MODEL-BASED REFLEX AGENT

Partially observable environment: the agent does not have complete knowledge of the environment. For example: traffic on a road.

1. Can work in a partially observable environment and track the situation.
2. Maintains a representation of the current state based on percept history.

Model-based means the agent keeps a knowledge base, or history, of the world.
TYPES OF AI AGENTS: GOAL-BASED AGENT

1. An expansion of model-based reflex agents that adds "goal" information.
2. Works by searching and planning.

Example: planning to travel from Pune to Ooty.


TYPES OF AI AGENTS: UTILITY-BASED AGENT

1. Adds an extra component of utility measurement.
2. A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
3. It is useful when there are multiple possible alternatives and the agent has to choose the best action among them.
4. The utility function maps each state to a real number to check how efficiently each action achieves the goals.
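The idea of "utility function maps each state to a real number" can be sketched directly. The states, the weighting inside `utility`, and the two candidate actions below are all made-up assumptions for illustration:

```python
# Illustrative sketch of a utility-based choice: score each candidate
# resulting state with a utility function and pick the best action.

def utility(state):
    """Map a state to a real number; higher means more desirable."""
    # assumed trade-off: reward progress toward the goal, penalize cost
    return 10 * state["progress"] - state["cost"]

def choose_action(actions, result):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result[a]))

result = {                       # hypothetical action -> resulting state
    "highway":  {"progress": 0.9, "cost": 5.0},   # utility = 4.0
    "backroad": {"progress": 0.7, "cost": 1.0},   # utility = 6.0
}
print(choose_action(list(result), result))  # -> backroad
```

A goal-based agent would treat both routes as acceptable since both make progress; the utility function is what lets the agent prefer one over the other.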
TYPES OF AI AGENTS: LEARNING AGENT

Other agents take decisions on the basis of fixed knowledge; a learning agent starts with some basic knowledge and improves with experience.

1. It has learning capabilities.
2. Four conceptual components:
1. Learning element: responsible for making improvements by learning from the environment.
2. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
3. Performance element: responsible for selecting external actions.
4. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
3. Learning agents are able to learn, analyze performance, and look for new ways to improve that performance.
GOALS OF INTELLIGENT AGENTS

1. High performance
2. Optimal result
3. Rational action
PROBLEM SOLVING AGENTS IN AI

• Problem-solving agents are a fundamental type of agent in AI that


aim to achieve specific goals by searching for sequences of actions
that lead to desired outcomes. These agents operate by perceiving
the current state of the environment, analyzing it to determine the
actions needed to transition to a desirable state, and executing a
series of actions to reach that state.
• Problem-solving agents utilize various algorithms and techniques such
as search algorithms, heuristics, and planning methods to find
optimal or satisfactory solutions to problems within their
environment.
PROBLEM SOLVING AGENTS IN AI

• In other words, problem-solving agents in AI are like smart detectives.


Imagine you're solving a mystery. You look at clues, think about what
they mean, and then decide what to do next. Similarly, problem-
solving agents in AI look at the situation they're in, figure out what
actions they can take, and then choose the best actions to reach their
goals.
• They work step by step, evaluating the environment, making
decisions based on what they know, and taking actions to solve
problems. Just like how a detective solves a case by gathering clues
and making smart choices, problem-solving agents in AI solve
problems by analyzing information and picking the best possible
actions to achieve their objectives.


STATE SPACE SEARCH

Used in problem solving: problems are modelled as a state space.

State space search is a process used in AI in which successive configurations or states of an instance are considered with the intention of finding a GOAL state with the desired property.

The set of all possible states for a given problem is known as the state space representation of the problem.

STATE SPACE SEARCH [CONT..]

1. State space representation consists of defining an INITIAL state (where to start) and a GOAL state (the destination); we then follow a certain sequence of steps (called states).
2. Defined separately:
1. State: An AI problem can be represented as a well-formed set of possible states - the initial state, the goal state, and other possible states.
2. Space: The exhaustive set of all possible states of an AI problem is called its space.
3. Search: A technique that takes the problem from the initial state to the goal state by applying a certain set of valid rules while moving through the space of all possible states.
3. For the search process, we need the following:
1. Initial state
2. Set of valid rules
3. Goal state
PROBLEM FORMULATION IN AI

• Problem formulation is the initial step in solving problems using


Artificial Intelligence. It involves defining the problem in a way that an
AI agent can understand and work on finding a solution. This process
generally consists of:
• Initial State: Describing the current situation or state the agent is in.
• Actions: Listing the possible actions the agent can take.
• Transition Model: Specifying how actions change the state.
• Goal Test: Defining conditions to check if a state is a solution or goal.
• Path Cost: Assigning costs or weights to actions or paths.
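The five components above can be written down concretely. This is a minimal sketch for a toy route-finding problem; the city graph and the distances in it are made-up assumptions (the Pune-to-Ooty goal echoes the goal-based agent example earlier):

```python
# A toy problem formulation: initial state, actions, transition model,
# goal test, and path cost. Cities and distances are illustrative only.

graph = {"Pune":   {"Mumbai": 150, "Satara": 110},
         "Mumbai": {"Pune": 150},
         "Satara": {"Ooty": 900},
         "Ooty":   {}}

initial_state = "Pune"

def actions(state):              # possible actions from a state
    return list(graph[state])

def result(state, action):       # transition model: the action here
    return action                # IS the destination city

def goal_test(state):            # is this state a solution?
    return state == "Ooty"

def path_cost(state, action):    # cost of one step
    return graph[state][action]

print(actions("Pune"))           # -> ['Mumbai', 'Satara']
print(goal_test("Ooty"))         # -> True
```

Any of the search algorithms discussed next (BFS, DFS, and later the informed methods) operate on exactly this interface: they only need `actions`, `result`, `goal_test`, and `path_cost`.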
SEARCHING FOR SOLUTIONS IN AI

• Searching in AI involves finding a sequence of actions or steps that


lead from an initial state to a goal state.
• The process navigates through different states by applying actions
until it reaches a state that satisfies the goal test.
REPRESENTATION

A search problem is represented as the tuple (S, A, Action(s), Result(s,a), Cost(s,a)):
• S: set of all possible states
• A: set of all possible actions
• Action(s): function giving the actions possible in the current state s
• Result(s,a): function giving the state reached by performing action 'a' in state 's'
• Cost(s,a): cost estimate of applying action 'a' in state 's'
STATE SPACE SEARCH
(EXAMPLE: EIGHT TILE PUZZLE)

[Figure: an eight tile puzzle with a start state and a goal state; the possible actions move the blank tile Up, Down, Left, or Right, and the solution is the sequence of moves that transforms the start state into the goal state.]
PROPERTIES OF SEARCH ALGORITHMS

Completeness: An algorithm is complete if it is guaranteed to return a solution whenever one exists.

Optimality: An algorithm is optimal if it is guaranteed to find the best solution (lowest path cost).

Time Complexity: The time taken for an algorithm to complete its task.

Space Complexity: The maximum storage space required at any point during the search.
TYPES OF SEARCH ALGORITHMS

Search algorithms divide into two families:
• Uninformed/Blind search: Breadth First Search, Depth First Search, Uniform Cost Search, Depth Limited Search, Bidirectional Search
• Informed search: Best First Search, A* Search
TYPES OF SEARCH ALGORITHMS [CONT..]

Uninformed/Blind Search (time consuming; more time and space complexity):
• Does not use any domain knowledge, such as closeness to the goal
• Operates in a brute-force way; the search applies without any information
• Also called blind search
• Examines each option until it achieves the goal

Informed Search (quick solution; less time and space complexity):
• Uses domain knowledge
• Finds a solution more efficiently than an uninformed search
• Also called heuristic search

A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
STATE SPACE SEARCH: POPULAR EXAMPLES

1. Eight tile puzzle
2. Water jug problem
3. Traveling salesman problem
UNINFORMED SEARCH STRATEGIES

• Uninformed search strategies are algorithms used by AI agents to explore the problem space without any information about the goal or the states beyond their immediate reach.
• Two common uninformed search strategies are Breadth-First Search (BFS) and Depth-First Search (DFS).
UNINFORMED SEARCH STRATEGIES

• Breadth-First Search (BFS):


• BFS explores all the nodes at a given depth in the search tree before moving
on to the nodes at the next level.
• It starts at the initial state and systematically explores all possible successors
before moving to the next level.
• This strategy guarantees the shortest path to the goal if the edges have
uniform costs, but it might require a lot of memory as it keeps track of all
nodes at a given depth.
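The BFS behavior described above can be sketched compactly. The small explicit graph is an illustrative assumption; a real problem would generate successors on the fly from a transition model:

```python
# Minimal BFS sketch: a FIFO queue of paths guarantees we explore all
# nodes at one depth before the next, so the first path found to the
# goal is the shallowest one.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])          # queue of partial paths (FIFO)
    visited = {start}
    while frontier:
        path = frontier.popleft()        # shallowest unexpanded path
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph[node]:          # expand every successor
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # no solution exists

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(g, "A", "D"))  # -> ['A', 'B', 'D']
```

The `visited` set is what keeps the memory cost bounded, but it still grows with every node at the current depth, which is the memory drawback noted above.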
UNINFORMED SEARCH STRATEGIES

• Depth-First Search (DFS):


• DFS explores as far as possible along each branch before backtracking.
• It starts at the initial state and goes as deeply as possible along each branch
before backtracking.
• DFS can use less memory compared to BFS but doesn't guarantee the shortest
path to the goal and can get stuck in infinite branches or deep paths.
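DFS can be sketched just as briefly, here recursively. Again the explicit graph is an illustrative assumption; the `visited` set is what prevents the infinite-loop problem mentioned above on graphs with cycles:

```python
# Minimal recursive DFS sketch: follow one branch as deep as possible,
# then backtrack. Memory is proportional to the current path, but the
# path returned is not guaranteed to be the shortest one.

def dfs(graph, start, goal, path=None, visited=None):
    path = (path or []) + [start]
    visited = visited if visited is not None else set()
    visited.add(start)
    if start == goal:
        return path
    for nxt in graph[start]:             # go deep before trying siblings
        if nxt not in visited:
            found = dfs(graph, nxt, goal, path, visited)
            if found:
                return found             # propagate success upward
    return None                          # dead end: backtrack

g = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(dfs(g, "A", "D"))  # -> ['A', 'B', 'D']
```

Comparing the two sketches makes the trade-off concrete: BFS keeps a whole frontier of paths in memory, while DFS keeps only one path plus the visited set.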


A heuristic always guarantees to find a good solution, but does not guarantee to find the optimal solution.

HEURISTIC SEARCH

• Heuristic search and heuristic functions are used in informed search.

• Heuristic search is a simple searching technique that tries to optimize a problem using a heuristic function.

• Optimization means that we try to solve the problem in the minimum number of steps and at minimum cost.
HEURISTIC FUNCTION

• A heuristic function H(n) gives an estimate of the cost of getting from node 'n' to the goal state.
• It helps in selecting the optimal node for expansion.
HEURISTIC SEARCH: TYPES

Heuristic functions are of two types:

Admissible:
• Never overestimates the cost of reaching the goal.
• H(n) <= H*(n) for every node n.
• Here H(n) is the heuristic (estimated) cost and H*(n) is the actual optimal cost.

Non-Admissible:
• May overestimate the cost of reaching the goal.
• H(n) > H*(n) for some node n.
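Admissibility can be checked mechanically on a small graph by computing the true optimal cost H*(n) for every node (e.g. with Dijkstra's algorithm) and comparing it against H(n). The graph and the candidate heuristic values below are illustrative assumptions:

```python
# Sketch: verify H(n) <= H*(n) for all n on a tiny weighted graph.
import heapq

def true_cost_to_goal(graph, node, goal):
    """Dijkstra from node: returns H*(n), the actual optimal cost."""
    dist, pq = {node: 0}, [(0, node)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")                  # goal unreachable from node

graph = {"A": {"B": 1, "G": 10}, "B": {"G": 4}, "G": {}}
h = {"A": 5, "B": 3, "G": 0}             # candidate heuristic (assumed)

admissible = all(h[n] <= true_cost_to_goal(graph, n, "G") for n in graph)
print(admissible)  # -> True  (5 <= 5, 3 <= 4, 0 <= 0)
```

If, say, h["B"] were 6, the check would fail because the true cost from B to G is only 4, making the heuristic non-admissible.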
HEURISTIC SEARCH: EXAMPLE (ADMISSIBLE)

[Figure: a search tree with start state A and successors B, C, D, each at actual cost 1 from A; from B the path continues through E (cost 3) and onward to the goal state G via edges of cost 5 and 2. Heuristic costs: H(B)=3, H(C)=4, H(D)=5, H(E)=2, H(F)=3.]

Total cost = heuristic cost + actual cost: F(n) = H(n) + G(n).
For A's successors: B = 3+1 = 4; C = 4+1 = 5; D = 5+1 = 6.
The actual cost from A to G is 1+3+5+2 = 11.
Taking n = B: H(n) = 3 and H*(n) = 11, so H(n) <= H*(n) (3 <= 11) and the heuristic is ADMISSIBLE here.
HEURISTIC SEARCH: EXAMPLE (NON-ADMISSIBLE)

[Figure: the same search tree, now considering the successor D with H(D)=5; the actual cost from A to G via D is 1+3 = 4.]

Taking n = D: H(n) = 5 and H*(n) = 4, so H(n) > H*(n) (5 > 4) and the heuristic is NON-ADMISSIBLE here.
GENERATE AND TEST SEARCH
Also known as the "British Museum Search Algorithm".

• Generate and test search is a heuristic search technique based on depth-first search with backtracking, which guarantees to find a solution, if one exists, when done systematically.

• The generate-and-test strategy is the simplest of all the approaches. It consists of the following steps:

• Algorithm: Generate-and-Test
1. Generate a possible solution.
2. Test whether the generated solution is acceptable.
3. If an acceptable solution has been found, quit. Otherwise, return to step 1.
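The three steps above can be sketched as a single loop over a candidate generator. The 3-digit combination and the acceptability test below are made-up assumptions for illustration:

```python
# Minimal generate-and-test sketch: systematically generate candidates
# and return the first one that passes the acceptability test.
from itertools import product

def generate_and_test(candidates, acceptable):
    for candidate in candidates:     # step 1: generate a possible solution
        if acceptable(candidate):    # step 2: test it
            return candidate         # step 3: acceptable -> quit
    return None                      # space exhausted, no solution

# hypothetical test: find a 3-digit combination whose digits sum to 6
# and are in non-decreasing order
is_valid = lambda c: sum(c) == 6 and list(c) == sorted(c)
found = generate_and_test(product(range(10), repeat=3), is_valid)
print(found)  # -> (0, 0, 6)
```

A good generator is what makes this loop practical: here `itertools.product` enumerates the space completely and without duplicates, two of the generator properties listed next.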
PROPERTIES OF A GOOD GENERATOR

Complete: Good generators should generate all the possible solutions and cover all the possible states. This way, we can guarantee that the algorithm converges to the correct solution at some point in time.

Non-Redundant: Good generators should not yield a duplicate solution at any point in time, as duplicates reduce the efficiency of the algorithm and increase the search time.

Informed: Good generators have knowledge about the search space, which they maintain in the form of an array of knowledge. This can be used to estimate how far the agent is from the goal, calculate the path cost, and even find a way to reach the goal.
EXAMPLE: PROBLEM STATEMENT

Let us take a simple example to understand the importance of a good generator. Consider a PIN made up of three 2-digit numbers.
EXAMPLE: SOLUTION

• The total number of candidate solutions in this case is 100 x 100 x 100, which is 1 million (10 lakh).
• If we do not make use of any informed search technique, the search takes exponential time.
• Say we generate 5 candidates every minute; that is 5 x 60 = 300 candidates per hour, out of 1 million in total.
• Consider a brute-force technique such as linear search, whose average case examines N/2 candidates. On average, then, the number of candidates to be generated is approximately 5 lakh (500,000).
• At this rate, even working 24 hours a day, you would need about 10 weeks to complete the task.
EXAMPLE: SOLUTION

• Now consider using a heuristic: with the domain knowledge that every number is a prime between 0 and 99, the possible number of solutions is 25*25*25 = 15,625 (there are 25 primes below 100).
• Generating 5 solutions every minute and working 24 hrs. a day, you can now find the solution in less than 2 days, whereas it took 10 weeks in the case of uninformed search.

Conclusion
We can conclude from here that if we can find a good heuristic, then the time complexity can be reduced considerably. But in the worst case, time and space complexity are still exponential. It all depends on the generator: the better the generator, the lower the time complexity.
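The arithmetic above can be checked with a tiny generate-and-test loop. This is a sketch only: the target PIN and the prime-only heuristic are illustrative assumptions, not part of the slides.

```python
from itertools import product

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Hypothetical target PIN, chosen only so the run terminates; the
# generate-and-test loop does not know it and only calls test().
TARGET = (17, 83, 29)

def generate_and_test(candidates, test):
    tried = 0
    for candidate in candidates:      # 1. generate a possible solution
        tried += 1
        if test(candidate):           # 2. test: is it the actual solution?
            return candidate, tried   # 3. acceptable solution found: quit
    return None, tried                # otherwise the generator is exhausted

# Uninformed generator: 100*100*100 = 1,000,000 candidates.
uninformed = product(range(100), repeat=3)

# Informed generator: only the 25 primes below 100 -> 25**3 = 15,625 candidates.
primes = [n for n in range(100) if is_prime(n)]
informed = product(primes, repeat=3)

solution, tried = generate_and_test(informed, lambda c: c == TARGET)
```

The informed generator examines at most 15,625 candidates instead of 1 million, which is the whole point of a good generator.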
HILL CLIMBING ALGORITHM

• Hill climbing is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.

Salient Features:
• Local search algorithm
• Greedy approach
• No backtracking
HILL CLIMBING ALGORITHM
The state-space diagram is a graphical representation of the set of states (inputs) our search algorithm can reach vs. the value of our objective function (the function we intend to maximize/minimize).

• The X-axis denotes the state space, i.e. the states or configurations our algorithm may reach.
• The Y-axis denotes the values of the objective function corresponding to a particular state.
• The best solution will be the state where the objective function has its maximum value, the global maximum.
HILL CLIMBING ALGORITHM

Local maxima: A state which is better than its neighboring states; however, there exists a state which is better than it (the global maximum). This state is better because here the value of the objective function is higher than that of its neighbors.

Global maxima: The best possible state in the state-space diagram, because at this state the objective function has its highest value.

Plateau/flat local maxima: A flat region of the state space where neighboring states have the same value.

Ridge: A region which is higher than its neighbors but itself has a slope. It is a special kind of local maximum.

Current state: The region of the state-space diagram where we are currently present during the search. (Denoted by the highlighted circle in the given image.)
HILL CLIMBING ALGORITHM: TYPES

Types of hill climbing:
1. Simple hill climbing
2. Steepest-ascent hill climbing
3. Stochastic hill climbing
SIMPLE HILL CLIMBING: INTRODUCTION

• It is a local search algorithm, i.e., it does not have any knowledge of the whole global problem.
• It keeps moving as long as it finds better solutions; as soon as it stops finding a better move, the process stops there.
• It cannot backtrack.
SIMPLE HILL CLIMBING

Simple hill climbing is the simplest way to implement a hill-climbing algorithm. It evaluates only one neighbor node state at a time and selects the first one which improves on the current cost, setting it as the current state. It checks only one successor state at a time, and if that successor is better than the current state, it moves there; otherwise it stays in the same state.

This algorithm has the following features:
1. Less time consuming
2. Less optimal solution
3. The solution is not guaranteed
SIMPLE HILL CLIMBING: ALGORITHM

Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.

Step 2: Loop until a solution is found or there is no new operator left to apply.

Step 3: Select and apply an operator to the current state.

Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to step 2.

Step 5: Exit.
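The steps above can be sketched in Python. This is a minimal illustration on a made-up one-dimensional objective, not a general implementation:

```python
def simple_hill_climbing(initial, neighbors, value, is_goal, max_steps=1000):
    """Move to the FIRST neighbor that improves on the current state."""
    current = initial
    if is_goal(current):                      # Step 1: goal test on start
        return current
    for _ in range(max_steps):                # Step 2: loop
        improved = False
        for candidate in neighbors(current):  # Step 3: apply an operator
            if is_goal(candidate):            # Step 4: goal test
                return candidate
            if value(candidate) > value(current):
                current = candidate           # first better neighbor wins
                improved = True
                break
        if not improved:                      # no operator improves things:
            return current                    # Step 5: exit (may be a local max)
    return current

# Toy objective: maximize f(x) = -(x - 7)**2 over the integers.
f = lambda x: -(x - 7) ** 2
result = simple_hill_climbing(0, lambda x: [x - 1, x + 1], f, lambda x: f(x) == 0)
```

Starting from 0, the search moves one step right per iteration and stops at the peak x = 7.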
SIMPLE HILL CLIMBING: ALGORITHM
STEEPEST-ASCENT HILL CLIMBING

The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. It examines all the neighboring nodes of the current state and selects the one neighbor node which is closest to the goal state. This algorithm consumes more time, as it searches multiple neighbors.
STEEPEST-ASCENT HILL CLIMBING:
ALGORITHM

Step 1: Evaluate the initial state. If it is the goal state, then return success and stop; else make the current state your initial state.

Step 2: Loop until a solution is found or the current state does not change.
1. Let S be a state such that any successor of the current state will be better than S.
2. For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to S.
• If it is better than S, then set the new state as S.
• If S is better than the current state, then set the current state to S.
Step 3: Exit.
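A minimal sketch of steepest ascent on a made-up one-dimensional objective; the variable `best` plays the role of S in the steps above:

```python
def steepest_ascent(initial, neighbors, value, is_goal):
    """Examine ALL neighbors and move to the single best one."""
    current = initial
    while True:
        if is_goal(current):
            return current
        best = current                      # S: best successor found so far
        for candidate in neighbors(current):
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(best):
                best = candidate
        if best == current:                 # no successor beats the current state
            return current                  # local maximum (or goal)
        current = best

# Toy objective: maximize f(x) = -(x - 4)**2 over the integers.
f = lambda x: -(x - 4) ** 2
peak = steepest_ascent(0, lambda x: [x - 1, x + 1], f, lambda x: f(x) == 0)
```

Unlike simple hill climbing, every neighbor is evaluated before a move is made, which costs more evaluations per step.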
STOCHASTIC HILL CLIMBING

Stochastic hill climbing does not examine all of its neighbours before moving. Rather, it selects one neighbour node at random and decides whether to make it the current state or examine another state.
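A minimal sketch, with a fixed random seed so the run is repeatable; the objective function is a made-up illustration:

```python
import random

def stochastic_hill_climbing(initial, neighbors, value, steps=1000, seed=0):
    """Pick ONE random neighbor; move only if it improves the current state."""
    rng = random.Random(seed)          # fixed seed so the run is repeatable
    current = initial
    for _ in range(steps):
        candidate = rng.choice(neighbors(current))
        if value(candidate) > value(current):
            current = candidate        # accept the improving move
        # otherwise stay put and examine another random neighbor next turn
    return current

# Toy objective: maximize f(x) = -(x - 10)**2 over the integers.
f = lambda x: -(x - 10) ** 2
result = stochastic_hill_climbing(0, lambda x: [x - 1, x + 1], f)
```

Each step inspects a single random neighbor instead of all of them, trading per-step cost for a longer, randomized climb.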
PROBLEMS IN DIFFERENT REGIONS IN HILL CLIMBING

Local maximum: At a local maximum, all neighboring states have values worse than the current state. Since hill climbing uses a greedy approach, it will not move to a worse state and terminates itself. The process ends even though a better solution may exist.

To overcome the local maximum problem: utilize the backtracking technique. Maintain a list of visited states. If the search reaches an undesirable state, it can backtrack to a previous configuration and explore a new path.
PROBLEMS IN DIFFERENT REGIONS IN HILL CLIMBING

Plateau: On a plateau, all neighbors have the same value. Hence, it is not possible to select the best direction.

To overcome plateaus: make a big jump. Randomly select a state far away from the current state. Chances are that we will land in a non-plateau region.
PROBLEMS IN DIFFERENT REGIONS IN HILL CLIMBING

Ridge: Any point on a ridge can look like a peak because movement in all possible directions is downward. Hence, the algorithm stops when it reaches such a state.

To overcome ridges: apply two or more rules before testing, i.e. move in several directions at once.
APPLICATIONS OF HILL CLIMBING

• The hill climbing technique can be used to solve many problems where the current state allows for an accurate evaluation function, such as network flow, the Travelling Salesman problem, the 8-Queens problem, integrated circuit design, etc.
• Hill climbing is used in inductive learning methods too. The technique is also used in robotics for coordinating multiple robots in a team.
A* SEARCH ALGORITHM

• Moving from one place to another is a task we do almost every day.
• Finding the shortest path by ourselves is difficult.
• We now have algorithms that help us find that shortest route.
• A* is one of the most popular algorithms out there.
A* ALGORITHM: INTRODUCTION

A* is an advanced best-first search algorithm that explores cheaper paths first rather than longer paths. A* is optimal as well as complete.

Optimal: A* is sure to find the least-cost path from the source to the destination.

Complete: A* will find all the paths that are available to us from the source to the destination.

So that makes A* the best algorithm, right? YES. But A* is slow, and the space it requires is large, as it saves all the possible paths that are available to us. This gives other, faster algorithms an upper hand over A*, but it is nevertheless one of the best algorithms out there.
WHY CHOOSE A* OVER OTHER FASTER ALGORITHMS?

Dijkstra's algorithm and the A* algorithm, for comparison:

Dijkstra's algorithm: It finds all the paths that can be taken without finding or knowing which is the most optimal one for the problem that we are facing. This leads to unoptimized working of the algorithm and unnecessary computations.

A* algorithm: It finds the most optimal path that it can take from the source in reaching the destination. It knows which is the best path that can be taken from its current state and how it needs to reach its destination.
IN-AND-OUT OF A* ALGORITHM

A* is used to find the most optimal path from a source to a destination. It optimizes the path by calculating the least distance from one node to the other.

Need to remember: F = G + H

F: This parameter is used to find the least cost from one node to the next node. It is responsible for finding the most optimal path from our source to the destination.

G: The cost of moving from one node to the other. This parameter changes for every movement from one node to another along the path.

H: The heuristic/estimated cost from the current node to the destination node. This is not the actual cost but an estimate of the cost from the node to the destination.
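The F = G + H bookkeeping can be sketched as follows. The graph, step costs and heuristic values are made-up illustrations; the heuristic underestimates the true costs, so the result here is optimal:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (next_node, step_cost); h(n) is the heuristic."""
    open_heap = [(h(start), 0, start, [start])]   # entries are (F, G, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)  # cheapest F first
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost                      # G: cost from start to nxt
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                # F = G + H decides the expansion order
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical weighted graph with an admissible (underestimating) heuristic.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
path, cost = a_star("S", "G", lambda n: graph[n], lambda n: h[n])
```

On this graph the least-cost route is S -> A -> B -> G with total cost 1 + 2 + 1 = 4, and A* finds exactly that.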
A* ALGORITHM: EXAMPLE
A* ALGORITHM: EXAMPLE
A* ALGORITHM: ALGORITHM
A* ALGORITHM

• h(n) <= h*(n) → underestimation (the heuristic never overestimates the true cost h*(n); A* remains optimal)
• h(n) >= h*(n) → overestimation (optimality is no longer guaranteed)
BEST FIRST SEARCH ALGORITHM

• Uses an evaluation function to decide which adjacent node is most promising, and then explores it
• Belongs to the category of heuristic or informed search
• A priority queue is used to store the cost of nodes
• A combination of BFS and DFS
• Good, but may not be optimal
BEST FIRST SEARCH ALGORITHM: ALGORITHM
BEST FIRST SEARCH ALGORITHM: EXAMPLE
BEST FIRST SEARCH ALGORITHM: EXAMPLE
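A minimal sketch of greedy best-first search using a priority queue keyed on the heuristic value alone; the graph and heuristic values are made-up illustrations:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first: always expand the node with the LOWEST h value."""
    open_heap = [(h(start), start, [start])]   # priority queue keyed on h only
    visited = set()
    while open_heap:
        _, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors(node):
            if nxt not in visited:
                heapq.heappush(open_heap, (h(nxt), nxt, path + [nxt]))
    return None

# Hypothetical graph and heuristic values, for illustration only.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}
route = best_first_search("S", "G", lambda n: graph[n], lambda n: h[n])
```

Because the queue is ordered by h alone (no path cost G), the search is fast but, unlike A*, it may return a non-optimal path on other graphs.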
AO* SEARCH ALGORITHM

1. The AO* algorithm is a best-first search algorithm.

2. The AO* algorithm uses the concept of AND-OR graphs to decompose any given complex problem into a smaller set of problems which are then solved.

3. AND-OR graphs are specialized graphs used in problems that can be broken down into sub-problems, where:
• the AND side of the graph represents a set of tasks that all need to be done to achieve the main goal, whereas
• the OR side of the graph represents alternative ways of performing a task to achieve the same main goal.
AO* SEARCH ALGORITHM

[Figure: example AND-OR graphs built from the goals "Want to pass in exam" and "Mobile", with candidate actions "Work hard", "Purchase mobile", "Do cheating", "Do hard work", "Gift" and "Pass exam" joined by AND and OR arcs.]
AO* SEARCH ALGORITHM: EXAMPLE 01
AO* SEARCH ALGORITHM: EXAMPLE 02
AO* SEARCH ALGORITHM: EXAMPLE 03

AO* SEARCH ALGORITHM: EXAMPLE 04

AO* SEARCH ALGORITHM: EXAMPLE 04 (continued)
AO* SEARCH ALGORITHM: ALGORITHM
A* VS AO* ALGORITHM

1. The A* algorithm provides the optimal solution, whereas AO* stops when it finds any solution.

2. The AO* algorithm requires less memory compared to the A* algorithm.

3. The AO* algorithm doesn't go into an infinite loop, whereas the A* algorithm can go into an infinite loop.
Unit I – Outline
• AI problems, foundation of AI and history of AI intelligent agents: Agents and Environments,

• The concept of rationality, the nature of environments, structure of agents, problem solving agents,

• Problem formulation. Searching- Searching for solutions, uninformed search strategies –

• Breadth first search,

• Depth first Search.

• Search with partial information (Heuristic search) Hill climbing, A*, AO* Algorithms,

• Problem reduction, Game Playing-Adversarial search, Games,

• Mini-max algorithm, optimal decisions in multiplayer games, Problem in Game playing,

• Alpha-Beta pruning, Evaluation functions.


INTRODUCTION TO GAME PLAYING

Types of Game Playing Algorithms:
• Minimax algorithm
• Alpha-Beta Pruning
INTRODUCTION TO GAME PLAYING
ADVERSARIAL SEARCH: INTRODUCTION
Adversarial search is a search where we examine the problem which arises when we try to plan ahead in the world while other agents are planning against us.

There might be situations where more than one agent is searching for the solution in the same search space; this situation usually occurs in game playing.

An environment with more than one agent is termed a multi-agent environment, in which each agent is an opponent of the other agents and plays against them.

Searches in which two or more players with conflicting goals are trying to explore the same search space for the solution are called adversarial searches, often known as Games.
TECHNIQUES REQUIRED TO GET THE OPTIMAL SOLUTION

There is always a need to choose algorithms which provide the best optimal solution in a limited time. So, we use two techniques which can fulfill our requirements:

Pruning: A technique which allows ignoring the unwanted portions of the search tree which make no difference to its final result.

Heuristic evaluation function: It allows us to approximate the cost value at each level of the search tree, before reaching the goal node.
TYPES OF GAMES

Perfect information: A game with perfect information is one in which agents can look at the complete board. Agents have all the information about the game, and they can see each other's moves as well.

Imperfect information: If in a game agents do not have all the information about the game and are not aware of what's going on, such games are called games with imperfect information.

Deterministic games: Deterministic games are those which follow a strict pattern and set of rules, and there is no randomness associated with them.

Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These are random, and each action response is not fixed. Such games are also called stochastic games.
FORMALIZATION OF THE PROBLEM

Initial state: It specifies how the game is set up at the start.

Player(s): It specifies which player has the move in a state.

Action(s): It returns the set of legal moves in a state.

Result(s, a): The transition model, which specifies the result of a move a in state s.

Terminal-Test(s): The terminal test is true if the game is over, else it is false. States where the game ends are called terminal states.

Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called the payoff function.
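The six elements above can be sketched for a toy game. The game itself (a pile of 3 stones; players alternately take 1 or 2; whoever takes the last stone wins) is a hypothetical example, not from the slides:

```python
# A state is (pile_size, player_to_move).
INITIAL_STATE = (3, "MAX")           # Initial state: how the game is set up

def player(s):                       # Player(s): who has the move in state s
    return s[1]

def actions(s):                      # Action(s): legal moves in state s
    return [n for n in (1, 2) if n <= s[0]]

def result(s, a):                    # Result(s, a): the transition model
    pile, who = s
    return (pile - a, "MIN" if who == "MAX" else "MAX")

def terminal_test(s):                # Terminal-Test(s): is the game over?
    return s[0] == 0

def utility(s, p):                   # Utility(s, p): payoff in terminal state s
    # The player who just moved took the last stone and wins.
    mover = "MIN" if s[1] == "MAX" else "MAX"
    return 1 if mover == p else -1
```

Any adversarial search algorithm (minimax, alpha-beta) can be written purely in terms of these six functions.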
TYPES OF ADVERSARIAL SEARCH

There are two types of adversarial search:
1. Minimax Algorithm
2. Alpha-beta Pruning
MINI-MAX ALGORITHM
The mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.

The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

In this algorithm two players play the game; one is called MAX and the other is called MIN.

Both players fight it out, each trying to get the maximum benefit while leaving the opponent with the minimum benefit.

The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
MINI-MAX ALGORITHM: EXAMPLE

There are two players: one is called the Maximizer and the other is called the Minimizer.

The Maximizer will try to get the maximum possible score, and the Minimizer will try to get the minimum possible score.

This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.

At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE

Step 1: The algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram, let A be the initial state of the tree. Suppose the maximizer takes the first turn, which has worst-case initial value = -∞, and the minimizer takes the next turn, which has worst-case initial value = +∞.
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE

Step 2: First we find the utility values for the Maximizer. Its initial value is -∞, so we compare each value in a terminal state with the initial value of the Maximizer and determine the higher node values. It will find the maximum among them all.

For node D: max(-1, -∞) => max(-1, 4) = 4
For node E: max(2, -∞) => max(2, 6) = 6
For node F: max(-3, -∞) => max(-3, -5) = -3
For node G: max(0, -∞) => max(0, 7) = 7
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE

Step 3: In the next step, it's the minimizer's turn, so it will compare all node values with +∞ and find the 3rd-layer node values.

For node B = min(4, 6) = 4
For node C = min(-3, 7) = -3
STEPS INVOLVED IN SOLVING THE TWO-PLAYER GAME TREE

Step 4: Now it's the Maximizer's turn, and it will again choose the maximum of all node values, finding the maximum value for the root node.

For node A: max(4, -3) = 4
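The four steps can be reproduced with a short recursive sketch over the same tree (leaf values grouped under nodes D, E, F and G):

```python
def minimax(node, maximizing):
    """Recursive minimax over a nested-list game tree (leaves are ints)."""
    if isinstance(node, int):          # terminal state: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Game tree from the worked example: A is MAX, B/C are MIN, D..G are MAX.
tree = [[[-1, 4], [2, 6]],    # B = min(max(-1,4), max(2,6)) = min(4, 6) = 4
        [[-3, -5], [0, 7]]]   # C = min(max(-3,-5), max(0,7)) = min(-3, 7) = -3
root_value = minimax(tree, True)      # root A = max(4, -3) = 4
```

The recursion is exactly the DFS described above: it descends to the leaves, then backs the values up, alternating max and min at each level.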
PROPERTIES OF MINI-MAX ALGORITHM

Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.

Optimal: The min-max algorithm is optimal if both opponents play optimally.

The main drawback of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These games have a huge branching factor, and the player has lots of choices to decide between.
ALPHA BETA PRUNING
Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

There is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm.

The two parameters can be defined as:

Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes all the nodes which do not really affect the final decision yet make the algorithm slow. By pruning these nodes, it makes the algorithm fast.
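A minimal sketch of minimax with alpha-beta cut-offs, run on the same example tree as the minimax walkthrough; it returns the same root value while skipping pruned subtrees:

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta cut-offs; returns the same root value."""
    if isinstance(node, int):          # terminal state: return its utility
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)   # alpha: best choice for MAX so far
            if beta <= alpha:          # beta cut-off: MIN will never allow this
                break                  # prune the remaining children
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)         # beta: best choice for MIN so far
        if beta <= alpha:              # alpha cut-off: MAX will never allow this
            break
        # prune the remaining children
    return best

# Same tree as the minimax example; pruning does not change the answer.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
root = alphabeta(tree, float("-inf"), float("inf"), True)
```

With alpha starting at -∞ and beta at +∞, the algorithm prunes any subtree that cannot influence the decision, yet the root value stays identical to plain minimax.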
ALPHA BETA PRUNING: EXAMPLE
ALPHA BETA PRUNING: ADVANTAGES
ALPHA BETA PRUNING: DISADVANTAGES
