Artificial Intelligence 7: Local Search
Previously: Goal-Based Agents
A goal-based agent:
● Has knowledge about the environment state AND the agent’s goal.
● Combines the goal and the environment model to choose actions.
Previously: Goal-Based Agents
● Uninformed search:
○ Search trees are HUGE (high time complexity, high space complexity)
○ In the worst-case scenario, not much better than brute-force search
Disadvantages:
● Local search algorithms are generally incomplete
● They sometimes find sub-optimal solutions
Local Search Problem Formulation
For goal-based / utility-based agents:
● States: How the world is represented from the agent’s point of view
● Initial State: The state the agent starts in
● Actions: A set of actions the agent can take (based on the state the agent is in)
● Transition Model: A description of how actions taken by the agent change the state
● Goal Test: Determines whether a state is a goal state
● Path Cost: The cost that the agent suffers for doing an action in a given state
● Objective Function (Utility Function): A function that calculates the value of a state
○ Represents “how happy” the agent is with the particular state
○ The goal of the agent is to find the state that maximizes (or minimizes) the value of that objective function
Example: 8-Queen Problem
● State: A vector of length 8 containing integers in the range 1-8
○ The number in the ith position represents the row number of the queen placed in the ith column
● Initial state: [1 1 1 1 1 1 1 1]
● Actions: Change one number in the vector
● Transition model: Obvious
● Objective function: The number of queen pairs that attack each other (to be minimized)
○ Example: f([1 1 1 1 1 1 1 1]) = 28
○ Example: f([8 2 4 1 7 5 3 6]) = 0
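The objective function above can be sketched directly. This is a minimal illustration (the function name `conflicts` is my own choice, not from the slides): two queens attack each other when they share a row or a diagonal, and the diagonal case is exactly when the column distance equals the row distance.

```python
import itertools

def conflicts(state):
    """Count queen pairs that attack each other.

    state[i] is the row (1-8) of the queen in column i.
    Queens attack when they share a row, or sit on a diagonal
    (column distance equals row distance).
    """
    pairs = 0
    for i, j in itertools.combinations(range(len(state)), 2):
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i:
            pairs += 1
    return pairs

print(conflicts([1] * 8))                    # -> 28, all queens share row 1
print(conflicts([8, 2, 4, 1, 7, 5, 3, 6]))   # -> 0, a valid solution
```

With all queens in row 1 every one of the C(8,2) = 28 pairs attacks, matching the slide's f([1 1 1 1 1 1 1 1]) = 28.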
Hill Climbing Search
● Hill Climbing Search is simply a loop that continually moves in the direction
of increasing value (“uphill”):
1. Start with a random state
2. Move to a successor state that has the best value based on the objective function
3. Repeat step 2 until there is no neighbor with a better objective function value
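The three steps above can be sketched as a short loop on the 8-queen problem. This is a minimal sketch, not a definitive implementation; the helper names (`conflicts`, `neighbors`, `hill_climb`) are my own, and the objective is the attacking-pairs count from the previous slide, minimized here.

```python
import itertools
import random

def conflicts(state):
    """Attacking queen pairs (objective function, to be minimized)."""
    return sum(
        1
        for i, j in itertools.combinations(range(len(state)), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )

def neighbors(state):
    """All states reachable by changing one number in the vector."""
    for col in range(len(state)):
        for row in range(1, len(state) + 1):
            if row != state[col]:
                yield state[:col] + [row] + state[col + 1:]

def hill_climb(state):
    """Repeatedly move to the best successor until none improves."""
    while True:
        best = min(neighbors(state), key=conflicts)
        if conflicts(best) >= conflicts(state):
            return state  # no better neighbor: a (possibly local) optimum
        state = best

random.seed(0)
start = [random.randint(1, 8) for _ in range(8)]  # step 1: random state
result = hill_climb(start)
print(start, conflicts(start), "->", result, conflicts(result))
```

Note that the loop can stop with a nonzero conflict count: that is exactly the local-optimum behavior discussed on the following slides.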
State-Space Landscape
Hill Climbing Search
● Hill climbing search is the most basic local search algorithm
● The idea is simply to iteratively find the locally best improvement, without
looking ahead beyond the immediate neighbors of the current state
○ It is also called greedy local search
○ Analogous to trying to find the top of Mount Everest in a thick fog while suffering from amnesia
● No search tree!
○ The data structure involves only the current state, its neighbors, and their objective function
values
Example: 8-Queen Problem
● An example of a random state with an
objective function value h = 17
● Numbers in other squares represent
objective function values h of each
successor state
○ The best successor state has a value h = 12
Example: 8-Queen Problem
● After 5 iterations, this state is reached
○ h=1
● However, no further successors have better
objective function value
● Hill Climbing terminates!
● This state is called a local optimum (as
opposed to global optimum)
Local Optima
● A local optimum is a peak
state that is better than any of
its successor states, but is
worse than the global
optimum
● In general, local search
algorithms are prone to local
optima
Hill Climbing Analysis
● Starting from randomly generated 8-queen states, hill climbing gets stuck in a
local optimum of the 8-queen problem 86% of the time
○ It finds the global optimum only 14% of the time
● However, it takes only 4 steps on average to find the global optimum
○ 3 steps on average when it gets stuck in a local optimum
○ Recall that the 8-queen problem has 8^8 ≈ 17 million states
Hill Climbing Variants
● The basic hill climbing algorithm terminates when there is no better
successor
● Sometimes, it is beneficial to also allow sideways moves instead of only
uphill moves
○ A sideways move is a hill climbing iteration where the chosen successor has exactly the
same objective function value as the current state
● However, beware of infinite loops among same-value states!
○ Can be solved by putting a limit on consecutive sideways moves
● Allowing 100 consecutive sideways moves on 8-queen problem:
○ The percentage of problem instances solved by hill climbing improves from 14% to 94%
○ The average becomes 21 steps for successes, 64 for failures
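A sideways-move budget can be sketched as a small change to the basic loop: accept equal-value successors, but count consecutive ones and stop when a limit is reached. This is a minimal sketch under my own naming (`conflicts`, `neighbors`, `hill_climb_sideways`); ties among best successors are broken randomly here to reduce ping-ponging between the same two states, a design choice not specified in the slides.

```python
import itertools
import random

def conflicts(state):
    """Attacking queen pairs; 0 means solved."""
    return sum(
        1
        for i, j in itertools.combinations(range(len(state)), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )

def neighbors(state):
    """All states reachable by changing one number in the vector."""
    return [
        state[:c] + [r] + state[c + 1:]
        for c in range(len(state))
        for r in range(1, len(state) + 1)
        if r != state[c]
    ]

def hill_climb_sideways(state, max_sideways=100):
    """Hill climbing that tolerates up to max_sideways consecutive
    equal-value ("sideways") moves before giving up."""
    sideways = 0
    while True:
        ns = neighbors(state)
        best_val = min(conflicts(n) for n in ns)
        cur = conflicts(state)
        if best_val > cur:
            return state              # strict local optimum
        if best_val == cur:
            sideways += 1
            if sideways > max_sideways:
                return state          # sideways budget exhausted
        else:
            sideways = 0              # real improvement: reset the budget
        state = random.choice([n for n in ns if conflicts(n) == best_val])

random.seed(1)
start = [random.randint(1, 8) for _ in range(8)]
result = hill_climb_sideways(start)
print(result, conflicts(result))
```

Resetting the counter after every strict improvement is what makes the limit apply to *consecutive* sideways moves, as the slide specifies.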
Hill Climbing Variants
● Stochastic Hill Climbing:
○ Chooses the next state randomly among all uphill moves instead of choosing the best one
○ Selection probabilities are typically proportional to the improvement in the objective value
○ Usually converges more slowly, but sometimes finds better solutions
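Stochastic hill climbing can be sketched as follows. This is a minimal illustration with my own helper names (`conflicts`, `uphill_moves`, `stochastic_hill_climb`); weighting each uphill move by how much it improves the objective is one common choice of selection probability, assumed here rather than prescribed by the slides.

```python
import itertools
import random

def conflicts(state):
    """Attacking queen pairs (objective, to be minimized)."""
    return sum(
        1
        for i, j in itertools.combinations(range(len(state)), 2)
        if state[i] == state[j] or abs(state[i] - state[j]) == j - i
    )

def uphill_moves(state):
    """Neighbors that strictly improve (lower) the objective."""
    cur = conflicts(state)
    moves = []
    for c in range(len(state)):
        for r in range(1, len(state) + 1):
            if r != state[c]:
                n = state[:c] + [r] + state[c + 1:]
                if conflicts(n) < cur:
                    moves.append(n)
    return moves

def stochastic_hill_climb(state):
    """Pick a random uphill move, weighted by the size of its improvement."""
    while True:
        moves = uphill_moves(state)
        if not moves:
            return state  # no uphill move: local or global optimum
        cur = conflicts(state)
        weights = [cur - conflicts(m) for m in moves]  # improvement sizes
        state = random.choices(moves, weights=weights)[0]

random.seed(2)
start = [random.randint(1, 8) for _ in range(8)]
result = stochastic_hill_climb(start)
print(result, conflicts(result))
```

Because only strictly improving moves are candidates, each iteration lowers the conflict count, so the loop always terminates, just not necessarily at the global optimum.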