
Introduction to Artificial Intelligence
Lecture 05
Amna Iftikhar, Spring 2021
Today’s Agenda
• Hill climbing
• Simulated annealing
• Local beam search
• Genetic algorithms



Local Search Algorithms
• Local search algorithms keep a single "current" state and move to neighboring states in order to try to improve it.
• The solution path need not be maintained; hence, the search is "local".
• Local search is suitable for problems in which the path is not important; the goal state itself is the solution.
• It is an optimization search.
Classical Search vs. Local Search
Classical search Local Search
• systematic exploration of • In many optimization problems,
search space. the path to the goal is irrelevant;
the goal state itself is the solution.
• Keeps one or more paths
• State space = set of "complete"
in memory.
configurations.
• Records which • Find configuration satisfying
alternatives have been constraints, Find best state
explored at each point according to some objective
along the path. function h(s). e.g., n-queens, h(s)=
number of attacking queens. In
• The path to the goal is a
such cases, we can use Local
solution to the problem.
Search Algorithms.

Amna Iftikhar Spring ' 2021 4


Example: n-queens
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
• In the 8-queens problem, what matters is the final configuration of queens, not the order in which they are added.


Local Search: Key Idea
Key idea:
1. Select a (random) initial state (generate an initial guess).
2. Make local modifications to improve the current state (evaluate the current state and move to other states).
3. Repeat step 2 until a goal state is found (or until out of time).


Local Search: Key Idea
Advantages:
• Uses very little memory, usually a constant amount.
• Can often find reasonable solutions in large or infinite (e.g., continuous) state spaces, for which systematic search is unsuitable.

Drawback:
• Local search can get stuck in local maxima and fail to find the optimal solution.


State-Space Landscape
• A state-space landscape is a graph of states associated with their costs:
  – Location (defined by the state)
  – Elevation (defined by the value of the heuristic cost function or objective function)
• If elevation = cost, the aim is to find the lowest valley (a global minimum).
• If elevation = objective function, the aim is to find the highest peak (a global maximum).
• A complete local search algorithm always finds a goal if one exists.
• An optimal algorithm always finds a global minimum/maximum.


Local and Global Optima
• Global optimum
  – A solution that is better than all other solutions, or no worse than any other solution.
• Local optimum
  – A solution that is better than nearby solutions.
  – A local optimum is not necessarily a global one.
Global / Local (max/min)

• A local max/min is over a small area.
  – For instance, if a point is lower than the nearest points on its left and right, then it is a local min.
• There can be many local maxima and minima over an entire graph.
• A global max/min is the highest/lowest point on the entire graph.
• There can be only ONE global max and/or min on a graph, and there may not be one at all.
Global / Local (max/min)



Hill-Climbing Search



Hill-Climbing Search
• Main idea: keep a single current node and move to a neighboring state to improve it.
• Uses a loop that continually moves in the direction of increasing value (uphill):
  – Choose the best successor; choose randomly if there is more than one.
  – Terminate when a peak is reached where no neighbor has a higher value.
• It is also called greedy local search, or steepest ascent/descent.


Hill-Climbing Search
• “Like climbing Everest in thick fog with amnesia”
• Only record the state and its evaluation instead of
maintaining a search tree

function HILL-CLIMBING(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node

  current ← MAKE-NODE(INITIAL-STATE[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if VALUE[neighbor] ≤ VALUE[current] then return STATE[current]
    current ← neighbor
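The pseudocode above translates directly into a short program. The sketch below is a minimal Python version; the `successors` and `value` functions are problem-specific assumptions, illustrated here with a toy one-dimensional landscape rather than a real problem:

```python
def hill_climbing(initial, successors, value):
    """Greedy local search: move to the best neighbor, stopping when
    no neighbor has a higher value (a peak, possibly only local)."""
    current = initial
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):  # peak (or plateau) reached
            return current
        current = best

# Toy landscape: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(0, lambda x: [x - 1, x + 1], f))  # climbs to 3
```

Because the loop only ever moves uphill, starting it in the basin of a different peak would leave it stuck there, which is exactly the local-maximum problem discussed on the next slides.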



Hill-Climbing Search
[Figure: a state-space landscape showing the current state, a local maximum, the global maximum, flat plateaux, and a shoulder from which it is possible to make progress. Hill climbing attempts to find a better solution by incrementally changing a single element of the solution.]
Hill-Climbing Search
Local maxima: a local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.

Plateaux: a plateau is an area of the state-space landscape where the evaluation function is flat. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which it is possible to make progress.

Ridges: ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate (the search direction is not towards the top but towards the side).
Hill-Climbing Search Example
Our aim is to find a path from S to M. Associate a heuristic with every node: the straight-line distance from the path-terminating city to the goal city.

[Figure: a search tree rooted at S with heuristic values A = 9, B = 11; C = 7.5, D = 8.5, E = 8, F = 9, G = 9; H = 6, I = 5, J = 7, K = 6; L = 2, M = 0 (goal), N = 4, O = 4.]


Hill-Climbing Search Example

[Figure: the same search tree, repeated over four slides to step through the hill-climbing expansion from S toward the goal M.]


Hill-Climbing Search Example
Local Maximum

From A, find a solution where H and K are final states.

[Figure: a search tree rooted at A (h = 10) with heuristic values B = 10, F = 7, J = 8, D = 4, C = 2, E = 5, G = 3, I = 6, and goal states H = 0 and K = 0.]




Hill-Climbing Search Example
Local Minimum

[Figure: the same search tree; G is a local minimum.]

Hill climbing is sometimes called greedy local search because it grabs a good neighbor state without thinking ahead about where to go next.
Hill-Climbing Search Example
Local Maximum, Local Minimum



Alternative hill climbing
• Stochastic hill climbing
  – Chooses at random from among the uphill moves (neighbors); the probability of selection can vary with the steepness of the uphill move.
  – This usually converges more slowly than steepest ascent, but in some state landscapes it finds better solutions.
• First-choice hill climbing
  – Implements stochastic hill climbing by generating successors randomly until one is generated that is better than the current state.
  – This is a good strategy when a state has many (e.g., thousands of) successors.
• Random-restart hill climbing
  – Adopts the well-known adage (proverb), "If at first you don't succeed, try, try again." It conducts a series of hill-climbing searches from randomly generated initial states, stopping when a goal is found.
  – It is complete with probability approaching 1, for the trivial reason that it will eventually generate a goal state as the initial state. (It iteratively does hill climbing, each time with a random initial condition.)
Simulated Annealing Search



Annealing

What does the term annealing mean?

Annealing, in metallurgy and materials science, is a heat treatment wherein a material is altered, causing changes in its properties such as strength and hardness. It is a process that produces conditions by heating the material above its recrystallization temperature, maintaining a suitable temperature, and then cooling. Annealing is used to induce ductility, soften material, relieve internal stresses, refine the structure by making it homogeneous, and improve cold-working properties.


Simulated Annealing Search
• Main idea: escape local maxima by allowing some "bad" moves but gradually decreasing their frequency.
• Select a neighbor at random.
• If it is better than the current state, go there.
• Otherwise, go there with some probability.
• The probability goes down with time (similar to temperature cooling).
Simulated annealing search
• Initialize current to the starting state
• For i = 1 to ∞:
  – If T(i) = 0, return current
  – Let next = a random successor of current
  – Let ΔE = value(next) − value(current)
  – If ΔE > 0, then let current = next
  – Else let current = next with probability exp(ΔE / T(i))
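A minimal Python sketch of the loop above. The linear cooling schedule and the toy landscape are illustrative assumptions, since the slides leave T(i) and the problem unspecified:

```python
import itertools
import math
import random

def simulated_annealing(initial, random_successor, value, schedule):
    """Always accept improving moves; accept a worsening move with
    probability exp(delta_e / T(i)), which shrinks as T decays."""
    current = initial
    for i in itertools.count(1):
        t = schedule(i)
        if t <= 0:
            return current
        nxt = random_successor(current)
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / t):
            current = nxt

# Toy run: maximize f(x) = -(x - 3)^2 on the integers, cooling the
# temperature linearly from 2.0 down to 0 over 1000 steps.
f = lambda x: -(x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(0, step, f, lambda i: 2.0 - i * 0.002)
print(result)
```

Early on (large T) the walk wanders freely; near the end (T close to 0) the acceptance probability for downhill moves vanishes and the search behaves like hill climbing.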



Probability of selecting bad moves

Probability of acceptance, exp(ΔE/T), where ΔE < 0 for a worsening move:

Temperature   1-worse    2-worse    3-worse
10            0.9        0.82       0.74
1             0.37       0.14       0.05
0.25          0.018      0.0003     0.000006
0.1           0.00005    2×10⁻⁹     9×10⁻¹⁴
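The rows above follow directly from exp(ΔE/T); a quick check, printing each row to two significant figures:

```python
import math

def accept_prob(delta, t):
    # Probability of accepting a move that is worse by |delta|
    # (delta is negative) at temperature t.
    return math.exp(delta / t)

for t in (10, 1, 0.25, 0.1):
    print(t, ["%.2g" % accept_prob(-d, t) for d in (1, 2, 3)])
```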



Simulated annealing search
• One can prove: if the temperature decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching one.
• However:
  – This usually takes impractically long.
  – The more downhill steps you need to escape a local optimum, the less likely you are to make all of them in a row.


Local Beam Search



Local beam search
• Main idea: keep track of k states rather than just one.
• Start with k randomly generated states.
• At each iteration, all the successors of all k states are generated.
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
• Drawback: the k states tend to regroup very quickly in the same region → lack of diversity.
• Is this the same as running k greedy searches in parallel?

[Figure: greedy search vs. beam search]
Local beam search
• Instead of keeping only one node in memory:
  – Keep track of k states.
  – Start with k randomly generated states.
  – At each step, generate the successors of all k states.
  – If any one is a goal, the algorithm halts.
  – Otherwise, select the k best successors from the complete list and repeat.
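The steps above can be sketched as follows. The pooled-successor selection is what distinguishes this from k independent hill climbers; the integer-line landscape is an illustrative assumption:

```python
import heapq

def local_beam_search(starts, successors, value, is_goal, max_iters=100):
    """Keep the k best states; each round, pool the successors of all
    k states and keep the overall k best (not k per parent)."""
    beam = list(starts)
    k = len(beam)
    for s in beam:
        if is_goal(s):
            return s
    for _ in range(max_iters):
        pool = []
        for s in beam:
            for succ in successors(s):
                if is_goal(succ):
                    return succ
                pool.append(succ)
        if not pool:
            break
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)

# Toy run: reach x = 7 on the integer line from three start states.
result = local_beam_search(
    [0, 20, -5], lambda x: [x - 1, x + 1],
    value=lambda x: -abs(x - 7), is_goal=lambda x: x == 7)
print(result)  # 7
```

Note how the distant start state 20 is dropped from the beam after the first round: its successors lose out to those near 0, which is exactly the information-sharing (and the loss of diversity) described on the next slide.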



Local beam search
• It may look like running k searches in parallel rather than sequentially, but:
  – In random-restart search, each search runs independently of the others.
  – In local beam search, useful information is passed among the parallel search threads.
• Unfruitful searches are quickly abandoned, and resources move to the most promising ones.
• Drawbacks:
  – Lack of diversity among the k states.
  – The search can quickly become concentrated in a small region of the search space (a more expensive version of hill climbing).
• Alternative: stochastic beam search
  – Instead of choosing the best k successors, choose k successors at random, with the probability of choosing a given successor being an increasing function of its value.
  – Similar to natural selection.
Genetic Algorithm Search



Knapsack problem



Genetic Algorithms
• Formally introduced in the US in the 1970s by John Holland.
• GAs emulate ideas from genetics and natural selection and can search potentially large spaces.
• Before we can apply a genetic algorithm to a problem, we need to answer:
  – How is an individual represented?
  – What is the fitness function?
  – How are individuals selected?
  – How do individuals reproduce?


Genetic Algorithms: Representation of States (Solutions)

• Each state or individual is represented as a string over a finite alphabet. It is also called a chromosome, which contains genes.
• Example encoding: the solution 607 is represented as the binary-string chromosome 1001011111, whose bits are the genes.


Genetic Algorithms: Fitness Function
• Each state is rated by an evaluation function called the fitness function. The fitness function should return higher values for better states: if state X has a lower cost than state Y, then Fitness(X) should be greater than Fitness(Y), e.g., Fitness(x) = 1/Cost(x).

[Figure: a cost curve over states, with X at a lower cost than Y.]
GA Parent Selection - Roulette Wheel
• Sum the fitnesses of all the population members, TF.
• Generate a random number, m, between 0 and TF.
• Return the first population member whose fitness, added to the fitnesses of the preceding population members, is greater than or equal to m.

[Figure: roulette wheel selection]
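The three steps can be sketched directly. Passing m explicitly makes the worked example on the next slide reproducible; by default m is drawn at random, as in the procedure above:

```python
import random

def roulette_select(fitnesses, m=None):
    """Return the index of the first member whose cumulative fitness
    reaches the threshold m in [0, total fitness]."""
    total = sum(fitnesses)
    if m is None:
        m = random.uniform(0, total)
    cumulative = 0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if cumulative >= m:
            return i
    return len(fitnesses) - 1

# Population with fitnesses 1,2,3,1,3,5,1,2 (total 18):
# m = 7 selects Chromosome 4, m = 12 selects Chromosome 6 (1-based).
print(roulette_select([1, 2, 3, 1, 3, 5, 1, 2], m=7) + 1)   # 4
print(roulette_select([1, 2, 3, 1, 3, 5, 1, 2], m=12) + 1)  # 6
```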



Genetic Algorithms: Selection
How are individuals selected? Roulette wheel selection:

Chromosome:  1  2  3  4  5  6  7  8
Fitness:     1  2  3  1  3  5  1  2    (total = 18)

Rnd[0..18] = 7 → Chromosome 4;  Rnd[0..18] = 12 → Chromosome 6
Genetic Algorithms: Cross-Over and Mutation

How do individuals reproduce?

Crossover (recombination):

Parent1:   1010000000      Offspring1: 1011011111
Parent2:   1001011111      Offspring2: 1000000000

Single-point crossover at a randomly chosen point (here, after the third bit). With some high probability (the crossover rate), apply crossover to the parents; typical values are 0.8 to 0.95.
Mutation:

Original offspring           Mutated offspring
Offspring1: 1011011111   →   1011001111
Offspring2: 1010000000   →   1000000000

With some small probability (the mutation rate), flip each bit in the offspring; typical values are between 0.1 and 0.001.
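The two operators above, sketched in Python; fixing the crossover point after the third bit reproduces the parents/offspring shown on the crossover slide:

```python
import random

def crossover(p1, p2, point=None):
    """Single-point crossover of two equal-length bit strings."""
    if point is None:
        point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(bits, rate=0.01):
    """Flip each bit independently with probability `rate`."""
    return "".join(b if random.random() >= rate else "10"[int(b)]
                   for b in bits)

# Crossing the slide's parents after the third bit reproduces the
# offspring shown above.
o1, o2 = crossover("1010000000", "1001011111", point=3)
print(o1, o2)  # 1011011111 1000000000
```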
Genetic Algorithm Example

The MaxOne problem: maximize the number of 1s in a bit string.
Genetic Algorithms
Algorithm:
1. Initialize the population with p individuals at random.
2. For each individual h, compute its fitness.
3. While max fitness < threshold, create a new generation Ps.
4. Return the individual with the highest fitness.
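The four steps above, assembled into a runnable sketch for MaxOne. The population size, crossover/mutation rates, and generation cap are illustrative choices (not from the slides); selection, crossover, and mutation follow the earlier slides, with +1 added to the roulette weights as a small safeguard against an all-zero population:

```python
import random

LENGTH, POP, CX_RATE, MUT_RATE = 10, 20, 0.9, 0.01

def fitness(ind):
    return ind.count("1")  # MaxOne: the number of 1 bits

def select(pop):
    # Roulette-wheel selection; +1 keeps every weight positive.
    return random.choices(pop, weights=[fitness(i) + 1 for i in pop])[0]

def reproduce(p1, p2):
    if random.random() < CX_RATE:  # single-point crossover
        pt = random.randint(1, LENGTH - 1)
        p1, p2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
    flip = lambda s: "".join(
        b if random.random() >= MUT_RATE else "10"[int(b)] for b in s)
    return flip(p1), flip(p2)

def genetic_algorithm(threshold=LENGTH, max_gens=200):
    pop = ["".join(random.choice("01") for _ in range(LENGTH))
           for _ in range(POP)]
    for _ in range(max_gens):
        if max(fitness(i) for i in pop) >= threshold:
            break
        new_gen = []
        while len(new_gen) < POP:
            new_gen.extend(reproduce(select(pop), select(pop)))
        pop = new_gen[:POP]
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))
```

On a problem this small the all-ones string is usually found within a few dozen generations.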



Genetic algorithms

Crossover has the effect of "jumping" to a completely different new part of the search space (quite non-local).


Genetic Algorithm (cont.)

Initial Population → Fitness Fn → Selection → Crossover → Mutation

• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8×7/2 = 28).
• Probability of being selected for reproduction is proportional to fitness:
  – 24/(24+23+20+11) = 31%
  – 23/(24+23+20+11) = 29%, etc.
• Example: crossing 32752411 with 24748552 yields 32748552.
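The fitness function on this slide can be written down directly; assume each digit of the state gives the row (1-8) of the queen in the corresponding column (an assumption about the encoding, consistent with the selection probabilities quoted above):

```python
from itertools import combinations

def fitness(state):
    """Number of NON-attacking pairs of queens; state[i] is the row
    of the queen in column i. Max for 8 queens: 8*7/2 = 28."""
    rows = [int(c) for c in state]
    attacking = sum(
        1 for (i, ri), (j, rj) in combinations(enumerate(rows), 2)
        if ri == rj or abs(ri - rj) == abs(i - j))
    n = len(rows)
    return n * (n - 1) // 2 - attacking

print(fitness("32752411"), fitness("24748552"))  # 23 24
```

These are exactly the fitness values 23 and 24 used in the selection-probability calculation above.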
Genetic Algorithm

• Operates on the state representation.
• In the production of the next generation, the crossover point is randomly generated and random mutation is applied.
• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28; the higher the better).
• The probability of a given pair being selected is proportional to fitness: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
GA is a good "no clue" approach to problem solving

• GA is superb if:
  – Your space is loaded with lots of weird bumps and local minima.
    • GA tends to spread out and test a larger subset of your space than many other types of learning/optimization algorithms.
  – You don't quite understand the underlying process of your problem space.
  – You have lots of processors.
    • GAs parallelize very easily!


Evolvable Circuits

Robotics

Car Design

Trip, Traffic, and Shipment Routing

Feature Selection in Machine Learning

Evolutionary Arts
What is the major challenge?

Evolving Mona Lisa