
Advanced Intelligent Systems



Fourth Slide: Local Search

Thomas Anung Basuki

September 26, 2017



Table of Contents

1 Motivation

2 Hill-Climbing

3 Simulated Annealing

4 Genetic Algorithm

Motivation

Sometimes a goal test cannot be defined
  Ex: maximisation or minimisation of an objective function
  (optimisation problems)
  the path cost is not relevant; only the optimal state matters
The previous algorithms require too much memory and are not
suitable for infinite state spaces
Local search algorithms try to find a local optimum by
exploring the neighbours of the current state

Simple Hill Climbing

function Simple-Hill-Climbing(problem)
  current ← MAKE-NODE(problem.INITIAL-STATE)
  repeat
    repeat
      action ← select an action not yet applied from problem.ACTIONS(current)
    until f(SUCCESSOR(current,action)) > f(current) or no more actions
    if f(SUCCESSOR(current,action)) > f(current)
      current ← SUCCESSOR(current,action)
  until no more actions
  return current
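As a sketch, the loop above translates to Python roughly as follows; the objective f, the successor generator, and the toy problem are assumptions for illustration, not part of the slides:

```python
def simple_hill_climbing(state, f, successors):
    """Move to the first successor that improves f; stop when none does."""
    while True:
        improved = False
        for nxt in successors(state):          # actions not yet applied
            if f(nxt) > f(state):              # first uphill move wins
                state = nxt
                improved = True
                break
        if not improved:                       # no action improves f
            return state

# Toy problem (assumption): maximise f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
successors = lambda x: [x - 1, x + 1]
```

Starting from 0 the search climbs to the global maximum x = 3; on a multimodal f it would stop at whichever local maximum it reaches first.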

Steepest Ascent Hill Climbing

function Steepest-Ascent-Hill-Climbing(problem)
  neighbour ← MAKE-NODE(problem.INITIAL-STATE)
  repeat
    current ← neighbour
    for each action in problem.ACTIONS(current)
      if f(SUCCESSOR(current,action)) > f(neighbour)
        neighbour ← SUCCESSOR(current,action)
  until current == neighbour
  return current
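The same toy setup illustrates the steepest-ascent variant, which examines every action before moving (again a sketch; names and the toy problem are assumptions):

```python
def steepest_ascent(state, f, successors):
    """Move to the best neighbour; stop when no neighbour improves f."""
    while True:
        best = max(successors(state), key=f)
        if f(best) <= f(state):                # current == neighbour case
            return state
        state = best

# Toy problem (assumption): same unimodal objective as before.
f = lambda x: -(x - 3) ** 2
successors = lambda x: [x - 1, x + 1]
```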

Problems and Variants of Hill-Climbing

Hill-Climbing can be trapped in local maxima, ridges or
plateaus (flat local maxima or shoulders)
Stochastic Hill-Climbing chooses a successor randomly from
the uphill moves; the steepness can be used as the selection
probability
Random-Restart Hill-Climbing runs hill climbing from many
random initial states and keeps the best final state found
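The two variants can be sketched together: weighting uphill moves by steepness and restarting from random states are the ideas above, while the step limit and the toy problem are assumptions:

```python
import random

def stochastic_hill_climbing(state, f, successors, max_steps=1000):
    """Choose among uphill moves with probability proportional to steepness."""
    for _ in range(max_steps):
        uphill = [s for s in successors(state) if f(s) > f(state)]
        if not uphill:                         # local maximum reached
            return state
        weights = [f(s) - f(state) for s in uphill]
        state = random.choices(uphill, weights=weights)[0]
    return state

def random_restart(f, successors, random_state, restarts=10):
    """Climb from several random initial states; keep the best result."""
    finals = [stochastic_hill_climbing(random_state(), f, successors)
              for _ in range(restarts)]
    return max(finals, key=f)

# Toy problem (assumption): unimodal, so every restart finds x = 3.
f = lambda x: -(x - 3) ** 2
succ = lambda x: [x - 1, x + 1]
rand_state = lambda: random.randint(-20, 20)
```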

Simulated Annealing

Annealing is a process to temper or harden metals by heating
them and then cooling them gradually
SA was developed as a variant of HC that tries to escape local
maxima by simulating annealing
it chooses a move randomly and accepts a downhill move with
some probability < 1
the probability of accepting a bad move decreases with its
badness and goes down as the temperature decreases
Advanced Intelligent Systems
SimA

SA Algorithm
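A minimal Python sketch of the scheme just described; the geometric cooling schedule and all parameter values are assumptions rather than the slides' own choices:

```python
import math
import random

def simulated_annealing(state, f, successors, t0=10.0, cooling=0.995,
                        t_min=1e-3):
    """Pick a random move; always accept uphill moves, and accept a
    downhill move with probability exp(delta/T), which falls as T cools."""
    t = t0
    while t > t_min:
        nxt = random.choice(successors(state))
        delta = f(nxt) - f(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                 # geometric cooling (assumption)
    return state

# Toy problem (assumption): maximise f on the integers -5..9.
f = lambda x: -(x - 3) ** 2
succ = lambda x: [max(-5, x - 1), min(9, x + 1)]
```

Early on, the high temperature lets the search accept bad moves and escape local maxima; by the end it behaves like hill climbing.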

Genetic Algorithm - 1

Similar to Stochastic Beam Search, but successors are generated
by combining two parents
mimics natural selection (here dealing only with sexual
reproduction)
The list of k nodes is called the population
Each node or individual is represented as a string of symbols
There are three operators: selection, crossover and mutation
The objective function is called the fitness function and must
return higher values for better states

Genetic Algorithm - 2
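A generational GA loop in Python as a sketch; the operator signatures, the toy bit-string operators, and the parameter values are assumptions:

```python
import random

def genetic_algorithm(population, fitness, select, crossover, mutate,
                      generations=100, p_mutation=0.05):
    """Build each new generation by selection, crossover and (rare)
    mutation; return the fittest individual of the final population."""
    for _ in range(generations):
        nxt = []
        for _ in range(len(population)):
            a = select(population, fitness)
            b = select(population, fitness)
            child = crossover(a, b)
            if random.random() < p_mutation:
                child = mutate(child)
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# Toy operators (assumptions): bit-string individuals.
def one_point(a, b):
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def tournament2(pop, fit):
    return max(random.sample(pop, 2), key=fit)
```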

GA for 8-Queen Problem

Each state/individual can be represented as a string of 8 digits
or as a 24-bit number
The fitness function f(s) counts the number of nonattacking
pairs of queens in state s
The maximum of the fitness function is 28
In n-point crossover, n positions are chosen randomly as
crossover points
Offspring are created by crossing over the parent strings at the
crossover points
Each location is subject to random mutation with a small
probability
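The fitness function described above can be written directly; here a state is a list of 8 row indices, one per column (the list encoding is an assumption, equivalent to the digit string):

```python
def queens_fitness(state):
    """Count non-attacking pairs; state[i] is the row of the queen in
    column i. The maximum for 8 queens is 8*7/2 = 28."""
    n = len(state)
    attacking = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_row = state[i] == state[j]
            same_diagonal = abs(state[i] - state[j]) == j - i
            if same_row or same_diagonal:
                attacking += 1
    return n * (n - 1) // 2 - attacking
```

A known solution scores 28, while eight queens in one row score 0.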

GA Operations
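The crossover and mutation operations on digit strings can be sketched as follows; the digit alphabet and the mutation probability are assumptions:

```python
import random

def n_point_crossover(a, b, n=1):
    """Cut both parents at n random positions and copy alternating
    segments into the child."""
    cuts = sorted(random.sample(range(1, len(a)), n))
    parents, child, start = (a, b), "", 0
    for k, cut in enumerate(cuts + [len(a)]):
        child += parents[k % 2][start:cut]
        start = cut
    return child

def point_mutation(child, symbols="12345678", p=0.05):
    """Replace each position by a random symbol with small probability p."""
    return "".join(random.choice(symbols) if random.random() < p else c
                   for c in child)
```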

Choosing a Selection Operator

Selective pressure is the time required by an operator to
produce a uniform population
Random selection: each individual has the same probability of
being chosen
Proportional selection (John Holland)
Tournament selection
Rank-based selection
Elitism and (µ + λ)-selection
Hall of Fame

Proportional Selection

A probability distribution proportional to the fitness is created
Individuals are selected by sampling the distribution,

  ϕs(xi) = fγ(xi) / Σ_{l=1}^{ns} fγ(xl)

where ϕs(xi) is the probability that xi will be selected, fγ(xi)
is the fitness of xi scaled to produce positive values, and ns is
the population size.
Roulette wheel
Universal sampling

Roulette Wheel Selection

function Roulette-Wheel(ϕs)
  i = 1
  sum = ϕs(x1)
  r = random(0,1)
  while (sum < r)
    i = i + 1
    sum = sum + ϕs(xi)
  return xi

Selecting ns parents requires ns calls, with a high variance in
the number of offspring per individual
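In Python, with the probabilities ϕs precomputed as a list (a sketch; the final guard against floating-point rounding is an addition):

```python
import random

def roulette_wheel(probs):
    """Walk the cumulative distribution until it passes a uniform r."""
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if cumulative >= r:
            return i
    return len(probs) - 1          # guard against rounding at the top

def proportional_probs(fitnesses):
    """The distribution from the previous slide, assuming the fitness
    values are already positive."""
    total = sum(fitnesses)
    return [f / total for f in fitnesses]
```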

Universal Sampling

function Universal-Sampling(ϕs)
  for i = 1 to ns do λi = 0
  r = random(0, 1/λ)    // λ is the total number of offspring
  sum = 0
  for i = 1 to ns do
    sum = sum + ϕs(xi)
    while r < sum
      λi = λi + 1
      r = r + 1/λ
  return (λ1, λ2, . . . , λns)
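The same pseudocode in Python; the probabilities and the number of offspring play the roles of ϕs and λ above:

```python
import random

def universal_sampling(probs, n_offspring):
    """One spin, n_offspring equally spaced pointers: each pointer that
    lands in individual i's slice of the wheel gives i one offspring."""
    counts = [0] * len(probs)
    r = random.uniform(0, 1.0 / n_offspring)
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        while r < cumulative:
            counts[i] += 1
            r += 1.0 / n_offspring
    return counts
```

A single call assigns all offspring, and each count differs from its expected value by at most one, unlike roulette-wheel sampling.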

Tournament Selection

A group of nts individuals is randomly selected (nts < ns)
The best individual of the group is selected and returned by
the operator
Provided that nts is not too large, tournament selection
prevents the best individual from dominating
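A sketch of the operator; the default group size is an assumption:

```python
import random

def tournament_selection(population, fitness, n_ts=3):
    """Draw n_ts individuals at random and return the fittest of them."""
    group = random.sample(population, n_ts)
    return max(group, key=fitness)
```

Smaller n_ts lowers the selective pressure: weak individuals win tournaments more often, so diversity survives longer.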

Rank-based Selection

uses the rank ordering of the fitness values
Non-deterministic linear sampling selects xi such that
i = random(0, random(0, ns − 1)), where individuals are sorted in
decreasing order of fitness
Linear ranking assumes that the best individual creates λ̂
offspring and the worst individual λ̃ offspring, where
1 ≤ λ̂ ≤ 2 and λ̃ = 2 − λ̂:

  ϕs(xi) = (λ̃ + fr(xi)(λ̂ − λ̃)/(ns − 1)) / ns

where fr(xi) is the rank of xi.
Nonlinear ranking, for example ϕs(xi) = (1 − e^(−fr(xi))) / β
may use roulette wheel or universal sampling to select
individuals
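The linear-ranking probabilities can be computed directly; here rank 0 is the worst individual and rank ns − 1 the best (that rank direction, and the default λ̂, are assumptions):

```python
def linear_ranking_probs(ns, lam_best=1.5):
    """phi_s(xi) = (lam_worst + rank * (lam_best - lam_worst)/(ns - 1)) / ns
    with lam_worst = 2 - lam_best, as on the slide."""
    lam_worst = 2.0 - lam_best
    return [(lam_worst + r * (lam_best - lam_worst) / (ns - 1)) / ns
            for r in range(ns)]
```

The probabilities sum to 1 because λ̂ + λ̃ = 2, and the best individual is expected to produce λ̂ offspring.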

Other Techniques

In (µ + λ)-selection, individuals are selected from both the
parents (µ) and the offspring (λ)
Elitism ensures that the best individuals of the current
population survive into the next generation, by copying them
(without mutation) into the next generation
The Hall of Fame records the best individuals ever found and
can be used as a parent pool for crossover
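Elitism can be sketched as a small post-processing step when forming the next generation (the helper name and the n_elite default are assumptions):

```python
def apply_elitism(population, offspring, fitness, n_elite=1):
    """Copy the n_elite best current individuals, unmutated, into the
    next generation, keeping the population size constant."""
    elite = sorted(population, key=fitness, reverse=True)[:n_elite]
    keep = sorted(offspring, key=fitness, reverse=True)
    return elite + keep[:len(population) - n_elite]
```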
