Hill Climbing Search


ARTIFICIAL INTELLIGENCE /UNIT II /LECTURE V

Hill Climbing Search Techniques


Hill Climbing is a heuristic search used for mathematical optimization problems in the
field of Artificial Intelligence. Given a large set of inputs and a good heuristic function, it tries
to find a sufficiently good solution to the problem. This solution may not be the global
optimum.
1. Simple Hill Climbing: It examines the neighboring nodes one by one and selects the first
neighboring node that improves the current cost as the next node.
1.1 Algorithm for Simple Hill Climbing:
Step 1: Evaluate the initial state. If it is a goal state then stop and return success. Otherwise,
make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new operators left that
can be applied to the current state.
a) Select an operator that has not yet been applied to the current state and apply it to
produce a new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop.
Step 3: Exit.
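The loop in Steps 1–2 can be sketched in Python. This is a minimal illustration, not the lecture's own code: `successors`, `value` and `is_goal` are assumed caller-supplied helpers, and a *smaller* `value` is treated as better, matching the F(x) convention used in the 8-puzzle example later in this lecture.

```python
def simple_hill_climbing(initial, successors, value, is_goal):
    """Simple hill climbing: move to the FIRST successor that is
    better than the current state (lower value() is better)."""
    current = initial
    while not is_goal(current):
        for new_state in successors(current):
            if value(new_state) < value(current):  # Step 2.b.ii
                current = new_state                # first improvement wins
                break
        else:
            return current  # no operator improves the state: stuck
    return current          # goal state reached (Step 2.b.i)
```

For example, with integer states, `successors` returning the two adjacent integers and `value` measuring the distance to a target, the search walks straight to the target.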

1.2 Example: Solving the 8-Puzzle Problem using Hill Climbing

Let the heuristic function for this problem be defined as the combination of g(x) and h(x):

i.e. F(x) = g(x) + h(x)

where g(x): the number of steps already taken, i.e. the depth of the current state
from the initial state,

and h(x): an estimate of how far the current state is from the goal state.
Parul Saxena /Artificial Intelligence /Unit II /Lecture V /20.04.2020 / Page 1 of 6


Or
h(x) is the heuristic estimator that compares the current state with the goal state note down

how many states are displaced from the initial or the current state. After calculating the F(x)

value at each step finally take the smallest F(x) value at every step and choose that as the

next current state to get the goal state.
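For the 8-puzzle, h(x) as defined above (number of displaced tiles) is easy to compute. A minimal sketch, assuming states are flat 9-tuples with 0 standing for the blank, a representation chosen here for illustration rather than prescribed by the lecture:

```python
def h(state, goal):
    """h(x): number of tiles out of place compared with the goal.
    The blank (0) is not counted as a tile."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

# Initial and goal configurations of the example below, row by row.
initial = (1, 2, 3,
           0, 5, 6,
           4, 7, 8)
goal    = (1, 2, 3,
           4, 5, 6,
           7, 8, 0)
# Tiles 4, 7 and 8 are displaced, so h(initial) = 3.
```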

Let us take an example (the blank tile is shown as _):

  (Initial State)      (Goal State)

   1  2  3              1  2  3
   _  5  6              4  5  6
   4  7  8              7  8  _

Step 1: from the initial state the blank can move up, right or down, giving three successors.

   _  2  3
   1  5  6      F(x) = 1 + 4 = 5
   4  7  8

   1  2  3
   5  _  6      F(x) = 1 + 4 = 5
   4  7  8

   1  2  3
   4  5  6      F(x) = 1 + 2 = 3   (smallest, chosen)
   _  7  8



Step 2: from the chosen state the blank can move right or up.

   1  2  3
   4  5  6      F(x) = 2 + 1 = 3   (smallest, chosen)
   7  _  8

   1  2  3
   _  5  6      F(x) = 2 + 3 = 5
   4  7  8

Step 3: from the chosen state the blank can move up, left or right.

   1  2  3
   4  _  6      F(x) = 3 + 2 = 5
   7  5  8

   1  2  3
   4  5  6      F(x) = 3 + 2 = 5
   _  7  8

   1  2  3
   4  5  6      F(x) = 3 + 0 = 3   (goal state reached)
   7  8  _



1.3 Different regions in the State Space Diagram:
1.3.1 Local maximum: a state that is better than its neighboring states, although there
exists a state better than it (the global maximum). It is better because the value of the
objective function is higher than at its neighbors.
1.3.2 Global maximum: the best possible state in the state space diagram; at this state
the objective function attains its highest value.
1.3.3 Plateau/flat local maximum: a flat region of the state space where neighboring states
have the same value.
1.3.4 Ridge: a region that is higher than its neighbors but itself has a slope. It is a
special kind of local maximum.
1.3.5 Current state: the region of the state space diagram where we are currently present
during the search.
1.3.6 Shoulder: a plateau that has an uphill edge.

1.4 Problems in different regions in Hill climbing


Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of
the following regions :
1.4.1 Local maximum: at a local maximum all neighboring states have values
worse than the current state. Since hill climbing uses a greedy approach, it will not move
to a worse state and terminates itself. The process ends even though a better solution
may exist. To overcome the local maximum problem: use backtracking. Maintain a
list of visited states; if the search reaches an undesirable state, it can backtrack to a
previous configuration and explore a new path.
1.4.2 Plateau: on a plateau all neighbors have the same value, so it is not possible to
select the best direction. To overcome a plateau: make a big jump. Randomly select a state far
away from the current state; chances are that we will land in a non-plateau region.
1.4.3 Ridge: any point on a ridge can look like a peak because movement in all possible
directions is downward, so the algorithm stops when it reaches such a state. To overcome
a ridge: apply two or more rules before testing, i.e. move in several directions at once.

[Figure: Representation of a Ridge]
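The escape strategies above (backtracking, a big random jump) are often combined into random-restart hill climbing: climb until stuck, then restart from a fresh state and keep the best result found. A hedged sketch under the same assumptions as before, with caller-supplied `random_state`, `successors` and `value` (smaller is better):

```python
def random_restart_hill_climbing(random_state, successors, value, restarts=10):
    """Run several independent climbs from random starting states and
    keep the best local optimum found (lower value() is better)."""
    best = random_state()
    for _ in range(restarts):
        current = random_state()
        while True:
            neighbour = min(successors(current), key=value)  # steepest step
            if value(neighbour) >= value(current):
                break        # local optimum or plateau: stop this climb
            current = neighbour
        if value(current) < value(best):
            best = current
    return best
```

Each restart is an independent climb, so a search trapped on one plateau or local maximum gets fresh chances elsewhere in the state space.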

2. Steepest-Ascent Hill Climbing : It first examines all the neighboring nodes and then
selects the node closest to the solution state as the next node.
Step 1 : Evaluate the initial state. If it is a goal state then exit; otherwise make the initial
state the current state.
Step 2 : Repeat these steps until a solution is found or the current state does not change:
i. Let ‘temp’ be a state such that any successor of the current state will be better than it;
ii. for each operator that applies to the current state:
a. apply the operator and create a new state
b. evaluate the new state
(i). if this state is a goal state then quit, else compare it with ‘temp’
(ii). if this state is better than ‘temp’, set ‘temp’ to this state
iii. if ‘temp’ is better than the current state, set the current state to ‘temp’
Step 3 : Exit
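The steepest-ascent procedure above can be sketched the same way, with `temp` holding the best successor seen so far, as in Step 2. As before, `successors`, `value` and `is_goal` are assumed caller-supplied helpers and a smaller `value` is better:

```python
def steepest_ascent_hill_climbing(initial, successors, value, is_goal):
    """Steepest ascent: examine ALL successors and move to the best
    one; stop when no successor beats the current state."""
    current = initial
    while not is_goal(current):
        candidates = successors(current)
        if not candidates:
            break                          # no applicable operators
        temp = min(candidates, key=value)  # best successor plays 'temp'
        if value(temp) >= value(current):
            break                          # Step 2.iii fails: local optimum
        current = temp
    return current
```

Unlike simple hill climbing, which commits to the first improving successor, this version pays for evaluating every successor at each step in exchange for always taking the locally best move.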

