
Unit-1 CSE- III SEM DSA

MITRC


UNIT-1

(INTRODUCTION TO DATA STRUCTURES AND STACK)


LECTURE NO.1

Definition of Algorithms:

An algorithm is a step-by-step set of unambiguous instructions to solve a given problem.

It is a finite sequence of steps arranged to solve a particular problem.

This implies:

1. There are no unnecessary lines.


2. The sequence of the steps is fixed (compulsory).
3. The output is produced after a finite number of steps.

Ex:

Adding two numbers.

ATN( )

1. Take two numbers (a, b)


2. C = add(a, b)
3. Return(C)
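The ATN() steps above can be written as a small C function (a sketch; the function name `add` simply mirrors step 2):

```c
/* Mirrors the ATN() steps: take two numbers (a, b),
   compute C = add(a, b), and return C. */
int add(int a, int b)
{
    int C = a + b;
    return C;
}
```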

Properties of algorithms:

1. It should terminate after finite time.


2. It should produce at least one output.
3. It should take 0 or more inputs.
4. It should be deterministic (Unambiguous).
5. It is programming-language independent.

Steps required to construct an algorithm:

1. Problem definition. (Know the problem clearly)

2. Design the algorithm.
3. Draw flow chart.
4. Verification and testing.
5. Coding and implementation.
6. Analysis of the algorithm.

Analysis of algorithm:

If the problem has more than one solution, we select the better solution based on two
parameters:

1. Time

2. Space

Abstract Data Types (ADTs)

Before defining abstract data types, let us consider the different view of system-defined data
types. We all know that, by default, all primitive data types (int, float, etc.) support basic
operations such as addition and subtraction. The system provides the implementations for the
primitive data types. For user-defined data types we also need to define operations. The
implementation for these operations can be done when we want to actually use them. That
means, in general, user defined data types are defined along with their operations. To simplify
the process of solving problems, we combine the data structures with their operations and we call
this Abstract Data Types (ADTs). An ADT consists of two parts:

1. Declaration of data

2. Declaration of operations

Commonly used ADTs include: Linked Lists, Stacks, Queues, Priority Queues, Binary Trees,
Dictionaries, Disjoint Sets (Union and Find), Hash Tables, Graphs, and many others.
For example, a stack uses the LIFO (Last-In-First-Out) mechanism while storing data. The last
element inserted into the stack is the first element that gets deleted. Its common
operations are: creating the stack, pushing an element onto the stack, popping an element
from the stack, finding the current top of the stack, finding the number of elements in the stack, etc.
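The two parts of the stack ADT (data and operations) can be sketched as a C interface; the struct layout and function names below are illustrative, not from any particular library:

```c
#define CAPACITY 100

/* Part 1: declaration of data */
typedef struct {
    int items[CAPACITY];
    int top;   /* index of the most recently pushed element, -1 if empty */
} Stack;

/* Part 2: declaration of operations */
void createStack(Stack *s)   { s->top = -1; }
int  isEmpty(const Stack *s) { return s->top == -1; }
int  isFull(const Stack *s)  { return s->top == CAPACITY - 1; }
int  size(const Stack *s)    { return s->top + 1; }
void push(Stack *s, int x)   { if (!isFull(s)) s->items[++s->top] = x; }
int  pop(Stack *s)           { return isEmpty(s) ? -1 : s->items[s->top--]; }
int  peek(const Stack *s)    { return isEmpty(s) ? -1 : s->items[s->top]; }
```

Here -1 is used as an "empty" sentinel for simplicity; a full implementation would signal the error explicitly.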

Types of Analysis

To analyze the given algorithm, we need to know with which inputs the algorithm takes less time
(performing well) and with which inputs the algorithm takes a long time. We have already seen
that an algorithm can be represented in the form of an expression. That means we represent the
algorithm with multiple expressions: one for the case where it takes less time and another for the
case where it takes more time.
In general, the first case is called the best case and the second case is called the worst case for
the algorithm. To analyze an algorithm we need some kind of syntax, and that forms the base for
asymptotic analysis/notation. There are three types of analysis:

Worst case
1. Defines the input for which the algorithm takes a long time (slowest time to
complete).
2. Input is the one for which the algorithm runs the slowest.
Best case
1. Defines the input for which the algorithm takes the least time (fastest time to
complete).
2. Input is the one for which the algorithm runs the fastest.

Average case
1. Provides a prediction about the running time of the algorithm.
2. Run the algorithm many times, using many different inputs that come from some
distribution that generates these inputs, compute the total running time (by adding the
individual times), and divide by the number of trials.
3. Assumes that the input is random.

Lower Bound <= Average Time <= Upper Bound

For a given algorithm, we can represent the best, worst and average cases in the form of
expressions. As an example, let f(n) be the function which represents the given algorithm;
one expression then describes the inputs for which it takes the least time, another the inputs
for which it takes the most time, and similarly for the average case, the expression defines the
inputs with which the algorithm takes the average running time (or memory).


LECTURE NO. 2

Asymptotic Notation
Having the expressions for the best, average and worst cases, for all three cases we need to
identify the upper and lower bounds. To represent these upper and lower bounds, we need some
kind of syntax, and that is the subject of the following discussion. Let us assume that the given
algorithm is represented in the form of function f(n).

Big-O Notation [Upper Bounding Function]

When we have only asymptotic upper bound, we use O-notation.

O(g(n)) – pronounced “big-oh of g of n” or “oh of g of n”.

O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}, where c > 0 and n0 ≥ 1.

Example-1 Find upper bound for f(n) = 3n + 8

Solution: 3n + 8 ≤ 4n, for all n ≥ 8

∴ 3n + 8 = O(n) with c = 4 and n0 = 8

Example-2 Find upper bound for f(n) = n² + 1

Solution: n² + 1 ≤ 2n², for all n ≥ 1

∴ n² + 1 = O(n²) with c = 2 and n0 = 1

Example-3 Find upper bound for f(n) = n⁴ + 100n² + 50

Solution: n⁴ + 100n² + 50 ≤ 2n⁴, for all n ≥ 11

∴ n⁴ + 100n² + 50 = O(n⁴) with c = 2 and n0 = 11

Example-4 Find upper bound for f(n) = 2n³ – 2n²

Solution: 2n³ – 2n² ≤ 2n³, for all n ≥ 1

5
Unit-1 CSE- III SEM DSA
∴ 2n³ – 2n² = O(n³) with c = 2 and n0 = 1

Example-5 Find upper bound for f(n) = n

Solution: n ≤ n, for all n ≥ 1

∴n = O(n) with c = 1 and n0 = 1

Example-6 Find upper bound for f(n) = 410

Solution: 410 ≤ 410, for all n ≥ 1

∴ 410 = O(1) with c = 1 and n0 = 1
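The witness constants (c, n0) in these examples can be spot-checked mechanically. The sketch below verifies Example-1's inequality 3n + 8 ≤ 4n over a finite range (a numeric check over that range, not a proof):

```c
/* Checks 0 <= f(n) <= c*g(n) for n0 <= n <= limit, where
   f(n) = 3n + 8 and g(n) = n (Example-1 above).
   Returns 1 if the inequality holds throughout, 0 otherwise. */
int holds(long c, long n0, long limit)
{
    for (long n = n0; n <= limit; n++)
        if (3 * n + 8 > c * n)
            return 0;   /* inequality violated at this n */
    return 1;
}
```

With c = 4, the check fails for n0 below 8 (e.g. at n = 7, 3·7 + 8 = 29 > 28) and passes from n0 = 8 onward, matching the solution above.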

Note 1. If f(n) and g(n) differ by a factor that is itself a function of n, Big-O cannot absorb
that factor into the constant c.

Note 2. If they differ only by a constant factor, Big-O can absorb it into c.

Omega- Ω Notation [Lower Bounding Function]

Similar to the O discussion, this notation gives the tighter lower bound of the given algorithm

and we represent it as f(n) = Ω(g(n)). That means, at larger values of n, the tighter lower bound

of f(n) is g(n). For example, if f(n) = 100n² + 10n + 50, g(n) is Ω(n²).

Ω Examples

Example-1 Find lower bound for f(n) = 5n².

Solution: ∃ c, n0 such that 0 ≤ cn² ≤ 5n² ⇒ c = 5 and n0 = 1

∴ 5n² = Ω(n²) with c = 5 and n0 = 1

Example-2 Prove f(n) = 100n + 5 ≠ Ω(n²).

Solution: Suppose ∃ c, n0 such that 0 ≤ cn² ≤ 100n + 5.

100n + 5 ≤ 100n + 5n (∀ n ≥ 1) = 105n

cn² ≤ 105n ⇒ n(cn − 105) ≤ 0

Since n is positive, cn − 105 ≤ 0 ⇒ n ≤ 105/c

⇒Contradiction: n cannot be smaller than a constant

Example-3 2n = Ω(n), n³ = Ω(n³), logn = Ω(logn).

Θ notation

Now consider the definition of Θ notation. It is defined as Θ(g(n)) = {f(n): there exist positive

constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}. g(n) is an asymptotic

tight bound for f(n). Θ(g(n)) is the set of functions with the same order of growth as g(n).

Examples:

Example 1 Find Θ bound for f(n) = n²/2 − n/2

Solution: n²/5 ≤ n²/2 − n/2 ≤ n², for all n ≥ 2

∴ n²/2 − n/2 = Θ(n²) with c1 = 1/5, c2 = 1 and n0 = 2

Example 2 Prove n ≠ Θ(n²)

Solution: c1·n² ≤ n ≤ c2·n² only holds for n ≤ 1/c1

∴ n ≠ Θ(n²)

Example 3 Prove 6n³ ≠ Θ(n²)

Solution: c1·n² ≤ 6n³ ≤ c2·n² only holds for n ≤ c2/6

∴ 6n³ ≠ Θ(n²)

Example 4 Prove n ≠ Θ(logn)

Solution: c1·logn ≤ n ≤ c2·logn ⇒ c2 ≥ n/logn, ∀ n ≥ n0 – impossible, since n/logn is unbounded

Important Notes:

For each analysis (best case, worst case and average case), we try to give the upper bound (O), the
lower bound (Ω) and the average running time (Θ). From the above examples, it should also be clear
that, for a given function (algorithm), getting all three bounds may not always be possible. For
example, if we are discussing the best case of an algorithm, we try to give the upper bound (O),
lower bound (Ω) and average running time (Θ) of that best case.


LECTURE NO.3

STACK

Definition:
Stack is a linear data structure which follows a particular order in which the operations are performed.
The order may be LIFO (Last In First Out).

Definition: A stack is an ordered list in which insertion and deletion are done at one end, called the top. The
last element inserted is the first one to be deleted. Hence, it is called the Last in First out (LIFO) or First
in Last out (FILO) list.

Example: A deck of cards or a pile of plates, etc.

Basic Operations:
Stack operations may involve initializing the stack, using it and then de-initializing it. Apart from this
basic housekeeping, a stack is used for the following two primary operations −

• push() − Pushing (storing) an element on the stack.

• pop() − Removing (accessing) an element from the stack.

To use a stack efficiently, we also need to check the status of the stack. For this purpose, the
following functionality is added to stacks −

• peek() − get the top data element of the stack, without removing it.

• isFull() − check if the stack is full.

• isEmpty() − check if the stack is empty.



At all times, we maintain a pointer to the last pushed element on the stack. As this pointer always
represents the top of the stack, it is named top. The top pointer provides the top value of the stack
without actually removing it.

Exceptions
Attempting the execution of an operation may sometimes cause an error condition, called an
exception. Exceptions are said to be “thrown” by an operation that cannot be executed. In the
Stack ADT, operations pop and top cannot be performed if the stack is empty. Attempting the
execution of pop (top) on an empty stack throws an exception. Trying to push an element in a
full stack throws an exception.

Applications of stack:

Following are some of the applications in which stacks play an important role.

Direct applications

• Balancing of symbols

• Infix-to-postfix conversion

• Evaluation of postfix expression

• Implementing function calls (including recursion)

Indirect applications

• Auxiliary data structure for other algorithms (Example: Tree traversal algorithms)

• Component of other data structures (Example: simulating queues; refer to the Queues chapter)

Implementation
There are many ways of implementing stack ADT; below are the commonly used methods.
• Simple array based implementation
• Dynamic array based implementation
• Linked lists implementation
Simple Array Implementation
This implementation of stack ADT uses an array. In the array, we add elements from left to right and use
a variable to keep track of the index of the top element. The array storing the stack elements may
become full. A push operation will then throw a fullstack exception. Similarly, if we try deleting an
element from an empty stack, it will throw a stackempty exception.
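C has no exceptions, so an implementation sketch typically signals the fullstack/stackempty conditions with status codes (the names below are illustrative):

```c
#define N 3   /* deliberately tiny so the fullstack case is easy to hit */

int S[N];
int top = -1;

/* Signals the "fullstack" condition with a status code:
   returns 1 on success, 0 on overflow. */
int push_checked(int x)
{
    if (top == N - 1)
        return 0;          /* fullstack: refuse instead of writing past S */
    S[++top] = x;
    return 1;
}

/* Returns 1 and stores the popped value in *out, or 0 on "stackempty". */
int pop_checked(int *out)
{
    if (top == -1)
        return 0;          /* stackempty */
    *out = S[top--];
    return 1;
}
```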

Performance & Limitations

Performance

Let n be the number of elements in the stack. The complexities of stack operations with this

representation can be given as:

Space Complexity (for n push operations) O(n)

Time Complexity of Push() O(1)

Time Complexity of Pop() O(1)

Time Complexity of Size() O(1)

Time Complexity of IsEmptyStack() O(1)

Time Complexity of IsFullStack() O(1)

Time Complexity of DeleteStack() O(1)

Limitations

The maximum size of the stack must first be defined and it cannot be changed. Trying to push a new
element into a full stack causes an implementation-specific exception.

Dynamic Array Implementation

First, let’s consider how we implemented a simple array based stack. We took one index variable top
which points to the index of the most recently inserted element in the stack. To insert (or push) an
element, we increment top index and then place the new element at that index. Similarly, to delete (or
pop) an element we take the element at top index and then decrement the top index. We represent an
empty stack with top value equal to –1. The issue that still needs to be resolved is: what do we do when all
the slots in the fixed size array stack are occupied?

First try: What if we increment the size of the array by 1 every time the stack is full?
• Push(): increase size of S[] by 1

• Pop(): decrease size of S[] by 1

Problems with this approach?

This way of incrementing the array size is too expensive. Let us see the reason for this. For example, at n
= 1, to push an element create a new array of size 2 and copy all the old array elements to the new array,
and at the end add the new element. At n = 2, to push an element create a new array of size 3 and copy
all the old array elements to the new array, and at the end add the new element.

Similarly, at n = n – 1, if we want to push an element create a new array of size n and copy all the old
array elements to the new array and at the end add the new element. After n push operations the total
time T(n) (number of copy operations) is proportional to 1 + 2 + ... + n ≈ O(n²).

Alternative Approach: Repeated Doubling

Let us improve the complexity by using the array doubling technique. If the array is full, create a new
array of twice the size, and copy the items. With this approach, pushing n items takes time proportional
to n (not n²). For simplicity, let us assume that initially we started with n = 1 and moved up to n = 32.
That means, we do the doubling at 1,2,4,8,16. The other way of analyzing the same approach is:

at n = 1, if we want to add (push) an element, double the current size of the array and copy all the
elements of the old array to the new array.

At n = 1, we do 1 copy operation, at n = 2, we do 2 copy operations, and at n = 4, we do 4 copy
operations and so on. By the time we reach n = 32, the total number of copy operations is 1+2 + 4

+ 8+16 = 31 which is approximately equal to 2n value (32). If we observe carefully, we are doing the
doubling operation logn times. Now, let us generalize the discussion. For n push operations we double
the array size logn times, which gives logn terms in the copy-count expression. The total time
T(n) of a series of n push operations is proportional to n; that is, T(n) is O(n) and the amortized time
of a push operation is O(1).
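The doubling strategy described above can be sketched in C, using realloc to do the grow-and-copy (names are illustrative):

```c
#include <stdlib.h>

/* Dynamic-array stack with the doubling strategy: when the array is
   full, capacity is doubled and the items are copied (realloc performs
   the copy), giving O(1) amortized push. */
typedef struct {
    int *items;
    int top;        /* index of the top element, -1 when empty */
    int capacity;
} DynStack;

DynStack *dcreate(void)
{
    DynStack *s = malloc(sizeof *s);
    s->capacity = 1;                 /* start at n = 1 as in the text */
    s->top = -1;
    s->items = malloc(sizeof(int) * s->capacity);
    return s;
}

void dpush(DynStack *s, int x)
{
    if (s->top == s->capacity - 1) {             /* full: double */
        s->capacity *= 2;
        s->items = realloc(s->items, sizeof(int) * s->capacity);
    }
    s->items[++s->top] = x;
}

int dpop(DynStack *s)
{
    return (s->top == -1) ? -1 : s->items[s->top--];
}
```

After 32 pushes starting from capacity 1, the capacity has doubled through 2, 4, 8, 16 to 32, matching the 1 + 2 + 4 + 8 + 16 copy count above.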

Performance

Let n be the number of elements in the stack. The complexities for operations with this

representation can be given as:

Space Complexity (for n push operations) O(n)

Time Complexity of CreateStack() O(1)



Time Complexity of Push() O(1) (Average)

Time Complexity of Pop() O(1)

Time Complexity of Top() O(1)

Time Complexity of IsEmptyStack() O(1)

Time Complexity of IsFullStack() O(1)

Time Complexity of DeleteStack() O(1)

Note: Too many doublings may cause memory overflow exception.


LECTURE NO.4

Linked List Implementation

The other way of implementing stacks is by using Linked lists. Push operation is implemented
by inserting element at the beginning of the list. Pop operation is implemented by deleting the
node from the beginning (the header/top node).
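A minimal linked-list stack sketch in C, with push inserting at the head and pop deleting it (both O(1)):

```c
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *next;
} Node;

/* Push: insert a new node at the beginning of the list. */
void lpush(Node **top, int x)
{
    Node *n = malloc(sizeof *n);
    n->data = x;
    n->next = *top;      /* new node becomes the head */
    *top = n;
}

/* Pop: delete the head node and return its value.
   Returns -1 on empty (a sentinel for this sketch). */
int lpop(Node **top)
{
    if (*top == NULL) return -1;
    Node *n = *top;
    int x = n->data;
    *top = n->next;
    free(n);
    return x;
}
```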

Performance

Let n be the number of elements in the stack. The complexities for operations with this
representation can be given as:

Space Complexity (for n push operations) O(n)

Time Complexity of CreateStack() O(1)

Time Complexity of Push() O(1) (Average)

Time Complexity of Pop() O(1)

Time Complexity of Top() O(1)

Time Complexity of IsEmptyStack() O(1)

Time Complexity of DeleteStack() O(n)

Comparison of Implementations
Comparing Incremental Strategy and Doubling Strategy

We compare the incremental strategy and doubling strategy by analyzing the total time T(n)
needed to perform a series of n push operations. We start with an empty stack represented by an
array of size 1.

The amortized time of a push operation is the average time taken by a push over the series
of operations, that is, T(n)/n.

Incremental Strategy

The amortized time (average time per operation) of a push operation is O(n) [O(n²)/n].

Doubling Strategy

In this method, the amortized time of a push operation is O(1) [O(n)/n].

Comparing Array Implementation and Linked List Implementation

Array Implementation

• Operations take constant time.

• Expensive doubling operation every once in a while.

• Any sequence of n operations (starting from an empty stack) takes “amortized” time
proportional to n.

Linked List Implementation

• Grows and shrinks gracefully.

• Every operation takes constant time O(1).

• Every operation uses extra space and time to deal with references.


Push Operation:
The process of putting a new data element onto stack is known as a Push Operation. Push operation
involves a series of steps −

• Step 1 − Check if the stack is full.
• Step 2 − If the stack is full, produce an error and exit.
• Step 3 − If the stack is not full, increment top to point to the next empty space.
• Step 4 − Add the data element to the stack location where top is pointing.
• Step 5 − Return success.



LECTURE NO.5

Algorithm of Push Operation :


/* S = name of the array, N = size of the array,
   TOP = top of the stack, X = element to be inserted */
void Push (S, N, TOP, X)
{
    if (TOP == N-1)
    {
        printf("Stack Overflow");
        exit(1);
    }
    else
    {
        TOP++;
        S[TOP] = X;
    }
}


Pop Operation:
Accessing the content while removing it from the stack is known as a Pop Operation. In an array
implementation of the pop() operation, the data element is not actually removed; instead, top is
decremented to a lower position in the stack to point to the next value. But in a linked-list
implementation, pop() actually removes the data element and de-allocates its memory.

A Pop operation may involve the following steps −

• Step 1 − Check if the stack is empty.

• Step 2 − If the stack is empty, produce an error and exit.

• Step 3 − If the stack is not empty, access the data element at which top is pointing.

• Step 4 − Decrease the value of top by 1.

• Step 5 − Return success.

Algorithm of POP Operation:


/* S = name of the array, TOP = top of the stack,
   X = variable that receives the popped element */
void Pop (S, TOP, X)
{
    if (TOP == -1)
    {
        printf("Stack Underflow");
        exit(1);
    }
    else
    {
        X = S[TOP];
        TOP--;
    }
}

Implementation of Stack in an Array:


Although a stack is an ordered collection of items and an array is also an ordered collection of
items, an array is not itself a stack, for the following reasons:

1.) The number of elements in an array is fixed, whereas a stack has no such bound.
2.) Any element of an array can be accessed by its index, but in a stack only the top
element can be accessed. So an array is not a stack, but an array can be used to implement a stack.

Method 1 (Divide the array in slots of size n/k):


A simple way to implement k stacks is to divide the array into k slots of size n/k each and fix the
slots for the different stacks, i.e., use arr[0] to arr[n/k-1] for the first stack and arr[n/k] to arr[2n/k-1] for
the second stack, where arr[] is the array used to implement the k stacks and n is the size of the array.

The problem with this method is inefficient use of array space. A stack push operation may result
in stack overflow even if there is space available in arr[]. For example, say k is 2 and the array
size (n) is 6, and we push 3 elements to the first stack and do not push anything to the second stack.
When we push a 4th element to the first stack, there will be an overflow even though we have space
for 3 more elements in the array.
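Method 1 can be sketched in C with the n = 6, k = 2 example above; the 4th push to stack 0 fails even though stack 1's slot is empty:

```c
/* Method 1: each of the ARR_K stacks owns a fixed slot of
   ARR_N/ARR_K cells. tops[i] is the top offset inside slot i. */
#define ARR_N 6
#define ARR_K 2

int arr[ARR_N];
int tops[ARR_K] = { -1, -1 };

/* Returns 0 on per-slot overflow, even if another slot has free cells
   (exactly the inefficiency described above). */
int kpush(int i, int x)
{
    int slot = ARR_N / ARR_K;          /* 3 cells per stack */
    if (tops[i] == slot - 1)
        return 0;                      /* this stack's slot is full */
    arr[i * slot + ++tops[i]] = x;
    return 1;
}
```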

Method 2 (A space efficient implementation):

The idea is to use two extra arrays for an efficient implementation of k stacks in an array. This may not
make much sense for integer stacks, but stack items can be large (for example, stacks of employee or
student records, where every item is hundreds of bytes). For such large stacks, the extra space used is
comparatively small, as we use only two integer arrays as extra space.

The following two extra arrays are used:


1) top[]: This is of size k and stores the indexes of the top elements of all stacks.
2) next[]: This is of size n and stores the indexes of the next items for the items in array arr[]. Here arr[]
is the actual array that stores the k stacks.

Together with k stacks, a stack of free slots in arr[] is also maintained. The top of this stack is stored in a
variable ‘free’.
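A sketch of this space-efficient scheme in C (function names are illustrative; error handling is kept minimal):

```c
#include <stdlib.h>

/* k stacks in one array arr[0..n-1]: top[] holds the top index per
   stack (-1 if empty), next[] holds the next-item index per cell and
   doubles as the free list, free_slot is the first free cell. */
typedef struct {
    int *arr, *top, *next;
    int n, k, free_slot;
} KStacks;

KStacks *kcreate(int k, int n)
{
    KStacks *s = malloc(sizeof *s);
    s->n = n; s->k = k;
    s->arr  = malloc(n * sizeof(int));
    s->top  = malloc(k * sizeof(int));
    s->next = malloc(n * sizeof(int));
    for (int i = 0; i < k; i++) s->top[i] = -1;
    for (int i = 0; i < n - 1; i++) s->next[i] = i + 1; /* free chain */
    s->next[n - 1] = -1;
    s->free_slot = 0;
    return s;
}

int kpush2(KStacks *s, int sn, int x)   /* 0 only when arr[] is truly full */
{
    if (s->free_slot == -1) return 0;
    int i = s->free_slot;
    s->free_slot = s->next[i];   /* take a cell off the free list */
    s->next[i] = s->top[sn];     /* link it below the old top of stack sn */
    s->top[sn] = i;
    s->arr[i] = x;
    return 1;
}

int kpop2(KStacks *s, int sn)    /* returns -1 on empty (sentinel) */
{
    int i = s->top[sn];
    if (i == -1) return -1;
    s->top[sn] = s->next[i];
    s->next[i] = s->free_slot;   /* return the cell to the free list */
    s->free_slot = i;
    return s->arr[i];
}
```

Unlike Method 1, a push fails only when the whole array is full, because every stack draws cells from the shared free list.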

Application of Stack:

1.) Reversing a List


2.) Factorial Calculation
3.) Evaluating expressions (infix to postfix conversion)

Reversing a List:

There are some steps for this task and these are:

1. Read a string.
2. Push all characters until NULL is found; the characters are stored in the stack.
3. Pop all characters until NULL is found; since a stack is a LIFO structure, the last
character pushed is popped first, so we get the reversed string back in the variable
in which we stored the input string.

Algorithm:
void Reverse (char *c, int n)
{
    Stack<char> S;
    // Loop for Push
    for (int i = 0; i < n; i++)
    {
        S.PUSH(c[i]);
    }
    // Loop for Pop: characters come back in reverse (LIFO) order
    for (int i = 0; i < n; i++)
    {
        c[i] = S.POP();
    }
}
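Since the algorithm above is written against a generic Stack<char>, a self-contained C version using a char array as the stack might look like this (the 256-character buffer is an assumption of the sketch):

```c
#include <string.h>

/* Reverses a NUL-terminated string in place using an explicit char
   stack: push every character, then pop them back into the buffer. */
void reverse_with_stack(char *c)
{
    int n = (int)strlen(c);
    char stack[256];             /* assumes n < 256 for this sketch */
    int top = -1;
    for (int i = 0; i < n; i++)  /* push phase */
        stack[++top] = c[i];
    for (int i = 0; i < n; i++)  /* pop phase: LIFO yields the reverse */
        c[i] = stack[top--];
}
```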

Factorial Calculation:
Factorial calculation is recursive because it can be expressed recursively:

n! = f(n) = n · f(n-1)

If n = 1, the result is 1.

Otherwise, the result is n * (n-1)!

Program :-
#include<stdio.h>
#include<string.h>
#define MAX 100
// push method
int push(int *s, int top, int ele)
{
int i;
if (top>= MAX)
{
printf("\nStack Overflow");
}
else
{
s[++top] = ele;
}

return top;
}
//pop method
int pop(int *a, int *top)
{
if ((*top) >= 0)
{
(*top) = (*top) - 1;
return a[(*top) + 1];
}
else
{
printf("Stack underflow\n");
return 0;
}

}
void main()
{
int n;
int i; // loop variable
int ans = 1; // stores the final answer
int TOP = -1; // stack variable that maintains the stack's top
int s[MAX]; // the stack
printf("\nEnter number: ");
scanf("%d",&n);
// here we could also make sure that the user does not enter a number
// that cannot be accommodated in the stack
// if the user enters a number less than or equal to 0
// then we cannot find the factorial
if(n<=0)
{
printf("\nThe number must be greater than 0");
}
else
{
// push the numbers n,n-1 ....1 in the stack

// you can also go in reverse manner here. the result will be the same
// because in multiplication, order does not matter
for(i = n ; i>0 ; i--)
{
TOP = push(s,TOP, i);
}
// now pop all the elements one by one
// multiply them with the answer variable
while(TOP>=0)
{
ans = ans * pop(s,&TOP);
}
printf("\nFactorial is %d\n",ans);
}
// getch();
}

LECTURE NO.6

Infix to Postfix Transformation:


The way to write arithmetic expression is known as a notation. An arithmetic expression can be written
in three different but equivalent notations, i.e., without changing the essence or output of an expression.
These notations are −

• Infix Notation
• Prefix (Polish) Notation
• Postfix (Reverse-Polish) Notation

These notations are named according to where the operator is placed in the expression. We shall learn
about them in this chapter.

Infix Notation

We write expression in infix notation, e.g. a - b + c, where operators are used in-between operands. It is
easy for us humans to read, write, and speak in infix notation but the same does not go well with
computing devices. An algorithm to process infix notation could be difficult and costly in terms of time
and space consumption.

Prefix Notation

In this notation, operator is prefixed to operands, i.e. operator is written ahead of operands. For
example, +ab. This is equivalent to its infix notation a + b. Prefix notation is also known as Polish
Notation.

Postfix Notation

This notation style is known as Reversed Polish Notation. In this notation style, the operator is postfixed
to the operands i.e., the operator is written after the operands. For example, ab+. This is equivalent to its
infix notation a + b.

The following table briefly shows the difference among the three notations −

Sr.No. Infix Notation Prefix Notation Postfix Notation

1 a+b +ab ab+

2 (a + b) ∗ c ∗+abc ab+c∗

3 a ∗ (b + c) ∗a+bc abc+∗

4 a/b+c/d +/ab/cd ab/cd/+

5 (a + b) ∗ (c + d) ∗+ab+cd ab+cd+∗

6 ((a + b) ∗ c) – d -∗+abcd ab+c∗d-


LECTURE NO.7

Algorithm to convert Infix to Postfix:


Let, X is an arithmetic expression written in infix notation. This algorithm finds the equivalent postfix
expression Y.

1. Push “(” onto the Stack, and add “)” to the end of X.


2. Scan X from left to right and repeat Steps 3 to 6 for each element of X until the Stack is empty.
3. If an operand is encountered, add it to Y.
4. If a left parenthesis is encountered, push it onto the Stack.
5. If an operator is encountered, then:
1. Repeatedly pop from the Stack and add to Y each operator (on the top of the Stack) which has
the same precedence as or higher precedence than the operator.
2. Add the operator to the Stack.
[End of If]

6. If a right parenthesis is encountered, then:
1. Repeatedly pop from the Stack and add to Y each operator (on the top of the Stack) until a left
parenthesis is encountered.
2. Remove the left parenthesis.
[End of If]
[End of Step 2 loop]
7. END.

Let’s take an example to better understand the algorithm

Infix Expression: A+ (B*C-(D/E^F)*G)*H, where ^ is an exponential operator.


Resultant Postfix Expression: ABC*DEF^/G*-H*+
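A C sketch of the conversion for single-character operands; it treats ^ as left-associative, matching Step 5's "same or higher precedence" rule, which still reproduces the result above:

```c
#include <ctype.h>
#include <string.h>

/* Operator precedence: ^ highest, then * and /, then + and -. */
static int prec(char op)
{
    switch (op) {
    case '^': return 3;
    case '*': case '/': return 2;
    case '+': case '-': return 1;
    default:  return 0;   /* '(' never pops anything */
    }
}

/* Converts infix x (single-character operands) to postfix y, following
   the algorithm above: operands go straight to y, '(' is pushed, ')'
   pops until '(', and an operator first pops everything of the same or
   higher precedence. */
void infix_to_postfix(const char *x, char *y)
{
    char stack[128];
    int top = -1, j = 0;
    for (int i = 0; x[i] != '\0'; i++) {
        char c = x[i];
        if (isalnum((unsigned char)c)) {
            y[j++] = c;                        /* Step 3: operand -> Y */
        } else if (c == '(') {
            stack[++top] = c;                  /* Step 4: push '(' */
        } else if (c == ')') {
            while (top >= 0 && stack[top] != '(')
                y[j++] = stack[top--];         /* Step 6.1 */
            top--;                             /* Step 6.2: drop '(' */
        } else {
            while (top >= 0 && prec(stack[top]) >= prec(c))
                y[j++] = stack[top--];         /* Step 5.1 */
            stack[++top] = c;                  /* Step 5.2 */
        }
    }
    while (top >= 0)                           /* flush remaining operators */
        y[j++] = stack[top--];
    y[j] = '\0';
}
```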

Advantage of Postfix Expression over Infix Expression:


With an infix expression, the machine must know and keep track of operator precedence. A postfix
expression, on the other hand, encodes precedence in itself (the placement of the operators in a
postfix expression depends upon their precedence). Therefore, it is easier for the machine to
evaluate a postfix expression than an infix expression.


LECTURE NO.8

Tower of Hanoi:

• The Tower of Hanoi is also called the Tower of Brahma or Lucas' tower.
• It consists of 3 rods and a number of disks of different sizes which can slide onto any rod. The
puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at
the top, thus making a conical shape.
• The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:

a) Only one disk may be moved at a time.


b) Each move consists of taking the upper disk from one of the rods and sliding it onto another
rod, on top of the other disks that may already be present on that rod.
c) No disk may be placed on top of a smaller disk, and all disks must remain on the 3 rods.

Steps of Tower of Hanoi:


Step 1 − Move the top (n-1) disks from L to M, using R as auxiliary.

TOH (n-1, L, R, M)

Step 2 − Move the nth disk from L to R.

MOVE (L, R)

Step 3 − Move the (n-1) disks from M to R, using L as auxiliary.

TOH (n-1, M, L, R)


Algorithm for Tower of Hanoi:


TOH (n, L, M, R)
{
    if (n == 0)
        return;
    else {
        TOH (n-1, L, R, M);
        MOVE (L, R);
        TOH (n-1, M, L, R);
    }
}
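A runnable C sketch of the same recursion, returning the move count so the (2ⁿ − 1) formula can be checked (rod names are passed as characters):

```c
#include <stdio.h>

/* Move n disks from rod L to rod R, using M as auxiliary.
   Returns the total number of moves performed. */
long toh(int n, char L, char M, char R)
{
    if (n == 0)
        return 0;
    long moves = toh(n - 1, L, R, M);            /* Step 1: n-1 disks L -> M */
    printf("Move disk %d: %c -> %c\n", n, L, R); /* Step 2: nth disk L -> R  */
    moves += 1;
    moves += toh(n - 1, M, L, R);                /* Step 3: n-1 disks M -> R */
    return moves;
}
```

For n = 3 this prints 7 moves, consistent with 2³ − 1.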

NOTE :

1. The Tower of Hanoi (TOH) recursion tree has (2ⁿ − 1) move-producing nodes.
2. In TOH the number of function calls = (2ⁿ⁺¹ − 1).
3. In TOH the number of moves is TOH(n) = (2ⁿ − 1).
