Adsa Unit 3 & 4
*Introduction*
• A stack is one of the basic linear data structures that we use for storing data.
• Data in a stack is stored in a serialized manner.
• One important thing about a stack is that the data entered first will be the last to leave the stack.
• This is why a stack is also called a LIFO data structure, i.e., Last In First Out.
*OPERATIONS ON STACK*
• push() − Pushing (storing) an element on the stack.
• pop() − Removing (accessing) an element from the stack.
• peek() − get the top data element of the stack, without removing it.
• isFull() − check if stack is full.
• isEmpty() − check if stack is empty.
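The five operations above can be sketched in Python; the class name, list-based storage and fixed capacity here are illustrative choices, not from the notes:

```python
class Stack:
    def __init__(self, capacity):
        self.capacity = capacity    # maximum number of elements
        self.items = []

    def isFull(self):
        return len(self.items) == self.capacity

    def isEmpty(self):
        return len(self.items) == 0

    def push(self, val):
        if self.isFull():
            print("Stack Overflow")
        else:
            self.items.append(val)

    def pop(self):
        if self.isEmpty():
            print("Stack Underflow")
            return None
        return self.items.pop()     # last in, first out

    def peek(self):
        # top element without removing it
        return None if self.isEmpty() else self.items[-1]
```

For example, after `s = Stack(3); s.push(1); s.push(2)`, `s.pop()` returns 2: the element pushed last leaves first.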
*Representation Of Stack*
1. Array Representation of Stack
2. Linked List Representation of Stack
1.ARRAY REPRESENTATION
INSERTING ELEMENTS INTO STACK
void push(int val)
{
    if(top >= n-1)
        cout<<"Stack Overflow"<<endl;
    else
    {
        top++;
        stack[top] = val;
    }
}
DELETING ELEMENTS FROM STACK
void pop()
{
    if(top <= -1)
        cout<<"Stack is empty"<<endl;
    else
    {
        cout<<"The popped element is "<< stack[top] <<endl;
        top--;
    }
}
2.LINKED LIST REPRESENTATION
INSERTING ELEMENTS INTO STACK
void push(int val)
{
    struct Node* newnode = (struct Node*) malloc(sizeof(struct Node));
    newnode->data = val;
    newnode->next = top;
    top = newnode;
}
DELETING ELEMENTS FROM STACK
void pop()
{
    if(top == NULL)
        cout<<"Stack Underflow"<<endl;
    else
    {
        cout<<"The popped element is "<< top->data <<endl;
        struct Node* temp = top;
        top = top->next;
        free(temp);    // release the removed node
    }
}
*MULTIPLE STACKS*
• When a stack is created using a single array, we cannot store a large amount of data; this problem is rectified by maintaining more than one stack in the same array of sufficient size. This technique is called a multiple stack.
1.First Approach
• First, we will divide the array into two sub-arrays. The array will be divided into two equal parts.
First, the sub-array would be considered stack1 and another sub array would be considered
stack2.
• For example, if we have an array of n = 8 elements, the array would be divided into two equal parts of size 4 each, as shown below:
The first subarray would be stack 1, named st1, and the second subarray would be stack 2, named st2. On st1 we perform push1() and pop1() operations, while on st2 we perform push2() and pop2() operations. Stack1 occupies indices 0 to n/2 - 1, and stack2 occupies indices n/2 to n-1.
• If the size of the array is odd, for example 9, then the left subarray would be of size 4 and the right subarray of size 5, as shown in the figure:
2.Second Approach
• In this approach, we are having a single array named as 'a'. In this case, stack1 starts from 0 while
stack2 starts from n-1. Both the stacks start from the extreme corners, i.e., Stack1 starts from the
leftmost corner (at index 0), and Stack2 starts from the rightmost corner (at index n-1). Stack1
extends in the right direction, and stack2 extends in the left direction, shown as below:
• If we push 'a' into stack1 and 'q' into stack2 shown as below:
Therefore, we can say that this approach overcomes the problem of the first approach. Here the stack overflow condition occurs only when top1 + 1 = top2. This is a space-efficient implementation: the overflow error is shown only when the array is actually full, whereas the first approach can show the overflow error even when the array is not full.
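The second approach might be sketched in Python as follows; class and method names are illustrative, and the overflow test `top1 + 1 == top2` matches the condition described above:

```python
class TwoStacks:
    """Two stacks sharing one array: stack1 grows from the left end,
    stack2 grows from the right end."""
    def __init__(self, n):
        self.arr = [None] * n
        self.top1 = -1        # top index of stack1
        self.top2 = n         # top index of stack2

    def push1(self, val):
        if self.top1 + 1 == self.top2:   # overflow only when array is full
            print("Stack Overflow")
            return
        self.top1 += 1
        self.arr[self.top1] = val

    def push2(self, val):
        if self.top1 + 1 == self.top2:
            print("Stack Overflow")
            return
        self.top2 -= 1
        self.arr[self.top2] = val

    def pop1(self):
        if self.top1 == -1:
            print("Stack Underflow")
            return None
        val = self.arr[self.top1]
        self.top1 -= 1
        return val

    def pop2(self):
        if self.top2 == len(self.arr):
            print("Stack Underflow")
            return None
        val = self.arr[self.top2]
        self.top2 += 1
        return val
```

Pushing 'a' into stack1 and 'q' into stack2 of a size-4 array fills index 0 and index 3, exactly as in the figure above.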
• If you use multiple stacks, one of the stacks (or maybe a couple) can be used as a backup stack
which is impossible in case of using a single stack.
Cons of Using Multiple Stacks
• Your website content gets distributed across multiple stacks. This introduces interdependency,
which requires proper management.
• Unnecessary overheads get involved if you use multiple stacks for a static or a simple website.
*Application of stacks*
• The call log in mobile phones uses a stack: to reach the first call in the log you have to scroll to the end.
• Text editors: the Undo/Redo mechanism in text editors (Excel, Notepad, WordPad, etc.)
*QUEUE*
• A queue is defined as a linear data structure that is open at both ends, with operations performed in First In First Out (FIFO) order.
• We define a queue to be a list in which all additions are made at one end and all deletions are made at the other end. The element that is pushed first is the first on which the deletion operation is performed.
*Real Life example of a queue data structure*
• Let’s consider a line of people waiting to buy a ticket at a cinema hall. A new person will join the
line from the end and the person standing at the front will be the first to get the ticket and leave
the line. Similarly in a queue data structure, data added first will leave the queue first.
Some other applications of the queue in real-life are:
• People on an escalator
• Cashier line in a store
• A car wash line
• One way exits
*REPRESENTATION OF QUEUE*
1.ARRAY REPRESENTATION
To implement a queue using an array:
1. Create an array arr of size n.
2. Take two variables front and rear, both initialized to -1, which means the queue is currently empty.
3. rear is the index up to which the elements are stored in the array.
4. front is the index of the first element of the array.
*Insertion in Queue*
void insert (int queue[], int max, int front, int rear, int item)
{
    if (rear + 1 == max)
    {
        printf("overflow");
    }
    else
    {
        if (front == -1 && rear == -1)
        {
            front = 0;
            rear = 0;
        }
        else
        {
            rear = rear + 1;
        }
        queue[rear] = item;
    }
}
*Deletion in Queue*
int delete (int queue[], int max, int front, int rear)
{
    int y;
    if (front == -1 || front > rear)
    {
        printf("underflow");
        return -1;
    }
    else
    {
        y = queue[front];
        if (front == rear)
        {
            front = -1;
            rear = -1;
        }
        else
            front = front + 1;
        return y;
    }
}
*Display in Queue*
void queueDisplay()
{
int i;
if (front == rear) {
printf("\nQueue is Empty\n");
return;
}
// traverse front to rear and print elements
for (i = front; i < rear; i++) {
printf(" %d <-- ", queue[i]);
}
return;
}
2.LINKED LIST REPRESENTATION
*INSERTION IN QUEUE*
struct Node* ptr = (struct Node*) malloc(sizeof(struct Node));
ptr -> data = item;
if(front == NULL)
{
    front = ptr;
    rear = ptr;
    front -> next = NULL;
    rear -> next = NULL;
}
else
{
    rear -> next = ptr;
    rear = ptr;
    rear -> next = NULL;
}
*DELETION IN QUEUE*
ptr = front;
front = front -> next;
free(ptr);
DISPLAY ELEMENTS OF QUEUE
void Display() {
temp = front;
if ((front == NULL) && (rear == NULL)) {
cout<<"Queue is empty"<<endl;
return;
}
cout<<"Queue elements are: ";
while (temp != NULL) {
cout<<temp->data<<" ";
temp = temp->next;
}
cout<<endl;
}
*DIFFERENT TYPES OF QUEUE*
1.CIRCULAR QUEUE
• A circular queue is the extended version of a regular queue where the last element is connected to the first element, forming a circle-like structure.
• The main advantage of a circular queue over a simple queue is better memory utilization. If the
last position is full and the first position is empty, we can insert an element in the first position.
This action is not possible in a simple queue.
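A minimal circular-queue sketch in Python, assuming a fixed-size array and -1 as the "empty" marker (the names are illustrative):

```python
class CircularQueue:
    def __init__(self, n):
        self.arr = [None] * n
        self.front = -1
        self.rear = -1

    def enqueue(self, val):
        n = len(self.arr)
        if (self.rear + 1) % n == self.front:   # next slot wraps onto front
            print("Queue is full")
            return
        if self.front == -1:                    # first element
            self.front = 0
        self.rear = (self.rear + 1) % n         # wrap around the array end
        self.arr[self.rear] = val

    def dequeue(self):
        if self.front == -1:
            print("Queue is empty")
            return None
        val = self.arr[self.front]
        if self.front == self.rear:             # queue became empty
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % len(self.arr)
        return val
```

After filling a size-3 queue and dequeuing once, a new element can be enqueued into the freed first position, which a simple array queue cannot do.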
2.PRIORITY QUEUE
• A priority queue is a special type of queue in which each element is associated with a priority and
is served according to its priority. If elements with the same priority occur, they are served
according to their order in the queue.
• Insertion occurs based on the arrival of the values and removal occurs based on priority.
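Python's standard heapq module can sketch this behaviour: lower numbers mean higher priority here, and an arrival counter breaks ties so equal priorities are served in arrival order (the sample items are made up):

```python
import heapq

# Each heap entry is (priority, arrival_order, value); heapq pops the
# smallest tuple first, so the arrival counter preserves FIFO order
# among equal priorities.
pq = []
items = [(2, "low"), (1, "urgent"), (2, "low2"), (1, "urgent2")]
for order, (priority, value) in enumerate(items):
    heapq.heappush(pq, (priority, order, value))

served = [heapq.heappop(pq)[2] for _ in range(len(pq))]
print(served)   # ['urgent', 'urgent2', 'low', 'low2']
```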
3.DOUBLE-ENDED QUEUE
• Deque or Double Ended Queue is a type of queue in which insertion and removal of elements can
either be performed from the front or the rear. Thus, it does not follow FIFO rule (First In First
Out).
1.Input Restricted Deque
• In this deque, input is restricted to a single end, but deletion is allowed at both ends.
2.Output Restricted Deque
• In this deque, output is restricted to a single end, but insertion is allowed at both ends.
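Python's collections.deque supports all four end operations; the restricted variants can be modelled by simply not using one of them:

```python
from collections import deque

d = deque()
d.append(1)        # insert at the rear
d.append(2)
d.appendleft(0)    # insert at the front
assert list(d) == [0, 1, 2]

assert d.pop() == 2        # remove from the rear
assert d.popleft() == 0    # remove from the front
print(list(d))             # [1]
```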
*Application Of Queue*
• Used as waiting lists for a single shared resource such as the CPU, disk, or printer.
• Used as buffers in MP3 players and portable CD players.
• Used by operating systems to handle interrupts.
• Used in playlists to add a song at the end or play from the front.
• Used by WhatsApp: when we send messages to friends who have no internet connection, the messages are queued on WhatsApp's server.
UNIT 4
Data Structure & Algorithms
Content
• Tree – Types of trees,Creating Binary tree from General tree, Traversing a tree, Huffman Tree
• Binary Search Trees – BST Operations,Threaded Binary Tree,AVL
Trees, Red Black Trees,Splay Trees
INTRODUCTION TO TREE
A tree is called a binary tree if each node has at most two children.
A Binary tree is represented by a pointer to the topmost node of the tree.
Binary Tree node contains the following parts :-
• Data
• Pointer to left child
• Pointer to right child
Basic Operation On Binary Tree :-
• Inserting an element.
• Removing an element.
• Searching for an element.
• Traversing an element.
Tree Traversals (In-order, Pre-order and Post-order)
Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have only one
logical way to traverse them, trees can be traversed in different ways.
TYPES OF TREES
The following are the different types of tree data structures:
Binary Tree
Binary Search Tree (BST)
AVL Tree
B-Tree
Binary Tree
Binary tree is a tree data structure in which each node can have 0, 1, or 2 children – left and right child.
Binary trees can be divided into the following types:
1.Perfect binary tree: Every internal node has two child nodes, and all the leaf nodes are at the same level.
2.Full binary tree: Every parent node or internal node has either exactly two children or no child nodes.
3.Complete binary tree: All levels except possibly the last are completely filled, and the last level has all its nodes as far left as possible.
4.Degenerate binary tree: Every internal node has only one child.
5.Balanced binary tree: The heights of the left and right subtrees of every node differ by at most 1.
Tree Traversals (Pre-Order, Post-Order and In-Order)
1.Inorder Traversal
Algorithm Inorder(tree)
a. Traverse the left subtree, i.e., call Inorder(left)
b. Visit the root.
c. Traverse the right subtree, i.e., call Inorder(right)
2.Preorder Traversal
Algorithm Preorder(tree)
a. Visit the root.
b. Traverse the left subtree, i.e., call Preorder(left)
c. Traverse the right subtree, i.e., call Preorder(right)
3.Postorder Traversal
Algorithm Postorder(tree)
a. Traverse the left subtree, i.e., call Postorder(left)
b. Traverse the right subtree, i.e., call Postorder(right)
c. Visit the root.
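The three traversal algorithms above can be sketched directly in Python (the sample tree is illustrative):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None

def inorder(node, out):
    if node:
        inorder(node.left, out)    # a. traverse the left subtree
        out.append(node.val)       # b. visit the root
        inorder(node.right, out)   # c. traverse the right subtree

def preorder(node, out):
    if node:
        out.append(node.val)       # a. visit the root first
        preorder(node.left, out)
        preorder(node.right, out)

def postorder(node, out):
    if node:
        postorder(node.left, out)
        postorder(node.right, out)
        out.append(node.val)       # c. visit the root last

#       1
#      / \
#     2   3
#    / \
#   4   5
root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)

for name, fn in [("inorder", inorder), ("preorder", preorder),
                 ("postorder", postorder)]:
    out = []
    fn(root, out)
    print(name, out)
# inorder  [4, 2, 5, 1, 3]
# preorder [1, 2, 4, 5, 3]
# postorder [4, 5, 2, 3, 1]
```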
Huffman Coding Tree
Huffman coding assigns codes to characters such that the length of a code depends on the relative frequency or weight of the corresponding character. Huffman codes are of variable length and prefix-free (no code is a prefix of any other). Any prefix-free binary code can be visualized as a binary tree with the encoded characters stored at the leaves.
A Huffman tree, or Huffman coding tree, is defined as a full binary tree in which each leaf corresponds to a letter in the given alphabet.
The Huffman tree is the binary tree with minimum external path weight, i.e., the one with the minimum sum of weighted path lengths for the given set of leaves. So the goal is to construct a tree with minimum external path weight.
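A standard greedy construction of a Huffman tree can be sketched with a min-heap; the frequency table below is illustrative, since the notes' own table did not survive:

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman tree from a {char: weight} table and return the
    prefix-free codes. A tree is either a leaf (a character) or a
    (left, right) pair of subtrees."""
    # Heap entries: (weight, tiebreak, tree). The tiebreak keeps tuple
    # comparison well-defined when weights are equal.
    heap = [(w, i, ch) for i, (ch, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two lightest subtrees.
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, str):      # leaf: a character
            codes[tree] = prefix or "0"
        else:
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(codes)
```

The most frequent letter gets the shortest code, and no code is a prefix of another, which is exactly the minimum-external-path-weight property described above.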
Letter frequency table :-
Huffman Coding Tree
Example :-
Properties of binary tree :-
The properties of a binary tree are as follows:
1. The maximum number of nodes at level 'l' of a binary tree is 2^l.
Here 'l' is the level, i.e., the number of edges on the path from the root to the node. The level of the root is 0.
This can be proved by induction:
For the root, l = 0, number of nodes = 2^0 = 1.
Assume that the maximum number of nodes on level 'l' is 2^l.
Since in a binary tree every node has at most 2 children, the next level has at most twice as many nodes, i.e., 2 * 2^l = 2^(l+1).
2. The maximum number of nodes in a binary tree of height 'h' is 2^h - 1.
Here the height of a tree is the maximum number of nodes on a root-to-leaf path. The height of a tree with a single node is considered to be 1.
A tree has the maximum number of nodes when all its levels are full. So the maximum number of nodes in a binary tree of height h is 1 + 2 + 4 + ... + 2^(h-1). This is a geometric series with h terms, and the sum of this series is 2^h - 1.
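These two properties can be checked numerically with a short Python sketch:

```python
# Property 1: the maximum number of nodes at level l is 2**l
# (each level can at most double the previous one).
max_at_level = [1]
for l in range(1, 6):
    max_at_level.append(2 * max_at_level[-1])
assert max_at_level == [2 ** l for l in range(6)]

# Property 2: a tree of height h holds at most
# 1 + 2 + 4 + ... + 2**(h-1) = 2**h - 1 nodes.
for h in range(1, 8):
    assert sum(2 ** l for l in range(h)) == 2 ** h - 1

print(sum(2 ** l for l in range(3)))   # height 3 -> at most 7 nodes
```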
18
/ \
15 30
/ \ / \
40 50 100 40
/ \ /
8 7 9
BINARY SEARCH TREE
Binary Search Tree is a node-based binary tree data structure which has the following properties:
• The left subtree of a node contains only nodes with keys lesser than the node’s key.
• The right subtree of a node contains only nodes with keys greater than the node’s key.
• The left and right subtree each must also be a binary search tree.
The above properties of a Binary Search Tree provide an ordering among keys so that operations like search, minimum, and maximum can be done quickly. If there were no ordering, we might have to compare every key to search for a given key.
The search operation in a binary search tree works like binary search.
Illustration to search in below tree:
1. Start from the root.
2. Compare the searching element with root, if less than root, then recursively
call left subtree, else recursively call right subtree.
3. If the element to search is found anywhere, return true, else return false.
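The search steps above can be sketched in Python (the sample tree is illustrative):

```python
class Node:
    def __init__(self, key):
        self.val = key
        self.left = None
        self.right = None

def search(root, key):
    # 1. Start from the root; 2. go left if the key is smaller,
    # right otherwise; 3. report whether the key was found.
    if root is None:
        return False
    if root.val == key:
        return True
    if key < root.val:
        return search(root.left, key)
    return search(root.right, key)

root = Node(50)
root.left, root.right = Node(30), Node(70)
root.left.left, root.left.right = Node(20), Node(40)
print(search(root, 40), search(root, 55))   # True False
```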
Insertion in Binary Search Tree
A new key is always inserted at the leaf. We start searching for a key from the root until we hit a leaf
node. Once a leaf node is found, the new node is added as a child of the leaf node.
        100                            100
       /   \        Insert 40         /   \
      20   500    - - - - - - ->     20   500
     /  \                           /  \
   10    30                       10    30
                                          \
                                           40
Illustration to insert in below tree :-
1. Start from the root.
2. Compare the inserting element with root, if less than root, then recursively call left subtree, else
recursively call right subtree.
3. After reaching the end, just insert that node at left(if less than current) or else right.
Time Complexity : The worst-case time complexity of search and insert operations is O(h) where h is the
height of the Binary Search Tree. In the worst case, we may have to travel from root to the deepest leaf
node. The height of a skewed tree may become n and the time complexity of search and insert operation
may become O(n).
class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    if root is None:
        return Node(key)
    else:
        if root.val == key:
            return root
        elif root.val < key:
            root.right = insert(root.right, key)
        else:
            root.left = insert(root.left, key)
    return root

def inorder(root):
    if root:
        inorder(root.left)
        print(root.val)
        inorder(root.right)
r = Node(50)
r = insert(r, 30)
r = insert(r, 20)
r = insert(r, 40)
r = insert(r, 70)
r = insert(r, 60)
r = insert(r, 80)
inorder(r)
Deletion in Binary Search Tree
The important thing to note is, inorder successor is needed only when the right child is not
empty. In this particular case, inorder successor can be obtained by finding the minimum value
in the right child of the node.
Time Complexity : The worst case time complexity of delete operation is O(h) where h is the
height of the Binary Search Tree. In worst case, we may have to travel from the root to the
deepest leaf node. The height of a skewed tree may become n and the time complexity of delete
operation may become O(n).
Optimization to above code for two children case :
In the recursive code, we recursively call delete() for the successor. We can avoid the recursive call by keeping track of the parent node of the successor, so that the successor can be removed by updating its parent's child pointer. The successor never has a left child, so its (possibly empty) right subtree takes its place.
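The optimization described above might look like the following sketch; note that the inorder successor never has a left child, so its (possibly empty) right subtree takes its place (the names are illustrative):

```python
class Node:
    def __init__(self, key):
        self.val = key
        self.left = None
        self.right = None

def delete_two_children(node):
    """Replace node's key with its inorder successor and unlink the
    successor iteratively, tracking the successor's parent."""
    succ_parent = node
    succ = node.right
    while succ.left is not None:       # minimum of the right subtree
        succ_parent = succ
        succ = succ.left
    # Unlink succ: its right subtree (possibly None) takes its place.
    if succ_parent is node:
        succ_parent.right = succ.right
    else:
        succ_parent.left = succ.right
    node.val = succ.val
    return node
```

For example, deleting 50 from a tree 50(30, 70(60, 80)) copies the successor 60 into the root and unlinks the leaf 60, with no recursive delete() call.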
If the node to be deleted has no left child, it is simply replaced by its right subtree:
if root is None:
    return None
if root.left is None:
    temp = root.right
    root = None
    return temp
AVL TREE
AVL tree is a self-balancing Binary Search Tree (BST) where the difference between heights of left and
right subtrees cannot be more than one for all nodes.
The 1st tree is AVL because the differences between the heights of left and right subtrees for every node
are less than or equal to 1 while the 2nd tree is not AVL because the differences between the heights of the
left and right subtrees for 8 and 12 are greater than 1.
Why AVL Trees?
Most of the BST operations (e.g., search, max, min, insert, delete.. etc) take O(h) time where h is the
height of the BST. The cost of these operations may become O(n) for a skewed Binary tree. If we make
sure that the height of the tree remains O(log(n)) after every insertion and deletion, then we can guarantee
an upper bound of O(log(n)) for all these operations. The height of an AVL tree is always O(log(n))
where n is the number of nodes in the tree.
a) Left Left Case

         z                                      y
        / \                                   /   \
       y   T4     Right Rotate (z)           x     z
      / \         - - - - - - - - ->        / \   / \
     x   T3                               T1  T2 T3  T4
    / \
  T1   T2

b) Left Right Case

     z                               z                             x
    / \                            /   \                          /  \
   y   T4   Left Rotate (y)       x    T4   Right Rotate (z)     y    z
  / \       - - - - - - - - ->   / \        - - - - - - - ->    / \  / \
T1   x                          y   T3                        T1 T2 T3 T4
    / \                        / \
  T2   T3                    T1   T2

c) Right Right Case

    z                                 y
   / \                              /   \
 T1   y       Left Rotate (z)      z     x
     / \      - - - - - - - ->    / \   / \
   T2   x                       T1  T2 T3  T4
       / \
     T3   T4

d) Right Left Case

    z                             z                              x
   / \                          /   \                          /   \
 T1   y    Right Rotate (y)   T1     x     Left Rotate (z)    z     y
     / \   - - - - - - - ->         / \    - - - - - - - ->  / \   / \
    x   T4                        T2   y                   T1  T2 T3  T4
   / \                                / \
 T2   T3                            T3   T4
The idea is implemented with the following steps :-
1. Perform the normal BST insertion.
2. The current node must be one of the ancestors of the newly inserted node. Update the height of
the current node.
3. Get the balance factor (left subtree height – right subtree height) of the current node.
4. If the balance factor is greater than 1, then the current node is unbalanced and we are either in the Left-Left case or the Left-Right case. To check whether it is the Left-Left case or not, compare the newly inserted key with the key in the left subtree root.
5. If the balance factor is less than -1, then the current node is unbalanced and we are either in the Right-Right case or the Right-Left case. To check whether it is the Right-Right case or not, compare the newly inserted key with the key in the right subtree root.
Time Complexity: O(log n) for insertion.
class TreeNode:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.height = 1

class AVL_Tree:

    def insert(self, root, key):
        # Step 1 - Perform normal BST insertion
        if not root:
            return TreeNode(key)
        elif key < root.val:
            root.left = self.insert(root.left, key)
        else:
            root.right = self.insert(root.right, key)
        # Step 2 - Update the height of the ancestor node
        root.height = 1 + max(self.getHeight(root.left),
                              self.getHeight(root.right))
        # Step 3 - Get the balance factor
        balance = self.getBalance(root)
        # Step 4 - If the node is unbalanced, try the 4 cases
        # Case 1 - Left Left
        if balance > 1 and key < root.left.val:
            return self.rightRotate(root)
        # Case 2 - Right Right
        if balance < -1 and key > root.right.val:
            return self.leftRotate(root)
        # Case 3 - Left Right
        if balance > 1 and key > root.left.val:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
        # Case 4 - Right Left
        if balance < -1 and key < root.right.val:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
        return root

    def leftRotate(self, z):
        y = z.right
        T2 = y.left
        y.left = z
        z.right = T2
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def rightRotate(self, z):
        y = z.left
        T3 = y.right
        y.right = z
        z.left = T3
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def getHeight(self, root):
        if not root:
            return 0
        return root.height

    def getBalance(self, root):
        if not root:
            return 0
        return self.getHeight(root.left) - self.getHeight(root.right)

    def preOrder(self, root):
        if not root:
            return
        print("{0} ".format(root.val), end="")
        self.preOrder(root.left)
        self.preOrder(root.right)
myTree = AVL_Tree()
root = None
root = myTree.insert(root, 10)
root = myTree.insert(root, 20)
root = myTree.insert(root, 30)
root = myTree.insert(root, 40)
root = myTree.insert(root, 50)
root = myTree.insert(root, 25)
print("Preorder traversal of the",
"constructed AVL tree is")
myTree.preOrder(root)
print()
c) Right Right Case

    z                                 y
   / \                              /   \
 T1   y       Left Rotate (z)      z     x
     / \      - - - - - - - ->    / \   / \
   T2   x                       T1  T2 T3  T4
       / \
     T3   T4

d) Right Left Case

    z                             z                              x
   / \                          /   \                          /   \
 T1   y    Right Rotate (y)   T1     x     Left Rotate (z)    z     y
     / \   - - - - - - - ->         / \    - - - - - - - ->  / \   / \
    x   T4                        T2   y                   T1  T2 T3  T4
   / \                                / \
 T2   T3                            T3   T4
Time Complexity : Since the AVL tree is balanced, its height is O(log n). So the time complexity of AVL delete is O(log n).
# The following methods belong to the same AVL_Tree class as above.

    def delete(self, root, key):
        # Step 1 - Perform standard BST delete
        if not root:
            return root
        elif key < root.val:
            root.left = self.delete(root.left, key)
        elif key > root.val:
            root.right = self.delete(root.right, key)
        else:
            if root.left is None:
                temp = root.right
                root = None
                return temp
            elif root.right is None:
                temp = root.left
                root = None
                return temp
            temp = self.getMinValueNode(root.right)
            root.val = temp.val
            root.right = self.delete(root.right, temp.val)
        if root is None:
            return root
        # Step 2 - Update the height of the ancestor node
        root.height = 1 + max(self.getHeight(root.left),
                              self.getHeight(root.right))
        # Step 3 - Get the balance factor
        balance = self.getBalance(root)
        # Step 4 - If the node is unbalanced, try the 4 cases
        # Case 1 - Left Left
        if balance > 1 and self.getBalance(root.left) >= 0:
            return self.rightRotate(root)
        # Case 2 - Right Right
        if balance < -1 and self.getBalance(root.right) <= 0:
            return self.leftRotate(root)
        # Case 3 - Left Right
        if balance > 1 and self.getBalance(root.left) < 0:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
        # Case 4 - Right Left
        if balance < -1 and self.getBalance(root.right) > 0:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
        return root

    def getMinValueNode(self, root):
        if root is None or root.left is None:
            return root
        return self.getMinValueNode(root.left)

# (leftRotate, rightRotate, getHeight, getBalance and preOrder are the
# same as in the insertion code.)

myTree = AVL_Tree()
root = None
nums = [9, 5, 10, 0, 6, 11, -1, 1, 2]
for num in nums:
    root = myTree.insert(root, num)
print("Preorder Traversal after insertion -")
myTree.preOrder(root)
print()
key = 10
root = myTree.delete(root, key)
print("Preorder Traversal after deletion -")
myTree.preOrder(root)
print()
In right-left rotation, the arrangements are first shifted to the right and then to the left.
1. Do right rotation on x-y.
2. Do left rotation on z-y.
Deletion in Red Black Tree
Algorithm to delete a node :-
1. Save the color of nodeToBeDeleted in originalColor.
2. If the left child of nodeToBeDeleted is NULL
a. Assign the right child of nodeToBeDeleted to x.
b. Transplant nodeToBeDeleted with x.
3. Else if the right child of nodeToBeDeleted is NULL
a. Assign the left child of nodeToBeDeleted into x.
b. Transplant nodeToBeDeleted with x.
4. Else
a. Assign the minimum of the right subtree of nodeToBeDeleted into y.
b. Save the color of y in originalColor.
c. Assign the rightChild of y into x.
d. If y is a child of nodeToBeDeleted, then set the parent of x as y.
e. Else, transplant y with rightChild of y.
f. Transplant nodeToBeDeleted with y.
g. Set the color of y with originalColor.
5. If the originalColor is BLACK, call DeleteFix(x).
THREADED TREES
In the linked representation of binary trees, more than one half of the link fields contain NULL values
which results in wastage of storage space. If a binary tree consists of n nodes then n+1 link fields
contain NULL values. So in order to effectively manage the space, a method was devised by Perlis
and Thornton in which the NULL links are replaced with special links known as threads. Such binary
trees with threads are known as threaded binary trees. Each node in a threaded binary tree either
contains a link to its child node or thread to other nodes in the tree.
Advantages of Threaded Binary Tree
1. It enables linear traversal of the elements.
2. It eliminates the use of a stack, since traversal is linear.
3. It enables finding the parent node without an explicit parent pointer.
4. A threaded tree allows forward and backward traversal of nodes in in-order fashion.
5. Nodes contain pointers to their in-order predecessor and successor.
In one-way threaded binary trees, a thread appears in either the right or the left link field of a node. If it appears in the right link field, it points to the next node in the inorder traversal; such trees are called right threaded binary trees. If the thread appears in the left field, it points to the node's inorder predecessor; such trees are called left threaded binary trees.
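A right-threaded tree can be sketched in Python with a flag that records whether the right link is a real child or a thread to the inorder successor; the thread allows inorder traversal without a stack (the names are illustrative):

```python
class ThreadedNode:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.right_is_thread = False   # True: right points to inorder successor

def leftmost(node):
    while node.left is not None:
        node = node.left
    return node

def inorder(root):
    out = []
    node = leftmost(root) if root else None
    while node is not None:
        out.append(node.val)
        if node.right_is_thread:
            node = node.right              # follow the thread
        elif node.right is not None:
            node = leftmost(node.right)    # descend into the right subtree
        else:
            node = None                    # rightmost node: done
    return out

# Hand-threaded tree:   2
#                      / \
#                     1   3     (1's right link is a thread back to 2)
root = ThreadedNode(2)
root.left = ThreadedNode(1)
root.right = ThreadedNode(3)
root.left.right = root
root.left.right_is_thread = True
print(inorder(root))   # [1, 2, 3]
```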
SPLAY TREES
A splay tree is a binary search tree with the additional property that recently accessed elements are quick
to access again. Like self-balancing binary search trees, a splay tree performs basic operations such as
insertion, look-up and removal in O(log n) amortized time.
In a splay tree, every operation is performed at the root of the tree. All operations in the splay tree
involve one common operation called splaying.
1.Zig rotation [Right Rotation]
2.Zig zig [Two Right Rotations]
3.Zag rotation [Left Rotation]
4.Zag zag [Two Left Rotations]
5.Zig zag [Zig followed by Zag]
6.Zag zig [Zag followed by Zig]
Zig rotation
This rotation is similar to the right rotation in the AVL tree. In zig rotation, every node moves one
position to the right of its current position. We use Zig rotation when the item which is to be searched is
either a root node or a left child of the root node.
Zag rotation
This rotation is similar to the left rotation in the AVL tree. In zag rotation, every node moves one position
to the left of its current position. We use Zag rotation when the item which is to be searched is either a
root node or a right child of the root node.
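The two basic rotations can be sketched in Python; this is a sketch of the rotation step only, not a full splay operation, and the names are illustrative:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def zig(x):
    """Right rotation: promote x's left child to the subtree root
    (used when the target is the left child)."""
    y = x.left
    x.left = y.right     # y's right subtree moves under x
    y.right = x
    return y             # y is the new subtree root

def zag(x):
    """Left rotation: promote x's right child to the subtree root."""
    y = x.right
    x.right = y.left
    y.left = x
    return y
```

For example, a root 2 with left child 1 becomes, after `zig`, a root 1 with right child 2, moving the accessed node one position toward the root.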
Zig Zag rotation
This type of rotation is a sequence of zig rotations followed by zag rotations. So far, we've seen that both
the parent and the grandparent are in a RR or LL relationship. Now, here we will see the RL and LR kinds
of relationships between parent and grandparent. Every node moves one position to the right, followed by
one position to the left of its current position.