ADSA Unit 3 & 4


UNIT 3

STACK AND QUEUE


STACK:-
• A stack is an ordered collection of items in which the addition of new items and the removal of
existing items always take place at the same end.

*Introduction*
• A stack is one of the basic linear data structures that we use for storing data.
• Data in a stack is stored in a serialized manner.
• One important property of a stack is that the data entered first will be removed last.
• This is why a stack is also called a LIFO data structure, i.e., Last In, First Out.

*OPERATIONS ON STACK*
• push() − push (store) an element onto the stack.
• pop() − remove (and access) the top element of the stack.
• peek() − get the top element of the stack without removing it.
• isFull() − check whether the stack is full.
• isEmpty() − check whether the stack is empty.
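The five operations can be sketched with a Python list; this is a minimal illustrative implementation (the `Stack` class and the `capacity` parameter are our own, not from the notes):

```python
class Stack:
    """Minimal array-backed stack (LIFO)."""
    def __init__(self, capacity):
        self.items = []
        self.capacity = capacity  # fixed size, as in the array version below

    def is_empty(self):
        return len(self.items) == 0

    def is_full(self):
        return len(self.items) == self.capacity

    def push(self, val):
        if self.is_full():
            raise OverflowError("Stack Overflow")
        self.items.append(val)

    def pop(self):
        if self.is_empty():
            raise IndexError("Stack Underflow")
        return self.items.pop()

    def peek(self):
        return self.items[-1]   # top element, not removed

s = Stack(3)
s.push(10); s.push(20)
print(s.peek())  # 20
print(s.pop())   # 20
print(s.pop())   # 10
```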

*Representation Of Stack*
1. Array Representation of Stack
2. Linked List Representation of Stack

1.ARRAY REPRESENTATION

INSERTING ELEMENTS INTO STACK
void push(int val)
{
    if (top >= n - 1)
        cout << "Stack Overflow" << endl;
    else {
        top++;
        stack[top] = val;
    }
}

DELETING ELEMENTS FROM STACK
void pop()
{
    if (top <= -1)
        cout << "Stack Underflow" << endl;
    else {
        cout << "The popped element is " << stack[top] << endl;
        top--;
    }
}
DISPLAY ELEMENTS OF STACK
void display()
{
    if (top >= 0)
    {
        cout << "Stack elements are:";
        for (int i = top; i >= 0; i--)
            cout << stack[i] << " ";
        cout << endl;
    }
    else
        cout << "Stack is empty";
}

*LINKED LIST REPRESENTATION OF STACK*

INSERTING ELEMENTS INTO STACK
void push(int val)
{
    struct Node* newnode = (struct Node*) malloc(sizeof(struct Node));
    newnode->data = val;
    newnode->next = top;
    top = newnode;
}
DELETING ELEMENTS FROM STACK
void pop()
{
    if (top == NULL)
        cout << "Stack Underflow" << endl;
    else
    {
        struct Node* temp = top;
        cout << "The popped element is " << top->data << endl;
        top = top->next;
        free(temp);   // release the popped node
    }
}

DISPLAYING ELEMENTS OF STACK
void display()
{
    struct Node* ptr;
    if (top == NULL)
        cout << "Stack is empty";
    else
    {
        ptr = top;
        cout << "Stack elements are: ";
        while (ptr != NULL)
        {
            cout << ptr->data << " ";
            ptr = ptr->next;
        }
    }
    cout << endl;
}

*MULTIPLE STACKS*
• When a stack is created using a single array, we may not be able to store a large amount of data. This
problem can be addressed by maintaining more than one stack in the same array of sufficient size. This
technique is called a multiple stack.

1.First Approach
• First, we divide the array into two sub-arrays of equal size. The first sub-array is treated as
stack1 and the second sub-array as stack2.
• For example, if we have an array of size n = 8, it is divided into two equal parts of size 4 each,
as shown below:

The first subarray would be stack 1, named st1, and the second subarray would be stack 2, named
st2. On st1 we perform the push1() and pop1() operations, while on st2 we perform the push2() and
pop2() operations. Stack1 occupies indices 0 to n/2 - 1, and stack2 occupies indices n/2 to n - 1.

• If the size of the array is odd, for example 9, then the left subarray would be of size 4 and the
right subarray of size 5, as shown in the figure:

2.Second Approach
• In this approach, we have a single array named 'a'. Here, stack1 starts from index 0 while
stack2 starts from index n-1. Both stacks start from the extreme ends: stack1 starts from the
leftmost end (index 0) and stack2 from the rightmost end (index n-1). Stack1 grows to the
right and stack2 grows to the left, as shown below:
• For example, we might push 'a' into stack1 and 'q' into stack2, as shown below:

Therefore, this approach overcomes the problem of the first approach. Here, the stack overflow
condition occurs only when top1 + 1 = top2. This approach is space-efficient: it reports overflow
only when the array is completely full. In contrast, the first approach can report overflow even
when the array is not full.
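The second approach can be sketched as follows (an illustrative Python sketch; the class name `TwoStacks` is our own, and the overflow test mirrors the top1 + 1 = top2 condition above):

```python
class TwoStacks:
    """Two stacks in one array, growing toward each other (second approach)."""
    def __init__(self, n):
        self.arr = [None] * n
        self.top1 = -1        # stack1 grows rightward from index 0
        self.top2 = n         # stack2 grows leftward from index n-1

    def push1(self, val):
        if self.top1 + 1 == self.top2:   # overflow only when the array is full
            raise OverflowError("Stack Overflow")
        self.top1 += 1
        self.arr[self.top1] = val

    def push2(self, val):
        if self.top1 + 1 == self.top2:
            raise OverflowError("Stack Overflow")
        self.top2 -= 1
        self.arr[self.top2] = val

    def pop1(self):
        if self.top1 == -1:
            raise IndexError("Stack Underflow")
        val = self.arr[self.top1]
        self.top1 -= 1
        return val

    def pop2(self):
        if self.top2 == len(self.arr):
            raise IndexError("Stack Underflow")
        val = self.arr[self.top2]
        self.top2 += 1
        return val

ts = TwoStacks(6)
ts.push1('a'); ts.push2('q')
print(ts.pop1(), ts.pop2())  # a q
```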

*Pros and Cons of Using Single Vs Multiple Stacks*


Pros of Using a Single Stack:
• When you have a relatively small website (with just one domain), a single stack for managing
the content will suffice. A static or experimental site is a good example of single-stack usage.
• It involves less overhead than using multiple stacks, so management of a single stack is
much simpler.
• Less interdependence, as there is just one stack.

Cons of Using a Single Stack


• Maintaining assets and entries can become complicated when you have multiple domains such as
blogs, docs, e-commerce, etc.
• It becomes challenging to manage various projects that involve multiple teams in a single stack.
Pros of Using Multiple Stacks
• When you have multiple domains (such as blogs, marketing, integrations, and so on) as part of
your website, using multiple stacks to manage and maintain the content of each domain is
simplified.
• You can keep content related to each domain in a separate stack. Thus, it is easier to categorize
and classify your website content.
• Managing multiple projects becomes easier as you can distribute and segregate the project's
content in different stacks.

• If you use multiple stacks, one of the stacks (or maybe a couple) can be used as a backup stack
which is impossible in case of using a single stack.
Cons of Using Multiple Stacks
• Your website content gets distributed across multiple stacks. This introduces interdependency,
which requires proper management.
• Unnecessary overheads get involved if you use multiple stacks for a static or a simple website.

*Application of stacks*
• The call log in mobile phones uses a stack: the most recent call is on top, and to reach the
earliest call you have to scroll.
• Text editors: the Undo/Redo mechanism in text editors (Excel, Notepad, WordPad, etc.).

*QUEUE*
• A queue is a linear data structure that is open at both ends, and operations are
performed in First In First Out (FIFO) order.
• We define a queue to be a list in which all additions are made at one end and all
deletions are made at the other end. The element that is inserted first is the first one
to be removed.

*Real Life example of a queue data structure*
• Let’s consider a line of people waiting to buy a ticket at a cinema hall. A new person will join the
line from the end and the person standing at the front will be the first to get the ticket and leave
the line. Similarly in a queue data structure, data added first will leave the queue first.
Some other applications of the queue in real-life are:
• People on an escalator
• Cashier line in a store
• A car wash line
• One way exits

*REPRESENTATION OF QUEUE*
1.ARRAY REPRESENTATION
To implement a queue using an array:
1.create an array arr of size n, and
2.take two variables, front and rear, both initialized to -1, which means the queue is
currently empty.
3.rear is the index up to which elements are stored in the array, and
4.front is the index of the first element of the array.

*Insertion in Queue*
/* front and rear are global variables, initialized to -1 */
void insert(int queue[], int max, int item)
{
    if (rear + 1 == max)
    {
        printf("overflow");
    }
    else
    {
        if (front == -1 && rear == -1)
        {
            front = 0;
            rear = 0;
        }
        else
        {
            rear = rear + 1;
        }
        queue[rear] = item;
    }
}

*Deletion in Queue*
int delete(int queue[])
{
    int y = -1;
    if (front == -1 || front > rear)
    {
        printf("underflow");
    }
    else
    {
        y = queue[front];
        if (front == rear)   /* queue becomes empty */
            front = rear = -1;
        else
            front = front + 1;
    }
    return y;
}

*Display in Queue*
void queueDisplay()
{
    int i;
    if (front == -1)
    {
        printf("\nQueue is Empty\n");
        return;
    }
    /* traverse from front to rear and print elements */
    for (i = front; i <= rear; i++)
    {
        printf(" %d <-- ", queue[i]);
    }
}

2.LINKED LIST REPRESENTATION

The array implementation cannot be used for large-scale applications, since the array's size is
fixed. An alternative to the array implementation is the linked list implementation of a queue.

1. INSERTION IN QUEUE
ptr = (struct Node*) malloc(sizeof(struct Node));
ptr->data = item;
ptr->next = NULL;
if (front == NULL)
{
    front = ptr;
    rear = ptr;
}
else
{
    rear->next = ptr;
    rear = ptr;
}

*DELETION IN QUEUE*
ptr = front;
front = front->next;
if (front == NULL)   // queue became empty
    rear = NULL;
free(ptr);
DISPLAY ELEMENTS OF QUEUE
void Display() {
    temp = front;
    if ((front == NULL) && (rear == NULL)) {
        cout << "Queue is empty" << endl;
        return;
    }
    cout << "Queue elements are: ";
    while (temp != NULL) {
        cout << temp->data << " ";
        temp = temp->next;
    }
    cout << endl;
}

*DIFFERENT TYPES OF QUEUE*

1.CIRCULAR QUEUE
• A circular queue is an extended version of a regular queue in which the last position is connected
back to the first position, forming a circle-like structure.
• The main advantage of a circular queue over a simple queue is better memory utilization. If the
last position is full and the first position is empty, we can insert an element at the first position.
This is not possible in a simple queue.
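The wrap-around behaviour can be sketched as follows (an illustrative Python sketch; the class name and method names are our own):

```python
class CircularQueue:
    """Fixed-size circular queue: rear wraps around to reuse freed front slots."""
    def __init__(self, n):
        self.arr = [None] * n
        self.front = -1
        self.rear = -1

    def enqueue(self, val):
        n = len(self.arr)
        if (self.rear + 1) % n == self.front:
            raise OverflowError("Queue is full")
        if self.front == -1:              # first element
            self.front = 0
        self.rear = (self.rear + 1) % n   # wrap around the end of the array
        self.arr[self.rear] = val

    def dequeue(self):
        if self.front == -1:
            raise IndexError("Queue is empty")
        val = self.arr[self.front]
        if self.front == self.rear:       # queue becomes empty
            self.front = self.rear = -1
        else:
            self.front = (self.front + 1) % len(self.arr)
        return val

q = CircularQueue(3)
q.enqueue('a'); q.enqueue('b'); q.enqueue('c')
print(q.dequeue())  # a
q.enqueue('d')      # reuses the slot freed at the front
```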

2.PRIORITY QUEUE
• A priority queue is a special type of queue in which each element is associated with a priority and
is served according to its priority. If elements with the same priority occur, they are served
according to their order in the queue.
• Insertion occurs based on the arrival of the values and removal occurs based on priority.
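A minimal sketch using Python's standard `heapq` module (the tasks and priorities are invented for illustration; the insertion counter is a tie-breaker so that equal priorities are served in arrival order, as described above):

```python
import heapq

# Min-heap based priority queue: a lower number means a higher priority.
pq = []
counter = 0
for priority, task in [(2, "write report"), (1, "fix outage"), (2, "reply email")]:
    heapq.heappush(pq, (priority, counter, task))  # counter breaks priority ties
    counter += 1

served = []
while pq:
    priority, _, task = heapq.heappop(pq)  # removal is by priority, not arrival
    served.append(task)

print(served)  # ['fix outage', 'write report', 'reply email']
```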

3.DOUBLE-ENDED QUEUE
• A deque or Double Ended Queue is a type of queue in which insertion and removal of elements can
be performed at either the front or the rear. Thus, it does not follow the FIFO (First In First
Out) rule.
1.Input Restricted Deque
• In this deque, input is restricted at a single end but allows deletion at both the ends.
2.Output Restricted Deque
• In this deque, output is restricted at a single end but allows insertion at both the ends.
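Python's standard `collections.deque` supports exactly these operations; an input- or output-restricted deque can be modeled by simply not using one pair of operations:

```python
from collections import deque

# Double-ended queue: insert and remove at both ends.
d = deque()
d.append(1)         # insert at the rear
d.append(2)
d.appendleft(0)     # insert at the front
print(list(d))      # [0, 1, 2]
print(d.pop())      # remove from the rear  -> 2
print(d.popleft())  # remove from the front -> 0
```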

*Application Of Queue*
• Used as waiting lists for a single shared resource such as the CPU, disk, or printer.
• Used as buffers in MP3 players and portable CD players.
• Used by operating systems to handle interrupts.
• Used in playlists to add a song at the end or play from the front.
• Used by WhatsApp: when we send messages to friends who have no internet
connection, the messages are queued on WhatsApp's servers.

UNIT 4
Data Structure & Algorithms

Content
• Tree – Types of Trees, Creating Binary Tree from General Tree, Traversing a Tree, Huffman Tree
• Binary Search Trees – BST Operations, Threaded Binary Tree, AVL Trees, Red Black Trees, Splay Trees

INTRODUCTION TO TREE
A tree is called a binary tree if each node has at most 2 children.
A binary tree is represented by a pointer to the topmost (root) node of the tree.
A binary tree node contains the following parts :-
• Data
• Pointer to left child
• Pointer to right child
Basic Operations On Binary Tree :-
• Inserting an element.
• Removing an element.
• Searching for an element.
• Traversing the tree.
 Tree Traversals (In-order, Pre-order and Post-order)
Unlike linear data structures (arrays, linked lists, queues, stacks, etc.), which have only one
logical way to traverse them, trees can be traversed in different ways.

TYPES OF TREES
The following are the different types of tree data structures:
Binary Tree
Binary Search Tree (BST)
AVL Tree
B-Tree

Binary Tree
A binary tree is a tree data structure in which each node can have 0, 1, or 2 children – a left and a right child.
Binary trees can be divided into the following types:
1.Perfect binary tree: every internal node has two children and all the leaf nodes are at the same level.
2.Full binary tree: every internal node has either exactly two children or no children.
3.Complete binary tree: all levels except possibly the last are completely filled, and the nodes in the last level are as far left as possible.
4.Degenerate binary tree: every internal node has only one child.
5.Balanced binary tree: the heights of the left and right subtrees of every node differ by at most 1.

Binary Search Tree (BST)

A binary search tree (BST), also called an ordered or sorted binary tree, is a binary tree in which
the values in the left subtree are less than the root's value and the values in the right subtree are
greater than the root's value.
Every binary search tree is a binary tree, but not every binary tree is a binary search tree. The most
important difference between the two is that in a BST, a left child's value must be less than its
parent's, while a right child's value must be greater.

Tree Traversals (Pre-Order, Post-Order and In-Order)
1.Inorder Traversal
Algorithm Inorder(tree)
a. Traverse the left subtree, i.e., call Inorder(left)
b. Visit the root.
c. Traverse the right subtree, i.e., call Inorder(right)

2.Preorder Traversal
Algorithm Preorder(tree)
a. Visit the root.
b. Traverse the left subtree, i.e., call Preorder(left)
c. Traverse the right subtree, i.e., call Preorder(right)

3.Postorder Traversal
Algorithm Postorder(tree)
a. Traverse the left subtree, i.e., call Postorder(left)
b. Traverse the right subtree, i.e., call Postorder(right)
c. Visit the root.
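The three algorithms above can be sketched as follows (illustrative Python; the `Node` class and the sample tree are our own):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(t, out):
    if t:
        inorder(t.left, out)    # a. traverse the left subtree
        out.append(t.val)       # b. visit the root
        inorder(t.right, out)   # c. traverse the right subtree

def preorder(t, out):
    if t:
        out.append(t.val)       # a. visit the root
        preorder(t.left, out)   # b. traverse the left subtree
        preorder(t.right, out)  # c. traverse the right subtree

def postorder(t, out):
    if t:
        postorder(t.left, out)   # a. traverse the left subtree
        postorder(t.right, out)  # b. traverse the right subtree
        out.append(t.val)        # c. visit the root

#       1
#      / \
#     2   3
#    / \
#   4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
for f in (inorder, preorder, postorder):
    out = []
    f(root, out)
    print(f.__name__, out)
# inorder [4, 2, 5, 1, 3]
# preorder [1, 2, 4, 5, 3]
# postorder [4, 5, 2, 3, 1]
```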

Huffman Coding Tree
 Huffman coding assigns codes to characters such that the length of a code depends on the
relative frequency or weight of the corresponding character. Huffman codes are variable-
length and prefix-free (no code is a prefix of any other). Any prefix-free binary code can be
visualized as a binary tree with the encoded characters stored at the leaves.
 A Huffman tree, or Huffman coding tree, is defined as a full binary tree in which each leaf
corresponds to a letter in the given alphabet.
 The Huffman tree is the binary tree with minimum external path weight, that is, the one with
the minimum sum of weighted path lengths for the given set of leaves. So the goal is to
construct a tree with minimum external path weight.
Letter frequency table :-

Example :-

Algorithm – Huffman Coding Tree

huffmanCoding(string)
Input: A string with different characters.
Output: The code for each individual character.
• Begin
• define a node with character, frequency, and left and right child pointers for the Huffman tree.
• create a list 'freq' to store the frequency of each character, initially all 0
• for each character c in the string, increase the frequency for character c in the freq list.
• for all types of character ch do
•     if the frequency of ch is non-zero then
•         add ch and its frequency as a node of priority queue Q
• while Q has more than one node do
•     remove the two minimum-frequency nodes from Q
•     make them the left and right children of a new node whose frequency is their sum
•     insert the new node into Q
• the remaining node is the root; traverse the tree to find the code assigned to each character
• End
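As an illustration of this algorithm, here is a sketch using Python's `heapq` as the priority queue Q (hypothetical helper names; the frequency table is the classic 45/13/12/16/9/5 textbook example, not the notes' own table):

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman tree from a {char: frequency} table and return the codes.

    Each heap entry is (frequency, tie_breaker, tree); a tree is either a
    single character (a leaf) or a (left, right) pair (an internal node).
    """
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Remove the two minimum-weight trees and merge them under a new node.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1

    codes = {}
    def walk(tree, code):
        if isinstance(tree, str):   # leaf: record the accumulated code
            codes[tree] = code or "0"
        else:                       # internal node: 0 = left edge, 1 = right edge
            walk(tree[0], code + "0")
            walk(tree[1], code + "1")
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

Note how the most frequent character ends up with the shortest code, and no code is a prefix of another.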

Properties of binary tree:-
The properties of a binary tree are as follows -
1. The maximum number of nodes at level 'l' of a binary tree is 2^l.
Here 'l' is the level, i.e. the number of edges on the path from the root to the node. The level of the root is 0.
This can be proved by induction.
For the root, l = 0, number of nodes = 2^0 = 1.
Assume that the maximum number of nodes at level 'l' is 2^l.
Since in a binary tree every node has at most 2 children, the next level has at most twice as many nodes, i.e. 2 * 2^l = 2^(l+1).
2. The maximum number of nodes in a binary tree of height 'h' is 2^h - 1.
Here the height of a tree is the maximum number of nodes on a root-to-leaf path. The height of a tree with a
single node is considered to be 1.
A tree has the maximum number of nodes when all levels have the maximum number of nodes. So the maximum number of
nodes in a binary tree of height h is 1 + 2 + 4 + ... + 2^(h-1). This is a geometric series with h terms, and its sum
is 2^h - 1.

Types of Binary Tree:-


1. Full Binary Tree :-
A Binary Tree is a full binary tree if every node has 0 or 2 children. The following are the examples of a
full binary tree. We can also say a full binary tree is a binary tree in which all nodes except leaf nodes
have two children.

2. Complete Binary Tree :-


A Binary Tree is a Complete Binary Tree if all the levels are completely filled except possibly the last
level and the last level has all keys as left as possible. Practical example of Complete Binary Tree is
Binary Heap.

            18
          /    \
        15      30
       /  \    /  \
      40   50 100  40
     / \   /
    8   7 9

3. Perfect Binary Tree :-

A binary tree is a perfect binary tree if every internal node has exactly two children and all
leaf nodes are at the same level.

            18
          /    \
        15      30
       /  \    /  \
      40   50 100  40

4. Balanced Binary Tree:-


A binary tree is balanced if the height of the tree is O(Log n) where n is the number of nodes. For
Example, the AVL tree maintains O(Log n) height by making sure that the difference between the heights
of the left and right subtrees is at most 1.
Red-Black trees maintain O(Log n) height by making sure that the number of Black nodes on every root
to leaf paths is the same and there are no adjacent red nodes. Balanced Binary Search trees are
performance-wise good as they provide O(log n) time for search, insert and delete.

5. A degenerate (or pathological) tree:-

A degenerate or pathological tree is a tree in which every internal node has a single child (either
left or right). Such trees are performance-wise the same as a linked list.

6. Skewed Binary Tree:-

A skewed binary tree is a degenerate tree in which every node has only a left child
(left-skewed) or only a right child (right-skewed).

      10                  10
     /                      \
    20                       20
   /                           \
  30                            30
 /                                \
40                                 40

Left-Skewed Binary Tree    Right-Skewed Binary Tree

BINARY SEARCH TREE
Binary Search Tree is a node-based binary tree data structure which has the following properties:
• The left subtree of a node contains only nodes with keys lesser than the node’s key.
• The right subtree of a node contains only nodes with keys greater than the node’s key.
• The left and right subtree each must also be a binary search tree.
The above properties of Binary Search Tree provides an ordering among keys so that the operations like
search, minimum and maximum can be done fast. If there is no ordering, then we may have to compare
every key to search for a given key.
Searching in a binary search tree works like binary search.
Illustration to search in below tree:
1. Start from the root.
2. Compare the searching element with root, if less than root, then recursively
call left subtree, else recursively call right subtree.
3. If the element to search is found anywhere, return true, else return false.
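The search illustration above can be sketched as follows (illustrative Python; the `Node` class and the small sample tree are our own):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def search(root, key):
    """Recursive BST search following the steps above."""
    if root is None:
        return False                   # reached a null link: not found
    if key == root.val:
        return True                    # found the key
    if key < root.val:
        return search(root.left, key)  # smaller keys are in the left subtree
    return search(root.right, key)     # larger keys are in the right subtree

# Tree:   8
#        / \
#       3   10
root = Node(8, Node(3), Node(10))
print(search(root, 10))  # True
print(search(root, 7))   # False
```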
Insertion in Binary Search Tree
A new key is always inserted at the leaf. We start searching for a key from the root until we hit a leaf
node. Once a leaf node is found, the new node is added as a child of the leaf node.
        100                      100
       /   \     Insert 40      /   \
      20    500  --------->   20     500
     /  \                    /  \
    10   30                10    30
                                   \
                                    40

Illustration to insert in below tree :-
1. Start from the root.
2. Compare the inserting element with root, if less than root, then recursively call left subtree, else
recursively call right subtree.
3. After reaching the end, just insert that node at left(if less than current) or else right.
Time Complexity : The worst-case time complexity of search and insert operations is O(h) where h is the
height of the Binary Search Tree. In the worst case, we may have to travel from root to the deepest leaf
node. The height of a skewed tree may become n and the time complexity of search and insert operation
may become O(n).

Program for insertion in binary search tree

class Node:
    def __init__(self, key):
        self.left = None
        self.right = None
        self.val = key

def insert(root, key):
    if root is None:
        return Node(key)
    if root.val == key:
        return root
    elif root.val < key:
        root.right = insert(root.right, key)
    else:
        root.left = insert(root.left, key)
    return root

def inorder(root):
    if root:
        inorder(root.left)
        print(root.val)
        inorder(root.right)

r = Node(50)
r = insert(r, 30)
r = insert(r, 20)
r = insert(r, 40)
r = insert(r, 70)
r = insert(r, 60)
r = insert(r, 80)
inorder(r)

Deletion in Binary Search Tree:-

1) Node to be deleted is a leaf: simply remove it from the tree.
          50                          50
        /    \     delete(20)       /    \
      30      70   --------->     30      70
     /  \    /  \                   \    /  \
   20    40 60   80                 40  60   80

2) Node to be deleted has only one child: copy the child to the node and delete the child.
          50                          50
        /    \     delete(30)       /    \
      30      70   --------->     40      70
        \    /  \                        /  \
        40  60   80                    60    80

3) Node to be deleted has two children: find the inorder successor of the node, copy the contents
of the inorder successor to the node, and delete the inorder successor. Note that the inorder
predecessor can also be used.
          50                          60
        /    \     delete(50)       /    \
      40      70   --------->     40      70
             /  \                           \
           60    80                          80

The important thing to note is, inorder successor is needed only when the right child is not
empty. In this particular case, inorder successor can be obtained by finding the minimum value
in the right child of the node.
Time Complexity : The worst case time complexity of delete operation is O(h) where h is the
height of the Binary Search Tree. In worst case, we may have to travel from the root to the
deepest leaf node. The height of a skewed tree may become n and the time complexity of delete
operation may become O(n).
Optimization to above code for two children case :
In the recursive code, we recursively call delete() for the successor. We can avoid recursive calls
by keeping track of the parent node of the successor so that we can simply remove the successor
by making the child of a parent NULL. We know that the successor would always be a leaf node.

Program for deletion in binary search tree

class Node:
    # Constructor to create a new node
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def inorder(root):
    if root is not None:
        inorder(root.left)
        print(root.key, end=" ")
        inorder(root.right)

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    else:
        node.right = insert(node.right, key)
    return node

def deleteNode(root, key):
    if root is None:
        return root
    if key < root.key:
        root.left = deleteNode(root.left, key)
        return root
    elif key > root.key:
        root.right = deleteNode(root.right, key)
        return root
    # key == root.key: this is the node to be deleted
    if root.left is None and root.right is None:
        return None
    if root.left is None:
        temp = root.right
        root = None
        return temp
    elif root.right is None:
        temp = root.left
        root = None
        return temp
    # Two children: replace with the inorder successor (leftmost node of the
    # right subtree), tracked with its parent to avoid a recursive delete.
    succParent = root
    succ = root.right
    while succ.left != None:
        succParent = succ
        succ = succ.left
    if succParent != root:
        succParent.left = succ.right
    else:
        succParent.right = succ.right
    root.key = succ.key
    return root

root = None
root = insert(root, 50)
root = insert(root, 30)
root = insert(root, 20)
root = insert(root, 40)
root = insert(root, 70)
root = insert(root, 60)
root = insert(root, 80)
print("Inorder traversal of the given tree")
inorder(root)
print("\nDelete 20")
root = deleteNode(root, 20)
print("Inorder traversal of the modified tree")
inorder(root)
print("\nDelete 30")
root = deleteNode(root, 30)
print("Inorder traversal of the modified tree")
inorder(root)
print("\nDelete 50")
root = deleteNode(root, 50)
print("Inorder traversal of the modified tree")
inorder(root)

AVL TREE
AVL tree is a self-balancing Binary Search Tree (BST) where the difference between heights of left and
right subtrees cannot be more than one for all nodes.

The 1st tree is AVL because the differences between the heights of left and right subtrees for every node
are less than or equal to 1 while the 2nd tree is not AVL because the differences between the heights of the
left and right subtrees for 8 and 12 are greater than 1.
Why AVL Trees?
Most BST operations (e.g., search, max, min, insert, delete, etc.) take O(h) time, where h is the
height of the BST. The cost of these operations may become O(n) for a skewed binary tree. If we make
sure that the height of the tree remains O(log n) after every insertion and deletion, then we can guarantee
an upper bound of O(log n) for all these operations. The height of an AVL tree is always O(log n),
where n is the number of nodes in the tree.

Insertion in AVL Tree

Following are two basic operations that can be performed to balance a BST without violating the BST
property (keys(left) < key(root) < keys(right)): Left Rotation and Right Rotation.

1. Left Left Case
T1, T2, T3 and T4 are subtrees.
         z                                y
        / \                             /   \
       y   T4   Right Rotate (z)       x     z
      / \       - - - - - - - ->      / \   / \
     x   T3                          T1 T2 T3 T4
    / \
   T1  T2

2. Left Right Case
     z                         z                            x
    / \                       / \                         /   \
   y   T4  Left Rotate (y)   x   T4   Right Rotate (z)   y     z
  / \      - - - - - - ->   / \       - - - - - - - ->  / \   / \
 T1  x                     y   T3                      T1 T2 T3 T4
    / \                   / \
   T2  T3                T1  T2

3. Right Right Case
   z                              y
  / \                           /   \
 T1  y      Left Rotate (z)    z     x
    / \     - - - - - - - ->  / \   / \
   T2  x                     T1 T2 T3 T4
      / \
     T3  T4

4. Right Left Case
   z                          z                           x
  / \                        / \                        /   \
 T1  y   Right Rotate (y)   T1  x    Left Rotate (z)   z     y
    / \  - - - - - - - ->      / \   - - - - - - - -> / \   / \
   x   T4                     T2  y                  T1 T2 T3 T4
  / \                            / \
 T2  T3                         T3  T4

Insertion in AVL Tree


Approach :-
The idea is to use recursive BST insert, after insertion, we get pointers to all ancestors one by one in a
bottom-up manner. So we don’t need a parent pointer to travel up. The recursive code itself travels up and
visits all the ancestors of the newly inserted node.

The steps to implement this idea are as follows :-
1. Perform the normal BST insertion.
2. The current node must be one of the ancestors of the newly inserted node. Update the height of
the current node.
3. Get the balance factor (left subtree height – right subtree height) of the current node.
4. If the balance factor is greater than 1, then the current node is unbalanced and we are either in the
Left Left case or left Right case. To check whether it is left left case or not, compare the newly
inserted key with the key in the left subtree root.
5. If the balance factor is less than -1, then the current node is unbalanced and we are either in the
Right Right case or Right-Left case. To check whether it is the Right Right case or not, compare
the newly inserted key with the key in the right subtree root.
Time Complexity: O(log(n)), For Insertion

Program for insertion in AVL tree

class TreeNode(object):
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.height = 1

class AVL_Tree(object):
    def insert(self, root, key):
        # Step 1 - Perform normal BST insertion
        if not root:
            return TreeNode(key)
        elif key < root.val:
            root.left = self.insert(root.left, key)
        else:
            root.right = self.insert(root.right, key)
        # Step 2 - Update the height of the ancestor node
        root.height = 1 + max(self.getHeight(root.left),
                              self.getHeight(root.right))
        # Step 3 - Get the balance factor
        balance = self.getBalance(root)
        # Case 1 - Left Left
        if balance > 1 and key < root.left.val:
            return self.rightRotate(root)
        # Case 2 - Right Right
        if balance < -1 and key > root.right.val:
            return self.leftRotate(root)
        # Case 3 - Left Right
        if balance > 1 and key > root.left.val:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
        # Case 4 - Right Left
        if balance < -1 and key < root.right.val:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
        return root

    def leftRotate(self, z):
        y = z.right
        T2 = y.left
        y.left = z
        z.right = T2
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def rightRotate(self, z):
        y = z.left
        T3 = y.right
        y.right = z
        z.left = T3
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def getHeight(self, root):
        if not root:
            return 0
        return root.height

    def getBalance(self, root):
        if not root:
            return 0
        return self.getHeight(root.left) - self.getHeight(root.right)

    def preOrder(self, root):
        if not root:
            return
        print("{0} ".format(root.val), end="")
        self.preOrder(root.left)
        self.preOrder(root.right)

myTree = AVL_Tree()
root = None
root = myTree.insert(root, 10)
root = myTree.insert(root, 20)
root = myTree.insert(root, 30)
root = myTree.insert(root, 40)
root = myTree.insert(root, 50)
root = myTree.insert(root, 25)
print("Preorder traversal of the",
      "constructed AVL tree is")
myTree.preOrder(root)
print()

Deletion in AVL Tree

Following are two basic operations that can be performed to re-balance a BST without violating the BST
property (keys(left) < key(root) < keys(right)): Left Rotation and Right Rotation. The four rotation
cases — (a) Left Left, (b) Left Right, (c) Right Right, and (d) Right Left — are the same as those
illustrated for insertion above.

Approach :-


In the recursive BST delete, after deletion, we get pointers to all ancestors one by one in bottom up
manner. So we don’t need parent pointer to travel up. The recursive code itself travels up and visits all the
ancestors of the deleted node.
1. Perform the normal BST deletion.
2. The current node must be one of the ancestors of the deleted node. Update the height of the
current node.
3. Get the balance factor (left subtree height – right subtree height) of the current node.
4. If balance factor is greater than 1, then the current node is unbalanced and we are either in Left
Left case or Left Right case. To check whether it is Left Left case or Left Right case, get the
balance factor of left subtree. If balance factor of the left subtree is greater than or equal to 0, then
it is Left Left case, else Left Right case.
5. If balance factor is less than -1, then the current node is unbalanced and we are either in Right
Right case or Right Left case. To check whether it is Right Right case or Right Left case, get the
balance factor of right subtree. If the balance factor of the right subtree is smaller than or equal to
0, then it is Right Right case, else Right Left case.

Time Complexity : Since the AVL tree is balanced, its height is O(log n). So the time complexity of
AVL deletion is O(log n).

Program for deletion in AVL tree

# Generic tree node class
class TreeNode(object):
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None
        self.height = 1

class AVL_Tree(object):
    def insert(self, root, key):
        # Step 1 - Perform normal BST insertion
        if not root:
            return TreeNode(key)
        elif key < root.val:
            root.left = self.insert(root.left, key)
        else:
            root.right = self.insert(root.right, key)
        # Step 2 - Update the height of the ancestor node
        root.height = 1 + max(self.getHeight(root.left),
                              self.getHeight(root.right))
        balance = self.getBalance(root)
        # Case 1 - Left Left
        if balance > 1 and key < root.left.val:
            return self.rightRotate(root)
        # Case 2 - Right Right
        if balance < -1 and key > root.right.val:
            return self.leftRotate(root)
        # Case 3 - Left Right
        if balance > 1 and key > root.left.val:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
        # Case 4 - Right Left
        if balance < -1 and key < root.right.val:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
        return root

    def delete(self, root, key):
        # Step 1 - Perform standard BST delete
        if not root:
            return root
        elif key < root.val:
            root.left = self.delete(root.left, key)
        elif key > root.val:
            root.right = self.delete(root.right, key)
        else:
            if root.left is None:
                temp = root.right
                root = None
                return temp
            elif root.right is None:
                temp = root.left
                root = None
                return temp
            temp = self.getMinValueNode(root.right)
            root.val = temp.val
            root.right = self.delete(root.right, temp.val)
        if root is None:
            return root
        # Step 2 - Update the height of the ancestor node
        root.height = 1 + max(self.getHeight(root.left),
                              self.getHeight(root.right))
        balance = self.getBalance(root)
        # Case 1 - Left Left
        if balance > 1 and self.getBalance(root.left) >= 0:
            return self.rightRotate(root)
        # Case 2 - Right Right
        if balance < -1 and self.getBalance(root.right) <= 0:
            return self.leftRotate(root)
        # Case 3 - Left Right
        if balance > 1 and self.getBalance(root.left) < 0:
            root.left = self.leftRotate(root.left)
            return self.rightRotate(root)
        # Case 4 - Right Left
        if balance < -1 and self.getBalance(root.right) > 0:
            root.right = self.rightRotate(root.right)
            return self.leftRotate(root)
        return root

    def leftRotate(self, z):
        y = z.right
        T2 = y.left
        y.left = z
        z.right = T2
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def rightRotate(self, z):
        y = z.left
        T3 = y.right
        y.right = z
        z.left = T3
        z.height = 1 + max(self.getHeight(z.left),
                           self.getHeight(z.right))
        y.height = 1 + max(self.getHeight(y.left),
                           self.getHeight(y.right))
        return y

    def getHeight(self, root):
        if not root:
            return 0
        return root.height

    def getBalance(self, root):
        if not root:
            return 0
        return self.getHeight(root.left) - self.getHeight(root.right)

    def getMinValueNode(self, root):
        if root is None or root.left is None:
            return root
        return self.getMinValueNode(root.left)

    def preOrder(self, root):
        if not root:
            return
        print("{0} ".format(root.val), end="")
        self.preOrder(root.left)
        self.preOrder(root.right)

myTree = AVL_Tree()
root = None
nums = [9, 5, 10, 0, 6, 11, -1, 1, 2]
for num in nums:
    root = myTree.insert(root, num)
print("Preorder Traversal after insertion -")
myTree.preOrder(root)
print()
key = 10
root = myTree.delete(root, key)
print("Preorder Traversal after deletion -")
myTree.preOrder(root)
print()

Red Black Tree


 Left-Right and Right-Left Rotate :-
 In left-right rotation, the arrangements are first shifted to the left and then to the right.
1. Do left rotation on x-y.
2. Do right rotation on y-z.

 In right-left rotation, the arrangements are first shifted to the right and then to the left.
1. Do right rotation on x-y.
2. Do left rotation on z-y.
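The two double rotations above can be sketched as compositions of the single rotations (a minimal illustrative sketch; the Node class and helper names are our own, not part of these notes):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def right_rotate(z):
    # Promote the left child of z.
    y = z.left
    z.left = y.right
    y.right = z
    return y

def left_rotate(z):
    # Promote the right child of z.
    y = z.right
    z.right = y.left
    y.left = z
    return y

def left_right_rotate(z):
    # Step 1: left-rotate the left child, Step 2: right-rotate z.
    z.left = left_rotate(z.left)
    return right_rotate(z)

def right_left_rotate(z):
    # Step 1: right-rotate the right child, Step 2: left-rotate z.
    z.right = right_rotate(z.right)
    return left_rotate(z)
```

For example, left_right_rotate applied to the chain 3 → 1 → 2 yields a balanced subtree rooted at 2.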

Insertion in Red Black Tree


 Algorithm to insert a node :-
1. Let y be the leaf (i.e. NIL) and x be the root of the tree.
2. Check if the tree is empty (i.e. whether x is NIL). If yes, insert newNode as the root node and color
it black.
3. Else, repeat the following steps until a leaf (NIL) is reached.
a. Compare newKey with rootKey.
b. If newKey is greater than rootKey, traverse through the right subtree.
c. Else traverse through the left subtree.
4. Assign the parent of the leaf as a parent of newNode.
5. If leafKey is greater than newKey, make newNode as rightChild.
6. Else, make newNode as leftChild.
7. Assign NULL to the left and rightChild of newNode.
8. Assign RED color to newNode.
9. Call InsertFix-algorithm to maintain the property of red-black tree if violated.
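Steps 1–8 above can be sketched as an ordinary BST insertion that colors the new node red (an illustrative skeleton; the class and function names are our own, and the InsertFix rebalancing of step 9 is deliberately omitted):

```python
RED, BLACK = "RED", "BLACK"

class RBNode:
    def __init__(self, key, color=RED, parent=None):
        self.key = key
        self.color = color
        self.parent = parent
        self.left = None    # step 7: children start as NIL
        self.right = None

def rb_insert(root, key):
    # Step 2: empty tree -> newNode becomes the (black) root.
    if root is None:
        return RBNode(key, color=BLACK)
    # Step 3: compare with each node and walk down to a leaf position.
    parent, cur = None, root
    while cur is not None:
        parent = cur
        cur = cur.right if key > cur.key else cur.left
    # Steps 4-8: attach the new RED node under the reached leaf's parent.
    new_node = RBNode(key, color=RED, parent=parent)
    if key > parent.key:
        parent.right = new_node
    else:
        parent.left = new_node
    # Step 9: InsertFix(new_node) would restore the red-black
    # properties here; that rebalancing is omitted in this sketch.
    return root
```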

Deletion in Red Black Tree
 Algorithm to delete a node :-
1. Save the color of nodeToBeDeleted in originalColor.
2. If the left child of nodeToBeDeleted is NULL
a. Assign the right child of nodeToBeDeleted to x.
b. Transplant nodeToBeDeleted with x.
3. Else if the right child of nodeToBeDeleted is NULL
a. Assign the left child of nodeToBeDeleted into x.
b. Transplant nodeToBeDeleted with x.
4. Else
a. Assign the minimum of the right subtree of nodeToBeDeleted into y.
b. Save the color of y in originalColor.
c. Assign the rightChild of y into x.
d. If y is a child of nodeToBeDeleted, then set the parent of x as y.
e. Else, transplant y with rightChild of y.
f. Transplant nodeToBeDeleted with y.
g. Set the color of y with originalColor.
5. If the originalColor is BLACK, call DeleteFix(x).
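The transplant-based deletion above can be sketched as follows (an illustrative skeleton; class and method names are our own, and the DeleteFix rebalancing of the final step is deliberately omitted):

```python
class RBNode:
    def __init__(self, key, color="RED"):
        self.key = key
        self.color = color
        self.parent = None
        self.left = None
        self.right = None

class RBTree:
    def __init__(self):
        self.root = None

    def transplant(self, u, v):
        # Replace the subtree rooted at u with the subtree rooted at v.
        if u.parent is None:
            self.root = v
        elif u is u.parent.left:
            u.parent.left = v
        else:
            u.parent.right = v
        if v is not None:
            v.parent = u.parent

    def minimum(self, node):
        while node.left is not None:
            node = node.left
        return node

    def delete(self, z):
        original_color = z.color          # step 1
        if z.left is None:                # step 2
            x = z.right
            self.transplant(z, x)
        elif z.right is None:             # step 3
            x = z.left
            self.transplant(z, x)
        else:                             # step 4
            y = self.minimum(z.right)     # 4a
            original_color = y.color      # 4b
            x = y.right                   # 4c
            if y.parent is not z:         # 4d/4e
                self.transplant(y, y.right)
                y.right = z.right
                y.right.parent = y
            y.left = z.left
            y.left.parent = y
            self.transplant(z, y)         # 4f
            y.color = z.color             # 4g
        # step 5: if original_color == "BLACK", DeleteFix(x) would
        # run here; that rebalancing is omitted in this sketch.
```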

THREADED TREES
In the linked representation of binary trees, more than one half of the link fields contain NULL values
which results in wastage of storage space. If a binary tree consists of n nodes then n+1 link fields
contain NULL values. So in order to effectively manage the space, a method was devised by Perlis
and Thornton in which the NULL links are replaced with special links known as threads. Such binary
trees with threads are known as threaded binary trees. Each node in a threaded binary tree either
contains a link to its child node or a thread to another node in the tree.

Advantages of Threaded Binary Tree
1. It enables linear (in-order) traversal of elements.
2. It eliminates the use of a stack, since traversal is linear.
3. It enables finding the parent node without an explicit parent pointer.
4. Threaded trees allow both forward and backward traversal of nodes in
in-order fashion.
5. Nodes contain pointers to their in-order predecessor and successor.

*One-way threaded Binary trees*

In one-way threaded binary trees, a thread appears in
either the right or the left link field of a node. If it
appears in the right link field of a node, it points to
the next node visited in an in-order traversal (the
in-order successor); such trees are called right-threaded
binary trees. If the thread appears in the left field of a
node, it points to the node's in-order predecessor;
such trees are called left-threaded binary trees.
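The stack-free traversal that threads make possible can be sketched for a right-threaded tree (an illustrative sketch; the node layout with an rthread flag is our own assumption):

```python
class ThreadedNode:
    def __init__(self, val):
        self.val = val
        self.left = None
        self.right = None       # child link, or thread to the successor
        self.rthread = False    # True when right is a thread

def leftmost(node):
    # Descend to the smallest node in this subtree.
    while node.left is not None:
        node = node.left
    return node

def inorder(root):
    # In-order traversal of a right-threaded tree:
    # no stack and no recursion are needed.
    out = []
    cur = leftmost(root)
    while cur is not None:
        out.append(cur.val)
        if cur.rthread:
            cur = cur.right             # follow the thread to the successor
        elif cur.right is not None:
            cur = leftmost(cur.right)   # ordinary child link
        else:
            cur = None                  # last node in the traversal
    return out
```

For the tree with root 2, left child 1 and right child 3, node 1's NULL right link is replaced by a thread to 2, and inorder returns the keys in sorted order.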

*Two-way threaded Binary Trees*


In two-way threaded binary trees, a right link field
containing a NULL value is replaced by a thread that
points to the node's in-order successor, and a left link
field containing a NULL value is replaced by a thread
that points to the node's in-order predecessor.

SPLAY TREES
A splay tree is a binary search tree with the additional property that recently accessed elements are quick
to access again. Like self-balancing binary search trees, a splay tree performs basic operations such as
insertion, look-up and removal in O(log n) amortized time.
In a splay tree, every operation is performed at the root of the tree. All operations in the splay tree
involve one common operation called splaying.

Rotations in Splay Tree


1.Zig rotation [Right Rotation]
2.Zig zig [Two Right Rotations]

3.Zag rotation [Left Rotation]
4.Zag zag [Two Left Rotations]
5.Zig zag [Zig followed by Zag]
6.Zag zig [Zag followed by Zig]

Zig rotation
This rotation is similar to the right rotation in the AVL tree. In a zig rotation, every node moves one
position to the right of its current position. We use a zig rotation when the item being searched for is
the left child of the root node (if it is the root itself, no rotation is needed).

Zag rotation
This rotation is similar to the left rotation in the AVL tree. In a zag rotation, every node moves one
position to the left of its current position. We use a zag rotation when the item being searched for is
the right child of the root node (if it is the root itself, no rotation is needed).

Zag Zag Rotation


It’s a kind of double zag rotation: here we perform the zag rotation twice, so every node moves two
positions to the left of its current position. But why are we doing this?
Sometimes the item being searched for has both a parent and a grandparent; in such cases, two
rotations are needed for splaying.

Zig Zag rotation
This type of rotation is a zig rotation followed by a zag rotation. So far, we've seen cases where both
the parent and the grandparent are in an RR or LL relationship; here we handle the RL and LR kinds
of relationships between parent and grandparent. Every node moves one position to the right, followed by
one position to the left of its current position.

Zag Zig Rotation


This rotation is similar to the Zig-zag rotation, the only difference is that here every node moves one
position to the left, followed by one position to the right of its current position.
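The rotation cases above can be combined into a recursive splay operation that brings an accessed key to the root (an illustrative sketch, not the notes' own code; the Node class and function names are our own):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def right_rotate(x):   # zig
    y = x.left
    x.left = y.right
    y.right = x
    return y

def left_rotate(x):    # zag
    y = x.right
    x.right = y.left
    y.left = x
    return y

def splay(root, key):
    # Bring key (or the last node seen on its search path) to the root.
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:            # zig-zig
            root.left.left = splay(root.left.left, key)
            root = right_rotate(root)
        elif key > root.left.key:          # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = left_rotate(root.left)
        return root if root.left is None else right_rotate(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:           # zag-zag
            root.right.right = splay(root.right.right, key)
            root = left_rotate(root)
        elif key < root.right.key:         # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = right_rotate(root.right)
        return root if root.right is None else left_rotate(root)
```

For instance, splaying key 1 in the left chain 3 → 2 → 1 performs a zig-zig and leaves 1 at the root.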

Advantages of Splay Tree


1. Unlike AVL and red-black trees, splay trees need no extra bookkeeping per node: AVL trees must
store each node's balance factor (or height), and red-black trees must store one extra bit per node
denoting its color, either black or red.
2. Splay trees are among the fastest types of binary search tree in practice and are used in a variety of
practical applications, such as GCC compilers.

