What is Asymptotic Notation?

Asymptotic notation of an algorithm is a mathematical representation of its complexity.

 Note - In asymptotic notation, when we want to represent the complexity of an algorithm,
we use only the most significant term in the complexity of that algorithm and ignore the less
significant terms (here, complexity can be Space Complexity or Time Complexity).
For example, consider the following time complexities of two algorithms...

 Algorithm 1 : 5n² + 2n + 1
 Algorithm 2 : 10n² + 8n + 3

Generally, when we analyze an algorithm, we consider the time complexity for larger
values of the input data (i.e., the 'n' value). In the above two time complexities, for larger
values of 'n', the term '2n + 1' in Algorithm 1 is less significant than the term '5n²', and the
term '8n + 3' in Algorithm 2 is less significant than the term '10n²'.
Here, for larger values of 'n', the value of the most significant terms (5n² and 10n²) is much
larger than the value of the less significant terms (2n + 1 and 8n + 3). So, for larger values
of 'n', we ignore the less significant terms when representing the overall time required by an
algorithm. In asymptotic notation, we use only the most significant term to represent the
time complexity of an algorithm.
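To see this numerically, here is a small Java sketch (illustrative only; the class name is hypothetical) that prints the ratio of Algorithm 1's cost function to its leading n² term. The ratio settles near the constant 5 as n grows, which is why the '2n + 1' part can be ignored:

// Demonstration: for large n, 5n² + 2n + 1 is dominated by its 5n² term.
public class LeadingTerm {
    static double f(double n) { return 5 * n * n + 2 * n + 1; }

    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 100) {
            // f(n) / n² approaches the coefficient 5 as n grows,
            // so the 2n + 1 part contributes almost nothing.
            System.out.printf("n = %-9d f(n)/n^2 = %.6f%n", n, f(n) / ((double) n * n));
        }
    }
}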

Mainly, we use THREE types of asymptotic notations, and those are as follows...

1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)

Big - Oh Notation (O)


Big - Oh notation is used to define the upper bound of an algorithm in terms of Time
Complexity.
That means Big - Oh notation always indicates the maximum time an algorithm can require
for any input value. In other words, Big - Oh notation describes the worst case of an
algorithm's time complexity.
Big - Oh Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If there exist constants C > 0 and n0 >= 1 such that f(n) <= C g(n) for all
n >= n0, then we can represent f(n) as O(g(n)).

f(n) = O(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n), with the input (n) value on
the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always greater than f(n), which
indicates the algorithm's upper bound.

Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then there must exist constants C > 0 and n0 >= 1
such that f(n) <= C g(n) for all n >= n0.
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2.
By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
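The bound can also be checked numerically. Here is a minimal Java sketch (the class name is hypothetical) that samples the inequality using the constants C = 4 and n0 = 2 from above:

// Check that 3n + 2 <= 4n holds for every sampled n >= 2.
public class BigOCheck {
    public static void main(String[] args) {
        final long C = 4, N0 = 2;
        boolean holds = true;
        for (long n = N0; n <= 1_000_000; n++) {
            if (3 * n + 2 > C * n) { holds = false; break; }
        }
        System.out.println("3n + 2 <= 4n for all sampled n >= 2: " + holds);
    }
}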

Big - Omega Notation (Ω)


Big - Omega notation is used to define the lower bound of an algorithm in terms of Time
Complexity.
That means Big - Omega notation always indicates the minimum time an algorithm requires
for any input value. In other words, Big - Omega notation describes the best case of an
algorithm's time complexity.
Big - Omega Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If there exist constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all
n >= n0, then we can represent f(n) as Ω(g(n)).

f(n) = Ω(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n), with the input (n) value on
the X-axis and the time required on the Y-axis.

In the above graph, after a particular input value n0, C g(n) is always less than f(n), which
indicates the algorithm's lower bound.

Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then there must exist constants C > 0 and n0 >= 1
such that f(n) >= C g(n) for all n >= n0.
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1.
By using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)

Big - Theta Notation (Θ)


Big - Theta notation is used to define the tight bound of an algorithm in terms of
Time Complexity.
That means Big - Theta notation always indicates that, for large input values, the time
required by an algorithm grows at the same rate as g(n); it bounds the algorithm's time
complexity from both above and below.
Big - Theta Notation can be defined as follows...

Consider function f(n) as the time complexity of an algorithm and g(n) as its most
significant term. If there exist constants C1 > 0, C2 > 0 and n0 >= 1 such that
C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, then we can represent f(n) as Θ(g(n)).

f(n) = Θ(g(n))
Consider the following graph, drawn for the values of f(n), C1 g(n) and C2 g(n), with the input (n)
value on the X-axis and the time required on the Y-axis.
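As a worked example, the same f(n) used in the sections above also has a tight bound:

Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then there must exist constants C1 > 0, C2 > 0 and
n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0.
C1 n <= 3n + 2 <= C2 n
The above condition is TRUE for C1 = 3, C2 = 4 and all n >= 2.
By using Big - Theta notation we can represent the time complexity as follows...
3n + 2 = Θ(n)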

 The following two additional asymptotic notations are also used to represent the time
complexity of algorithms.
Little ο asymptotic notation
Big-Ο is used as a tight upper bound on the growth of an algorithm's
effort (this effort is described by the function f(n)), even though, as
written, it can also be a loose upper bound. Little-ο (ο()) notation is used
to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We say that f(n) is ο(g(n)) (or f(n) ∈ ο(g(n))) if
for any real constant c > 0, there exists an integer constant n0 ≥ 1 such
that 0 ≤ f(n) < c*g(n) for every integer n ≥ n0.


 Thus, little ο() means a loose upper bound on f(n). Little ο is a rough
estimate of the maximum order of growth, whereas Big-Ο may be the
actual order of growth.
In mathematical relation, f(n) = o(g(n)) means lim(n→∞) f(n)/g(n) = 0.
Examples:
Is 7n + 8 ∈ o(n²)?
In order for that to be true, for any c, we have to be able to find an n0 that
makes f(n) < c * g(n) asymptotically true.
Let us take some examples. If c = 100, the inequality is clearly true. If c = 1/100,
we will have to use a little more imagination, but we will be able to find an n0
(try n0 = 1000). From these examples, the conjecture appears to be correct.
Then check the limit:
lim(n→∞) f(n)/g(n) = lim(n→∞) (7n + 8)/n² = lim(n→∞) 7/(2n) = 0 (by L'Hôpital's rule)
Hence 7n + 8 ∈ o(n²).
Little ω asymptotic notation
Definition: Let f(n) and g(n) be functions that map positive integers to
positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for
any real constant c > 0, there exists an integer constant n0 ≥ 1 such that
f(n) > c * g(n) ≥ 0 for every integer n ≥ n0.
Here f(n) has a higher growth rate than g(n), so the main difference between Big
Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big
Omega, f(n) = Ω(g(n)) and the bound 0 <= c*g(n) <= f(n) holds for some c, but in
the case of little omega, 0 <= c*g(n) < f(n) holds for every c > 0.
The relationship between Big Omega (Ω) and Little Omega (ω) is similar
to that of Big-Ο and Little ο, except that now we are looking at the lower
bounds. Little omega (ω) is a rough estimate of the order of growth,
whereas Big Omega (Ω) may represent the exact order of growth. We use ω
notation to denote a lower bound that is not asymptotically tight. And f(n)
∈ ω(g(n)) if and only if g(n) ∈ ο(f(n)).
In mathematical relation, if f(n) ∈ ω(g(n)), then lim(n→∞) f(n)/g(n) = ∞.
Example:
Prove that 4n + 6 ∈ ω(1).
The little omega (ω) running time can be proven by applying the limit formula
given below:
if lim(n→∞) f(n)/g(n) = ∞, then f(n) is ω(g(n)).
Here we have the functions f(n) = 4n + 6 and g(n) = 1, and
lim(n→∞) (4n + 6)/1 = ∞.
Also, for any c we can find an n0 for the inequality 0 <= c*g(n) < f(n), i.e.,
0 <= c*1 < 4n + 6.
Hence proved.

Amortized Analysis
This analysis is used when an occasional operation is very slow, but most of
the operations, which execute very frequently, are fast. Data structures that
need amortized analysis include hash tables, disjoint sets, etc.
In a hash table, most of the time the searching time complexity is O(1),
but sometimes it takes O(n) operations. When we want to search for or insert
an element in a hash table, in most cases it is a constant-time task, but when
a collision occurs, it needs O(n) operations for collision resolution.

Aggregate Method
The aggregate method is used to find the total cost. If we want to analyze a
bunch of operations together, then we find the amortized cost with this formula:
for a sequence of n operations with total cost T(n), the amortized cost per
operation is T(n)/n.

Example on Amortized Analysis


For a dynamic array, items can be appended in O(1) time as long as spare
capacity remains. But when the array is already full, it cannot perform the insertion in
constant time; in that case, it first doubles the size of the array and then inserts the
element.

For the dynamic array, let ci = the cost of the ith insertion: ci = i when i − 1 is an
exact power of 2 (the array doubles and all existing elements are copied), and
ci = 1 otherwise. The total cost of n insertions is then at most
n + (1 + 2 + 4 + ...) < 3n, so the amortized cost per insertion is O(1).
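A minimal Java sketch of this doubling strategy (the class name is hypothetical) that counts the copy work done by resizes; the total extra copying stays around n, so each append is O(1) on average:

// Dynamic array with doubling: occasional O(n) resize, amortized O(1) append.
public class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;
    private long copies = 0;           // total elements copied during resizes

    void append(int value) {
        if (size == data.length) {     // the occasional slow operation
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            copies += size;
            data = bigger;
        }
        data[size++] = value;          // the frequent O(1) operation
    }

    public static void main(String[] args) {
        DynamicArray a = new DynamicArray();
        int n = 1_000_000;
        for (int i = 0; i < n; i++) a.append(i);
        // copies stays close to n, matching the < 3n total-cost bound above.
        System.out.println("appends = " + n + ", elements copied = " + a.copies);
    }
}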

Probabilistic Data Structures
A probabilistic data structure works with a large data set, where we want to
perform operations such as finding the unique items in the given data set,
finding the most frequent item, or checking whether some item exists or not.
To do such operations, a probabilistic data structure uses many hash functions
to randomize and represent a set of data. The more hash functions there are,
the more accurate the result.
Things to remember
A deterministic data structure can also perform all the operations that a
probabilistic data structure does, but only with small data sets. As stated
earlier, if the data set is too big and cannot fit into memory, then the
deterministic data structure fails and is simply not feasible. Also, in the case of
a streaming application where data must be processed in one pass with
incremental updates, it is very difficult to manage with a deterministic
data structure.
Use Cases
1. Analyze big data set
2. Statistical analysis
3. Mining terabytes of data sets, etc.
Popular probabilistic data structures
1. Bloom filter
2. Count-Min Sketch
3. HyperLogLog
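As a small illustration of the hashing idea behind these structures, here is a minimal Bloom filter sketch in Java (all names are hypothetical; real implementations use stronger hash functions and tuned sizes). It may report false positives but never false negatives:

import java.util.BitSet;

// Minimal Bloom filter: k hash functions set/test k bits per item.
public class BloomFilter {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash functions

    BloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive k bit positions from two base hashes (double hashing).
    private int index(String item, int i) {
        int h1 = item.hashCode();
        int h2 = (h1 >>> 16) | 1;              // make the step size odd
        return Math.floorMod(h1 + i * h2, m);
    }

    void add(String item) {
        for (int i = 0; i < k; i++) bits.set(index(item, i));
    }

    boolean mightContain(String item) {
        for (int i = 0; i < k; i++)
            if (!bits.get(index(item, i))) return false;  // definitely absent
        return true;                                      // possibly present
    }

    public static void main(String[] args) {
        BloomFilter filter = new BloomFilter(1 << 16, 4);
        filter.add("apple");
        filter.add("banana");
        System.out.println(filter.mightContain("apple"));   // true
        System.out.println(filter.mightContain("cherry"));  // almost certainly false
    }
}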

Binary Search Tree (BST)

In this tutorial, you will learn how a Binary Search Tree works. Also, you will find
a working example of a Binary Search Tree in Java.

A binary search tree is a data structure that allows us to quickly maintain a sorted
list of numbers.

 It is called a binary tree because each tree node has a maximum of two
children.
 It is called a search tree because it can be used to search for the presence of a
number in  O(log(n))  time.

The properties that separate a binary search tree from a regular binary tree are:

1. All nodes of left subtree are less than the root node
2. All nodes of right subtree are more than the root node
3. Both subtrees of each node are also BSTs i.e. they have the above two
properties

A tree having a right subtree with one value smaller than the root is
shown to demonstrate that it is not a valid binary search tree
The binary tree on the right isn't a binary search tree because the right subtree of
the node "3" contains a value smaller than it.

Here are the basic operations that you can perform on a binary search tree:

Search Operation

The algorithm depends on the property of a BST that each left subtree has
values below the root and each right subtree has values above the root.

If the value is below the root, we can say for sure that the value is not in the right
subtree, so we need to search only in the left subtree. If the value is above the
root, we can say for sure that the value is not in the left subtree, so we need to
search only in the right subtree.
Algorithm:

If root == NULL
    return NULL;

If number == root->data
    return root->data;

If number < root->data
    return search(root->left, number)

If number > root->data
    return search(root->right, number)

Let us try to visualize this with a diagram.


4 is not found so, traverse through the left subtree of 8

4 is not found so, traverse through the right subtree of 3


4 is not found so, traverse through the left subtree of 6

4 is found
If the value is found, we return the value so that it gets propagated in each
recursion step as shown in the image below.

As you might have noticed, we have called return search(struct node*) four times.
When we return either the found node or NULL, the value gets returned again and
again until search(root) returns the final result.

If the value is found in any of the subtrees, it is propagated up so that
in the end it is returned; otherwise NULL is returned.
If the value is not found, we eventually reach the left or right child of a leaf node
which is NULL and it gets propagated and returned.
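For reference, the Java example later in this tutorial implements insert and delete but not search; a matching recursive search method (a sketch, assuming the same Node class from that example) might look like this:

// Recursive BST search: go left for smaller keys, right for larger ones.
Node search(Node root, int key) {
    if (root == null || root.key == key)
        return root;                       // found the key, or hit an empty subtree
    if (key < root.key)
        return search(root.left, key);     // key can only be in the left subtree
    return search(root.right, key);        // key can only be in the right subtree
}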

Insert Operation

Inserting a value in the correct position is similar to searching because we try to


maintain the rule that the left subtree is lesser than root and the right subtree is
larger than root.

We keep going to either right subtree or left subtree depending on the value and
when we reach a point left or right subtree is null, we put the new node there.

Algorithm:

If node == NULL
    return createNode(data)

if (data < node->data)
    node->left = insert(node->left, data);
else if (data > node->data)
    node->right = insert(node->right, data);

return node;

The algorithm isn't as simple as it looks. Let's try to visualize how we add a
number to an existing BST.
4 < 8 so, traverse through the left child of 8

4 > 3 so, traverse through the right child of 3

4 < 6 so, traverse through the left child of 6

Insert 4 as a left child of 6


We have attached the node but we still have to exit from the function without
doing any damage to the rest of the tree. This is where the  return node;  at the
end comes in handy. In the case of  NULL , the newly created node is returned and
attached to the parent node, otherwise the same node is returned without any
change as we go up until we return to the root.

This makes sure that as we move back up the tree, the other node connections
aren't changed.

Image showing the importance of returning the root element at the end so
that the elements don't lose their position during the upward recursion
step.
Deletion Operation

There are three cases for deleting a node from a binary search tree.

Case I

In the first case, the node to be deleted is the leaf node. In such a case, simply
delete the node from the tree.
4 is to be deleted

Delete the node


Case II

In the second case, the node to be deleted has a single child node. In such a
case, follow the steps below:

1. Replace that node with its child node.


2. Remove the child node from its original position.

6 is to be deleted
Copy the value of its child to the node and delete the child

Final tree

Case III
In the third case, the node to be deleted has two children. In such a case follow
the steps below:

1. Get the inorder successor of that node.


2. Replace the node with the inorder successor.
3. Remove the inorder successor from its original position.

3 is to be deleted
Copy the value of the inorder successor (4) to the node

Delete the inorder successor


Java Example


// Binary Search Tree operations in Java

class BinarySearchTree {

  class Node {
    int key;
    Node left, right;

    public Node(int item) {
      key = item;
      left = right = null;
    }
  }

  Node root;

  BinarySearchTree() {
    root = null;
  }

  void insert(int key) {
    root = insertKey(root, key);
  }

  // Insert key in the tree
  Node insertKey(Node root, int key) {
    // Return a new node if the tree is empty
    if (root == null) {
      root = new Node(key);
      return root;
    }

    // Traverse to the right place and insert the node
    if (key < root.key)
      root.left = insertKey(root.left, key);
    else if (key > root.key)
      root.right = insertKey(root.right, key);

    return root;
  }

  void inorder() {
    inorderRec(root);
  }

  // Inorder Traversal
  void inorderRec(Node root) {
    if (root != null) {
      inorderRec(root.left);
      System.out.print(root.key + " -> ");
      inorderRec(root.right);
    }
  }

  void deleteKey(int key) {
    root = deleteRec(root, key);
  }

  Node deleteRec(Node root, int key) {
    // Return if the tree is empty
    if (root == null)
      return root;

    // Find the node to be deleted
    if (key < root.key)
      root.left = deleteRec(root.left, key);
    else if (key > root.key)
      root.right = deleteRec(root.right, key);
    else {
      // If the node has only one child or no child
      if (root.left == null)
        return root.right;
      else if (root.right == null)
        return root.left;

      // If the node has two children:
      // place the inorder successor in position of the node to be deleted
      root.key = minValue(root.right);

      // Delete the inorder successor
      root.right = deleteRec(root.right, root.key);
    }
    return root;
  }

  // Find the inorder successor (smallest key in the subtree)
  int minValue(Node root) {
    int minv = root.key;
    while (root.left != null) {
      minv = root.left.key;
      root = root.left;
    }
    return minv;
  }

  // Driver program to test the above functions
  public static void main(String[] args) {
    BinarySearchTree tree = new BinarySearchTree();

    tree.insert(8);
    tree.insert(3);
    tree.insert(1);
    tree.insert(6);
    tree.insert(7);
    tree.insert(10);
    tree.insert(14);
    tree.insert(4);

    System.out.print("Inorder traversal: ");
    tree.inorder();

    System.out.println("\n\nAfter deleting 10");
    tree.deleteKey(10);

    System.out.print("Inorder traversal: ");
    tree.inorder();
  }
}
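Running the driver program inserts the keys 8, 3, 1, 6, 7, 10, 14 and 4, so the first inorder traversal prints the keys in sorted order: 1 -> 3 -> 4 -> 6 -> 7 -> 8 -> 10 -> 14. After deleting 10 (a node with a single child, 14), the second traversal prints 1 -> 3 -> 4 -> 6 -> 7 -> 8 -> 14.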
Binary Search Tree Complexities

Time Complexity

Operation    Best Case    Average Case    Worst Case

Search       O(log n)     O(log n)        O(n)

Insertion    O(log n)     O(log n)        O(n)

Deletion     O(log n)     O(log n)        O(n)

Here,  n  is the number of nodes in the tree.

Space Complexity

The space complexity for all the operations is  O(n) .


Binary Search Tree Applications

1. In multilevel indexing in databases
2. For dynamic sorting
3. For managing virtual memory areas in the Unix kernel
