What Is Asymptotic Notation
Note - In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant term in the complexity of that algorithm and ignore the less significant terms (here, complexity can be Space Complexity or Time Complexity).
For example, consider the following time complexities of two algorithms...
Algorithm 1 : 5n² + 2n + 1
Algorithm 2 : 10n² + 8n + 3
Generally, when we analyze an algorithm, we consider the time complexity for larger values of input data (i.e. the 'n' value). In the above two time complexities, for larger values of 'n', the term '2n + 1' in algorithm 1 is less significant than the term '5n²', and the term '8n + 3' in algorithm 2 is less significant than the term '10n²'.
Here, for larger values of 'n', the value of the most significant terms (5n² and 10n²) is much larger than the value of the less significant terms (2n + 1 and 8n + 3). So, for larger values of 'n', we ignore the less significant terms when representing the overall time required by an algorithm. In asymptotic notation, we use only the most significant term to represent the time complexity of an algorithm.
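To see why the lower-order terms can be safely dropped, the following quick check (a sketch in Python; the function names are just for illustration) computes the ratio of each full expression to its leading term as n grows:

```python
# Ratio of the full time complexity to its most significant term.
# As n grows, the ratio approaches 1, so the lower-order terms
# (2n + 1 and 8n + 3) contribute almost nothing.
def f1(n):  # Algorithm 1: 5n^2 + 2n + 1
    return 5 * n**2 + 2 * n + 1

def f2(n):  # Algorithm 2: 10n^2 + 8n + 3
    return 10 * n**2 + 8 * n + 3

for n in (10, 1000, 1000000):
    print(n, f1(n) / (5 * n**2), f2(n) / (10 * n**2))
```

At n = 1,000,000 both ratios differ from 1 by less than one part in a million, which is exactly why only the leading term matters.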
Mainly, we use THREE types of Asymptotic Notations, which are as follows...
1. Big - Oh (O)
2. Big - Omega (Ω)
3. Big - Theta (Θ)
Big - Oh Notation (O)
Big - Oh notation is used to describe the upper bound of an algorithm's running time. Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).
f(n) = O(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n), with the input value (n) on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n) for some constants C > 0 and n0 >= 1:
f(n) <= C g(n)
⇒ 3n + 2 <= C n
The above condition is TRUE for C = 4 and all n >= 2.
By using Big - Oh notation we can represent the time complexity as follows...
3n + 2 = O(n)
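The witness constants can be verified mechanically. A small Python check (an illustration, not part of the formal proof) confirms that 3n + 2 <= 4n holds over a sample range of n >= 2:

```python
# Big-O check: f(n) = 3n + 2, g(n) = n, with witnesses C = 4 and n0 = 2.
f = lambda n: 3 * n + 2
C, n0 = 4, 2
assert all(f(n) <= C * n for n in range(n0, 10000))
print("3n + 2 <= 4n holds for all tested n >= 2")
```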
Big - Omega Notation (Ω)
Big - Omega notation is used to describe the lower bound of an algorithm's running time. Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If f(n) >= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n), with the input value (n) on the X-axis and the time required on the Y-axis.
In the above graph, after a particular input value n0, C g(n) is always less than f(n), which indicates the algorithm's lower bound.
Example
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n) for some constants C > 0 and n0 >= 1:
f(n) >= C g(n)
⇒ 3n + 2 >= C n
The above condition is TRUE for C = 1 and all n >= 1.
By using Big - Omega notation we can represent the time complexity as follows...
3n + 2 = Ω(n)
Big - Theta Notation (Θ)
Big - Theta notation is used to describe both the upper and lower bounds of an algorithm's running time. Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, for some constants C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
Consider the following graph, drawn for the values of f(n), C1 g(n) and C2 g(n), with the input value (n) on the X-axis and the time required on the Y-axis.
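Continuing the running example f(n) = 3n + 2 and g(n) = n, the two-sided Θ condition also holds, for example with C1 = 3, C2 = 4 and n0 = 2 (constants of my choosing). A quick Python check:

```python
# Big-Theta check: C1*g(n) <= f(n) <= C2*g(n) for all n >= n0,
# with f(n) = 3n + 2, g(n) = n, C1 = 3, C2 = 4, n0 = 2.
f = lambda n: 3 * n + 2
C1, C2, n0 = 3, 4, 2
assert all(C1 * n <= f(n) <= C2 * n for n in range(n0, 10000))
print("3n <= 3n + 2 <= 4n for all tested n >= 2, so 3n + 2 = Θ(n)")
```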
Two more asymptotic notations, Little-o (ο) and Little-omega (ω), are also used to represent the time complexity of algorithms.
Little ο asymptotic notation
Big-Ο is used as a tight upper bound on the growth of an algorithm’s
effort (this effort is described by the function f(n)), even though, as
written, it can also be a loose upper bound. “Little-ο” (ο()) notation is used
to describe an upper bound that cannot be tight.
Definition: Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ο(g(n)) (or f(n) ∈ ο(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) < c · g(n) for every integer n ≥ n0.
Amortized Analysis
This analysis is used when an occasional operation is very slow, but most of the frequently executed operations are fast. Data structures that need amortized analysis include Hash Tables, Disjoint Sets, etc.
In a hash table, most of the time the search time complexity is O(1), but occasionally an operation takes O(n). When we want to search or insert an element in a hash table, in most cases it is a constant-time task, but when a collision occurs, O(n) operations are needed for collision resolution.
Aggregate Method
The aggregate method is used to find the total cost. If we want to add a bunch of data, we find the amortized cost with this formula:
For a sequence of n operations with total cost T(n), the amortized cost per operation is T(n) / n.
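As a concrete illustration of the aggregate method, here is a sketch in Python using a hypothetical dynamic array with the standard cost model (one unit per append, plus one unit per element copied when the array doubles its capacity). The total cost of n appends stays below 3n, so the amortized cost per append is O(1):

```python
# Aggregate method on a dynamic array (illustrative cost model):
# each append costs 1 unit; when the array is full, capacity doubles,
# costing one extra unit per element copied.
def total_append_cost(n):
    cost, capacity, size = 0, 1, 0
    for _ in range(n):
        if size == capacity:   # array is full: copy everything to a bigger one
            cost += size
            capacity *= 2
        cost += 1              # the append itself
        size += 1
    return cost

n = 1024
print(total_append_cost(n) / n)  # amortized cost T(n)/n stays below 3
```

The occasional O(n) copy is "paid for" by the many cheap appends around it, which is exactly the situation described above for hash tables.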
A probabilistic data structure works with large data sets, where we want to perform operations such as finding unique items in a given data set, finding the most frequent item, or testing whether an item exists. To perform such operations, a probabilistic data structure uses multiple hash functions to randomize and represent a set of data. The more hash functions, the more accurate the result.
Things to remember
A deterministic data structure can also perform all the operations that a probabilistic data structure does, but only on small data sets. As stated earlier, if the data set is too big and cannot fit into memory, the deterministic data structure fails and is simply not feasible. Also, in the case of a streaming application, where data must be processed in one go with incremental updates, it is very difficult to manage with a deterministic data structure.
Use Cases
1. Analyze big data set
2. Statistical analysis
3. Mining terabytes of data sets, etc.
Popular probabilistic data structures
1. Bloom filter
2. Count-Min Sketch
3. HyperLogLog
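As a sketch of the first of these, here is a minimal Bloom filter in Python (the sizes m = 1024 and k = 3 are arbitrary illustrative choices, and the hashing scheme is a simple stand-in, not a production design). It sets k hashed bit positions per item; membership queries may report false positives but never false negatives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        # Derive k positions by salting one hash function with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for word in ("cat", "dog", "bird"):
    bf.add(word)
print(bf.might_contain("cat"))    # True: added items are always found
print(bf.might_contain("fish"))   # False with high probability
```

This shows the trade-off described above: a few bits per item instead of storing the items themselves, at the cost of a small false-positive rate.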
In this tutorial, you will learn how Binary Search Tree works. Also, you will find
working examples of Binary Search Tree in C, C++, Java and Python.
Binary search tree is a data structure that allows us to quickly maintain a sorted list of numbers.
It is called a binary tree because each tree node has a maximum of two
children.
It is called a search tree because it can be used to search for the presence of a number in O(log n) time on average (O(n) in the worst case, when the tree becomes skewed).
The properties that separate a binary search tree from a regular binary tree are
1. All nodes of left subtree are less than the root node
2. All nodes of right subtree are more than the root node
3. Both subtrees of each node are also BSTs i.e. they have the above two
properties
A tree having a right subtree with one value smaller than the root is
shown to demonstrate that it is not a valid binary search tree
The binary tree on the right isn't a binary search tree because the right subtree of
the node "3" contains a value smaller than it.
There are three basic operations that you can perform on a binary search tree: search, insert, and delete.
Search Operation
The algorithm depends on the property of a BST that each left subtree has values below the root and each right subtree has values above the root.
If the value is below the root, we can say for sure that the value is not in the right subtree, so we need to search only in the left subtree. If the value is above the root, we can say for sure that the value is not in the left subtree, so we need to search only in the right subtree.
Algorithm:
If root == NULL
    return NULL;
If number == root->data
    return root->data;
If number < root->data
    return search(root->left);
If number > root->data
    return search(root->right);
4 is found
If the value is found, we return the value so that it gets propagated in each
recursion step as shown in the image below.
As you might have noticed, we have called return search(struct node*) four times. When we return either the new node or NULL, the value gets returned again and again until search(root) returns the final result.
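The search steps above can be sketched in Python (the Node class and the small tree below are illustrative, not the article's full code):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def search(root, number):
    if root is None:          # empty subtree: not found
        return None
    if number == root.data:   # found it: propagate the node back up
        return root
    if number < root.data:    # can only be in the left subtree
        return search(root.left, number)
    return search(root.right, number)  # can only be in the right subtree

# Small BST:     8
#              /   \
#             3     10
#              \
#               4
root = Node(8)
root.left = Node(3)
root.right = Node(10)
root.left.right = Node(4)

print(search(root, 4).data)   # 4 is found
print(search(root, 5))        # None: 5 is not in the tree
```

Each recursive call returns its result to its caller, which is how the found node propagates back to the original search(root) call.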
Insert Operation
We keep going to either right subtree or left subtree depending on the value and
when we reach a point left or right subtree is null, we put the new node there.
Algorithm:
If node == NULL
    return createNode(data);
if (data < node->data)
    node->left = insert(node->left, data);
else if (data > node->data)
    node->right = insert(node->right, data);
return node;
The algorithm isn't as simple as it looks. Let's try to visualize how we add a
number to an existing BST.
4 < 8 so, traverse through the left child of 8
This makes sure that as we move back up the tree, the other node connections
aren't changed.
Image showing the importance of returning the root element at the end so
that the elements don't lose their position during the upward recursion
step.
Deletion Operation
There are three cases for deleting a node from a binary search tree.
Case I
In the first case, the node to be deleted is the leaf node. In such a case, simply
delete the node from the tree.
4 is to be deleted
Case II
In the second case, the node to be deleted has a single child node. In such a case, follow the steps below:
6 is to be deleted
copy the value of its child to the node and delete the child
Final tree
Case III
In the third case, the node to be deleted has two children. In such a case follow
the steps below:
3 is to be deleted
Copy the value of the inorder successor (4) to the node
Java
// Binary Search Tree operations in Java: insert, inorder traversal, delete
class BinarySearchTree {
  class Node {
    int key;
    Node left, right;
    Node(int item) {
      key = item;
    }
  }

  Node root;

  BinarySearchTree() {
    root = null;
  }

  // Insert key in the tree
  void insert(int key) {
    root = insertRec(root, key);
  }

  Node insertRec(Node root, int key) {
    if (root == null) {
      root = new Node(key);
      return root;
    }
    if (key < root.key)
      root.left = insertRec(root.left, key);
    else if (key > root.key)
      root.right = insertRec(root.right, key);
    return root;
  }

  // Inorder Traversal
  void inorder() {
    inorderRec(root);
  }

  void inorderRec(Node root) {
    if (root != null) {
      inorderRec(root.left);
      System.out.print(root.key + " ");
      inorderRec(root.right);
    }
  }

  void deleteKey(int key) {
    root = deleteRec(root, key);
  }

  Node deleteRec(Node root, int key) {
    if (root == null)
      return root;
    if (key < root.key)
      root.left = deleteRec(root.left, key);
    else if (key > root.key)
      root.right = deleteRec(root.right, key);
    else {
      // Node with only one child or no child
      if (root.left == null)
        return root.right;
      else if (root.right == null)
        return root.left;
      // Node with two children: copy the inorder successor, then delete it
      root.key = minValue(root.right);
      root.right = deleteRec(root.right, root.key);
    }
    return root;
  }

  // Smallest key in a subtree (leftmost node)
  int minValue(Node root) {
    int minv = root.key;
    while (root.left != null) {
      minv = root.left.key;
      root = root.left;
    }
    return minv;
  }

  // Driver Program to test above functions
  public static void main(String[] args) {
    BinarySearchTree tree = new BinarySearchTree();
    tree.insert(8);
    tree.insert(3);
    tree.insert(1);
    tree.insert(6);
    tree.insert(7);
    tree.insert(10);
    tree.insert(14);
    tree.insert(4);
    tree.inorder();      // 1 3 4 6 7 8 10 14
    tree.deleteKey(3);
    System.out.println();
    tree.inorder();      // 1 4 6 7 8 10 14
  }
}
Binary Search Tree Complexities
Time Complexity: search, insertion and deletion each take O(log n) time on average and O(n) time in the worst case (a skewed tree).
Space Complexity: the space complexity for all the operations is O(n).