DAA Answers
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is based, and the other holds values greater than the pivot.
Quicksort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. The algorithm is quite efficient for large data sets: its average-case complexity is O(n log n), while its worst-case complexity is O(n^2).
The pivot value divides the list into two parts. Recursively, we find a pivot for each sub-list until every sub-list contains only one element.
Consider the unsorted list [9, 12, 9, 2, 17, 1, 6].
Quick sort in action: part 1.
We choose the last element as our pivot. In this case, the last element is 6, so that will be our pivot element.
In the example shown here, we’re going to move the remaining
items around so that everything smaller than the element 6 is to
the left of it, and everything larger than 6 is to the right of it.
For example, the first element is 9, which we know is larger than 6.
So, it is moved to the right partition. The same goes for the next few
elements within our unsorted list: 12 and 9. However, 2 is smaller
than 6, so it is moved to the left partition.
Notice that, once we’re done moving all the elements around in
relation to the pivot, we’re still not done! The entire collection hasn’t
been sorted in relation to all the elements; however, we do know that
the collection has been sorted in relation to the pivot element. This
is helpful because we won’t need to compare elements in the left
partition to elements in the right partition, which will save us some
time down the road.
Next, let’s look at the left sublist. There’s only one element in this list
aside from the pivot: 1. It just so happens that 1 is already in the
correct place: to the left of the pivot, 2, because it’s smaller than the
pivot. So, this list is effectively sorted!
It’s a slightly different story for the right sublist, however. There are
three elements in addition to the pivot: 9, 12, and 9. They’re all
smaller than the pivot, 17, and they’re all to the left of the pivot. So,
they’re in the correct partition given the pivot. But, we still need to
sort them!
So, we’ll break those three elements down even further, into their
own sublist, and recursively do the same work again: choose a pivot
(9), and sort the remaining two items so that items greater than 9 are
to the right, and items smaller than or equal to 9 are to the left.
Now that all the sublists are sorted, there’s only one thing left to do:
combine all the items together again.
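The steps above can be written as a short program. Below is a minimal Python sketch of the idea, assuming a partition scheme that, like the walkthrough, uses the last element of each sub-list as the pivot; the names quick_sort and partition are just illustrative.

def partition(items, low, high):
    pivot = items[high]                    # last element of the range is the pivot
    i = low - 1                            # end of the "smaller than or equal to pivot" region
    for j in range(low, high):
        if items[j] <= pivot:
            i += 1
            items[i], items[j] = items[j], items[i]
    items[i + 1], items[high] = items[high], items[i + 1]   # put the pivot in its final place
    return i + 1

def quick_sort(items, low=0, high=None):
    if high is None:
        high = len(items) - 1
    if low < high:
        p = partition(items, low, high)    # pivot index after partitioning
        quick_sort(items, low, p - 1)      # recursively sort the left sub-list
        quick_sort(items, p + 1, high)     # recursively sort the right sub-list
    return items

print(quick_sort([9, 12, 9, 2, 17, 1, 6]))   # [1, 2, 6, 9, 9, 12, 17]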
2. Estimate time complexity for asymptotic notations using f(n) and g(n) functions?
Analysis of Algorithms
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented and the time taken by programs to be compared.
Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following 3 asymptotic notations are most commonly used to represent the time complexity of algorithms.
1) Θ Notation: The theta notation bounds a function from above and below, so it defines the exact asymptotic behaviour. For a given function g(n), we denote by Θ(g(n)) the set of functions.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and
n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for
all n >= n0}
Examples :
{ 100 , log (2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n))} belongs to Θ( n^2)
Θ provides exact bounds.
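For instance, to check that (2n+3) belongs to Θ(n), we can exhibit one possible choice of constants from the definition:
2*n <= 2n+3 <= 5*n for all n >= 1,
so c1 = 2, c2 = 5 and n0 = 1 satisfy 0 <= c1*g(n) <= f(n) <= c2*g(n) with f(n) = 2n+3 and g(n) = n.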
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the
case of Insertion Sort. It takes linear time in the best case and quadratic time
in the worst case. We can safely say that the time complexity of Insertion
sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent time complexity of Insertion sort, we have
to use two statements for best and worst cases:
1. The worst-case time complexity of Insertion Sort is Θ(n^2).
2. The best case time complexity of Insertion Sort is Θ(n).
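A minimal Python sketch of insertion sort (added here for illustration) makes both statements concrete: on an already-sorted input the inner while loop exits immediately, giving linear time, while on a reverse-sorted input it shifts every earlier element, giving quadratic time.

def insertion_sort(items):
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift elements of the sorted prefix that are larger than key one step to the right.
        while j >= 0 and items[j] > key:   # best case: this test fails at once -> Θ(n) overall
            items[j + 1] = items[j]        # worst case: runs i times for each i -> Θ(n^2) overall
            j -= 1
        items[j + 1] = key
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]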
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times, we can easily find an upper bound simply by looking at the algorithm.
For a given function g(n), we denote by O(g(n)) the set of functions.
O(g(n)) = { f(n): there exist positive constants c and
n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0}
Examples :
{ 100 , log (2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n))} belongs to O( n^2)
Here U represents union; we can write it in this manner because O provides exact or upper bounds.
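For instance, (n/100 + log(n)) belongs to O(n): since log(n) <= n for all n >= 1, one possible choice of constants is
n/100 + log(n) <= n + n = 2*n for all n >= 1,
so c = 2 and n0 = 1 satisfy 0 <= f(n) <= c*g(n) with f(n) = n/100 + log(n) and g(n) = n.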
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound
on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best-case performance of an algorithm is generally not useful, the Omega notation is the least used of the three notations.
For a given function g(n), we denote by Ω(g(n)) the set of functions.
Ω (g(n)) = {f(n): there exist positive constants c and
n0 such that 0 <= c*g(n) <= f(n) for
all n >= n0}.
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about insertion sort, as we are generally interested in the worst case and sometimes in the average case.
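(The Ω(n) bound itself follows directly from the definition: every run of insertion sort must at least visit each of the n input elements once, so its running time f(n) satisfies f(n) >= c*n for some constant c > 0 and all n >= n0, which is exactly the statement that f(n) belongs to Ω(n).)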
Examples :
{ (n^2+n) , (2n^2) , (n^2+log(n))} belongs to Ω( n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log (2000) , 10^4 } belongs to Ω(1)
Here U represents union; we can write it in this manner because Ω provides exact or lower bounds.
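For instance, (n^2 + log(n)) belongs to Ω(n^2): since log(n) >= 0 for all n >= 1, one possible choice of constants is
1*n^2 <= n^2 + log(n) for all n >= 1,
so c = 1 and n0 = 1 satisfy 0 <= c*g(n) <= f(n) with f(n) = n^2 + log(n) and g(n) = n^2.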