DAA Answers



1. Algorithm for finding the pivot in quicksort, with an example?

Quicksort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is made, and the other holds values greater than the pivot.
Quicksort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. The algorithm is quite efficient for large data sets: its average-case complexity is O(n log n), although its worst-case complexity is O(n^2).
The pivot value divides the list into two parts, and recursively we find a pivot for each sub-list until every sub-list contains only one element.

Quick Sort Pivot Algorithm


Based on our understanding of partitioning in quicksort, we will now try to write an algorithm for it, which is as follows (a runnable sketch follows the steps).
Step 1 − Choose the highest index value as the pivot
Step 2 − Take two variables to point left and right of the list, excluding the pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − While the value at left is less than the pivot, move left to the right
Step 6 − While the value at right is greater than the pivot, move right to the left
Step 7 − If both steps 5 and 6 have stopped, swap the values at left and right
Step 8 − If left ≥ right, the point where they meet is the pivot's final position
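
Below is a minimal Python sketch of these steps, assuming the last element is taken as the pivot. The function name, the in-place style, and the pointer bookkeeping are illustrative choices, not part of the original steps.

    def partition(arr, low, high):
        # Step 1: choose the value at the highest index as the pivot
        pivot = arr[high]
        # Steps 2-4: left and right pointers over the list, excluding the pivot
        left, right = low, high - 1
        while True:
            # Step 5: while the value at left is less than the pivot, move left rightward
            while left <= right and arr[left] < pivot:
                left += 1
            # Step 6: while the value at right is greater than the pivot, move right leftward
            while left <= right and arr[right] > pivot:
                right -= 1
            # Step 8: the pointers have met or crossed
            if left >= right:
                break
            # Step 7: both pointers stopped, so swap the out-of-place values
            arr[left], arr[right] = arr[right], arr[left]
            left += 1
            right -= 1
        # Move the pivot into its final position and return that index
        arr[left], arr[high] = arr[high], arr[left]
        return left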

Quick Sort Algorithm


Using the pivot algorithm recursively, we end up with smaller and smaller partitions. Each partition is then processed for quicksort. We define the recursive algorithm for quicksort as follows (a sketch follows the steps) −
Step 1 − Make the right-most index value the pivot
Step 2 − Partition the array using the pivot value
Step 3 − Quicksort the left partition recursively
Step 4 − Quicksort the right partition recursively
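
Here is a minimal recursive driver for these steps, assuming the partition sketch above is in scope; again, this is an illustrative sketch rather than the only way to write it.

    def quicksort(arr, low=0, high=None):
        if high is None:
            high = len(arr) - 1  # default to sorting the whole list
        if low < high:
            # Steps 1-2: make the right-most value the pivot and partition around it
            p = partition(arr, low, high)
            # Step 3: quicksort the left partition recursively
            quicksort(arr, low, p - 1)
            # Step 4: quicksort the right partition recursively
            quicksort(arr, p + 1, high)

    data = [9, 12, 9, 2, 17, 1, 6]  # the list used in the walkthrough below
    quicksort(data)
    print(data)  # prints [1, 2, 6, 9, 9, 12, 17]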

Making quick work of quicksort

We’ll do this by walking through how quicksort would sort a small collection of numbers that might look something like this: [9, 12, 9, 2, 17, 1, 6].
[Figure: Quick sort in action, part 1.]

First, we choose the last element as our pivot. In this case, the last element
is 6, so that will be our pivot element.
In the example shown here, we’re going to move the remaining
items around so that everything smaller than the element 6 is to
the left of it, and everything larger than 6 is to the right of it.

For example, the first element is 9, which we know is larger than 6.
So, it is moved to the right partition. The same goes for the next few
elements within our unsorted list: 12 and 9. However, 2 is smaller
than 6, so it is moved to the left partition.

Notice that, once we’re done moving all the elements around in
relation to the pivot, we’re still not done! The entire collection hasn’t
been sorted in relation to all the elements; however, we do know that
the collection has been sorted in relation to the pivot element. This
is helpful because we won’t need to compare elements in the left
partition to elements in the right partition, which will save us some
time down the road.

So, if we’re not done, what do we need to do next? Well, quicksort is a divide-and-conquer algorithm, which means that it’s designed to apply the same solution to smaller subproblems. In other words, we can recursively take the exact same steps we did just now and apply them to the left and right partitions that still need to be sorted. Let’s see what that would look like.

In the second part of this walkthrough of quicksort, we will apply the same steps to the left and right partitions. Looking at the illustration shown here, we can see that we’re again choosing the last element of both sublists as their respective pivot elements. For the left sublist, the pivot is 2, and for the right sublist, the pivot is 17.

Next, let’s look at the left sublist. There’s only one element in this list
aside from the pivot: 1. It just so happens that 1 is already in the
correct place: to the left of the pivot, 2, because it’s smaller than the
pivot. So, this list is effectively sorted!

It’s a slightly different story for the right sublist, however. There are
three elements in addition to the pivot: 9, 12, and 9. They’re all
smaller than the pivot, 17, and they’re all to the left of the pivot. So,
they’re in the correct partition given the pivot. But, we still need to
sort them!

So, we’ll break those three elements down even further, into their
own sublist, and recursively do the same work again: choose a pivot
(9), and sort the remaining two items so that items greater than 9 are
to the right, and items smaller than or equal to 9 are to the left.
Now that all the sublists are sorted, there’s only one thing left to do:
combine all the items together again.
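
To see this recursion concretely, here is a small illustrative trace helper (my own addition, assuming the partition sketch from earlier is in scope) that prints the pivot and the two partitions produced at each level. The exact contents of each partition can differ from the illustration above, because the swaps reorder elements within a partition.

    def quicksort_trace(arr, low=0, high=None, depth=0):
        if high is None:
            high = len(arr) - 1
        if low < high:
            p = partition(arr, low, high)
            # Show the pivot and the partitions produced at this level
            print("  " * depth + "pivot=%s, left=%s, right=%s"
                  % (arr[p], arr[low:p], arr[p + 1:high + 1]))
            quicksort_trace(arr, low, p - 1, depth + 1)
            quicksort_trace(arr, p + 1, high, depth + 1)

    quicksort_trace([9, 12, 9, 2, 17, 1, 6])
    # First line printed: pivot=6, left=[1, 2], right=[12, 17, 9, 9]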

2. Estimate time complexity for asymptotic notations using f(n) and g(n) functions?

Analysis of Algorithms
The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that does not depend on machine-specific constants and does not require algorithms to be implemented or the running times of programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms.

1) Θ Notation: The theta notation bounds a function from above and below, so it defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore leading constants. For example, consider the following expression.
3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping lower-order terms is always fine because there will always be a number n after which n^3 has higher values than n^2, irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions.
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that
0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition also requires that f(n) be non-negative for all n >= n0.

Examples:
{100, log(2000), 10^4} belongs to Θ(1)
{(n/4), (2n+3), (n/100 + log(n))} belongs to Θ(n)
{(n^2+n), (2n^2), (n^2+log(n))} belongs to Θ(n^2)
Θ provides exact bounds.
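As a quick check against this definition, take f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3. One valid choice of constants (among many) is c1 = 3, c2 = 10, n0 = 10: the lower bound 3n^3 <= f(n) holds for all n >= 1 because 6n^2 + 6000 >= 0, and for n >= 10 we have 6n^2 <= n^3 and 6000 <= 6n^3, so f(n) <= 3n^3 + n^3 + 6n^3 = 10n^3. Hence 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, confirming f(n) = Θ(n^3).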
2) Big O Notation: The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
1. The worst-case time complexity of Insertion Sort is Θ(n^2).
2. The best-case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm. Many times we can easily find an upper bound simply by looking at the algorithm.
O(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= f(n) <= c*g(n) for all n >= n0}
Examples:
{100, log(2000), 10^4} belongs to O(1)
U {(n/4), (2n+3), (n/100 + log(n))} belongs to O(n)
U {(n^2+n), (2n^2), (n^2+log(n))} belongs to O(n^2)
Here U represents union; we can write it in this manner because O provides exact or upper bounds.
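As a similar check for Big O, take f(n) = 2n + 3 and g(n) = n from the examples above. With c = 5 and n0 = 1, we have 2n + 3 <= 2n + 3n = 5n for all n >= 1, so 0 <= f(n) <= c*g(n) for all n >= n0, confirming 2n + 3 = O(n).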
3) Ω Notation: Just as Big O notation provides an asymptotic upper bound on a function, Ω notation provides an asymptotic lower bound.
Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. Since the best-case performance of an algorithm is generally not useful, the Omega notation is the least used of the three notations.
For a given function g(n), we denote by Ω(g(n)) the following set of functions.
Ω(g(n)) = {f(n): there exist positive constants c and n0 such that
0 <= c*g(n) <= f(n) for all n >= n0}
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.
Examples:
{(n^2+n), (2n^2), (n^2+log(n))} belongs to Ω(n^2)
U {(n/4), (2n+3), (n/100 + log(n))} belongs to Ω(n)
U {100, log(2000), 10^4} belongs to Ω(1)
Here U represents union; we can write it in this manner because Ω provides exact or lower bounds.
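And for Ω, take f(n) = n^2 + n and g(n) = n^2 from the examples above. With c = 1 and n0 = 1, we have n^2 <= n^2 + n for all n >= 1, so 0 <= c*g(n) <= f(n) for all n >= n0, confirming n^2 + n = Ω(n^2).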
