
Ques 15 Explain and give quick sort algorithm. Determine its complexity.

Answer:
Quick sort is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is made, and the other holds values greater than the pivot. Quick sort partitions an array and then calls itself recursively twice to sort the two resulting subarrays. The algorithm is quite efficient for large data sets, as its average-case complexity is O(n log n); its worst-case complexity, however, is O(n²).
Algorithm:
Step 1 − Choose the highest index value as the pivot
Step 2 − Take two variables to point to the left and right of the list, excluding the pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − While the value at left is less than the pivot, move left to the right
Step 6 − While the value at right is greater than the pivot, move right to the left
Step 7 − If neither step 5 nor step 6 applies, swap the values at left and right
Step 8 − If left ≥ right, the point where they meet is the new pivot position
Time Complexity:
The average-case time complexity of quick sort is O(n log n); its worst-case time complexity is O(n²).

Ques 16 Write a recursive quick sort algorithm.


Answer:
Recursive Quick Sort Algorithm:
quicksort(p, q)
{
    if (p < q) then
    {
        call j = PARTITION(a, p, q + 1);
        call quicksort(p, j – 1);
        call quicksort(j + 1, q);
    }
}

partition(a, m, p)
{
    v = a[m]; up = m; down = p;
    do
    {
        repeat
            up = up + 1;
        until (a[up] >= v);
        repeat
            down = down – 1;
        until (a[down] <= v);
        if (up < down) then call interchange(a, up, down);
    } while (up < down);
    a[m] = a[down];      // move the pivot to its final position
    a[down] = v;
    return (down);
}

interchange(a, up, down)
{
    temp = a[up];
    a[up] = a[down];
    a[down] = temp;
}
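
For reference, the following is a compilable C version of recursive quick sort. It is a sketch that uses the standard Hoare partition scheme (two pointers moving inward with the first element as pivot) rather than the sentinel-based partition above, so the recursive calls use the bounds (lo, j) and (j + 1, hi):

#include <stdio.h>

/* Swap two array elements. */
static void interchange(int a[], int i, int j)
{
    int temp = a[i];
    a[i] = a[j];
    a[j] = temp;
}

/* Hoare-style partition: uses a[lo] as the pivot and returns an index j such
   that every element of a[lo..j] is <= every element of a[j+1..hi]. */
static int partition(int a[], int lo, int hi)
{
    int v = a[lo];
    int up = lo - 1, down = hi + 1;
    while (1)
    {
        do { up++; } while (a[up] < v);
        do { down--; } while (a[down] > v);
        if (up >= down)
            return down;
        interchange(a, up, down);
    }
}

/* Sort a[lo..hi] in non-decreasing order. */
void quicksort(int a[], int lo, int hi)
{
    if (lo < hi)
    {
        int j = partition(a, lo, hi);
        quicksort(a, lo, j);        /* note: j, not j - 1, with the Hoare scheme */
        quicksort(a, j + 1, hi);
    }
}

int main(void)
{
    int a[] = {38, 81, 22, 48, 13, 69, 93, 14, 45, 58, 79, 72};
    int n = sizeof(a) / sizeof(a[0]);
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}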

Ques 17 What is quick sort? Sort the given values using quick sort: present
all steps/iterations:

38, 81, 22, 48, 13, 69, 93, 14, 45, 58, 79, 72
AKTU 2016-17, Marks 10
Answer:
Ques 18 Write an algorithm for quick sort. Trace your algorithm on the following data to sort the list: 2, 13, 4, 21, 7, 56, 51, 85, 59, 1, 9, 10. How does the choice of the pivot element affect the efficiency of the algorithm?
AKTU 2018-19, Marks 07
Answer:
Choice of pivot element affects the efficiency of the algorithm:
If the pivot is always chosen as the first or last element of the array, an already sorted (or reverse-sorted) input produces the most unbalanced partitions, giving the worst-case time complexity of O(n²).
If the pivot is the median of the array, the array is divided into two nearly equal halves every time, giving the best/average-case time complexity of O(n log n). Choosing the pivot at random makes the worst case very unlikely, as sketched below.
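
As an illustration of how the worst case can be avoided, the small helper below swaps a randomly chosen element into the pivot position before partitioning. This is a sketch: choose_random_pivot is a name introduced here for illustration and is not part of the original answer.

#include <stdlib.h>

/* Pick a random index in [lo, hi] and move that element to position lo,
   so a partition routine that always uses a[lo] as the pivot becomes a
   randomized quick sort and the O(n^2) worst case becomes very unlikely. */
static void choose_random_pivot(int a[], int lo, int hi)
{
    int r = lo + rand() % (hi - lo + 1);
    int temp = a[lo];
    a[lo] = a[r];
    a[r] = temp;
}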

Ques 19 Use quick sort algorithm to sort 15, 22, 30, 10, 15, 64, 1, 3, 9, 2.
Is it a stable algorithm? Justify.
AKTU 2017-18, Marks 07
Answer:
Quick sort is not a stable sorting algorithm. During partitioning, elements that compare equal can be swapped across the pivot, so their original relative order is not necessarily preserved.

Write a short note on insertion sort.
AKTU 2014-15, Marks 05
Answer:
Insertion sort is a sorting algorithm in which the elements are
transferred one at a time to the right position. In other words, an
insertion sort helps in building the final sorted list, one item at a
time, with the movement of higher-ranked elements. An insertion
sort has the benefits of simplicity and low overhead.
Insertion sort is a simple sorting algorithm that works the way we
sort playing cards in our hands.
Algorithm:
// Sort an array arr[] of size n
insertionSort(arr, n)
    for i = 1 to n-1
        pick the element arr[i] and insert it into the sorted sequence arr[0 … i-1]
Analysis:
Complexity of best case: O(n)
Complexity of average case: O(n²)
Complexity of worst case: O(n²)
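
A minimal C sketch of this procedure (assuming an int array; shifting the larger elements one position to the right is the insertion step described above):

/* Insert each arr[i] into the already-sorted prefix arr[0..i-1]. */
void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;
        /* Shift elements greater than key one position to the right. */
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}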

Ques 13 Write a short note on selection sort.


Answer:
Selection sort requires no more than n-1 interchanges. Suppose x is an array of size n stored in memory. The selection sort algorithm first selects the smallest element in the array x and places it at array position 0; then it selects the next smallest element in the array x and places it at array position 1. It continues this procedure until it places the biggest element in the last position of the array.
The array is passed through (n-1) times and, in each pass, the smallest remaining element is placed in its respective position in the array:
Pass 1: Find the location j of the smallest element in the array x[0], x[1], …, x[n-1], and then interchange x[j] with x[0]. Then x[0] is sorted.
Pass 2: Leave the first element and find the location j of the smallest element in the sub-array x[1], x[2], …, x[n-1], and then interchange x[1] with x[j]. Then x[0], x[1] are sorted.
Pass 3: Leave the first two elements and find the location j of the smallest element in the sub-array x[2], x[3], …, x[n-1], and then interchange x[2] with x[j]. Then x[0], x[1], x[2] are sorted.
Pass (n-1): Find the location j of the smaller of the elements x[n-2] and x[n-1], and then interchange x[j] and x[n-2]. Then x[0], x[1], …, x[n-2] are sorted. Of course, during this pass x[n-1] will be the biggest element and so the entire array is sorted.
Algorithm:
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted
Analysis:
Complexity of best case: O(n²)
Complexity of average case: O(n²)
Complexity of worst case: O(n²)
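
A minimal C sketch of selection sort following these passes (assuming an int array):

/* In pass i, find the smallest element of x[i..n-1] and swap it into x[i]. */
void selectionSort(int x[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (x[j] < x[min])
                min = j;
        if (min != i)
        {
            int temp = x[i];
            x[i] = x[min];
            x[min] = temp;
        }
    }
}
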
Ques 14 Discuss bubble sort.
Answer:
Bubble Sort:
The bubble sort is easy to understand and program. The basic idea of bubble sort is to pass through the file sequentially several times. In each pass, we compare each element in the file with its successor, i.e., X[i] with X[i+1], and interchange the two elements when they are not in proper order. A C sketch of this technique is given after the analysis. Bubble sort is also called an exchange sort.
Algorithm:
begin BubbleSort(list)
    for pass = 1 to length(list) - 1
        for i = 0 to length(list) - pass - 1
            if list[i] > list[i+1]
                swap(list[i], list[i+1])
            end if
        end for
    end for
    return list
end BubbleSort
Analysis:
Complexity of best case: O(n)
Complexity of average case: O(n²)
Complexity of worst case: O(n²)
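
A minimal C sketch of this procedure, with the common early-exit optimization (stop when a pass performs no swaps), which is what gives the O(n) best case:

#include <stdbool.h>

/* Repeatedly compare adjacent elements and swap them when out of order. */
void bubbleSort(int x[], int n)
{
    for (int pass = 0; pass < n - 1; pass++)
    {
        bool swapped = false;
        for (int i = 0; i < n - pass - 1; i++)
        {
            if (x[i] > x[i + 1])
            {
                int temp = x[i];
                x[i] = x[i + 1];
                x[i + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped)       /* no swaps: already sorted, O(n) best case */
            break;
    }
}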

Concept of Hashing and Collision Resolution Techniques used in Hashing

Ques 6 What do you mean by hashing?
Answer:
1. Hashing is an important data structure technique which is designed to use a special function, called the hash function, to map a given value to a particular key for faster access of elements.
2. The efficiency of the mapping depends on the efficiency of the hash function used.
3. It is a technique which uses fewer key comparisons: it searches for an element in O(n) time in the worst case and in O(1) time in the average case.
4. This method generally uses a hash function to map the keys into a table, which is called a hash table.

Ques 7 Discuss types of hash functions.


Answer:
Types of Hash Functions:
1. Division method
In this method, the hash function depends upon the remainder of a division.
Hash function is defined by:
H(k) = k mod m
Here, k(mod m) denotes the remainder when k is divided by m
Where m is the size of the hash table
2. Mid square method
In this method, the key is first squared and then the middle part of the result is taken as the index.
Hash function is defined by:
H(k) = l
where l is obtained by deleting digits from both ends of k², and the same positions of k² must be used for all of the keys.
3. Digit folding method
In this method, the key is divided into separate parts and these parts are combined using simple operations to produce the hash key.
Hash function is defined by:
H(k) = k1 + k2 + … + kn
where k1, k2, …, kn are the parts of the key. The resulting address is truncated to the number of digits allowed by the size of the hash table.
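
The three methods can be sketched in C as follows. This is a sketch: TABLE_SIZE, the 3-digit folding parts and the digit positions used in the mid square method are assumptions chosen for illustration.

#define TABLE_SIZE 97   /* assumed hash table size m (a prime) */

/* Division method: H(k) = k mod m */
unsigned division_hash(unsigned k)
{
    return k % TABLE_SIZE;
}

/* Mid square method: square the key and take some middle decimal digits. */
unsigned midsquare_hash(unsigned k)
{
    unsigned long long sq = (unsigned long long)k * k;
    /* take two middle digits of k^2 (positions chosen for illustration) */
    return (unsigned)((sq / 100) % 100) % TABLE_SIZE;
}

/* Digit folding method: split the key into 3-digit parts and add them. */
unsigned folding_hash(unsigned k)
{
    unsigned sum = 0;
    while (k > 0)
    {
        sum += k % 1000;   /* next 3-digit part */
        k /= 1000;
    }
    return sum % TABLE_SIZE;
}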

Ques 8 What is collision? Discuss collision resolution techniques.

OR

Write a short note on hashing techniques.


AKTU 2017-18, Marks 3.5
Answer:
Collision:
When the hash value of a key maps to an already occupied bucket of
the hash table, it is called Collision.
Collision Resolution Techniques:
These are the techniques used for resolving or handling the collision.
Types of Collision Resolution Techniques:

Separate Chaining:
This technique creates a linked list for the slot at which the collision occurs. The new key is then inserted into that linked list. These linked lists attached to the slots appear like chains.
Linear Probing:
Linear probing is a scheme in computer programming for resolving collisions in hash tables, data structures that maintain a collection of key–value pairs and look up the value associated with a given key.
Hash function is defined by:
h(k,i) = (h’(k) + i) mod m
Where m is the size of the hash table and h’(k) = k mod m is a basic
hash function
Quadratic Probing:
Quadratic probing is an open addressing scheme in computer
programming for resolving hash collisions in hash tables. Quadratic
probing operates by taking the original hash index and adding
successive values of an arbitrary quadratic polynomial until an open
slot is found.
h(k, i) = (h'(k) + c1·i + c2·i²) mod m
where h' is the auxiliary hash function, c1 and c2 (with c2 ≠ 0) are auxiliary constants, and i = 0, 1, …, m-1.
Double hashing:
In this method, the probe sequences produced have many of the characteristics of randomly chosen permutations.
Hash function is defined by:
h(k, i) = (h1(k) + i·h2(k)) mod m
where h1 and h2 are the auxiliary hash functions, and m is the size of the hash table.
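
As an illustration of open addressing, the sketch below implements insertion with linear probing, h(k, i) = (k mod m + i) mod m. It is a sketch: EMPTY, TABLE_SIZE and the global table are assumptions chosen for illustration.

#include <stdio.h>

#define TABLE_SIZE 11     /* assumed size m of the hash table */
#define EMPTY      -1     /* marker for an unused slot */

int table[TABLE_SIZE];

/* Probe successive slots (k mod m, k mod m + 1, ...) until a free one is found. */
int linear_probe_insert(int key)
{
    for (int i = 0; i < TABLE_SIZE; i++)
    {
        int slot = (key % TABLE_SIZE + i) % TABLE_SIZE;
        if (table[slot] == EMPTY)
        {
            table[slot] = key;
            return slot;            /* inserted here */
        }
    }
    return -1;                      /* table is full */
}

int main(void)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = EMPTY;
    int keys[] = {26, 37, 59, 48, 15};   /* all hash to slot 4, showing clustering */
    for (int i = 0; i < 5; i++)
        printf("%d -> slot %d\n", keys[i], linear_probe_insert(keys[i]));
    return 0;
}
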
Ques 9 What do you mean by hashing and collision? Discuss the
advantages and disadvantages of hashing over other searching techniques.
AKTU 2014-15, Marks 10
Answer:
Advantages:
1. One advantage appears while searching. Generally, in most data structures like an array or a binary search tree, the search time for a particular element or key depends on the number of elements present in the data structure. But if we are using a hashing technique, the search time is independent of the number of elements.
2. Another advantage is synchronization.
3. In many situations, hash tables turn out to be more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of computer software, particularly for associative arrays, database indexing, caches and sets.

Disadvantages:
1. Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys.
2. Hash tables become quite inefficient when there are many collisions.
3. A hash table does not allow null keys or values, unlike a hash map (for example, Java's Hashtable versus HashMap).

Ques 10 Write short note on garbage collection.


AKTU 2017-18, Marks 3.5
AKTU 2014-15, Marks 05
Answer:
Garbage Collection is a type of memory management. It automatically
cleans up unused objects and pointers in memory, allowing the
resources to be used again. Some programming languages have
built-in garbage collection, while others require custom functions to
manage unused memory.
The garbage collector, or just collector, attempts to reclaim
garbage, or memory occupied by objects that are no longer in use by
the program.
Que 3.11 What happens when a collision occurs in hashing? Describe any collision detection algorithm in brief.
Answer:
Collision detection is a fundamental problem in robotics, computer animation, physically-based modeling, molecular modeling and computer-simulated environments.
Grid Based Collision Detection:
1. Here, the grids are space-filling.
2. Each cell or voxel (volume pixel) has a list of the objects which intersect it.
3. The uniform grid is used to determine which objects are near a given object by examining the object lists of the cells that the object overlaps.
4. Intersections for a given object are found by going through the object lists of all voxels containing the object, performing intersection tests against the objects on those lists (a simplified C sketch is given after the algorithm).

Algorithm:
for i = 1 to n
    vmin = voxel(min(bbox(object[i])))
    vmax = voxel(max(bbox(object[i])))
    for x = vmin.x to vmax.x
        for y = vmin.y to vmax.y
            for z = vmin.z to vmax.z
                for j = 1 to n_objects(voxel(x, y, z))
                    if (not tested(object[i], object[j]))
                        intersect(object[i], object[j])
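
A simplified 2D C sketch of this grid-based broad phase is given below. All names (CELL, AABB, aabb_overlap, ...) and the hard-coded boxes are illustrative assumptions: bounding boxes are mapped to the grid cells they overlap, and only objects sharing a cell are tested for intersection.

#include <stdio.h>
#include <math.h>

#define CELL 10.0   /* cell (voxel) edge length */
#define N    4      /* number of objects */

typedef struct { double xmin, ymin, xmax, ymax; } AABB;

/* Narrow-phase test: do two axis-aligned boxes overlap? */
static int aabb_overlap(const AABB *a, const AABB *b)
{
    return a->xmin <= b->xmax && b->xmin <= a->xmax &&
           a->ymin <= b->ymax && b->ymin <= a->ymax;
}

int main(void)
{
    AABB obj[N] = {
        {1, 1, 12, 12}, {5, 5, 15, 8}, {30, 30, 35, 35}, {11, 2, 18, 9}
    };
    int tested[N][N] = {0};

    for (int i = 0; i < N; i++)
    {
        /* range of grid cells overlapped by object i's bounding box */
        int x0 = (int)floor(obj[i].xmin / CELL), x1 = (int)floor(obj[i].xmax / CELL);
        int y0 = (int)floor(obj[i].ymin / CELL), y1 = (int)floor(obj[i].ymax / CELL);
        for (int x = x0; x <= x1; x++)
            for (int y = y0; y <= y1; y++)
                /* a full implementation keeps an object list per cell;
                   here we simply scan the other objects for this cell */
                for (int j = i + 1; j < N; j++)
                {
                    int jx0 = (int)floor(obj[j].xmin / CELL), jx1 = (int)floor(obj[j].xmax / CELL);
                    int jy0 = (int)floor(obj[j].ymin / CELL), jy1 = (int)floor(obj[j].ymax / CELL);
                    if (x >= jx0 && x <= jx1 && y >= jy0 && y <= jy1 &&
                        !tested[i][j] && aabb_overlap(&obj[i], &obj[j]))
                    {
                        tested[i][j] = 1;
                        printf("objects %d and %d intersect\n", i, j);
                    }
                }
    }
    return 0;
}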

What do you mean by searching? Explain.


Answer:
Searching is the process of finding the position of a given value in a list of values.

1. It decides whether a search key is present in the data or not.
2. It is the algorithmic process of finding a particular item in a collection of items.
3. It can be done on an internal data structure or on an external data structure.

To search an element in a given array, it can be done in the following ways:
1. Sequential (Linear) Search
   a. It is a basic and simple search algorithm.
   b. It compares the element with all the other elements given in the list. If the element is matched, it returns the index of the value, else it returns -1.
2. Binary Search (used for searching an element in a sorted array)
   a. This searching technique looks for a particular element by comparing the middle-most element of the collection.
   b. It is useful when there is a large number of elements in an array.

Ques 2 Write a short note on sequential search and index sequential search.
Answer:
The sequential search is the simplest type of search; it is used when a list of integers is not in any order.
It examines the first element in the list and then examines each "sequential" element in the list until a match is found.
This match could be the desired value that you are searching for, or the minimum number in the list.
It is important to remember that:
- the maximum number of comparisons required is n
- the average number of comparisons required (for a successful search) is (n+1)/2
where n is the number of elements in the list.
Index Sequential Search:
1. An index file can be used to effectively overcome the problem associated with sequential files and to speed up the key search.
2. The simplest indexing structure is the single-level one: a file whose records are (key, pointer) pairs, where the pointer gives the position in the data file of the record with the given key.
3. Only a subset of the data records, evenly spaced along the data file, is indexed, to mark the intervals of data records. A small C sketch of this scheme is given below.
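
A small C sketch of index sequential search. The sparse index, the GROUP spacing and all names are illustrative assumptions: every GROUP-th key of the sorted data array is stored in the index together with its position, the index is scanned to find the interval that may contain the key, and only that interval is searched sequentially.

#include <stdio.h>

#define GROUP 3   /* assumed spacing of indexed records */

int index_sequential_search(const int data[], int n, int key)
{
    int idx_keys[32], idx_pos[32], m = 0;

    /* build a sparse index over the sorted data */
    for (int i = 0; i < n; i += GROUP)
    {
        idx_keys[m] = data[i];
        idx_pos[m] = i;
        m++;
    }

    /* find the interval [lo, hi) whose keys may contain 'key' */
    int lo = 0, hi = n;
    for (int i = 0; i < m; i++)
    {
        if (idx_keys[i] > key)
        {
            hi = idx_pos[i];
            break;
        }
        lo = idx_pos[i];
    }

    /* sequential search within that interval only */
    for (int i = lo; i < hi; i++)
        if (data[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int data[] = {3, 8, 12, 17, 21, 29, 34, 40, 47};
    printf("%d\n", index_sequential_search(data, 9, 29));   /* prints 5 */
    return 0;
}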

Ques 3 Write down the algorithm for the linear/sequential search technique. Give its analysis.
Answer:
Linear Search Algorithm:
Let array a[] store n elements. Determine whether element 'x' is present or not.
linsrch(int a[], int n, int x)
{
    int index = 0, flag = 0;
    while (index < n)
    {
        if (x == a[index])
        {
            flag = 1;
            break;
        }
        index++;
    }
    if (flag == 1)
        printf("Data found at position %d", index);
    else
        printf("Data not found");
}
Analysis:
Best case: the element occurs at the first position. Time complexity is O(1).
Worst case: the element occurs at the last position (or is absent). Time complexity is O(n).
Ques 4 Write down the algorithm of binary search technique. Write
down the complexity of algorithm.
Answer:
Binary Search Algorithm:
Let a[1..n] be an array of elements in increasing order, n ≥ 0. Determine whether 'x' is present, and if so, set j such that x = a[j]; else return 0.
binsrch(a[], n, x)
{
low = 1; high = n;
while (low <= high) do
{
mid = ⎣ (low + high)/2 ⎦
if (x < a[mid])
high = mid – 1;
else if (x > a[mid])
low = mid + 1;
else return mid;
}
return 0;
}
Analysis:
The complexity of binary search is O(log₂ n).
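
A compilable 0-indexed C version of the same algorithm. It is a sketch that returns the index of x in a[0..n-1], or -1 when x is absent, instead of the 1-based convention used above:

/* Iterative binary search on a sorted array a[0..n-1]. */
int binsrch(const int a[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   /* avoids overflow of low + high */
        if (x < a[mid])
            high = mid - 1;
        else if (x > a[mid])
            low = mid + 1;
        else
            return mid;                     /* found */
    }
    return -1;                              /* not found */
}
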
Que 3.5 What is the difference between sequential (linear) search and binary search technique?
Answer:
1. Sequential search works on both sorted and unsorted lists; binary search requires the list to be sorted.
2. Sequential search compares the key with the elements one by one and takes O(n) time; binary search repeatedly halves the search interval and takes O(log₂ n) time.
3. Sequential search is preferable for small or frequently changing lists; binary search is preferable for large, sorted arrays.
Heap Sort and Radix Sort

Ques 23 Write a short note on heap sort.


AKTU 2014-15, Marks 05
OR

Explain heap sort.


AKTU 2017-18, Marks 3.5
Answer:
Heap Sort:
A heap sort algorithm works by first organizing the data to be sorted into a special type of binary tree called a heap. Any kind of data can be sorted either in ascending order or in descending order using a heap tree. It does this with the following steps:
1. Build a heap tree with the given set of data.
2. a. Remove the top-most item (the largest) and replace it with the last element in the heap.
   b. Re-heapify the complete binary tree.
   c. Place the deleted node in the output.
3. Continue step 2 until the heap tree is empty.
Algorithm:
This algorithm sorts the elements a[1..n]. Heap sort rearranges them in-place in non-decreasing order. First transform the elements into a heap.
heapsort(a, n)
{
    heapify(a, n);
    for i = n to 2 by – 1 do
    {
        temp = a[i];
        a[i] = a[1];
        a[1] = temp;
        adjust(a, 1, i – 1);
    }
}
heapify(a, n)
// Readjust the elements in a[1..n] to form a heap.
{
    for i = ⎣ n/2 ⎦ to 1 by – 1 do
        adjust(a, i, n);
}
adjust(a, i, n)
// The complete binary trees with roots a[2*i] and a[2*i + 1] are combined with a[i]
// to form a single heap, 1 ≤ i ≤ n. No node has an address greater than n or less than 1.
{
    j = 2 * i;
    item = a[i];
    while (j <= n) do
    {
        if ((j < n) and (a[j] < a[j + 1])) then j = j + 1;
        // compare the left and right children and let j be the larger child
        if (item >= a[j]) then break;   // a position for item is found
        a[ ⎣ j/2 ⎦ ] = a[j];            // move the larger child up a level
        j = 2 * j;
    }
    a[ ⎣ j/2 ⎦ ] = item;
}
Time Complexity:
Thus, for 'n' elements heap sort takes O(n log n) time; the priority-queue sorting algorithm runs in O(n log n) time when we use a heap to implement the priority queue.
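
A compilable 0-indexed C sketch of the same procedure (max-heap based; adjust is the usual sift-down operation):

/* Sift a[i] down within a[0..n-1] so the subtree rooted at i is a max-heap. */
static void adjust(int a[], int i, int n)
{
    int item = a[i];
    int j = 2 * i + 1;                 /* left child */
    while (j < n)
    {
        if (j + 1 < n && a[j] < a[j + 1])
            j++;                       /* pick the larger child */
        if (item >= a[j])
            break;
        a[(j - 1) / 2] = a[j];         /* move the larger child up a level */
        j = 2 * j + 1;
    }
    a[(j - 1) / 2] = item;
}

void heapsort(int a[], int n)
{
    /* build the initial max-heap */
    for (int i = n / 2 - 1; i >= 0; i--)
        adjust(a, i, n);
    /* repeatedly move the maximum to the end and re-heapify the rest */
    for (int i = n - 1; i >= 1; i--)
    {
        int temp = a[i];
        a[i] = a[0];
        a[0] = temp;
        adjust(a, 0, i);
    }
}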

Ques 24 Write a short note on radix sort.

OR

Explain radix sort.


AKTU 2017-18, Marks 3.5
Answer:
Radix sort is a sorting technique that sorts the elements by first
grouping the individual digits of the same place value. Then, sort the
elements according to their increasing/decreasing order.
Suppose we have an array of 8 elements. First, we sort the elements based on the value of the units place. Then, we sort the elements based on the value of the tens place. This process goes on until the most significant place.
Algorithm:
radixSort(array)
    d <- number of digits in the largest element
    create 10 buckets (digits 0-9)
    for i <- 1 to d
        sort the elements according to the ith place digits using countingSort

countingSort(array, place)
    initialize the count array with all zeros
    for j <- 0 to size
        find the total count of each unique digit in the place-th position of the elements and
        store the count at the corresponding index in the count array
    for i <- 1 to 9
        find the cumulative sum and store it in the count array itself
    for j <- size down to 1
        restore the elements to the array
        decrease the count of each restored element by 1
Time Complexity:
The time complexity of radix sort is O(d·(n + b)) and the space complexity is O(n + b), where n is the number of elements, d is the number of digits in the largest element, and b is the base of the representation (b = 10 for decimal digits).
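
A compilable C sketch of LSD radix sort that uses counting sort for each decimal digit (the size of the output buffer is an assumed bound for this sketch):

#include <stdio.h>

/* Stable counting sort of arr[0..n-1] by the digit at the given decimal
   place (place = 1 for units, 10 for tens, ...). */
static void countingSortByDigit(int arr[], int n, int place)
{
    int output[64];                        /* assumed upper bound on n */
    int count[10] = {0};

    for (int i = 0; i < n; i++)
        count[(arr[i] / place) % 10]++;
    for (int d = 1; d < 10; d++)
        count[d] += count[d - 1];          /* cumulative counts */
    for (int i = n - 1; i >= 0; i--)       /* backwards keeps the sort stable */
    {
        int d = (arr[i] / place) % 10;
        output[--count[d]] = arr[i];
    }
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

/* LSD radix sort for non-negative integers. */
void radixSort(int arr[], int n)
{
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max)
            max = arr[i];
    for (int place = 1; max / place > 0; place *= 10)
        countingSortByDigit(arr, n, place);
}

int main(void)
{
    int a[] = {121, 432, 564, 23, 1, 45, 788, 8};
    radixSort(a, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}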

Describe the two-way merge sort method. Explain the complexities of the merge sort method.
Answer:
Two-way merge sort is a divide and conquer method: the list is repeatedly divided into two halves, each half is sorted, and the two sorted halves are merged into a single sorted list by repeatedly taking the smaller of the two front elements.
Time Complexity:
Best Case Complexity: O(n*log n)
Worst Case Complexity: O(n*log n)
Average Case Complexity: O(n*log n)
Space Complexity:
The space complexity of merge sort is O(n).

Ques 21 Write an algorithm for merge sorting. Using the algorithm sort in
ascending order : 10, 25, 16, 5, 35, 48, 8
AKTU 2014-15, Marks 10
Answer:
Merge Sort Program:
#include <stdio.h>

void merge(int a[], int p, int q, int r);

void mergeSort(int a[], int p, int r)
{
    int q;
    if (p < r)
    {
        q = (p + r) / 2;
        mergeSort(a, p, q);
        mergeSort(a, q + 1, r);
        merge(a, p, q, r);
    }
}

void merge(int a[], int p, int q, int r)
{
    int b[r - p + 1];   /* temporary array for the merged sub-range */
    int i, j, k;
    k = 0;
    i = p;
    j = q + 1;
    while (i <= q && j <= r)
    {
        if (a[i] < a[j])
            b[k++] = a[i++];
        else
            b[k++] = a[j++];
    }
    while (i <= q)
        b[k++] = a[i++];
    while (j <= r)
        b[k++] = a[j++];
    for (i = r; i >= p; i--)
        a[i] = b[--k];    /* copy the merged result back */
}

void printArray(int a[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int arr[] = {10, 25, 16, 5, 35, 48, 8};
    int len = sizeof(arr) / sizeof(arr[0]);
    printf("Given array: \n");
    printArray(arr, len);

    mergeSort(arr, 0, len - 1);

    printf("\nSorted array: \n");
    printArray(arr, len);
    return 0;
}
Ques 22 How do you calculate the complexity of sorting algorithms? Also,
write a recursive function in ‘C’ to implement the merge sort on a given
set of integers.
AKTU 2015-16, Marks 10
Answer:
The complexity of a sorting algorithm is calculated by counting its basic operations (key comparisons and data movements) as a function of the input size n, and expressing their growth rate in asymptotic notation for the best, average and worst cases. For example, merge sort always performs on the order of n·log₂ n comparisons, so its complexity is O(n log n) in every case.
Merge Sort Recursive Code:
#include <stdio.h>

void merge(int arr[], int p, int q, int r) {
    int n1 = q - p + 1;
    int n2 = r - q;
    int L[n1], M[n2];
    for (int i = 0; i < n1; i++)
        L[i] = arr[p + i];
    for (int j = 0; j < n2; j++)
        M[j] = arr[q + 1 + j];
    int i, j, k;
    i = 0;
    j = 0;
    k = p;
    while (i < n1 && j < n2) {
        if (L[i] <= M[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = M[j];
            j++;
        }
        k++;
    }
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2) {
        arr[k] = M[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        // m is the point where the array is divided into two subarrays
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        // Merge the sorted subarrays
        merge(arr, l, m, r);
    }
}

void printArray(int arr[], int size) {
    for (int i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}
