Lab 09


Learning Outcomes:
After successfully completing this lab, students will be able to:
1. Understand the divide-and-conquer mechanism at the heart of the two
recursive sorting algorithms.
2. Develop programming solutions for Merge Sort and Quick Sort.
3. Empirically compare the performance of these two sorting algorithms.

In Lab Tasks
In Lab Task # 01:
You are given skeleton code for this lab. Your task is to complete the merge() and
partition() functions for Merge Sort and Quick Sort respectively.
Code in C language:

For Quick Sort:


void quick_sort(int * ptr_array, int start_idx, int end_idx)
{
    if(start_idx < end_idx)
    {
        int pivot_idx = partition(ptr_array, start_idx, end_idx);

        quick_sort(ptr_array, start_idx, pivot_idx-1); /// Sort the left part
        quick_sort(ptr_array, pivot_idx+1, end_idx);   /// Sort the right part
    }
}

For Partition:
int partition(int * ptr_array, int start_idx, int end_idx)
{
    int pivot = *(ptr_array+end_idx); /// Last element serves as the pivot

    int pivot_idx = start_idx-1;
    for(int i=start_idx; i<=(end_idx-1); i++)
    {
        if(*(ptr_array+i) <= pivot)
        {
            pivot_idx++;
            swap(ptr_array+i, ptr_array+pivot_idx);
        }
    }
    swap(ptr_array+pivot_idx+1, ptr_array+end_idx); /// Move the pivot into its final position
    return(pivot_idx+1);
}

For Merge Sort:


void merge_sort(int * ptr_array, int array_size)
{
    if(array_size <= 1) /// Base case: arrays of size 0 or 1 are already sorted
        return;

    int size1 = array_size/2;

    int size2;
    if(array_size % 2 == 1) /// Also handle the case where the array size is odd
        size2 = (array_size/2)+1;
    else
        size2 = (array_size/2);
    //print_array(ptr_array, size1);
    //print_array(ptr_array+(array_size/2), size2);
    merge_sort(ptr_array, size1);
    merge_sort(ptr_array+(array_size/2), size2);
    merge(ptr_array, size1, ptr_array+(array_size/2), size2);
    return;
}
void merge(int * ptr_arrA, int sizeA, int * ptr_arrB, int sizeB)
{
    /// Note: assumes B immediately follows A in memory, which is how
    /// merge_sort() calls this function
    int * ptr_arrC = (int *) malloc(sizeof(int)*(sizeA+sizeB)); /// Temporary array to hold the sorted data
    int indexA = 0; /// to keep track of the elements in A
    int indexB = 0; /// to keep track of the elements in B
    int indexC = 0;
    while((indexA < sizeA)&&(indexB < sizeB))
    {
        if((*(ptr_arrA+indexA)) > (*(ptr_arrB+indexB))) /// if element in A is greater than element in B
        {
            *(ptr_arrC + indexC) = *(ptr_arrB+indexB); /// Place the smaller element at the start of C
            indexB++;
            indexC++;
        }
        else
        {
            *(ptr_arrC + indexC) = *(ptr_arrA+indexA); /// Place the smaller element at the start of C
            indexA++;
            indexC++;
        }
    }
    /// At this stage either A or B will have no elements left
    while(indexA < sizeA)
    {
        *(ptr_arrC + indexC) = *(ptr_arrA+indexA);
        indexA++;
        indexC++;
    }
    while(indexB < sizeB)
    {
        *(ptr_arrC + indexC) = *(ptr_arrB+indexB);
        indexB++;
        indexC++;
    }
    /// Now the elements are in sorted order in C.
    /// Time to plug them back into the original array
    for(indexC = 0; indexC < (sizeA+sizeB); indexC++)
        *(ptr_arrA + indexC) = *(ptr_arrC+indexC);
    free(ptr_arrC); /// Release the previously allocated memory
    return;
}

For Swap:
void swap(int * ptr_num1, int * ptr_num2)
{
    int temp;
    temp = *(ptr_num1);
    *(ptr_num1) = *(ptr_num2);
    *(ptr_num2) = temp;
}
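Learning outcome 3 asks for an empirical comparison. A minimal timing harness along the following lines can be used; note that the harness, its comparator, and the function name time_sort are our own additions, not part of the lab skeleton. The standard library qsort is only a stand-in for the sort under test; substitute quick_sort(arr, 0, n-1) or merge_sort(arr, n) from above at the marked line.

```c
#include <stdlib.h>
#include <time.h>

/// Comparator for the stand-in qsort call
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/// Fill an array with random values, time one sort of it, return seconds
double time_sort(int n)
{
    int *arr = (int *)malloc(sizeof(int) * n);
    for (int i = 0; i < n; i++)
        arr[i] = rand();

    clock_t start = clock();
    qsort(arr, n, sizeof(int), cmp_int); /// substitute quick_sort(arr, 0, n-1) or merge_sort(arr, n)
    clock_t end = clock();

    free(arr);
    return (double)(end - start) / CLOCKS_PER_SEC;
}
```

Calling time_sort for n = 1000, 10000, 100000 (with a fixed srand seed for repeatability) gives one timing column per algorithm, which can then be tabulated for the comparison.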

Output:

Post Lab Task:


Study and perform a comparative analysis of the different sorting algorithms we have
implemented in the current and previous labs.
Answer:
Comparative Analysis of Sorting Algorithms

1. Bubble Sort
Complexity: Average and worst-case time complexity of O(n^2), making it
inefficient for large data sets.
Memory: In-place with O(1) extra space.
Stability: Stable; does not change the relative order of elements with equal keys.
Usage: Seldom used in practice due to poor efficiency, but simple to understand and
implement.
Advantage: Can detect that the list is already sorted: if a full pass makes no swaps, it can terminate early.
Disadvantage: Slow for large datasets due to quadratic growth in comparison and
swap operations.
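The previous lab's bubble sort code is not reproduced here; a minimal sketch consistent with the description above might look like the following (the swapped flag implements the "no swaps" early-exit mentioned; the function name is our own):

```c
#include <stdbool.h>

/// Bubble sort with early exit: if a full pass performs no swaps,
/// the array is already sorted and the loop stops (best case O(n)).
void bubble_sort(int *arr, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) /// last i elements are already in place
        {
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped) /// no swaps this pass: sorted, stop early
            break;
    }
}
```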

2. Selection Sort
Complexity: Always runs in O(n^2) time complexity, regardless of the input order.
Memory: In-place with O(1) extra space.
Stability: Unstable unless additional steps are taken to preserve stability.
Usage: Useful when memory writes are a costly operation since it minimizes the
number of swaps.
Advantage: Uncomplicated and has a performance advantage over more complex
algorithms in cases with small or medium-sized arrays.
Disadvantage: Inefficiency in sorting large lists; performance generally worse than
Insertion Sort.
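A minimal selection sort sketch illustrating the "few memory writes" point above: each outer pass does at most one swap (the function name is our own, not from the lab code):

```c
/// Selection sort: find the minimum of the unsorted suffix and swap it
/// into place; at most n-1 swaps in total, regardless of input order.
void selection_sort(int *arr, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min_idx = i;
        for (int j = i + 1; j < n; j++) /// scan for the smallest remaining element
            if (arr[j] < arr[min_idx])
                min_idx = j;
        if (min_idx != i) /// one swap per pass keeps memory writes low
        {
            int temp = arr[i];
            arr[i] = arr[min_idx];
            arr[min_idx] = temp;
        }
    }
}
```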

3. Insertion Sort
Complexity: O(n^2) in the average and worst cases, but can perform as efficiently as
O(n) in the best case.
Memory: In-place with O(1) extra space.
Stability: Stable, maintaining the relative order of records with equal keys.
Usage: Highly effective for small datasets or arrays that are already partially sorted.
Advantage: Simple implementation, efficient for small data sets, adaptive, and can
sort a list as it receives it.
Disadvantage: Poor choice for large unsorted data sets due to quadratic scaling.
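A minimal insertion sort sketch showing why nearly-sorted input approaches O(n): the inner while loop exits immediately when no shifting is needed (the function name is our own):

```c
/// Insertion sort: grow a sorted prefix one element at a time by
/// shifting larger elements right and inserting the current key.
void insertion_sort(int *arr, int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) /// shift elements greater than key
        {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key; /// drop key into its slot in the sorted prefix
    }
}
```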

4. Merge Sort
Complexity: Guarantees O(n log n) time complexity in all cases.
Memory: Not in-place, requires O(n) extra space.
Stability: Stable, preserves the input order of equal elements.
Usage: Preferred for sorting linked lists and large datasets where stability is important.
Advantage: Predictable and good for external sorting (sorting massive files).
Disadvantage: Requires additional memory proportional to the array size.
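The point about linked lists can be made concrete: on a list, the merge step needs no temporary array at all, since nodes can simply be relinked, removing the O(n) extra-space disadvantage. A sketch (the node type and function are our own illustration, not part of the lab code):

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/// Merge two already-sorted singly linked lists in O(1) extra space by
/// relinking nodes; this is why Merge Sort suits linked lists so well.
struct node *merge_lists(struct node *a, struct node *b)
{
    struct node dummy = {0, NULL}; /// dummy head simplifies the relinking
    struct node *tail = &dummy;
    while (a != NULL && b != NULL)
    {
        if (a->value <= b->value) /// <= keeps the merge stable
        {
            tail->next = a;
            a = a->next;
        }
        else
        {
            tail->next = b;
            b = b->next;
        }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b; /// append whichever list remains
    return dummy.next;
}
```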

5. Quick Sort
Complexity: O(n log n) average time complexity, but can degrade to O(n^2) if the
pivot selection is poor.
Memory: In-place sort with a space complexity of O(log n) due to recursion.
Stability: Typically unstable without complex additional measures.
Usage: Frequently used, especially where space complexity holds more importance
than stability.
Advantage: Extremely efficient for large datasets with the proper pivot.
Disadvantage: Worst-case performance is poor, but can be mostly avoided with
careful pivot choices like using randomization.
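The randomization mentioned above can be sketched as follows: pick a random index, swap that element into the pivot slot, then run the same Lomuto partition used in the in-lab task (the function names here are our own, not part of the lab skeleton):

```c
#include <stdlib.h>

static void swap_ints(int *a, int *b)
{
    int t = *a; *a = *b; *b = t;
}

/// Lomuto partition as in the lab code, but with the pivot chosen
/// uniformly at random and swapped into the last slot first.
static int randomized_partition(int *arr, int start_idx, int end_idx)
{
    int r = start_idx + rand() % (end_idx - start_idx + 1);
    swap_ints(arr + r, arr + end_idx); /// random element becomes the pivot

    int pivot = arr[end_idx];
    int pivot_idx = start_idx - 1;
    for (int i = start_idx; i < end_idx; i++)
        if (arr[i] <= pivot)
            swap_ints(arr + i, arr + (++pivot_idx));
    swap_ints(arr + pivot_idx + 1, arr + end_idx);
    return pivot_idx + 1;
}

void randomized_quick_sort(int *arr, int start_idx, int end_idx)
{
    if (start_idx < end_idx)
    {
        int p = randomized_partition(arr, start_idx, end_idx);
        randomized_quick_sort(arr, start_idx, p - 1);
        randomized_quick_sort(arr, p + 1, end_idx);
    }
}
```

With a random pivot, no fixed input (such as an already-sorted array) can consistently trigger the unbalanced O(n^2) splits.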

Discussion And Conclusion:


Discussion:
Merge Sort:
Stability: Merge Sort is a stable sorting algorithm, meaning that it maintains the
relative order of records with equal keys (values). This is particularly useful in
scenarios where the preservation of original order is critical, such as in multi-level
sorting applications.
Time Complexity: It has a guaranteed time complexity of O(n log n) in all cases,
making it very predictable. This is advantageous in real-time systems where time
predictability is crucial.
Space Complexity: Merge Sort is not an in-place sorting algorithm, as it requires
additional space proportional to the array size (i.e., O(n)) for its temporary arrays.
This can be a drawback in memory-constrained environments.

Quick Sort:
Time Efficiency: Quick Sort is generally faster in practice than other O(n log n)
algorithms like Merge Sort, especially for large datasets, due to its superior locality of
reference and cache performance.
Worst Case: The worst-case performance of Quick Sort (i.e., O(n^2)) occurs in rare
scenarios, such as when the smallest or largest element is consistently chosen as the
pivot. However, this can be largely mitigated by using randomized or "median-of-three"
pivot selection strategies.
In-Place Sorting: Unlike Merge Sort, Quick Sort does not require additional storage
proportional to the size of the input array, making it more space-efficient.
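The "median-of-three" strategy mentioned above can be sketched as a pre-step before the usual partition: order the first, middle, and last elements, then move the median into the pivot slot at the end (this helper is our own illustration, not part of the lab code):

```c
static void swap2(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/// Median-of-three pivot choice: sort arr[lo], arr[mid], arr[hi]
/// among themselves, then move the median into the pivot slot (end).
/// This avoids O(n^2) behaviour on already-sorted input.
void median_of_three(int *arr, int lo, int hi)
{
    int mid = lo + (hi - lo) / 2;
    if (arr[mid] < arr[lo]) swap2(arr + mid, arr + lo);
    if (arr[hi]  < arr[lo]) swap2(arr + hi,  arr + lo);
    if (arr[hi]  < arr[mid]) swap2(arr + hi, arr + mid); /// now arr[mid] holds the median
    swap2(arr + mid, arr + hi); /// place the median in the pivot slot
}
```

Calling this immediately before the lab's partition() makes the pivot the median of three samples rather than always the last element.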

Conclusion:
In this lab, we conclude that evaluating sorting algorithms in terms of time
complexity, space requirements, and stability is crucial for selecting the appropriate
method for a given application, ensuring optimal performance and resource utilization.
