Lab 09
Learning Outcomes:
After successfully completing this lab, students will be able to:
1. Understand the divide-and-conquer mechanism at the heart of the two
recursive sorting algorithms.
2. Develop programming solutions for Merge Sort and Quick Sort.
3. Empirically compare the performance of these two sorting algorithms.
In Lab Tasks
In Lab Task # 01:
You are given skeleton code for this lab. Your task is to complete the merge() and
partition() functions for Merge Sort and Quick Sort respectively.
Code in C language:
For Partition:
int partition(int * ptr_array, int start_idx, int end_idx)
{
}
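One possible completion of the skeleton (a sketch, not the only valid answer) is the Lomuto scheme, which uses the last element as the pivot; the swap() helper below is the same one shown later in this handout, repeated here so the snippet is self-contained:

```c
#include <stdio.h>

/* Same helper as shown later in the handout. */
void swap(int * ptr_num1, int * ptr_num2)
{
    int temp = *ptr_num1;
    *ptr_num1 = *ptr_num2;
    *ptr_num2 = temp;
}

/* Lomuto partition: uses ptr_array[end_idx] as the pivot and returns
 * the pivot's final index. Elements <= pivot end up on its left. */
int partition(int * ptr_array, int start_idx, int end_idx)
{
    int pivot = ptr_array[end_idx];
    int i = start_idx - 1;              /* boundary of the <= pivot region */
    for (int j = start_idx; j < end_idx; j++)
    {
        if (ptr_array[j] <= pivot)
        {
            i++;
            swap(&ptr_array[i], &ptr_array[j]);
        }
    }
    swap(&ptr_array[i + 1], &ptr_array[end_idx]);
    return i + 1;                       /* pivot's sorted position */
}

/* Recursive Quick Sort driver built on partition(). */
void quick_sort(int * ptr_array, int start_idx, int end_idx)
{
    if (start_idx < end_idx)
    {
        int p = partition(ptr_array, start_idx, end_idx);
        quick_sort(ptr_array, start_idx, p - 1);
        quick_sort(ptr_array, p + 1, end_idx);
    }
}
```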
For Merge:
void merge(int * ptr_arrA, int sizeA, int * ptr_arrB, int sizeB)
{
    int indexA = 0, indexB = 0, indexC = 0;
    /// Temporary array C to hold the merged result
    int * ptr_arrC = (int *)malloc((sizeA + sizeB) * sizeof(int));
    /// Repeatedly copy the smaller front element of A or B into C
    while(indexA < sizeA && indexB < sizeB)
    {
        if(*(ptr_arrA + indexA) <= *(ptr_arrB + indexB))
        {
            *(ptr_arrC + indexC) = *(ptr_arrA + indexA);
            indexA ++;
        }
        else
        {
            *(ptr_arrC + indexC) = *(ptr_arrB + indexB);
            indexB ++;
        }
        indexC ++;
    }
    /// At this stage either A or B will have no elements left
    while(indexA < sizeA)
    {
        *(ptr_arrC + indexC) = *(ptr_arrA + indexA);
        indexA ++;
        indexC ++;
    }
    while(indexB < sizeB)
    {
        *(ptr_arrC + indexC) = *(ptr_arrB + indexB);
        indexB ++;
        indexC ++;
    }
    /// Now the elements are in sorted order in C.
    /// Time to plug them back into the original array
    for(indexC = 0; indexC < (sizeA + sizeB); indexC ++)
        *(ptr_arrA + indexC) = *(ptr_arrC + indexC);
    free(ptr_arrC); /// Delete the previously allocated memory
    return;
}
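The handout does not show the recursive driver that calls merge(). A minimal sketch, assuming a merge() that combines two adjacent sorted runs and writes the result back over the first (the merge shown here mirrors the handout's, compacted so the snippet compiles on its own):

```c
#include <stdlib.h>

/* Merges two adjacent sorted runs A and B through a temporary
 * array C, then copies the result back over A. */
void merge(int * ptr_arrA, int sizeA, int * ptr_arrB, int sizeB)
{
    int indexA = 0, indexB = 0, indexC = 0;
    int * ptr_arrC = (int *)malloc((sizeA + sizeB) * sizeof(int));
    while (indexA < sizeA && indexB < sizeB)
        ptr_arrC[indexC++] = (ptr_arrA[indexA] <= ptr_arrB[indexB])
                               ? ptr_arrA[indexA++] : ptr_arrB[indexB++];
    while (indexA < sizeA) ptr_arrC[indexC++] = ptr_arrA[indexA++];
    while (indexB < sizeB) ptr_arrC[indexC++] = ptr_arrB[indexB++];
    for (indexC = 0; indexC < sizeA + sizeB; indexC++)
        ptr_arrA[indexC] = ptr_arrC[indexC];
    free(ptr_arrC);
}

/* Recursive driver: split in half, sort both halves, merge them. */
void merge_sort(int * ptr_array, int size)
{
    if (size < 2)
        return;                                 /* 0 or 1 element: sorted */
    int half = size / 2;
    merge_sort(ptr_array, half);                /* left half  */
    merge_sort(ptr_array + half, size - half);  /* right half */
    merge(ptr_array, half, ptr_array + half, size - half);
}
```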
For Swap:
void swap(int * ptr_num1, int * ptr_num2)
{
int temp;
temp = *(ptr_num1);
*(ptr_num1) = *(ptr_num2);
*(ptr_num2) = temp;
}
Output:
1. Bubble Sort
Complexity: Average and worst-case time complexity of O(n^2), making it
inefficient for large data sets.
Memory: In-place with O(1) extra space.
Stability: Stable; does not change the relative order of elements with equal keys.
Usage: Seldom used in practice due to poor efficiency, but simple to understand and
implement.
Advantage: Can detect an already-sorted list early; if a full pass makes no swaps,
the algorithm stops.
Disadvantage: Slow for large datasets due to quadratic growth in comparison and
swap operations.
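The early-exit behaviour mentioned above can be sketched with a swapped flag (a common textbook variant; this code is illustrative and not part of the handout):

```c
#include <stdbool.h>

/* Bubble Sort with early exit: a pass that performs no swaps means
 * the array is sorted, giving O(n) on already-sorted input. */
void bubble_sort(int *arr, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++)
        {
            if (arr[j] > arr[j + 1])
            {
                int temp = arr[j];      /* swap adjacent pair */
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped)
            break;                      /* no swaps: already sorted */
    }
}
```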
2. Selection Sort
Complexity: Always runs in O(n^2) time, regardless of the input order.
Memory: In-place with O(1) extra space.
Stability: Unstable unless additional steps are taken to preserve stability.
Usage: Useful when memory writes are a costly operation since it minimizes the
number of swaps.
Advantage: Uncomplicated and has a performance advantage over more complex
algorithms in cases with small or medium-sized arrays.
Disadvantage: Inefficient for large lists; in practice it generally performs worse than
Insertion Sort.
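The minimal-writes property can be seen directly in a sketch: each pass performs at most one swap, so a length-n array costs at most n − 1 swaps in total (illustrative code, not part of the handout):

```c
/* Selection Sort: find the minimum of the unsorted suffix and swap it
 * into place; at most one swap (two memory writes) per pass. */
void selection_sort(int *arr, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        int min_idx = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;            /* remember smallest so far */
        if (min_idx != i)               /* swap only when needed */
        {
            int temp = arr[i];
            arr[i] = arr[min_idx];
            arr[min_idx] = temp;
        }
    }
}
```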
3. Insertion Sort
Complexity: O(n^2) in the average and worst cases, but can perform as efficiently as
O(n) in the best case.
Memory: In-place with O(1) extra space.
Stability: Stable, maintaining the relative order of records with equal keys.
Usage: Highly effective for small datasets or arrays that are already partially sorted.
Advantage: Simple implementation, efficient for small data sets, adaptive, and can
sort a list as it receives it.
Disadvantage: Poor choice for large unsorted data sets due to quadratic scaling.
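The adaptive O(n) best case comes from the inner loop doing no shifting when each element already sits past its predecessors (an illustrative sketch, not handout code):

```c
/* Insertion Sort: grow a sorted prefix by inserting arr[i] into it.
 * On already-sorted input the while loop never runs, giving O(n). */
void insertion_sort(int *arr, int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];        /* shift larger elements right */
            j--;
        }
        arr[j + 1] = key;               /* drop key into the gap */
    }
}
```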
4. Merge Sort
Complexity: Guarantees O(n log n) time complexity in all cases.
Memory: Not in-place, requires O(n) extra space.
Stability: Stable, preserves the input order of equal elements.
Usage: Preferred for sorting linked lists and large datasets where stability is important.
Advantage: Predictable and good for external sorting (sorting massive files).
Disadvantage: Requires additional memory proportional to the array size.
5. Quick Sort
Complexity: O(n log n) average time complexity, but can degrade to O(n^2) if the
pivot selection is poor.
Memory: In-place sort with a space complexity of O(log n) due to recursion.
Stability: Typically unstable without complex additional measures.
Usage: Frequently used, especially where space complexity holds more importance
than stability.
Advantage: Extremely efficient for large datasets with the proper pivot.
Disadvantage: Worst-case performance is poor, but can be mostly avoided with
careful pivot choices like using randomization.
Quick Sort:
Time Efficiency: Quick Sort is generally faster in practice than other O(n log n)
algorithms like Merge Sort, especially for large datasets, due to its superior locality of
reference and cache performance.
Worst Case: The worst-case performance of Quick Sort (i.e., O(n^2)) occurs in rare
scenarios, such as when the smallest or largest element is consistently chosen as the
pivot. However, this can be largely mitigated by using randomized or median-of-three
pivot selection strategies.
In-Place Sorting: Unlike Merge Sort, Quick Sort does not require additional storage
proportional to the size of the input array, making it more space-efficient.
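The randomized-pivot mitigation mentioned above can be sketched by swapping a randomly chosen element into the pivot position before partitioning (illustrative; the partition here is the Lomuto scheme, not necessarily the one used in the lab skeleton):

```c
#include <stdlib.h>

/* Lomuto partition around arr[end] as the pivot. */
static int lomuto_partition(int *arr, int start, int end)
{
    int pivot = arr[end], i = start - 1;
    for (int j = start; j < end; j++)
        if (arr[j] <= pivot)
        {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    int t = arr[i + 1]; arr[i + 1] = arr[end]; arr[end] = t;
    return i + 1;
}

/* Randomized Quick Sort: choosing the pivot uniformly at random makes
 * the O(n^2) worst case extremely unlikely for any fixed input. */
void randomized_quick_sort(int *arr, int start, int end)
{
    if (start >= end)
        return;
    int r = start + rand() % (end - start + 1);  /* random pivot index */
    int t = arr[r]; arr[r] = arr[end]; arr[end] = t;
    int p = lomuto_partition(arr, start, end);
    randomized_quick_sort(arr, start, p - 1);
    randomized_quick_sort(arr, p + 1, end);
}
```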
Conclusion:
In this lab, we conclude that evaluating sorting algorithms in terms of time
complexity, space requirements, and stability is crucial for selecting the appropriate
method for a given application, ensuring optimal performance and resource utilization.