
Efficiency of sorting algorithms

Doda Denis-Mircea
June 2021

Abstract

This paper compares sorting algorithms on their efficiency and memory
consumption. It first defines sorting algorithms and summarizes how each
algorithm works, together with its time and memory complexity. It then
describes how the testing is carried out, presents the tests themselves,
and finally reports the results and the conclusion.

Contents

1 Motivation
2 Introduction
3 Sorting algorithms
   3.1 Insertion sort
   3.2 Merge sort
   3.3 Bubble sort
   3.4 Quick sort
   3.5 Heap sort
   3.6 Radix sort
4 Testing
   4.1 Stage 1
   4.2 Stage 2
   4.3 Stage 3
5 Conclusion

1 Motivation
The motivation behind this paper is that sorting is an important topic in the
computer science world, for the sole reason of the usefulness of sorting
algorithms and their many variations. There are many sorting algorithms, and
there are instances where some of them are redundant or simply not efficient
enough. In this paper I will therefore compare some of the best-known and most
widely used sorting algorithms in the computer science world.

2 Introduction
A sorting algorithm is a method for reorganizing a large number of items into a
specific order, such as alphabetical, highest-to-lowest value or shortest-to-longest
distance. Sorting algorithms take lists of items as input data, perform specific
operations on those lists and deliver ordered arrays as output.[1] It is essential
to explain the method that professionals use to analyze and assess algorithm
complexity and performance. The current standard is called "Big O notation",
named after its notation, which is an "O" followed by a function such as
"O(n)". Big O is used to denote either the time complexity of an algorithm or
how much space it takes up.[2]
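As a small illustration of how Big O describes growth, the sketch below counts the basic operations performed by a single loop versus a nested loop. The helper functions are illustrative only, not part of any standard library:

```cpp
#include <cstddef>

// One pass over n items: the operation count grows as O(n).
std::size_t linear_ops(std::size_t n) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; i++)
        ops++;
    return ops;
}

// A nested pass over n items: the operation count grows as O(n^2).
std::size_t quadratic_ops(std::size_t n) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; i++)
        for (std::size_t j = 0; j < n; j++)
            ops++;
    return ops;
}
```

Doubling n doubles the work of the first function but quadruples the work of the second, which is exactly the difference Big O captures.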

3 Sorting algorithms
Here I will list the sorting algorithms I will be testing:
• Insertion sort
• Merge sort
• Bubble sort
• Quick sort

• Heap sort
• Radix sort

3.1 Insertion sort


Insertion sort is a sorting algorithm that places an unsorted element at its
suitable place in each iteration. It works the same way a hand of playing
cards gets sorted.

void insertionS(int a[], int n)
{
    int k;
    for (int i = 1; i < n; i++)
    {
        k = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > k)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = k;
    }
}
Time complexity is O(n²) and memory complexity is O(1).
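A quick self-contained check of the routine (the routine is reproduced from the listing above; the sample values are arbitrary):

```cpp
#include <algorithm>

// insertionS reproduced from the listing above so this sketch compiles on its own.
void insertionS(int a[], int n)
{
    int k;
    for (int i = 1; i < n; i++)
    {
        k = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > k)
        {
            a[j + 1] = a[j];   // shift larger elements one slot to the right
            j--;
        }
        a[j + 1] = k;          // drop the saved element into its place
    }
}
```

Calling `insertionS` on `{5, 2, 4, 6, 1, 3}` yields `{1, 2, 3, 4, 5, 6}`, which `std::is_sorted` confirms.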

3.2 Merge sort


Merge sort is a divide-and-conquer algorithm: it divides the array into two
halves, calls itself recursively on each half, and then, as the name suggests,
merges them into a sorted array. It works by finding the middle point of the
array and dividing it into two halves; it then calls itself for the first half
and for the second half, and finally merges the two together.
void merge(int a[], int le, int mid, int ri)
{
    int n1 = mid - le + 1, n2 = ri - mid, i, j;
    int* Left = new int[n1], * Right = new int[n2];
    for (i = 0; i < n1; i++)
        Left[i] = a[le + i];
    for (j = 0; j < n2; j++)
        Right[j] = a[mid + 1 + j];
    i = 0;
    j = 0;
    int k = le;
    while (i < n1 && j < n2)
    {
        if (Left[i] <= Right[j])
        {
            a[k] = Left[i];
            i++;
        }
        else
        {
            a[k] = Right[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = Left[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        a[k] = Right[j];
        j++;
        k++;
    }
    delete[] Left;
    delete[] Right;
}

void mergesort(int a[], int le, int ri)
{
    if (le >= ri)
        return;
    int mid = le + (ri - le) / 2;
    mergesort(a, le, mid);
    mergesort(a, mid + 1, ri);
    merge(a, le, mid, ri);
}
Time complexity is O(n log n) and memory complexity is O(n).

3.3 Bubble sort


Bubble sort is one of the easiest sorting algorithms and, for many people,
including myself, the very first sorting algorithm they learn, because of its
simplicity. It repeatedly swaps adjacent elements until they are in the correct order.
void bubblesort(int a[], int n)
{
    int aux;
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j + 1])
            {
                aux = a[j];
                a[j] = a[j + 1];
                a[j + 1] = aux;
            }
}
Time complexity is O(n²) and memory complexity is O(1).

3.4 Quick sort


Quick sort is a divide and conquer algorithm. It works by selecting a ’pivot’
element from the array and partitioning the other elements into two sub-arrays,
according to whether they are less than or greater than the pivot.
int part(int a[], int under, int over)
{
    int piv = a[over], i = under - 1;
    for (int j = under; j < over; j++)   // j < over: a[over] itself is the pivot
    {
        if (a[j] < piv)
        {
            i++;
            int aux = a[i];
            a[i] = a[j];
            a[j] = aux;
        }
    }
    int auxp = a[i + 1];
    a[i + 1] = a[over];
    a[over] = auxp;
    return i + 1;
}

void quickS(int a[], int under, int over)
{
    if (under < over)
    {
        int p = part(a, under, over);
        quickS(a, under, p - 1);
        quickS(a, p + 1, over);
    }
}
Time complexity is O(n log n) on average (O(n²) in the worst case) and memory complexity is O(log n) for the recursion stack.

3.5 Heap sort


Heap sort is a comparison-based algorithm built on the binary heap data structure.
It first builds a max-heap, then repeatedly moves the largest element to the end
of the array and restores the heap property, until the whole array is sorted.
void heap(int a[], int n, int i)
{
    int max = i, left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && a[left] > a[max])
        max = left;
    if (right < n && a[right] > a[max])
        max = right;
    if (max != i)
    {
        int aux = a[i];
        a[i] = a[max];
        a[max] = aux;
        heap(a, n, max);
    }
}

void heapS(int a[], int n)
{
    int i;
    for (i = n / 2 - 1; i >= 0; i--)
        heap(a, n, i);
    for (i = n - 1; i >= 0; i--)
    {
        int aux = a[0];
        a[0] = a[i];
        a[i] = aux;
        heap(a, i, 0);
    }
}
Time complexity is O(n log n) and memory complexity is O(1), plus O(log n) stack space for the recursive version shown here.

3.6 Radix sort


Radix sort sorts numbers by their digits: it processes the array one digit
position at a time, from the least significant digit to the most significant,
using a stable counting sort for each pass. Because it never compares whole
elements against each other, it can in theory outperform comparison-based
sorting algorithms. The implementation below assumes nonnegative integers.
int maxEl(int a[], int n)
{
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}

void countS(int a[], int n, int el)
{
    int i, c[10] = { 0 };
    int* out = new int[n];
    for (i = 0; i < n; i++)
        c[(a[i] / el) % 10]++;
    for (i = 1; i < 10; i++)
        c[i] += c[i - 1];
    for (i = n - 1; i >= 0; i--)   // backwards pass keeps the sort stable
    {
        out[c[(a[i] / el) % 10] - 1] = a[i];
        c[(a[i] / el) % 10]--;
    }
    for (i = 0; i < n; i++)
        a[i] = out[i];
    delete[] out;
}

void radixS(int a[], int n)
{
    int max = maxEl(a, n);
    for (int el = 1; max / el > 0; el *= 10)
        countS(a, n, el);
}
Time complexity here is O(d·(n + 10)), where d is the number of digits of the largest element, so roughly linear in n for fixed-width integers, and memory complexity is O(n).
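The digit examined in each pass of countS is `(value / el) % 10`, with el taking the values 1, 10, 100, and so on. A minimal sketch of that extraction (the helper name is mine, not from the listing above):

```cpp
// Extract the digit of `value` selected by `el`, where el is a power of ten:
// el = 1 gives the ones digit, el = 10 the tens digit, el = 100 the hundreds digit.
// Assumes a nonnegative value, matching the radix sort above.
int digitAt(int value, int el) {
    return (value / el) % 10;
}
```

For example, for the value 472 the three passes see the digits 2, 7 and 4 in that order.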

4 Testing
The test will have three stages. In the first stage we will give the sorting
algorithms an array of 30 elements, in the second stage an array of 150000
elements, and finally, in the last stage, 100000000 elements to sort. I will be
checking the time taken by each sorting algorithm using a program I wrote in
C++, and for the memory I will be using Visual Studio's built-in memory chart.
This is my program together with its helper functions:
#include <iostream>
#include <fstream>
#include <vector>
#include <chrono>
using namespace std;

ifstream h("input.txt");   // file holding the test elements (file name assumed)

void printArr(int a[], int n)
{
    for (int i = 0; i < n; i++)
        cout << a[i] << " ";
    cout << "\n";
}

void elements(vector<int>& a)
{
    while (!h.eof())
    {
        int x;
        h >> x;
        a.push_back(x);
    }
    h.close();
}

bool isSorted(int* vec, int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        if (vec[i + 1] < vec[i])
            return false;
    }
    return true;
}

int main()
{
    vector<int> a;
    elements(a);
    auto start = chrono::high_resolution_clock::now();
    ios_base::sync_with_stdio(false);
    //insertionS(a.data(), a.size());
    //mergesort(a.data(), 0, a.size() - 1);
    //bubblesort(a.data(), a.size());
    //quickS(a.data(), 0, a.size() - 1);
    //heapS(a.data(), a.size());
    //radixS(a.data(), a.size());
    auto end = chrono::high_resolution_clock::now();
    if (isSorted(a.data(), a.size()))
        printArr(a.data(), a.size());
    else
        cout << "False" << "\n";
    double time_taken = chrono::duration_cast<chrono::nanoseconds>(end - start).count();
    time_taken *= 1e-9;
    cout << "Time taken by program is " << fixed << time_taken << " sec" << "\n";
    return 0;
}
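The paper does not show how the input file is produced; a sketch of a generator for it might look like the following (the file name, value range and fixed seed are all assumptions):

```cpp
#include <fstream>
#include <random>

// Write `count` random nonnegative integers, one per line, to `path`.
// The fixed seed makes every run produce the same file, so timing
// comparisons between algorithms use identical input.
void writeRandomInput(const char* path, int count) {
    std::mt19937 gen(42);                               // fixed seed (assumed)
    std::uniform_int_distribution<int> dist(0, 1000000); // value range (assumed)
    std::ofstream out(path);
    for (int i = 0; i < count; i++)
        out << dist(gen) << "\n";
}
```

Calling `writeRandomInput("input.txt", 30)` would produce the 30-element file for stage 1, and likewise for the larger stages.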

4.1 Stage 1
• Insertion sort - 0.000002 s and 1.8 MB
• Merge sort - 0.00001 s and 2.5 MB
• Bubble sort - 0.000003 s and 2.0 MB
• Quick sort - 0.000003 s and 2.0 MB
• Heap sort - 0.000003 s and 2.0 MB
• Radix sort - 0.000007 s and 2.0 MB

From here we draw the conclusion that insertion sort is clearly the winner in
both time and memory, with bubble, quick and heap sort all coming in at exactly
the same time and memory usage, and radix sort matching their memory usage but
running 4·10⁻⁶ seconds slower than them. Finally, merge sort is the worst in
terms of both sorting speed and space consumption.

4.2 Stage 2
• Insertion sort - 2.242139 s and 2.0 MB
• Merge sort - 0.037324 s and 2.5 MB
• Bubble sort - 29.947037 s and 2.0 MB
• Quick sort - 0.008196 s and 2.0 MB
• Heap sort - 0.015364 s and 2.0 MB
• Radix sort - 0.020011 s and 2.0 MB

From this second stage we can confidently say that things are very different,
now that the array is 5000 times larger. We can see from the results that some
algorithms that did fairly well in the first stage did really poorly here,
namely bubble and insertion sort, with insertion sort even increasing its
memory consumption. The fastest sorting algorithm here is by far quick sort,
followed by heap sort, with radix sort coming third.

4.3 Stage 3
• Insertion sort - 3.537317 s and 2.1 MB
• Merge sort - 0.043789 s and 2.5 MB
• Bubble sort - 48.420790 s and 2.0 MB
• Quick sort - 0.010796 s and 2.0 MB
• Heap sort - 0.020446 s and 2.0 MB
• Radix sort - 0.024361 s and 3.0 MB

From the last and final stage we can observe that quick sort still outdid its
competitors, scoring the fastest time and the best memory consumption. The
podium stays consistent, with heap and radix sort following in terms of time,
but in terms of space radix sort scored the worst by consuming a whole 3 MB.
By far the worst time, though, belongs to bubble sort, with its terrible
sorting time of almost one minute.

5 Conclusion
The conclusions here are that:
• Insertion sort is the best algorithm to use on arrays with a small number of
elements, and its code is easy to learn.
• Merge sort is a consistent sorting algorithm, but there are better
alternatives, since its code is fairly hard to write.
• Bubble sort has good space and time behavior for small arrays, but outside
of that it should never be used on arrays of more than 1000 elements.
• Quick sort is the best algorithm to use in general; the only downside I can
see is that its code is not so easy to learn.
• Heap sort is another consistent sorting algorithm, but it is probably the
second hardest, speaking purely from a coding standpoint.
• Radix sort is a pretty good sorting algorithm, not as good as heap sort or
quick sort but faster than merge and insertion sort; its only drawbacks are
that it is the hardest algorithm to code and its space consumption.

From my experiment I found that quick sort is the fastest algorithm from
150000 elements upward. It remained stable and consistent in its space and
time behavior. The slowest algorithm in my tests is bubble sort, which ended
up taking almost an entire minute on 100000000 elements. In my opinion the
best sorting algorithm to use is either quick sort or insertion sort: quick
sort for the same reasons it got my praise a few paragraphs above, and
insertion sort for its simplicity and good-enough results.

References
[1] Ivy Wigmore, "Sorting algorithm", WhatIs.com.
[2] Leonardo Galler and Matteo Kimura, "Sorting algorithms".
