
Analysis of Loops - Module One Cont’d

We have discussed Asymptotic Analysis; Worst, Average and Best Cases; and Asymptotic
Notations.

Here, the analysis of iterative programs is discussed with simple examples.

1) O(1): The time complexity of a function (or set of statements) is considered O(1) if it
doesn't contain a loop, recursion, or a call to any other non-constant-time function.

// set of non-recursive and non-loop statements

For example, the swap() function has O(1) time complexity.

void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

A loop or recursion that runs a constant number of times is also considered O(1).
For example, the following loop is O(1).

// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}

2) O(n): The time complexity of a loop is considered O(n) if the loop variable is incremented /
decremented by a constant amount.

For example, the following loops have O(n) time complexity.

// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i -= c) {
    // some O(1) expressions
}

3) O(n^c): The time complexity of nested loops is equal to the number of times the innermost
statement is executed. For example, the following sample loops have O(n^2) time
complexity.

for (int i = 1; i <= n; i += c) {
    for (int j = 1; j <= n; j += c) {
        // some O(1) expressions
    }
}
for (int i = n; i > 0; i -= c) {
    for (int j = i+1; j <= n; j += c) {
        // some O(1) expressions
    }
}

For example, Selection Sort and Insertion Sort have O(n^2) time complexity.

void selectionSort(int arr[], int n)
{
    int i, j, min_idx;

    // One by one move boundary of unsorted subarray
    for (i = 0; i < n-1; i++)
    {
        // Find the minimum element in unsorted array
        min_idx = i;
        for (j = i+1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first element
        swap(&arr[min_idx], &arr[i]);
    }
}
/* Function to sort an array using insertion sort */
void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++)
    {
        key = arr[i];
        j = i-1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key)
        {
            arr[j+1] = arr[j];
            j = j-1;
        }
        arr[j+1] = key;
    }
}
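As a usage sketch (not part of the module; printArray() is a hypothetical helper added for
illustration, and the functions above are assumed to be in the same file):

#include <stdio.h>

// Hypothetical helper to display an array
void printArray(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main(void)
{
    int a[] = {64, 25, 12, 22, 11};
    int b[] = {12, 11, 13, 5, 6};

    selectionSort(a, 5);   // O(n^2) comparisons regardless of input order
    insertionSort(b, 5);   // O(n^2) worst case, O(n) if nearly sorted

    printArray(a, 5);      // 11 12 22 25 64
    printArray(b, 5);      // 5 6 11 12 13
}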

4) O(log n): The time complexity of a loop is considered O(log n) if the loop variable is
divided / multiplied by a constant amount.

for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}

For example, Binary Search (refer to the iterative implementation) has O(log n) time
complexity. The version shown below is recursive; an iterative sketch follows it.

int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l)
    {
        int mid = l + (r - l)/2;

        // If the element is present at the middle itself
        if (arr[mid] == x) return mid;

        // If element is smaller than mid, then it can only
        // be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid-1, x);

        // Else the element can only be present in right subarray
        return binarySearch(arr, mid+1, r, x);
    }

    // We reach here when element is not present in array
    return -1;
}
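Since the text refers to the iterative implementation, here is a minimal iterative sketch
(an addition for illustration, with the same signature and behaviour as the recursive
version above):

int binarySearchIterative(int arr[], int l, int r, int x)
{
    while (l <= r)
    {
        int mid = l + (r - l)/2;   // avoids overflow of (l + r)/2

        if (arr[mid] == x)
            return mid;            // found at mid

        if (arr[mid] < x)
            l = mid + 1;           // search right half
        else
            r = mid - 1;           // search left half
    }
    return -1;                     // element not present
}

Each iteration halves the range r - l + 1 and does O(1) work, so the loop runs O(log n) times.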

5) O(log log n): The time complexity of a loop is considered O(log log n) if the loop variable
is reduced / increased exponentially by a constant amount.

// Here c is a constant greater than 1
for (int i = 2; i <= n; i = pow(i, c)) {
    // some O(1) expressions
}
// Here fun is sqrt or cuberoot or any other constant root
for (int i = n; i > 1; i = fun(i)) {
    // some O(1) expressions
}
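As a rough check (a sketch that is not part of the module; the counter, the concrete n, and
the choice c = 2 are assumptions for illustration), one can count how often the squaring
loop actually runs:

#include <stdio.h>

int main(void)
{
    long long n = 1000000000LL;   // 10^9
    int count = 0;

    // i = i*i corresponds to pow(i, c) with c = 2:
    // i takes the values 2, 4, 16, 256, 65536, ...
    for (long long i = 2; i <= n; i = i * i)
        count++;

    printf("iterations: %d\n", count);   // prints 5 for n = 10^9
}

After k iterations i = 2^(2^k), so the loop stops once 2^(2^k) > n, i.e. after about
log log n iterations (log2(log2(10^9)) is roughly 4.9, matching the count of 5).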

How to combine time complexities of consecutive loops?

When there are consecutive loops, we calculate the time complexity as the sum of the time
complexities of the individual loops.

for (int i = 1; i <= m; i += c) {
    // some O(1) expressions
}
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}

The time complexity of the above code is O(m) + O(n), which is O(m+n).
If m == n, the time complexity becomes O(2n), which is O(n).

General rules for Analyzing Programs

There are no complete sets of rules for program analysis, but the following will help:
1) The running time of a read, write or assignment statement is O(1), with the exception of
languages that allow function calls in assignment statements.
2) The running time of a sequence of statements is calculated using the sum rule.
3) The running time of an if statement is the time cost of executing the conditionally
executed statements plus the cost of evaluating the condition. The cost of evaluating the
condition is taken as O(1). For evaluating an if-then-else statement, the cost is the time to
evaluate the condition plus the larger of the times to execute the statements when the
condition is true and when the condition is false.

4) For a loop, the running time is the sum of the times to execute the body of the loop and
the time to evaluate the condition for termination (which is O(1)). The time is at most the
product of the number of iterations of the loop and the largest possible time for one
execution of the body, though each loop has to be considered separately.

5) For a program that has procedures/functions which are not recursive, we first compute the
running times of the called procedures that call no other procedures, and then compute the
running times of the procedures/functions that call them, using the running times already
computed. If we have recursive procedures, we need to associate an unknown time function
T(n) with each recursive procedure, where n is the size of the arguments. We then get a
recurrence for T(n), that is, an equation for T(n) in terms of T(k) for different values of k,
as sketched below.
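For instance (a minimal sketch that is not from the module; the factorial example is an
assumption chosen for illustration), a recursive procedure gives rise to a recurrence:

// Each call does O(1) work plus one recursive call on input of
// size n-1, so T(n) = T(n-1) + O(1) with T(1) = O(1).
// Unrolling gives T(n) = T(1) + c*(n-1), i.e. O(n) time.
long long fact(long long n)
{
    if (n <= 1)        // base case: constant time
        return 1;
    return n * fact(n - 1);
}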

Let us consider the following examples:


1. Analysis of Selection Sort
Consider the selection sort below:

Selection_Sort(A[ ])
1. for j = 1 to n – 1
   begin
2.    min = j
3.    for i = j + 1 to n do
4.       if A[i] < A[min] then min = i
5.    temp = A[j]
6.    A[j] = A[min]
7.    A[min] = temp
   end
Solution
Lines 5, 6, 7 each take O(1).
Therefore, lines 5-7 take O(max(O(1), O(1), O(1))) = O(1).
For the worst case, we assume the whole of line 4 will be executed; therefore, line 4 takes
O(1) + O(1) = O(1).
The loop in line 3 is executed (n – j) times; therefore, the running time of lines 3-4 is
O((n – j) * 1) = O(n – j).
Line 2 takes O(1).
Lines 2-4 take O(max((n – j), 1)) = O(n – j).
Lines 2-7 take O(max((n – j), 1)) = O(n – j).
The loop in line 1 is executed n – 1 times.
Therefore, lines 1-7 take

∑_{j=1}^{n-1} (n – j) = (n – 1) + (n – 2) + (n – 3) + … + 1

Using the arithmetic series sum S_n = (n/2)(a + l), with n – 1 terms, first term a = n – 1
and last term l = 1:

S_{n-1} = ((n – 1)/2)(n – 1 + 1) = ((n – 1)/2) · n
= O(n^2)
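As a quick empirical check (a sketch that is not part of the module; the comparison counter
and the concrete n are assumptions for illustration), the number of inner-loop executions
can be verified against (n – 1)·n/2:

#include <stdio.h>

int main(void)
{
    int n = 10;
    long long count = 0;

    // Mirror the loop structure of Selection_Sort, counting how
    // often line 4 (the comparison) executes.
    for (int j = 1; j <= n - 1; j++)
        for (int i = j + 1; i <= n; i++)
            count++;

    // Prints "comparisons: 45, formula: 45" for n = 10
    printf("comparisons: %lld, formula: %d\n", count, (n - 1) * n / 2);
}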

2. Analysis of Insertion Sort

Insertion sort is given by:
insertsort(A)
1. for i = 2 to n
   begin
2.    temp = A[i]
3.    j = i – 1
4.    while (j > 0 and A[j] > temp)
      begin
5.       A[j + 1] = A[j]
6.       j = j – 1
      end
7.    A[j + 1] = temp
   end
Line 7 takes O(1) time.
Line 6 runs in O(1) time.
Line 5 runs in O(1) time.
Lines 5-6 run O(max(1, 1)) = O(1).
For the worst case, line 4 will be executed (n – 1) times.
Therefore, lines 4-6 will run for O((n – 1) * 1) = O(n – 1).
Lines 4-7 will run for O(max((n – 1), 1)) = O(n – 1).
Line 3 takes O(1).
Lines 3-7 take O(max((n – 1), 1)) = O(n – 1).
Line 2 takes O(1).
Lines 2-7 take O(max((n – 1), 1)) = O(n – 1).
The loop in line 1 is executed (n – 1) times.
Therefore, lines 1-7 will take O((n – 1) * (n – 1)) = O(n^2).
The running time of insertion sort is therefore O(n^2).

The following shows the running times of the sorting algorithms considered:
Insertion Sort: O(n^2)
Selection Sort: O(n^2)
Shell Sort: O(n^1.5), proved by Donald Knuth
Quicksort: O(n^2) (worst case; O(n log n) on average)
Heapsort: O(n log n)
Mergesort: O(n log n)
How to calculate time complexity of recursive functions?
The time complexity of a recursive function can be written as a mathematical recurrence
relation. To calculate the time complexity, we must know how to solve recurrences. We will
be discussing recurrence-solving techniques in the next module(s).
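As a taste of what is coming (a sketch, not from the module), the recursive binarySearch
above does O(1) work per call and recurses on half the range, which gives the recurrence

T(n) = T(n/2) + O(1), with T(1) = O(1)

Unrolling it, T(n) = T(n/4) + O(1) + O(1) = … = T(1) + O(1) · log₂ n, which is O(log n),
matching the analysis of the iterative version.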
