
FUNDAMENTALS OF THE ANALYSIS

OF ALGORITHM EFFICIENCY
Fundamentals of the Analysis of Algorithms

• Analysis Framework
• Measuring an input size
• Units for measuring runtime
• Worst case, Best case and Average case
• Asymptotic Notations
ANALYSIS FRAMEWORK
• Efficiency of an algorithm can be measured in
terms of time or space.
• There is a systematic approach that has to be
applied for analyzing any given algorithm.
• This systematic approach is modelled by a
framework called the ANALYSIS FRAMEWORK.
Algorithm Analysis
Analysis of algorithms is the process of investigating
an algorithm’s efficiency with respect to two
resources:
– Running time
– Memory space
The reasons for selecting these two criteria are:
– Simplicity
– Generality
– Speed
– Memory
ANALYSIS OF ALGORITHMS
Efficiency
• Time efficiency or time complexity indicates
how fast an algorithm runs.
• Space efficiency or space complexity is the
amount of memory units required by the
algorithm, including the memory needed for
the input and output.
Space complexity
• Space complexity of an algorithm is the total
space taken by the algorithm with respect to
the input size.
• To compute the space complexity we consider two
factors: input space and auxiliary space.
• Input space – the fixed (constant) part: space taken by
the instructions, variables and identifiers.
• Auxiliary space – space required for
temporary use while executing the
algorithm (e.g., the stack used during execution).
Space complexity
• Constant Space Complexity
• Linear Space Complexity
• Quadratic Space Complexity
• Logarithmic Space Complexity
Space complexity
int add(int a, int b, int c)
{
    int z = a + b + c;
    return z;
}
• Variables a, b, c and z take 4 bytes each,
• and 4 bytes are used for the return value,
• so the total is (4(4) + 4) = 20 bytes.
• This space requirement is fixed, hence it is called Constant Space Complexity.

// n is the length of array a[]
int sum(int a[], int n)
{
    int x = 0;
    for(int i = 0; i < n; i++)
        x = x + a[i];   // accumulate each array element
    return x;
}
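Following the same 4-byte counting convention as the constant-space example (this byte count is an assumption, not stated on the slide), the array a[] needs 4·n bytes, while x, i, n and the return value need 4 bytes each, giving roughly 4n + 16 bytes in total. The space requirement grows in proportion to n, hence Linear Space Complexity.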
Time complexity
• The amount of time required by an algorithm to run
to completion.
• For instance, in a multiuser system, execution time
depends on many factors such as:
– System load
– Number of other programs running
– Instruction set used
– Speed of the underlying hardware
• Frequency count is a count denoting the number of times
a statement is executed (a worked sketch follows below).
• Time complexity is normally expressed in terms of Big-O
notation (O).
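As an illustrative sketch (the loop below is a hypothetical example, not taken from the slides), the frequency count of every statement in a simple summation loop can be tallied as follows:

// Hypothetical example: frequency count of each statement when
// summing the first n integers.
int sum_first_n(int n)
{
    int sum = 0;                     // executes 1 time
    for (int i = 1; i <= n; i++)     // loop condition tested n + 1 times
        sum = sum + i;               // executes n times (the basic operation)
    return sum;                      // executes 1 time
}
// Total frequency count: 1 + (n + 1) + n + 1 = 2n + 3, i.e. O(n).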
Time complexity
• Constant Time Complexity
• Linear Time Complexity
• Quadratic Time Complexity
• Logarithmic Time Complexity
Constant Time Complexity
• c = a + b;
• Its time complexity will be constant: the
running time of the statement does not change
with the size of the input.
Linear Time Complexity
Calculating sum of n numbers:

for(i = 0; i < n; i++)
{
    sum = sum + a[i];   // the loop body executes n times, so the time grows linearly with n
}
Quadratic Time Complexity
Matrix addition:
for(i = 0; i < n; i++)
{
    for(j = 0; j < n; j++)
    {
        c[i][j] = a[i][j] + b[i][j];   // executed n * n times, so the time grows quadratically
    }
}
Logarithmic Time Complexity
while (low <= high)
{
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;
}
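A complete, runnable version of the loop above might look like the sketch below (the function name binary_search and its parameters are illustrative additions, not from the slides):

// Iterative binary search over a sorted array: the search interval is
// halved on every pass, so the loop body runs at most about log2(n) times.
int binary_search(const int list[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (target < list[mid])
            high = mid - 1;        // discard the upper half
        else if (target > list[mid])
            low = mid + 1;         // discard the lower half
        else
            return mid;            // target found
    }
    return -1;                     // target not present
}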
Time space Trade-off
• A time-space trade-off is a situation
where space efficiency can be
achieved at the cost of time, or time
efficiency can be achieved at the cost of
memory (a small sketch follows below).
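As a small illustrative sketch of this trade-off (the Fibonacci functions below are hypothetical examples, not from the slides), extra memory can be spent to avoid recomputation:

// The table version spends O(n) extra memory so that every value is
// computed exactly once; the plain recursive version uses no table but
// recomputes the same values exponentially many times.
long fib_with_table(int n)        // assumes 0 <= n < 100
{
    long f[100];                  // extra memory: the lookup table
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];   // each value computed once: O(n) time
    return f[n];
}

long fib_recursive(int n)         // no table, but roughly O(2^n) time
{
    return (n < 2) ? n : fib_recursive(n - 1) + fib_recursive(n - 2);
}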
Measuring an Input size
• The efficiency of an algorithm is measured as a
function of its input size or range.
• For example, when multiplying two matrices, there are
two natural measures of size:
– The matrix order n
– The total number of elements N in the matrices being
multiplied.
• The running time is determined by the number of
multiplications performed, so it should be expressed in
terms of the chosen measure of input size (n or N).
Units for measuring Running time
Measuring the running time in standard time units (seconds,
milliseconds) has obvious drawbacks:
• Dependence on the speed of a particular computer
• Dependence on the quality of the program implementing the
algorithm
• Dependence on the compiler used in generating the machine code
• The difficulty of clocking the actual running time of the program
To measure the algorithm's efficiency instead:
• Identify the most important operation (the core logic) of the
algorithm. This operation is called the basic operation.
• Compute the number of times the basic operation is executed; this
count characterizes the running time (see the estimate sketched below).
• The basic operation is usually found in the innermost loop and is the
most time-consuming operation.
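One common way to make this count concrete (a standard cost-model sketch, not a formula from these slides): if the basic operation takes time c_op on a particular machine and is executed C(n) times, the running time can be estimated as T(n) ≈ c_op × C(n). For example, if C(n) = n(n - 1)/2 ≈ n²/2, then doubling the input size makes the algorithm run roughly four times longer, whatever the value of c_op.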
Efficiency of Algorithms
• Worst Case Efficiency
• Average Case Efficiency
• Best Case Efficiency
Efficiency of Algorithms
Worst case efficiency is the maximum number of steps that an
algorithm can take for any collection of data values.
Best case efficiency is the minimum number of steps that an
algorithm can take for any collection of data values.
Average case efficiency
- the efficiency averaged over all possible inputs
- must assume a distribution of the inputs
- we normally assume a uniform distribution (all keys are equally
probable)
• Worst case: Cworst(n) – maximum over inputs of size n
• Best case: Cbest(n) – minimum over inputs of size n
• Average case: Cavg(n) – “average” over inputs of size n,
expected under a uniform distribution
Example: Sequential search
ALGORITHM SequentialSearch(A[0..n-1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n-1] and a search key K
//Output: Returns the index of the first element of A that matches K
//        or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return -1
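A direct C translation of this pseudocode might look like the sketch below (the implementation itself is an illustration, not part of the slides):

// Sequential search: returns the index of the first element of A that
// matches K, or -1 if there is no matching element.
int sequential_search(const int A[], int n, int K)
{
    int i = 0;
    while (i < n && A[i] != K)   // basic operation: the key comparison
        i = i + 1;
    if (i < n)
        return i;
    else
        return -1;
}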
Example: Sequential search
• In the worst case, Cworst(n) = n.
• In the best case, Cbest(n) = 1.
• In the average case, if the search is successful and the key is
equally likely to be in any position, the average number of key
comparisons made by sequential search is (n + 1)/2
(a short derivation follows below).
• If the search is unsuccessful, the number of key
comparisons will be n.
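The (n + 1)/2 figure follows from the uniform-distribution assumption; a short sketch of the calculation: if the key is equally likely to be found in any of the n positions, a successful search that stops at position i (counting from 1) makes i comparisons, so the average is
(1 + 2 + ... + n) / n = [n(n + 1)/2] / n = (n + 1)/2.
For an unsuccessful search, all n elements are compared with the key, giving n comparisons.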
Amortized efficiency
• It applies not to a single run of an algorithm,
but rather to a sequence of operations
performed on the same data structure
Asymptotic Notations
The main idea of asymptotic analysis is to have
a measure of the efficiency of algorithms that
doesn't depend on machine-specific
constants and doesn't require algorithms to
be implemented or the times taken by programs
to be compared.
Asymptotic notations are mathematical tools
to represent time complexity of algorithms
for asymptotic analysis.
▪ The following 3 asymptotic notations are
mostly used to represent time complexity of
algorithms.

▪ Θ (Theta) Notation
▪ O (Big O) Notation
▪ Ω (Omega) Notation
• Θ Notation: The theta notation bounds a
function from above and below, so it defines
the exact asymptotic behavior.
• A simple way to get the Theta notation of an
expression is to drop the low-order terms and ignore
the leading constants. For example, consider the
following expression:
3n³ + 6n² + 6000 = Θ(n³)
• Dropping lower-order terms is always fine because
there will always be an n₀ after which n³ has
higher values than n², irrespective of the
constants involved.
Big O Notation: The Big O notation defines an
upper bound of an algorithm; it bounds a
function only from above.
For example, consider the case of Insertion
Sort. It takes linear time in the best case and
quadratic time in the worst case.
We can safely say that the time complexity of
Insertion Sort is O(n²). Note that O(n²) also
covers linear time.
If we use Θ notation to represent the time complexity
of Insertion Sort, we have to use two statements
for the best and worst cases:
✔ 1. The worst-case time complexity of Insertion
Sort is Θ(n²).
✔ 2. The best-case time complexity of Insertion
Sort is Θ(n).
The Big O notation is useful when we only have an
upper bound on the time complexity of an algorithm.
Many times we can easily find an upper bound by
simply looking at the algorithm.
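As a concrete sketch of the Insertion Sort referred to above (this particular implementation is illustrative, not taken from the slides):

// Insertion sort: the inner while loop does no shifting on an already
// sorted array (best case, O(n)) and shifts up to i elements on a
// reverse-sorted array (worst case, O(n²)).
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key)   // shift larger elements one place right
        {
            a[j + 1] = a[j];
            j = j - 1;
        }
        a[j + 1] = key;                // drop key into its sorted position
    }
}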
Ω Notation: Just as Big O notation
provides an asymptotic upper bound on
a function, Ω notation provides an
asymptotic lower bound.
Ω Notation can be useful when we have a
lower bound on the time complexity of an
algorithm.
Properties of Asymptotic Notations:

General Properties
Reflexive Properties
Transitive Properties
Symmetric Properties
Transpose Symmetric Properties
• General Properties: If f(n) is O(g(n)) then a*f(n) is
also O(g(n)), where a is a constant.
• Example: f(n) = 2n²+5 is O(n²)
then 7*f(n) = 7(2n²+5)
= 14n²+35 is also O(n²)
• Similarly, this property holds for both Θ and Ω
notations.
We can say
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where
a is a constant.
If f(n) is Ω(g(n)) then a*f(n) is also Ω(g(n)),
where a is a constant.
• Reflexive Properties: If f(n) is given then f(n) is
O(f(n)).
• Example: f(n) = n² is O(n²), i.e. O(f(n))
• Similarly, this property holds for both Θ and
Ω notations.
We can say
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω(f(n)).
• Transitive Properties: If f(n) is O(g(n)) and g(n) is
O(h(n)) then f(n) = O(h(n)).
• Example: if f(n) = n, g(n) = n² and h(n) = n³,
then n is O(n²) and n² is O(n³),
so n is O(n³)
• Similarly, this property holds for both Θ and Ω
notations.
We can say
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)).
If f(n) is Ω(g(n)) and g(n) is Ω(h(n)) then f(n) = Ω(h(n)).
• Symmetric Properties: If f(n) is Θ(g(n)) then
g(n) is Θ(f(n)).
• Example: f(n) = n² and g(n) = n²
then f(n) = Θ(n²) and g(n) = Θ(n²)
• This property holds only for Θ notation.
• Transpose Symmetric Properties: If f(n) is
O(g(n)) then g(n) is Ω(f(n)).
• Example: f(n) = n, g(n) = n²
then n is O(n²) and n² is Ω(n)
• This property holds only for O and Ω
notations.
