DSA Chapter 2

CHAPTER TWO

DSA- CH-2: Complexity Analysis

CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Introduction to Data Structures and
Algorithms Analysis

• A program is written in order to solve a problem.
– A solution consists of two things:
• A way to organize the data
• A sequence of steps to solve the problem
– The way data are organized in a computer's memory is called a data structure.
– The clearly defined sequence of computational steps the computer follows to solve a problem is called an algorithm.
• Therefore, a program is nothing but data structures plus algorithms.
Algorithm Analysis - Concepts
• Refers to the process of determining the amount of
computing time and storage space required by
different algorithms.
– Or it’s a process of predicting the resource requirement of
algorithms in a given environment.
• To solve a problem, there are many possible
algorithms.
– One has to be able to choose the best algorithm for the
problem at hand using some scientific method.
• To classify some data structures and algorithms as good,
we need precise ways of analyzing them in terms of
resource requirement.
– The main resources are: Running Time, Memory Usage and
Communication Bandwidth

– Running time is usually treated as the most
important since computational time is the most
precious resource in most problem domains.
• Two approaches to measure the efficiency of
algorithms:
– Theoretical: Determining the quantity of resources
required mathematically (Execution time, memory
space, etc.) needed by each.
– Empirical: Programming competing algorithms and
trying them on different instances.

• Theoretical - we can analyze an algorithm
according to the number of operations
required, rather than according to an
absolute amount of time involved.
– This can show how an algorithm’s efficiency
changes according to the size of the input.

• Empirical - However, it is difficult to use actual clock-time as a consistent measure of an algorithm's efficiency, because clock-time can vary based on many things. For example,
– Specific processor speed,
– Current processor load and
– Specific data for a particular run of the
program (Input Size and Input Properties)
– Operating Environment
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Important terms
• Computational complexity
– A measure of the amount of time, storage, or other resources needed to execute an algorithm, used to compare the efficiency of algorithms.
• Asymptotic complexity
– Asymptotic time complexity
• The limiting behavior of the execution time of an algorithm
when the size of the problem goes to infinity. This is usually
denoted in big-O notation
– Asymptotic space complexity
• The limiting behavior of the use of memory space of
an algorithm when the size of the problem goes to infinity.
This is usually denoted in big-O notation.

Theoretical Approach - Complexity
Analysis
• Complexity Analysis is the systematic study of the cost of computation, measured either in time units, in operations performed, or in the amount of storage space required.
– The goal is to have a meaningful measure that permits comparison of algorithms independent of the operating platform.
• There are two things to consider:
– Time Complexity: Determine the approximate number of operations required to solve a problem of size n.
– Space Complexity: Determine the approximate memory required to solve a problem of size n.
– Asymptotic time complexity is the running time of an algorithm as a function of the input size. If an algorithm takes O(n) time, the time is proportional to the input size n; if it takes O(n^2) time, the time is proportional to the square of n. The smaller the time complexity, the faster the algorithm. Some common time complexities are O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!).
– Asymptotic space complexity is the amount of memory an algorithm uses as a function of the input size. If an algorithm takes O(n) space, the space is proportional to the input size n; if it takes O(1) space, the space is constant and does not depend on the input size. Some common space complexities are O(1), O(log n), O(n), and O(n^2).
– To illustrate these concepts, consider two sorting algorithms: Selection Sort and Quick Sort. Selection Sort sorts an array by repeatedly finding the smallest element and swapping it with the first unsorted element. Quick Sort sorts an array by recursively partitioning it into two subarrays around a pivot element and sorting them.
– Selection Sort has an asymptotic time complexity of O(n^2) in all cases, because it always performs (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 comparisons. Its asymptotic space complexity is O(1), because it sorts in place.
• Complexity analysis involves two distinct phases:
1. Algorithm Analysis: Analysis of the algorithm or data structure to produce a function T(n) that describes the algorithm in terms of the operations performed, in order to measure its complexity.
2. Order of Magnitude Analysis: Analysis of the function T(n) to determine the general complexity category to which it belongs.
Algorithm Analysis Rules:

• OPTION 1: Exact count
– There is no generally accepted set of rules for
algorithm analysis. However, an exact count of
operations is commonly used.
• We assume an arbitrary time unit.
1. Execution of one of the following operations
takes time 1:
• Assignment Operation
• Single Input / Output Operation
• Single Boolean Operations
• Single Arithmetic Operations
• Function Return

2. Running time of a selection statement (if,
switch) is the time for the condition evaluation
+ the maximum of the running times for the
individual clauses in the selection.
3. Loops: Running time for a loop is equal to the
running time for the statements inside the loop
* number of iterations.
4. For nested loops, analyze inside out.
– Always assume that the loop executes the
maximum number of iterations possible.

5. The total running time of a statement inside a
group of nested loops is the running time of
the statements multiplied by the product of the
sizes of all the loops.
6. Running time of a function call is 1 for setup +
the time for any parameter calculations + the
time required for the execution of the function
body.

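For instance, a short snippet annotated with these unit costs (the snippet is illustrative, not from the slides):

    int unit_cost_demo() {
        int sum = 0;          // 1: one assignment
        sum = sum + 5;        // 2: one addition + one assignment
        if (sum > 3)          // 1: Boolean test, plus the max of the two branches
            sum = sum - 1;    //    2: one subtraction + one assignment
        else
            sum = 0;          //    1: one assignment (the max, 2, is what counts)
        return sum;           // 1: return
    }
    // Total: 1 + 2 + 1 + max(2, 1) + 1 = 7 time units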
Complexity Analysis: Loops
• We start by considering how to count
operations in for-loops.
– We use integer division throughout.
• First of all, we should know the number of
iterations of the loop; say it is x.
– Then the loop condition is executed x + 1
times.
– Each of the statements in the loop body is
executed x times.
– The loop-index update statement is executed
x times.

Complexity Analysis: Loops (with <)

• In the following for-loops (both run the same number of iterations):

for (int i = k; i < n; i += m) {
    statement1;
    statement2;
}

for (int i = n; i > k; i -= m) {
    statement1;
    statement2;
}

The number of iterations is: (n – k) / m
• The initialization statement, i = k, is executed one
time.
• The condition, i < n, is executed (n – k) / m + 1
times.
• The update statement, i = i + m, is executed (n – k)
/ m times.
• Each of statement1 and statement2 is executed (n
– k) / m times.
Complexity Analysis : Loops (with <=)

• In the following for-loops (both run the same number of iterations):

for (int i = k; i <= n; i += m) {
    statement1;
    statement2;
}

for (int i = n; i >= k; i -= m) {
    statement1;
    statement2;
}

The number of iterations is: (n – k) / m + 1


• The initialization statement, i = k, is executed one time.
• The condition, i <= n, is executed (n – k) / m + 2 times.
• The update statement, i = i + m, is executed (n – k) / m
+ 1 times.
• Each of statement1 and statement2 is executed (n – k)
/ m + 1 times.
Complexity Analysis:
Loops With Logarithmic Iterations
• In the following for-loops (with <):

for (int i = k; i < n; i *= m) {
    statement1;
    statement2;
}

for (int i = n; i > k; i /= m) {
    statement1;
    statement2;
}

– The number of iterations is: ⌈log_m(n / k)⌉

• In the following for-loops (with <=):

for (int i = k; i <= n; i *= m) {
    statement1;
    statement2;
}

for (int i = n; i >= k; i /= m) {
    statement1;
    statement2;
}

– The number of iterations is: ⌊log_m(n / k) + 1⌋

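As a quick sanity check of these formulas, the following sketch counts iterations directly (the values k = 3, m = 10, and n are hypothetical, chosen so the divisions come out exact):

    #include <cmath>
    #include <iostream>

    int main() {
        // k=3, n=103, m=10: expect (n - k) / m = 10 iterations with <
        int count = 0;
        for (int i = 3; i < 103; i += 10) ++count;
        std::cout << count << " == " << (103 - 3) / 10 << '\n';

        // with <=, expect (n - k) / m + 1 = 11 iterations
        count = 0;
        for (int i = 3; i <= 103; i += 10) ++count;
        std::cout << count << " == " << (103 - 3) / 10 + 1 << '\n';

        // k=3, n=3000, m=10: expect log_m(n / k) = log10(1000) = 3 iterations with <
        count = 0;
        for (int i = 3; i < 3000; i *= 10) ++count;
        std::cout << count << " == " << std::log10(3000.0 / 3.0) << '\n';
        return 0;
    }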
Example 1

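The slide's code listing did not survive as text; a plausible reconstruction, consistent with the counts below (function and variable names are hypothetical):

    #include <iostream>
    using namespace std;

    int example1() {                      // line 1
        int n;                            // line 2
        int k = 0;                        // line 3: 1 (assignment)
        cout << "Enter n: ";              // line 4: 1 (output)
        cin >> n;                         // line 5: 1 (input)
        for (int i = 0; i < n; i++)       // line 6: 1 assignment + (n+1) tests + n increments
            k = k + i;                    // line 7: n iterations * 2 (assignment + addition)
        return k;                         // line 8: 1 (return)
    }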
• Time Units to Compute
– Line 3: 1 for the assignment statement: int k=0
– Line 4: 1 for the output statement.
– Line 5: 1 for the input statement.
– Line 6: In the for loop:
• 1 assignment, n+1 tests, and n increments.
– Line 7: n loops of 2 units for an assignment, and an
addition.
– Line 8: 1 for the return statement.
• T (n)= 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)

Example 2

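As with Example 1, the slide's listing is an image; a sketch matching the counts below (names hypothetical):

    int example2(int n) {                 // lines 1-2
        int sum = 0;                      // line 3: 1 (assignment)
        for (int i = 1; i <= n; i++)      // line 4: 1 assignment + (n+1) tests + n increments
            sum = sum + i;                // line 5: n iterations * 2 (assignment + addition)
        return sum;                       // line 6: 1 (return)
    }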
• Time Units to Compute
– Line 3: 1 for the assignment statement: int
sum=0
– Line 4: In the for loop:
• 1 assignment, n+1 tests, and n increments.
– Line 5: n loops of 2 units for an assignment, and an
addition.
– Line 6: 1 for the return statement.
• T (n)= 1+ (1+n+1+n)+2n+1 = 4n+4 = O(n)

Example 3

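Again the listing is an image; a sketch matching the counts below (names hypothetical):

    void example3(int n) {                // line 1
        double x = 2.5, y = 3.0;          // line 2: 2 assignments
        for (int i = 0; i < n; i++) {     // line 3: 1 assignment + (n+1) tests + n increments
            double a = x * i;             // line 5: n * 2 (assignment + multiplication)
            double b = y * i;             // line 6: n * 2 (assignment + multiplication)
            double c = a + b;             // line 7: n * 2 (assignment + addition)
        }
    }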
• Time Units to Compute
– Line 2: 2 for the assignment statement: x=2.5 ,
y=3.0
– Line 3: In the for loop:
• 1 assignment, n+1 tests, and n increments.
– Line 5: n loops of 2 units for an assignment, and a
multiplication
– Line 6: n loops of 2 units for an assignment, and a
multiplication
– Line 7: n loops of 2 units for an assignment, and an
addition
• T (n)= 2+ (1+n+1+n)+2n+2n+2n = 8n+4 =
O(n)
Exercise 1

Exercise 2

• OPTION 2: Formal approach to algorithm analysis
– Analysis can be simplified by using a formal approach, in which case we can ignore initializations, loop control, and bookkeeping.
For loops: formally
• In general, a for loop translates to a summation. The index and bounds of the summation are the same as the index and bounds of the for loop.

for (int i = 1; i <= N; i++) {
    sum = sum + i;
}

Σ_{i=1}^{N} 1 = N

• Suppose we count the number of additions that are done. There is 1 addition per iteration of the loop, hence N additions in total.
Nested Loops: Formally
• Nested for loops translate into multiple summations, one for each for loop.

for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= M; j++) {
        sum = sum + i + j;
    }
}

Σ_{i=1}^{N} Σ_{j=1}^{M} 2 = Σ_{i=1}^{N} 2M = 2MN

• Again, count the number of additions: there are 2 additions per iteration of the inner loop, and the outer summation is for the outer for loop.
Consecutive Statements: Formally
• Add the running times of the separate blocks of your code.

for (int i = 1; i <= N; i++) {
    sum = sum + i;
}
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= N; j++) {
        sum = sum + i + j;
    }
}

(Σ_{i=1}^{N} 1) + (Σ_{i=1}^{N} Σ_{j=1}^{N} 2) = N + 2N^2
Conditionals: Formally
• if (test) s1 else s2: compute the maximum of the running times for s1 and s2.

if (test == 1) {
    for (int i = 1; i <= N; i++) {
        sum = sum + i;
    }
} else {
    for (int i = 1; i <= N; i++) {
        for (int j = 1; j <= N; j++) {
            sum = sum + i + j;
        }
    }
}

max(Σ_{i=1}^{N} 1, Σ_{i=1}^{N} Σ_{j=1}^{N} 2) = max(N, 2N^2) = 2N^2
Example 4

• Suppose we have hardware capable of executing 10^6 instructions per second. How long would it take to execute an algorithm whose complexity function is:
– T(n) = 2n^2, on an input size of n = 10^8?
• The total number of operations to be performed would be T(10^8):
– T(10^8) = 2*(10^8)^2 = 2*10^16
– The required number of seconds would be given by T(10^8)/10^6, so:
• Running time = 2*10^16/10^6 = 2*10^10 seconds
– Days? → 2*10^10 / 86,400 ≈ 231,481 days
– Years? → about 634 years
Important mathematics series formulas

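For reference, the series formulas used most often in this chapter are:
• Σ_{i=1}^{n} 1 = n
• Σ_{i=1}^{n} i = n(n + 1)/2
• Σ_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6
• Σ_{i=0}^{n} a^i = (a^(n+1) – 1)/(a – 1), for a ≠ 1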
Exercises

• Determine the running-time equation and complexity of each of the following code segments.
Exercise 3

for (i=0;i<n;i++)
{
for (j=0;j<n; j++)
{
sum=sum+i+j;
}
}

Exercise 4

for(int i=1; i<=n; i++)


{
for (int j=1; j<=i; j++)
{
sum++;
}
}
• What is the value of the sum if n=20?

Exercise 5
int k=0;
for (int i=0; i<n; i++)
{
for (int j=i; j<n; j++)
{
k++;
}
}
• What is the value of k when n is equal to 20?

Exercise 6
int k=0;
for (int i=1; i<n; i*=2)
{
for(int j=1; j<n; j++)
{
k++;
}
}
• What is the value of k when n is equal to 20?

Exercise 7

int x=0;
for(int i=1;i<n;i=i+5)
x++;
What is the value of x when n=25?

int x=0;
for(int k=n;k>=n/3;k=k-5)
x++;
• What is the value of x when n=25?

Exercise 8

int x=0;
for (int i=1; i<n;i=i+5)
for (int k=n;k>=n/3;k=k-5)
x++;
• What is the value of x when n=25?

int x=0;
for(int i=1;i<n;i=i+5)
for(int j=0;j<i;j++)
for(int k=n;k>=n/2;k=k-3)
x++;

CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Order of Magnitude Analysis - Measures
of Times
• In order to determine the running time of an
algorithm it is possible to define three functions
Tbest(n), Tavg(n) and Tworst(n) as the best, the
average and the worst case running time of the
algorithm respectively.
1. Average Case (Tavg): The amount of time the
algorithm takes on an "average" set of inputs.
2. Worst Case (Tworst): The amount of time the
algorithm takes on the worst possible set of inputs.
3. Best Case (Tbest): The amount of time the algorithm takes on the best possible set of inputs.
• We are interested in the worst-case time, since it provides a bound for all inputs – this is called the "Big-Oh" estimate.
Asymptotic Analysis
• Asymptotic analysis is concerned with how the
running time of an algorithm increases with the
size of the input in the limit, as the size of the
input increases without bound.
• There are five notations used to describe a running time function. These are:
– Big-Oh Notation (O) (<=)
– Big-Omega Notation (Ω) (>=)
– Theta Notation (Θ) (=)
– Little-o Notation (o) (<)
– Little-Omega Notation (ω) (>)
The Big-Oh Notation
• Big-Oh notation is a way of comparing algorithms and is used for computing the complexity of algorithms;
• i.e., the amount of time that it takes for a computer program to run.
– It is only concerned with what happens for very large values of n.
– Therefore only the largest term in the expression (function) is needed.
• For example, if the number of operations in an algorithm is n^2 – n, then n is insignificant compared to n^2 for large values of n. Hence the n term is ignored. Of course, for small values of n, it may be important.
• However, Big-Oh is mainly concerned with large values of n.
• Formal Definition: f(n) = O(g(n)) if there exist c, k ∊ ℛ+ such that for all n ≥ k, f(n) ≤ c·g(n).
– By the definition above, demonstrating that a function f is big-O of a function g requires that we find specific constants c and k for which the inequality holds (and show that the inequality does, in fact, hold).
• Big-O expresses an upper bound on the growth rate of a function, for sufficiently large values of n.
Standard Method to Prove Big-Oh

1. Choose k = 1.
2. Assuming n > 1, find/derive a c such that f(n)/g(n) ≤ c.
– This shows that n > 1 implies f(n) <= c·g(n).
– Keep in mind:
• n > 1 implies 1 < n, n < n^2, n^2 < n^3, ...
• Increase the numerator to simplify the fraction.
Example 1: Proving Big-Oh

• Show that f(n) = n^2 + 2n + 1 is O(n^2)
– Choose k = 1
– Assuming n > 1, then
• f(n)/g(n) = (n^2 + 2n + 1)/n^2 < (n^2 + 2n^2 + n^2)/n^2 = (4n^2)/n^2 = 4
• Choose c = 4. Note that 2n < 2n^2 and 1 < n^2.
• Thus, n^2 + 2n + 1 is O(n^2), because n^2 + 2n + 1 <= 4n^2 whenever n > 1.
Example 2: Proving Big-Oh

• Show that f(n) = 4n + 6 is O(n)
– Choose k = 1
– Assuming n > 1, then
• f(n)/g(n) = (4n + 6)/n < (4n + 6n)/n = (10n)/n = 10
• Choose c = 10. Note that 6 < 6n.
• Thus, 4n + 6 is O(n), because 4n + 6 <= 10n whenever n > 1.
Exercise 1: Proving Big-Oh

• Show that f(n) = (n + 1)^3 is O(n^3)
– Where (n + 1)^3 = n^3 + 3n^2 + 3n + 1
– ???
Standard Method to Prove Not Big-Oh

1. Assume n > 1.
2. Show that f(n)/g(n) ≥ h(n) for some increasing, unbounded function h(n).
3. Then n > h^-1(c) implies h(n) > c,
– which implies f(n) > c·g(n).
– So choosing n > 1, n > k, and n > h^-1(c) implies n > k and f(n) > c·g(n).
Example 3: Proving Not Big-Oh

• Show that f(n) = n^2 – 2n + 1 is not O(n).
– Assuming n > 1, then
• f(n)/g(n) = (n^2 – 2n + 1)/n > (n^2 – 2n)/n = n – 2
– n > c + 2 implies n – 2 > c and f(n) > cn.
– So choosing n > 1, n > k, and n > c + 2 implies n > k and f(n) > cn.
– Decrease the numerator to simplify the fraction.
Example 4: Proving Not Big-Oh

• Show that f(n) = n^2/2 is not O(n).
– Assuming n > 1, then
• f(n)/g(n) = (n^2/2)/n = n/2
– n > 2c implies n/2 > c and f(n) > cn.
– So choosing n > 1, n > k, and n > 2c implies n > k and f(n) > cn.
Exercise 2: Proving Not Big-Oh

• Show that f(n) = (n + 1)^3 is not O(n^2)
– Where (n + 1)^3 = n^3 + 3n^2 + 3n + 1
– ???
• Examples: The following facts can be used for Big-Oh problems:
– 1 <= n for all n >= 1
– n <= n^2 for all n >= 1
– 2^n <= n! for all n >= 4
– log2(n) <= n for all n >= 2
– n <= n·log2(n) for all n >= 2
Big-Oh Rules
• If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
1. Drop lower-order terms
2. Drop constant factors
• Use the smallest possible class of functions
– 2n is O(n) instead of 2n is O(n^2)
• Use the simplest expression of the class
– 3n + 5 is O(n) instead of 3n + 5 is O(3n)
Big-O Theorems
• For all the following theorems, assume
that f(n) is a non-negative function of n
and that K is an arbitrary constant.
• Theorem 1: K is O(1)
• Theorem 2: A polynomial is O(the term
containing the highest power of n)
– f(n) = 7n^4 + 3n^2 + 5n + 1000 is O(7n^4)
• Theorem 3: K*f(n) is O(f(n))
– i.e., constant coefficients can be dropped
– g(n) = 7n^4 is O(n^4)
• Theorem 4: If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) is O(h(n)). [transitivity]
– f(n) = 6n^2 + 3n – 8 is O(6n^2) = O(g(n))
– g(n) = 6n^2 is O(n^3) = O(h(n))
– h(n) = n^3
• f(n) = O(h(n))
• Theorem 5: For any base b, logb(n) is O(log(n)).
– All logarithms grow at the same rate.
– logb(n) is O(logd(n)) where b, d > 1
• Theorem 6: Each of the following functions is strictly big-O of its successors:
– K [constant]
– logb(n) [always log base 2 if no base is shown]
– n
– n·logb(n)
– n^2
– n to higher powers
– 2^n
– 3^n
• larger constants to the n-th power
– n! [n factorial]
– n^n
• f(n) = 3n·logb(n) + 4·logb(n) + 2 is O(n·logb(n)), and also O(n^2) and O(2^n)
• Theorem 7: In general, f(n) is big-O of the dominant term of f(n), where "dominant" may usually be determined from Theorem 6.
– f(n) = 7n^2 + 3n·log(n) + 5n + 1000 is O(n^2)
– g(n) = 7n^4 + 3^n + 1000000 is O(3^n)
– h(n) = 7n(n + log(n)) is O(n^2)
Properties of the Big-O

• Higher powers grow faster
– n^r is O(n^s) if 0 <= r <= s
• Fastest growing term dominates a sum
– If f(n) is O(g(n)), then f(n) + g(n) is O(g(n))
– E.g., 5n^4 + 6n^3 is O(n^4)
• Exponential functions grow faster than powers
– E.g., n^20 is O(1.05^n)
• Logarithms grow more slowly than powers
– E.g., log2(n) is O(n^0.5)
Big-Omega Notation: Ω

• Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound.
• Formal Definition: A function f(n) is Ω(g(n)) if there exist constants c and k ∊ ℛ+ such that
– f(n) >= c·g(n) for all n >= k.
– f(n) = Ω(g(n)) means that f(n) is greater than or equal to some constant multiple of g(n) for all values of n greater than or equal to some k.
• Example: If f(n) = n^2, then f(n) = Ω(n)
• In simple terms, f(n) = Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
Theta Notation: Θ
• A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be sandwiched between c1·g(n) and c2·g(n), for sufficiently large values of n.
• Formal Definition: A function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)).
– In other words, there exist constants c1, c2, and k > 0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= k.
• If f(n) = Θ(g(n)), then g(n) is an asymptotically tight bound for f(n).
• In simple terms, f(n) = Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Example:
1. If f(n) = 2n + 1, then f(n) = Θ(n)
2. If f(n) = 2n^2, then
– f(n) = O(n^4)
– f(n) = O(n^3)
– f(n) = O(n^2)
• All these are technically correct, but the last expression is the best, tight one. Since 2n^2 and n^2 have the same growth rate, it can be written as f(n) = Θ(n^2).
Little-o Notation
• Big-Oh notation may or may not be asymptotically tight; for example:
– 2n^2 = O(n^2) and 2n^2 = O(n^3)
• f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n >= k.
• Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.
• Example: f(n) = 3n + 4 is o(n^2)
• In simple terms, f(n) has a smaller growth rate than g(n).
• If g(n) = 2n^2, then g(n) = o(n^3) and g(n) = O(n^2), but g(n) is not o(n^2).
Little-Omega ( notation)

• Little-omega () notation is to big-omega


() notation as little-o notation is to Big-Oh
notation.
• We use  notation to denote a lower
bound that is not asymptotically tight.
• Formal Definition: f(n)=  (g(n)) if there
exists a constant no>0 such that 0<= c.
g(n)<f(n) for all n>=k.
– Example: 2n2=(n) but it’s not  (n2).

9/13/2017 68
Relational Properties of the Asymptotic Notations
• Transitivity
– if f(n) = Θ(g(n)) and g(n) = Θ(h(n)) then f(n) = Θ(h(n)),
– if f(n) = O(g(n)) and g(n) = O(h(n)) then f(n) = O(h(n)),
– if f(n) = Ω(g(n)) and g(n) = Ω(h(n)) then f(n) = Ω(h(n)),
– if f(n) = o(g(n)) and g(n) = o(h(n)) then f(n) = o(h(n)), and
– if f(n) = ω(g(n)) and g(n) = ω(h(n)) then f(n) = ω(h(n)).
• Transpose symmetry
– f(n) = O(g(n)) if and only if g(n) = Ω(f(n)),
– f(n) = o(g(n)) if and only if g(n) = ω(f(n)).
• Reflexivity
– f(n) = Θ(f(n)),
– f(n) = O(f(n)),
– f(n) = Ω(f(n)).
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Common complexity classes

• Algorithms can be classified by their time and space complexities, and in this respect, several classes of such algorithms can be distinguished.
– An algorithm is called constant if its execution time remains the same for any number of elements; it is called quadratic if its execution time is O(n^2).
• For each class, the number of operations is shown below along with the real time needed to execute them on a machine able to perform 1 million operations per second, i.e., one operation per microsecond (µsec).
Class                Big-O        n = 10              n = 10^2                n = 10^3
Constant             O(1)         1       1µsec       1        1µsec          1        1µsec
Logarithmic          O(log n)     3.32    3µsec       6.64     7µsec          9.97     10µsec
Linear               O(n)         10      10µsec      10^2     100µsec        10^3     1msec
Linear logarithmic   O(n log n)   33.2    33µsec      664      664µsec        9970     10msec
Quadratic            O(n^2)       10^2    100µsec     10^4     10msec         10^6     1sec
Cubic                O(n^3)       10^3    1msec       10^6     1sec           10^9     16.7min
Exponential          O(2^n)       1024    1msec       10^30    3.17*10^17yr   10^301   –
Class                Big-O        n = 10^4                n = 10^5               n = 10^6
Constant             O(1)         1          1µsec        1          1µsec       1           1µsec
Logarithmic          O(log n)     13.3       13µsec       16.6       17µsec      19.93       20µsec
Linear               O(n)         10^4       10msec       10^5       0.1sec      10^6        1sec
Linear logarithmic   O(n log n)   133*10^3   133msec      166*10^4   1.66sec     199.3*10^5  20sec
Quadratic            O(n^2)       10^8       1.7min       10^10      2.8hr       10^12       11.6days
Cubic                O(n^3)       10^12      11.6days     10^15      31.7yr      10^18       31,709yr
Exponential          O(2^n)       10^3010    –            10^30103   –           10^301030   –
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Types of analysis

• The term analysis of algorithms is used to describe approaches to the study of the performance of algorithms. In this course we will perform the following types of analysis:
– The worst-case runtime
– The best case runtime
– The average case runtime
– The amortized runtime
• The worst-case runtime complexity of the algorithm is
– The function defined by the maximum number of steps taken on any instance of size n.
• The best-case runtime complexity of the algorithm is
– The function defined by the minimum number of steps taken on any instance of size n.
• The average-case runtime complexity of the algorithm is
– The function defined by the average number of steps taken on any instance of size n.
• The amortized runtime complexity of the algorithm is
– The function defined by the cost of a worst-case sequence of operations averaged over the number of operations in the sequence.
Example

• Let us consider an algorithm of sequential searching in an array of size n.
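A minimal sequential-search sketch (the slide's own listing is not shown; names are illustrative):

    int seq_search(const int a[], int n, int key) {
        for (int i = 0; i < n; i++)    // 1 test in the best case, n in the worst
            if (a[i] == key)
                return i;              // found: index of the first match
        return -1;                     // not found: all n elements examined
    }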
• Its worst-case runtime complexity is O(n)
• Its best-case runtime complexity is O(1)
• Its average case runtime complexity is
O(n/2)=O(n)

CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity

Amortized Time Complexity
• Worst case analysis of runtime complexity is
often too pessimistic.
• Average case analysis may be difficult
because
I. It is not clear what is “average data”,
II. Uniformly random data is usually not average,
III. Probabilistic arguments can be difficult.
• Amortized complexity analysis is a different way to estimate running times.
– The main idea is to spread out the cost of operations, charging more than necessary when they are cheap – thereby "saving up for later use" when they occasionally become expensive.
Three Methods of Amortized Analysis

• Aggregate analysis:
– Total cost of n operations / n.
• Accounting method:
– Assign each type of operation a (possibly different) amortized cost,
– Overcharge some operations,
– Store the overcharge as credit on specific objects,
– Then use the credit to compensate for some later operations.
• Potential method:
– Same as the accounting method,
– But store the credit as "potential energy" of the data structure as a whole.
• Consider a dynamic array stack.
– In this model, push() will double the array size if there is not enough space.
– Since copying arrays cannot be performed in constant time, push cannot always be done in constant time.
• In this section, we will show that push() takes amortized constant time.
– Let us count the number of copying operations needed to do a sequence of pushes.
push()   copies   old array size   new array size
1        0        1                –
2        1        1                2
3        2        2                4
4        0        4                –
5        4        4                8
6        0        8                –
7        0        8                –
8        0        8                –
9        8        8                16
• We see that 3 pushes require 2 + 1 = 3 copies.
• We see that 5 pushes require 4 + 2 + 1 = 7 copies.
• We see that 9 pushes require 8 + 4 + 2 + 1 = 15 copies.
• In general, 2^n + 1 pushes require 2^n + 2^(n-1) + ... + 2 + 1 = 2^(n+1) – 1 copies.
• Asymptotically speaking, the number of copies is about the same as the number of pushes.
• We say that the algorithm runs in amortized constant time.
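A doubling-array push sketch (illustrative; not the slides' exact code):

    #include <cstring>

    struct Stack {
        int* data = new int[1];
        int size = 0;
        int capacity = 1;

        void push(int x) {
            if (size == capacity) {                // full: double the array
                int* bigger = new int[2 * capacity];
                std::memcpy(bigger, data, size * sizeof(int));  // 'size' copy operations
                delete[] data;
                data = bigger;
                capacity *= 2;
            }
            data[size++] = x;                      // the O(1) part of every push
        }
    };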
Theoretical analysis is a way of estimating the resources needed by an algorithm without actually running it on a computer. It involves using mathematical formulas and techniques to derive the time and space complexity of an algorithm based on its input size and structure. Theoretical analysis can be done for the best-case, worst-case, and average-case scenarios of an algorithm. It is useful for comparing different algorithms and predicting their behavior in general.

Empirical analysis is a way of measuring the resources used by an algorithm by actually running it on a computer. It involves implementing the algorithm in a programming language, generating a sample of inputs, and recording the time and space taken by the algorithm on each input. Empirical analysis can be done for different implementations of the same algorithm or different algorithms for the same problem. It is useful for testing the accuracy of theoretical analysis and evaluating the performance of an algorithm on a specific machine.

Here is a simple example of both types of analysis for two sorting algorithms: Selection Sort and Quick Sort.

Selection Sort is an algorithm that sorts an array by repeatedly finding the smallest element and swapping it with the first unsorted element. Its pseudocode is:

selection_sort(A):
    for i = 0 to n - 2
        min_index = i
        for j = i + 1 to n - 1
            if A[j] < A[min_index]
                min_index = j
        swap A[i] and A[min_index]
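A minimal empirical-analysis harness along these lines (illustrative; the input sizes and random generator are arbitrary choices) times Selection Sort on random inputs of growing size, where doubling n should roughly quadruple the time:

    #include <chrono>
    #include <iostream>
    #include <random>
    #include <utility>
    #include <vector>

    void selection_sort(std::vector<int>& a) {
        int n = static_cast<int>(a.size());
        for (int i = 0; i < n - 1; ++i) {
            int min_index = i;                     // find the smallest in a[i..n-1]
            for (int j = i + 1; j < n; ++j)
                if (a[j] < a[min_index])
                    min_index = j;
            std::swap(a[i], a[min_index]);         // move it into place
        }
    }

    int main() {
        std::mt19937 gen(42);                      // fixed seed: repeatable runs
        for (int n : {1000, 2000, 4000, 8000}) {
            std::vector<int> a(n);
            for (int& x : a) x = static_cast<int>(gen() % 100000);
            auto t0 = std::chrono::steady_clock::now();
            selection_sort(a);
            auto t1 = std::chrono::steady_clock::now();
            std::cout << "n = " << n << ": "
                      << std::chrono::duration<double, std::milli>(t1 - t0).count()
                      << " ms\n";
        }
        return 0;
    }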
Thank You