
LEC-2 ALGORITHMS

EFFICIENCY & COMPLEXITY


WHAT IS AN ALGORITHM?
• An algorithm is a set of instructions designed to perform a
specific task
• A step-by-step problem-solving procedure, especially an
established, recursive computational procedure for solving a
problem in a finite number of steps.
• An algorithm is any well-defined computational procedure
that takes some value, or set of values, as input and produces
some value, or set of values, as output.
• An algorithm is thus a sequence of computational steps that
transform the input into the output.
HOW TO WRITE AN ALGORITHM

• Indexing starts at 1
• No variable declarations
• No use of semicolons
• Assignment statement: a ← 3
• Comparison: if a = 3
• Repetition structures: while a ← 1 to N or for a ← 1 to N
• When a statement is continued from one line to another
within a structure, indent the continued line(s).
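Putting these conventions together, a short pseudocode sketch (a hypothetical example written for this summary, not taken from the slides) that sums the first N values of an array A:

```
SumArray(A, N)
    s ← 0
    for i ← 1 to N
        s ← s + A[i]
    return s
```

Note the 1-based index, the ← assignment, the absence of declarations and semicolons, and the indented loop body.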
EFFICIENCY (ALGORITHMIC COMPLEXITY)
• Properties of algorithms
− Correctness
− Determinism
− Efficiency
• Algorithmic complexity: how many steps our algorithm will
take on any given input instance, found by simply executing it on the
given input.
• Algorithmic complexity is concerned with how fast or slow a
particular algorithm performs.
Efficiency of an algorithm can be measured in terms of:
− Execution time (time complexity)
− The amount of memory required (space complexity)
EFFICIENCY
Which measure is more important?
Time complexity comparisons are more interesting than
space complexity comparisons
Time complexity: A measure of the amount of time
required to execute an algorithm
Factors that should not affect time complexity analysis:
• The programming language chosen to implement the algorithm
• The quality of the compiler
• The speed of the computer on which the algorithm is to be
executed
(TIME) EFFICIENCY OF AN ALGORITHM
Worst case efficiency
is the maximum number of steps that an algorithm can take for any
collection of data values.
Best case efficiency
is the minimum number of steps that an algorithm can take for any
collection of data values.
Average case efficiency
is the efficiency averaged over all possible inputs
Example:
Consider: Search for an element in a list
• Best case search when item at beginning
• Worst case when item at end
• Average case somewhere between
If the input has size n, efficiency will be a function of n
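The search example above can be sketched as follows (a minimal sketch added here; the slides give no code for it), counting how many comparisons a linear search makes in the best and worst case:

```python
def linear_search(items, target):
    """Return (index, comparisons) for target in items; index is -1 if absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 4, 1]
# Best case: item at the beginning -> 1 comparison
print(linear_search(data, 7))   # (0, 1)
# Worst case: item at the end -> n comparisons
print(linear_search(data, 1))   # (4, 5)
```

For input size n, the comparison count ranges from 1 (best case) to n (worst case), with the average somewhere between.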
MEASURING EFFICIENCY
Simplified analysis can be based on:
• Number of arithmetic operations performed
• Number of comparisons made
• Number of times through a critical loop
• Number of array elements accessed
• etc
Three algorithms for computing the sum 1 + 2 + . . . + n for an integer n > 0
MEASURING EFFICIENCY

Java code for algorithms
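The code figure itself is not reproduced above; below is a comparable sketch in Python (the original slide used Java), assuming the three standard variants: a single loop, a nested loop that adds 1 repeatedly, and the closed-form formula n(n+1)/2.

```python
def sum_loop(n):
    # Algorithm A: one loop -- about n additions
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_nested(n):
    # Algorithm B: nested loops -- about n*(n+1)/2 additions of 1
    total = 0
    for i in range(1, n + 1):
        for _ in range(i):
            total += 1
    return total

def sum_formula(n):
    # Algorithm C: closed form -- a constant number of operations
    return n * (n + 1) // 2

print(sum_loop(100), sum_nested(100), sum_formula(100))   # 5050 5050 5050
```

All three return the same answer, but the number of basic operations they perform differs dramatically as n grows.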


MEASURING EFFICIENCY

The number of basic operations required by the algorithms


A SIMPLE EXAMPLE: ANALYSIS OF SUM

// Input: int A[N], array of N integers
// Output: sum of all numbers in array A
int Sum(int A[], int N)
{
    int s = 0;                    // 1: executed once
    for (int i = 0; i < N; i++)   // 2: i=0 once; 3: test i<N and 4: i++ once per iteration
    {
        s = s + A[i];             // 5: access A[i], 6: add, 7: assign -- once per iteration
    }
    return s;                     // 8: executed once
}

Operations 1, 2 and 8 execute once; operations 3, 4, 5, 6 and 7 execute once
per each of the N iterations of the for loop.
Total: 5N + 3
The complexity function of the algorithm is: f(N) = 5N + 3
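To sanity-check the count, a small Python sketch (an illustration added here, not part of the slides) that tallies the basic operations of Sum under the same model and compares against 5N + 3:

```python
def sum_and_count(A):
    """Sum the list A, counting basic operations as in the slide's tally."""
    ops = 0
    s = 0
    ops += 1                   # s = 0 (once)
    i = 0
    ops += 1                   # i = 0 (once)
    while i < len(A):
        ops += 1               # test i < N (once per iteration, per the slide's model)
        s = s + A[i]
        ops += 3               # access A[i], add, assign (once per iteration)
        i = i + 1
        ops += 1               # i++ (once per iteration)
    ops += 1                   # return s (once)
    return s, ops

values = [4, 8, 15, 16, 23]
total, ops = sum_and_count(values)
print(total, ops, 5 * len(values) + 3)   # 66 28 28
```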
Asymptotic Notation
− The notations we use to describe the asymptotic running time of an
algorithm are defined in terms of functions whose domains are the set
of natural numbers N = {0, 1, 2, …}. Such notations are convenient
for describing the worst-case running-time function T (n), which
usually is defined only on integer input sizes.
− We will use asymptotic notation primarily to describe the running
times of algorithms
Order of growth
− The running time of an algorithm increases with the size of the input, in the
limit as the size of the input increases without bound
Growth of a function
− means that as we increase the input size (the value of n), the function
grows rapidly.
Big “O” Notation
Definition: a function f(n) is O(g(n)) if there exist constants c and
n0 such that for all n >= n0: f(n) <= c·g(n).
− The notation is often confusing: f = O(g) is read "f is big-oh of g."
− Generally, when we see a statement of the form f(n) = O(g(n)):
− f(n) is the formula that tells us exactly how many operations the
function/algorithm in question will perform when the problem size
is n.
− g(n) is like an upper bound for f(n). Within a constant factor, the
number of operations required by your function is no worse than
g(n).
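As a concrete check of the definition, take the function f(N) = 5N + 3 derived earlier, with witnesses c = 6 and n0 = 3 chosen here for illustration: 5n + 3 <= 6n for all n >= 3, so 5n + 3 is O(n). The sketch below verifies the inequality numerically over a range of n:

```python
def f(n):
    return 5 * n + 3

c, n0 = 6, 3   # witness constants chosen for illustration
# The definition requires f(n) <= c*g(n) for all n >= n0, with g(n) = n.
print(all(f(n) <= c * n for n in range(n0, 1000)))   # True
```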
Big “O” Notation
Why is this useful?
– We want our algorithms to be scalable. Often, we write
programs and test them on relatively small inputs. Yet, we
expect a user to run our program with larger inputs.
Running-time analysis helps us predict how efficient our
program will be in the 'real world'.
RUN-TIME COMPLEXITY TYPES (BIG-O
NOTATION TYPES)
• Constant time: f(n) = C
An algorithm is said to run in constant time when its run-time does not
depend on the input data size (n). This means that the algorithm/operation
will always take the same amount of time regardless of the number of
elements we’re working with. For example, accessing the first element
of a list is always O(1) regardless of how big the list is.
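For instance (a minimal sketch of the list-access example):

```python
short_list = [1, 2, 3]
long_list = list(range(1_000_000))

# Indexing a Python list is O(1): the same single operation
# regardless of the list's length.
print(short_list[0], long_list[0])   # 1 0
```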
• Logarithmic time: f(n) = log n
Algorithms with logarithmic time complexity reduce the input data size
in each step of the operation. Usually, binary trees and binary search
operations have O(log n) as their time complexity.
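A binary search sketch (a standard implementation, added here for illustration) halves the remaining search range at each step, hence O(log n):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the remaining range each step
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 23, 38, 56], 23))   # 4
```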
• Linear time: f(n) = n
An algorithm is said to have linear time complexity when
the run-time is directly and linearly proportional to the size of
the input data. This is the best possible time complexity when
the algorithm has to examine all the items in the input data.
For example:
for value in data:
    print(value)
Linear search is an example of such an operation, since the
iteration over the list is O(n).
• Quasilinear time: f(n) = n log n
Each operation on the input data has a logarithmic
time complexity. Commonly seen in optimized sorting
algorithms such as merge sort, timsort and heapsort.
In merge sort, the input data is broken down into several
sub-lists until each sub-list consists of a single element,
and then the sub-lists are merged into a sorted list. This
gives us a time complexity of O(n log n).
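The merge-sort description above can be sketched as follows (a standard top-down implementation, added for illustration):

```python
def merge_sort(items):
    """Sort a list in O(n log n) time by splitting and merging."""
    if len(items) <= 1:               # a single element is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # break the input into sub-lists...
    right = merge_sort(items[mid:])
    merged = []                       # ...then merge them back in order
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```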
• Quadratic time: f(n) = n²
An algorithm is said to have quadratic time complexity when the time it
takes to perform an operation is proportional to the square of the number of
items in the collection. This occurs when the algorithm needs to perform a
linear time operation for each item in the input data. Bubble sort is
O(n²). For example, a loop within a loop.
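A sketch of such a loop within a loop (counting all ordered pairs here; the inner loop runs n times for each of the n outer iterations, giving n² steps):

```python
def count_pairs(data):
    """O(n^2): for each item, loop over every item again."""
    count = 0
    for _ in data:          # outer loop: n iterations
        for _ in data:      # inner loop: n iterations per outer iteration
            count += 1
    return count

print(count_pairs([1, 2, 3, 4]))   # 16 = 4^2
```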
• Exponential time: f(n) = bⁿ
An algorithm is said to have exponential time complexity when the
growth doubles with each addition to the input data set. This kind of time
complexity is usually seen in brute-force algorithms. For example, the
recursive Fibonacci algorithm has O(2ⁿ) time complexity.
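The recursive Fibonacci algorithm mentioned above, as a sketch (the naive version, whose call tree roughly doubles with each increase in n, giving O(2ⁿ)):

```python
def fib(n):
    """Naive recursive Fibonacci: exponential time, O(2^n)."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)   # two recursive calls per level

print(fib(10))   # 55
```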
• Factorial time: f(n) = n!
An algorithm is said to have factorial time complexity
when every single permutation of a collection is
computed, and hence the time it takes to
perform an operation is the factorial of the number of items
in the collection. The Travelling Salesman Problem and
Heap’s algorithm (generating all possible
permutations of n objects) have O(n!) time complexity.
Disadvantage: it is very slow.
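A sketch of factorial growth (using a simple recursive permutation generator rather than Heap's algorithm specifically; any such generator produces n! results for n items):

```python
def permutations(items):
    """Return all permutations of items: n! results for n items."""
    if len(items) <= 1:
        return [items]
    result = []
    for i, x in enumerate(items):
        rest = items[:i] + items[i + 1:]   # everything except x
        for p in permutations(rest):
            result.append([x] + p)
    return result

perms = permutations([1, 2, 3, 4])
print(len(perms))   # 24 = 4!
```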
BIG-OH NOTATION: FEW EXAMPLES
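The example slides themselves are images and not reproduced here; a few representative worked examples (standard ones, added for this summary), each with a numeric check of the definition:

```python
# f(n) = 3n + 2      is O(n):    3n + 2 <= 4n        for n >= 2
# f(n) = 10n^2 + 4n  is O(n^2):  10n^2 + 4n <= 11n^2 for n >= 4
# f(n) = 2^n + n^2   is O(2^n):  2^n + n^2 <= 2*2^n  for n >= 4
checks = [
    all(3 * n + 2 <= 4 * n for n in range(2, 200)),
    all(10 * n * n + 4 * n <= 11 * n * n for n in range(4, 200)),
    all(2 ** n + n * n <= 2 * 2 ** n for n in range(4, 200)),
]
print(checks)   # [True, True, True]
```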
Big “Ω” Notation
Definition: a function f(n) is Ω(g(n)) if there exist constants c and
n0 such that for all n >= n0: f(n) >= c·g(n).
− The notation is often confusing: f = Ω(g) is read "f is big-omega
of g."
− Generally, when we see a statement of the form f(n) = Ω(g(n)):
g(n) is like a lower bound for f(n). Within a constant factor, the
number of operations required by your function is at least g(n).
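For example, a check of the Ω definition with f(n) = 5n + 3, using witnesses c = 5 and n0 = 1 chosen here for illustration: 5n + 3 >= 5n for all n >= 1, so 5n + 3 is Ω(n):

```python
def f(n):
    return 5 * n + 3

c, n0 = 5, 1   # witness constants chosen for illustration
# Omega requires f(n) >= c*g(n) for all n >= n0, with g(n) = n.
print(all(f(n) >= c * n for n in range(n0, 1000)))   # True
```

Together with the earlier O(n) bound, this shows 5n + 3 grows exactly linearly.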