Algorithm, Notation, Performance Analysis


Subject Code - U20CBT201
Subject Title - Data Structures and Algorithms
Course Objectives
• To understand performance analysis of an
algorithm
• To learn linear and non-linear data structures
• To understand sorting, searching and
hashing algorithms
• To learn file organization and accessing
methods
• Course Outcomes
After completion of the course, the students will be able to
– CO1 - Understand the usage and analysis of algorithms in
computing.
– CO2 - Implement and apply linear data structures to solve
various problems
– CO3 - Represent and apply non-linear data structures to solve
real time problems
– CO4 - Develop and analyse algorithms for sorting and searching
data organized in linear and non- Linear data structures
– CO5 - Understand various file organization and accessing
methods
UNIT I Concepts Of Algorithm And Data Organisation (9 Hrs)
• Algorithm specification – Recursion - Performance
analysis - Asymptotic Notation - The Big-O - Omega
and Theta notation - Programming Style - Refinement
of Coding - Time-Space Trade Off – Testing - Data
Abstraction
UNIT II Linear Data Structure (9 Hrs)
• Array - Stack - Queue - Linked-list and its types -
Various Representations - Operations & Applications
of Linear Data Structures
UNIT III Non-linear Data Structure (9 Hrs)
• Trees - Binary Tree - Threaded Binary Tree - Binary Search
Tree – B-Tree - B+ Tree - AVL Tree - Splay Tree. Graphs:
Basic Terminologies - Directed – Undirected - Various
Representations - Operations - Graph search and
traversal algorithms - complexity analysis - Applications
of Non-Linear Data Structures.
UNIT IV Searching And Sorting On Various Data Structures (9 Hrs)
• Sequential Search - Binary Search - Comparison Trees -
Breadth First Search - Depth First Search - Insertion Sort -
Selection Sort - Shell Sort - Divide and Conquer Sort -
Merge Sort - Quick Sort - Heapsort - Introduction to Hashing
UNIT V File Concepts (9 Hrs)
• File Organisation – Sequential – Direct -
Indexed Sequential - Hashed and various types
of accessing schemes.
Text Books
1. E. Horowitz, S. Sahni, S. Anderson-Freed, “Fundamentals of Data
Structures”, Universities Press, Second Edition, 2008.
2. A. V. Aho, J. E. Hopcroft, J. D. Ullman, “Data Structures and
Algorithms”, Pearson, First Edition, 2003.
Reference Books
3. Donald E. Knuth, “The Art of Computer Programming, Volume 1:
Fundamental Algorithms”, Dorling Kindersley Pvt Ltd, Third Edition, 1997.
4. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford
Stein, “Introduction to Algorithms”, The MIT Press, Third Edition, 2009.
5. Pat Morin, “Open Data Structures: An Introduction (Open Paths to
Enriched Learning)”, UBC Press, 2013.
Web Resources
• https://www.tutorialspoint.com/data_structures_algorithms/index.htm
• https://nptel.ac.in/courses/106/102/106102064/
• https://www.geeksforgeeks.org/data-structures/
• https://www.javatpoint.com/data-structure-tutorial
• Algorithm specification
• An algorithm is defined as a finite set of
instructions that, if followed, performs a
particular task.
• An algorithm can also be described as a set of
rules/instructions that define, step by step, how a task
is to be carried out in order to obtain the expected
results.
All algorithms must satisfy the following criteria
• Input - An algorithm has zero or more inputs, taken
or collected from a specified set of objects.
• Output - An algorithm has one or more outputs
having a specific relation to the inputs.
• Definiteness - Each step must be clearly defined;
each instruction must be clear and unambiguous.
• Finiteness - The algorithm must always finish or
terminate after a finite number of steps.
• Effectiveness - All operations to be accomplished
must be sufficiently basic that they can be carried out
exactly and in a finite amount of time.
• Clear and Unambiguous: The algorithm should be clear and
unambiguous. Each of its steps should be clear in all
aspects and must lead to only one meaning.
• Well-Defined Inputs: If an algorithm takes inputs, those
inputs should be well defined.
• Well-Defined Outputs: The algorithm must clearly define
what output will be produced, and that output should be
well defined as well.
• Finiteness: The algorithm must be finite, i.e. it should not
end up in an infinite loop or the like.
• Feasible: The algorithm must be simple, generic and
practical, such that it can be executed with the
available resources. It must not rely on some future
technology.
• Language Independent: The algorithm designed
must be language-independent, i.e. it must consist of
plain instructions that can be implemented in any
language, and yet the output will be the same, as
expected.
• Advantages of Algorithms
– It is easy to understand.
– Algorithm is a step-wise representation of a
solution to a given problem.
– In an algorithm, the problem is broken down into
smaller pieces or steps; hence it is easier for the
programmer to convert it into an actual program.
• Disadvantages of Algorithms
– Writing an algorithm takes a long time, so it is
time-consuming.
– Branching and looping statements are difficult to
show in algorithms.
• We can depict an algorithm in many ways.
• Natural language: describe the algorithm in a natural
language such as English.
• Flow charts: graphic representations called flowcharts,
suitable only if the algorithm is small and
simple.
• Pseudo code: avoids most issues of ambiguity, with no
particular commitment to the syntax of any
programming language.
ALGORITHMIC NOTATION
1. Format conventions
The basic format conventions used in the
formulation of algorithms are illustrated below.
• Example: Algorithm GREATEST. This
algorithm finds the largest algebraic element
of vector A, which contains N elements, and
places the result in MAX. I is used to subscript
A.
• In the above algorithm, step 1 is executed first. If vector
A is empty, the algorithm terminates; otherwise, step 2
is performed.
• In step 2, MAX is initialized to the value of A[1] and
the subscript variable I is assigned the value 2.
• Step 3 terminates the algorithm if we have already
tested the last element of A;
• otherwise step 4 is performed. In step 4, MAX is
compared with the next element of the vector. If MAX is
less than that element, MAX is assigned this new value;
if the test fails, no reassignment takes place.
The subscript is then incremented and the testing is
continued.
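
• The notation-form steps of Algorithm GREATEST appear only as a slide image;
a minimal C sketch of the same logic (0-based indexing, with the function and
variable names chosen here for illustration) is:

#include <stdio.h>

/* Returns the largest element of A[0..n-1]; mirrors Algorithm GREATEST.
   Assumes n >= 1 (step 1 of the algorithm rejects an empty vector). */
int greatest(const int A[], int n)
{
    int max = A[0];                  /* Step 2: MAX <- A[1], I <- 2          */
    for (int i = 1; i < n; i++)      /* Step 3: stop after the last element  */
        if (max < A[i])              /* Step 4: compare MAX with A[I]        */
            max = A[i];              /*         reassign when MAX is smaller */
    return max;
}

int main(void)
{
    int A[] = { 3, 9, 2, 7 };
    printf("%d\n", greatest(A, 4));  /* prints 9 */
    return 0;
}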
Format conventions (with examples from Algorithm GREATEST):
• Name of the algorithm: Every algorithm is given an identifying name
written in capital letters. Example: GREATEST is the name of the
algorithm.
• Introductory comment: The algorithm name is followed by a brief
description of the task the algorithm performs; the description gives
the names and types of the variables used in the algorithm.
Example: "finds the largest algebraic element of vector A".
• Steps: The algorithm is made up of a sequence of numbered steps, each
beginning with a phrase enclosed in square brackets that describes the
step. The step then describes the actions to be executed or the tasks
to be performed. Statements within a step are executed in left-to-right
order.
• Comment: An algorithm step terminates with a comment enclosed in
parentheses, which helps better understanding of the step. Comments
specify no action. Example: [Examine all elements of vector].
2. Statements and control structures
2.1. Assignment statement:
• The assignment statement is indicated by
placing an arrow (←) between the expression on the
right-hand side and the variable receiving the value.
2.2. If statement: It is a control structure that
branches according to whether the condition mentioned is TRUE.
General form 1:
    if condition
    then
        -----------
        -----------
If the condition is TRUE, the statements after then are
executed; if it is FALSE, the statements following the if
statement are executed.
General form 2:
    if condition
    then
        -----------
        -----------
    else
        -----------
If the condition is TRUE, the statements after then are
executed; if it is FALSE, the statements after the else
clause are executed.
2.3. Case statement: It is used when a choice
among several alternatives must be made.
General form:
    Select case (expression)
    Case value 1:
    Case value 2:
    ...
    Case value n:
    Default:
The expression is evaluated and its value is compared
with that of each case. Control branches to the case that
matches; if the expression value does not match any
case, a branch is made to the default case.
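
• A rough C equivalent of the select-case construct is the switch statement;
a small sketch (the case values and actions are chosen for illustration):

#include <stdio.h>

int main(void)
{
    int expression = 2;                     /* the expression is evaluated once */
    switch (expression)
    {
    case 1:  printf("one\n");   break;      /* taken when expression == 1       */
    case 2:  printf("two\n");   break;      /* taken when expression == 2       */
    default: printf("other\n"); break;      /* taken when no case value matches */
    }
    return 0;
}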
2.4. Repeat statements: For easy control of iteration
(looping) a repeat statement has been provided. This
statement has one of the following forms:
• Type 1: Repeat for INDEX = sequence.
  INDEX is a variable used as a loop counter; it takes each value of the
  sequence in turn.
• Type 2: Repeat while logical expression.
  Used to repeat a step until the given logical expression is false.
  The evaluation and testing of the logical expression is performed at
  the beginning of the loop.
• Type 3: Repeat for INDEX = sequence while logical expression.
  Combines types 1 and 2; it repeats a step for a sequence whose values
  are taken successively by INDEX, until the logical expression is false.
• Eg:
    Repeat while true
        read array A[I]
        if there is no more data
        then exit
        else
            I ← I + 1
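
• In C, this repeat-while loop corresponds to a while loop whose condition is
tested at the top of each pass; a sketch (the array size and the use of scanf
to detect the end of the data are assumptions):

#include <stdio.h>

int main(void)
{
    int A[100], i = 0;
    /* Repeat while true: read A[I]; exit when there is no more data */
    while (i < 100 && scanf("%d", &A[i]) == 1)
        i = i + 1;
    return 0;
}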
2.5. Go to and exit loop statements: These
statements cause an unconditional transfer of control.
• Go to statement: causes an unconditional transfer of
  control to the step referenced, regardless of the other
  statements.
  Eg:  Label: ----
              ----
       Go to Label
• Exit loop statement: causes the immediate
  termination of the loop.
  Eg:  if I < 10
       then Exit   (causes the loop to exit when the value
                    read into I is less than 10)
2.6. Exit statement: This is used to terminate an algorithm. eg:
[Finished] Exit

2.7. Variable names:
• A variable is an entity that possesses a value, and its name is
chosen to be meaningful of the value it holds.
• A variable name must always begin with a letter followed by
characters which may be chosen from a set of possible
characters including letters, numeric digits and special
characters.
• No blank spaces are allowed and all letters are capitalized.
Example: MAX_VALUE
3. Data Structures
3.1. Arrays:
• Array elements are denoted by ARRAY
[DIM1, DIM2, ..., DIMN], where ARRAY is the
array name and DIM1 through DIMN are its
subscripts.
• Subscripts are enclosed in square brackets
and denote the location of an array element.
3.2. Dynamic storage
• A block of storage contains certain fields, such as a data or
info field (INFO, DATA) and a link or pointer field (PTR,
LINK).
• The allocation symbol is a double arrow (⇐), indicating that a pointer
variable is to be given the address of an available block
of storage (e.g. X ⇐ NODE).
• Referencing a field of a given block is done by naming
the field and following this by the pointer variable name
enclosed in parentheses. INFO(X) ← 'SAMPLE'
sets the information field of node X to the given string.
• RESTORE(X) returns the block pointed to by X to the
available storage area.
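
• In C, the same ideas map onto a struct, malloc and free; a minimal sketch
(the node layout and field sizes are assumptions):

#include <stdlib.h>
#include <string.h>

struct node {
    char info[20];          /* INFO field of the block       */
    struct node *link;      /* LINK / PTR field of the block */
};

int main(void)
{
    struct node *x = malloc(sizeof *x);   /* X <= NODE : allocate a block        */
    if (x == NULL)
        return 1;
    strcpy(x->info, "SAMPLE");            /* INFO(X) <- 'SAMPLE'                 */
    x->link = NULL;                       /* LINK(X) <- NULL                     */
    free(x);                              /* RESTORE(X): return block to storage */
    return 0;
}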
4. Arithmetic operations and expressions:
• Algorithmic notations include the standard
binary and unary operators. Arithmetic
Expressions may contain variables which have
been assigned numeric values and are
evaluated.
5. Strings and String Operations:
• A string is written as capitalized characters enclosed in
single quotes, e.g. 'VALID STRING'.
• The null or empty string is denoted by two adjacent
quote marks ''.
6. Relations and Relational Operators:
• The relational operators are <, >, <=, >= and !=.
• A relation between variables and expressions can be
evaluated only if the variables have been assigned
values.
• A relation evaluates to a logical value, i.e. it has
one of two possible values, True or False.
• Eg: Z <= 9/3 + 2. If Z = 10, the RHS evaluates to 3 + 2 = 5,
so the relation becomes 10 <= 5, which is False.
7. Logical Operations and Expressions:
• Algorithmic notation also includes the standard
logical operators NOT, AND and OR.
• Precedence is given to the operators in that order:
NOT is evaluated first, then AND, then OR.
8. Input and Output:
• In algorithmic notation, input is obtained and
placed in a variable by the statement READ
(variable name).
• Output is produced with WRITE (literal or variable name).
Eg: Repeat for I = 1 to 10
        Read A[I]
        Write A[I]
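
• A C rendering of the READ/WRITE example above (the array size is taken from
the loop bounds):

#include <stdio.h>

int main(void)
{
    int A[10];
    for (int i = 0; i < 10; i++)        /* Repeat for I = 1 to 10 */
    {
        if (scanf("%d", &A[i]) != 1)    /* Read A[I]              */
            break;
        printf("%d\n", A[i]);           /* Write A[I]             */
    }
    return 0;
}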
9. Sub algorithms:
• A sub algorithm is an independent component of an algorithm and
it is provided to perform some computation when required under
the control of the main algorithm.
• A sub algorithm may invoke another sub algorithm, and it may also
invoke itself recursively.
• Eg: Function FACTORIAL (N)
• This function computes N factorial recursively, where N is assumed
to be a non-negative integer.
• [Apply recursive definition]
• if N = 0
• then Return (1)
• else Return (N * FACTORIAL (N-1))
• In algorithmic notation two types of sub algorithms are available:
• 1. Functions 2. Procedures
9.1. Functions:
• A function is used when one wants a single value returned to the calling routine.
• Transfer of control and returning of the value are accomplished by
Return (value).
• General form:
• Function NAME (PARM1, PARM2, ..., PARMn)
• Eg:
• Function AVERAGE (VAL1, VAL2).
• The purpose of this function is to compute the average of two values. AV
is of real type.
• 1. [Compute average] AV ← (VAL1 + VAL2)/2.0
• 2. [Return result] Return (AV)
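
• A C sketch of the AVERAGE function (the float parameter types are an
assumption, matching "AV is of real type"):

/* Returns the average of two values: AV <- (VAL1 + VAL2) / 2.0 */
float average(float val1, float val2)
{
    float av = (val1 + val2) / 2.0f;   /* 1. [Compute average] */
    return av;                         /* 2. [Return result]   */
}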
9.2. Procedures:
• A procedure is similar to a function, but it returns no value explicitly.
• A procedure is invoked differently; where there are parameters, the procedure
returns its results through the parameters.
• Eg:
• Procedure DIVIDE (DIVIDEND, DIVISOR, QUOTIENT, REMAINDER)
• This procedure divides the dividend by the divisor, giving the quotient and
the remainder.
• 1. [Perform integer division] QUOTIENT ← DIVIDEND / DIVISOR
• 2. [Determine remainder] REMAINDER ← DIVIDEND - QUOTIENT *
DIVISOR
• 3. [Return to the point of call] Return
• No value is returned explicitly, but the quotient and remainder are returned
through two of the parameters.
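
• Since C has no separate procedure type, the same effect is obtained with a
void function that returns its results through pointer parameters; a sketch:

/* Divides dividend by divisor, returning the quotient and remainder
   through the last two (pointer) parameters. Assumes divisor != 0. */
void divide(int dividend, int divisor, int *quotient, int *remainder)
{
    *quotient  = dividend / divisor;              /* 1. [Perform integer division]    */
    *remainder = dividend - *quotient * divisor;  /* 2. [Determine remainder]         */
}                                                 /* 3. [Return to the point of call] */

• A call such as divide(17, 5, &q, &r) leaves q = 3 and r = 2.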
9.3. Parameters
• In an invocation such as A ← AVERAGE(X, Y, Z), the names
X, Y and Z are the arguments supplied by the caller, while
VAL1, VAL2 and VAL3 are the corresponding parameters
of the function AVERAGE.
• Parameters may be passed by value or by
reference.
Example: Algorithm for calculating the factorial value of a
number
• Step 1: Start
• Step 2: Declare variables n, fact, i
• Step 3: Read number from user
• Step 4: Initialize variables fact = 1 and i = 1
• Step 5: Repeat while i <= number
  – 5.1 fact = fact * i
  – 5.2 i = i + 1
• Step 6: Print fact
• Step 7: Stop
Pseudo Code for Factorial of a given number
READ number
Fact = 1
i = 1
WHILE i <= number
    Fact = Fact * i
    i = i + 1
ENDWHILE
WRITE Fact
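
• The same pseudo code translated to C (unsigned long is used here to delay
overflow for larger inputs):

#include <stdio.h>

int main(void)
{
    int number, i = 1;
    unsigned long fact = 1;
    if (scanf("%d", &number) != 1)   /* READ number       */
        return 1;
    while (i <= number)              /* WHILE i <= number */
    {
        fact = fact * i;
        i = i + 1;
    }
    printf("%lu\n", fact);           /* WRITE Fact        */
    return 0;
}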
Recursive Algorithms
• A recursive algorithm calls itself, generally passing the
return value as a parameter to the algorithm again. This
parameter is the input, while the return value is
the output.
• A recursive algorithm is a method of simplification
that divides the problem into sub-problems of the same
nature.
• The result of one recursion is treated as the input for the next
recursion. The repetition proceeds in a self-similar fashion.
• The algorithm calls itself with smaller input values and
obtains the results by performing the operations on
these smaller values.
• Examples of recursive algorithms
Generation of factorials
Fibonacci number series
Example: Writing a factorial function using recursion
int factorialA(int n)
{
    if (n <= 1)                      /* base case: 0! = 1! = 1 */
        return 1;
    return n * factorialA(n - 1);    /* recursive case */
}
Performance Analysis of an algorithm
• If we want to go from city "A" to city "B", there can be many ways of
doing this.
• We can go by flight, by bus, by train and also by bicycle. Depending on
the availability and convenience, we choose the one which suits us.
• Similarly, in computer science, there are multiple algorithms to solve a
problem.
• When we have more than one algorithm to solve a problem, we need
to select the best one.
• Performance analysis helps us to select the best algorithm from
multiple algorithms to solve a problem.
• When there are multiple alternative algorithms to solve a problem, we
analyze them and pick the one which is best suitable for our
requirements. The formal definition is as follows...
• Performance analysis of an algorithm is the process of
making an evaluative judgement about that
algorithm.
• It can also be defined as follows...
• Performance analysis of an algorithm means
predicting the resources which are required
by the algorithm to perform its task.
• Generally, the performance of an algorithm
depends on the following elements...
– Whether that algorithm is providing the exact
solution for the problem?
– Whether it is easy to understand?
– Whether it is easy to implement?
– How much space (memory) it requires to solve the
problem?
– How much time it takes to solve the problem? Etc.,
• Analysis of the efficiency of an algorithm can be performed at two
different stages, before implementation and after implementation,
as follows.
• A priori analysis −
  – This is the theoretical analysis of an algorithm.
  – Efficiency of the algorithm is measured by assuming that all other factors,
    e.g. processor speed, are constant and have no effect on the implementation.
• A posteriori analysis −
  – This is the empirical analysis of an algorithm. The chosen algorithm
    is implemented in a programming language.
  – The chosen algorithm is then executed on the target computer machine. In this
    analysis, actual statistics such as running time and space needed are collected.
• Algorithm analysis deals with the execution or running time of the
various operations involved. The running time of an operation can be
measured as the number of computer instructions executed per operation.
• Algorithm Complexity
• Suppose X is an algorithm and N is the
size of the input data; the time and space used by the
algorithm X are the two main factors which determine the
efficiency of X.
• Time Factor − Time is measured by counting
the number of key operations, such as comparisons in a sorting
algorithm.
• Space Factor − Space is measured by
counting the maximum memory space required by the
algorithm.
• The complexity of an algorithm, f(N), gives the running
time and/or storage space needed by the algorithm in
terms of N, the size of the input data.
• Performance analysis of an algorithm is
performed by using the following measures...
• Space required to complete the task of that
algorithm (Space Complexity). It includes
program space and data space
• Time required to complete the task of that
algorithm (Time Complexity)
• Space needed by an algorithm is equal to the sum of
the following two components
• A fixed part, which is the space required to store certain
data and variables (i.e. simple variables and
constants, program size, etc.) that does not
depend on the size of the problem.
• A variable part is a space required by variables,
whose size is totally dependent on the size of the
problem. For example, recursion stack space,
dynamic memory allocation etc.
• Space complexity?
• When we design an algorithm to solve a problem, it
needs some computer memory to complete its
execution. For any algorithm, memory is required for
the following purposes...
• To store program instructions.
• To store constant values.
• To store variable values.
• And for a few other things like function calls, jumping
statements, etc.
• Space complexity of an algorithm can be
defined as follows...
• Total amount of computer memory required
by an algorithm to complete its execution is
called as space complexity of that algorithm.
• Auxiliary Space
• Auxiliary space is the extra or temporary
space used by an algorithm during its
execution.
• Memory Usage during program execution
• Generally, when a program is under execution it uses the
computer memory for THREE reasons. They are as follows...
• Instruction Space: It is the amount of memory used to store
compiled version of instructions.
• Environmental Stack: It is the amount of memory used to
store information of partially executed functions at the time
of function call.
• Data Space: used to store data, variables, and constants
which are stored by the program and it is updated during
execution.
• Note
– When we want to perform analysis of an
algorithm based on its Space complexity,
– we consider only Data Space and ignore
Instruction Space as well as Environmental Stack.
– we calculate only the memory required to store
Variables, Constants, Structures, etc.,
• The space complexity S(P) of any algorithm P is
S(P) = C + SP(I)
• where C is the fixed part and SP(I) is the
variable part of the algorithm, which depends
on the instance characteristic I.
• Following is a simple example that illustrates the
concept.
• Algorithm
  – SUM(P, Q)
  – Step 1 - START
  – Step 2 - R ← P + Q + 10
  – Step 3 - STOP
• Here we have three variables (P, Q and R) and one constant,
hence S(P) = 3 + 1 = 4.
• The actual space depends on the data types of the given constant
and variables, and is multiplied accordingly.
• To calculate the space complexity,
• we must know the memory required to store values of different
data types (this depends on the compiler).
• For example, a typical C compiler on a 16-bit system requires the following...
• 2 bytes to store an Integer value.
• 4 bytes to store a Floating Point value.
• 1 byte to store a Character value.
• 6 (or) 8 bytes to store a Double value.
• Consider the following piece of code...

• Example 1
int square(int a)
{
return a*a;
}
• In the above piece of code, it requires 2 bytes of memory
to store the variable 'a' and another 2 bytes of memory
for the return value.
• That means it requires a total of 4 bytes of memory to
complete its execution.
• And this 4 bytes of memory is fixed for any input value of
'a'. This space complexity is said to be Constant Space
Complexity.
• If any algorithm requires a fixed amount of space for all
input values then that space complexity is said to be
Constant Space Complexity.
• Consider the following piece of code...

• Example 2
int sum(int A[ ], int n)
{
int sum = 0, i;
for(i = 0; i < n; i++)
sum = sum + A[i];
return sum;
}
• In the above piece of code it requires
  – 'n*2' bytes of memory to store the array variable 'A[ ]'
  – 2 bytes of memory for the integer parameter 'n'
  – 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
  – 2 bytes of memory for the return value.
• That means it requires a total of '2n+8' bytes of memory to
complete its execution.
• Here, the total amount of memory required depends on the
value of 'n'.
• As the value of 'n' increases, the space required also increases
proportionately. This type of space complexity is said to be Linear
Space Complexity.
• If the amount of space required by an
algorithm is increased with the increase of
input value, then that space complexity is said
to be Linear Space Complexity.
• Time complexity
• Every algorithm requires some amount of
computer time to execute its instruction to
perform the task. This computer time required is
called time complexity.
The time complexity of an algorithm can be
defined as follows...
• The time complexity of an algorithm is the total
amount of time required by an algorithm to
complete its execution.
• Time requirements can be defined as a numerical function
T(P) = C + TP(I)
• where C is the compile time, which is independent of the instance
characteristics of program P, and
• TP(I) is the run (execution) time of P
for a particular instance I.
• Definition: A program step is a syntactically or
semantically meaningful program segment whose
execution time is independent of the instance
characteristics.
• Methods to compute the step count
• 1. Introduce a variable count into the program
  (a sketch of this method appears after the step count table below).
• 2. Tabular method
  – Determine the total number of steps contributed
    by each statement as steps per execution × frequency
    (i.e. the number of times the statement is executed)
  – add up the contributions of all statements
• Example:
• Step count table for the program: recursive
function to sum a list of numbers
Statement                                s/e   Frequency   Total steps
float rsum(float list[ ], int n)          0        0            0
{                                         0        0            0
  if (n)                                  1       n+1          n+1
    return rsum(list, n-1) + list[n-1];   1        n            n
  return list[0];                         1        1            1
}                                         0        0            0
Total                                                          2n+2
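
• The first method (introducing a variable count) can be illustrated by
instrumenting the same rsum function: every executed step increments a
counter, and after rsum(list, n) the counter holds 2n+2, matching the table
(the global counter variable is an assumption of this sketch):

int count = 0;   /* global step counter */

float rsum(float list[], int n)
{
    count++;                                 /* step: the if (n) test      */
    if (n)
    {
        count++;                             /* step: the recursive return */
        return rsum(list, n - 1) + list[n - 1];
    }
    count++;                                 /* step: return list[0]       */
    return list[0];
}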
• Generally, the running time of an algorithm depends upon
the following...
1. Whether it is running on a single-processor machine or a
multi-processor machine.
2. Whether it is a 32-bit machine or a 64-bit machine.
3. The read and write speed of the machine.
4. The amount of time required by the algorithm to perform
arithmetic operations, logical operations, return values and
assignment operations etc.
5. The input data.
• When we calculate time complexity of an
algorithm, we consider only input data and
ignore the remaining things, as they are
machine dependent.
• We check only, how our program is behaving
for the different input values to perform all
the operations like Arithmetic, Logical, Return
value and Assignment etc.,
• Calculating Time Complexity of an algorithm
based on the system configuration is a very
difficult task because the configuration
changes from one system to another system.
• To solve this problem, we must assume a
model machine with a specific configuration.
• We can then calculate a generalized time
complexity according to that model machine.
• To calculate the time complexity of an algorithm, we need
to define a model machine. Let us assume a machine with
following configuration...
1. It is a Single processor machine
2. It is a 32 bit Operating System machine
3. It performs sequential execution
4. It requires 1 unit of time for Arithmetic and Logical
operations
5. It requires 1 unit of time for Assignment and Return value
6. It requires 1 unit of time for Read and Write operations
• we calculate the time complexity of following example code by using the
above-defined model machine...
• Consider the following piece of code...
• Example 1
int sum(int a, int b)
{
return a+b;
}
• In the above sample code, it requires 1 unit of time to calculate a+b and
1 unit of time to return the value.
• In total, it takes 2 units of time to complete its execution, and this does not
change based on the input values of a and b.
• For all input values, it requires the same amount of time, i.e. 2 units.
• If any program requires a fixed amount of
time for all input values then its time
complexity is said to be Constant Time
Complexity.
• Consider the following piece of code...
• Example 2
int sum(int A[], int n)
{
int sum = 0, i;
for(i = 0; i < n; i++)
sum = sum + A[i];
return sum;
}
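• The cost table for this code was shown as a slide image and is not reproduced
here; a plausible reconstruction, consistent with the '4n+4' total quoted
below, is:

Statement                 Cost   Repetition   Total
int sum = 0, i;            1         1          1
for(i = 0; i < n; i++)     2        n+1        2n+2
sum = sum + A[i];          2         n          2n
return sum;                1         1          1
Total                                          4n+4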
• In the above calculation
• Cost is the amount of computer time required for a single
execution of each line.
• Repetition is the number of times each line is executed.
• Total is the amount of computer time required by each line
over all of its repetitions.
• So the above code requires '4n+4' units of computer time to
complete the task. Here the exact time is not fixed; it
changes based on the value of n. If we increase the value of n, then
the time required also increases linearly.
• In total, it takes '4n+4' units of time to
complete its execution, and this is Linear Time
Complexity.
• If the amount of time required by an
algorithm is increased with the increase of
input value then that time complexity is said
to be Linear Time Complexity.
• How to determine the complexity: In general, how can you
determine the running time of a piece of code? The answer is that
it depends on what kinds of statements are used.

• Sequence of statements
– statement 1;
– statement 2;
– ...
– statement k;
– The total time is found by adding the times for all statements: total time
= time(statement 1) + time(statement 2) + ... + time(statement k)
– If each statement is "simple", then the time for each statement is
constant and the total time is also constant: O(1).
• If-Then-Else
– if (cond) then
– block 1 (sequence of statements)
– else
– block 2 (sequence of statements)
– end if;
– Here, either block 1 will execute, or block 2 will
execute, so the worst-case time is max(time(block 1), time(block 2)).
– If block 1 takes O(1) and block 2 takes O(N), the if-
then-else statement would be O(N).
• Loops
for I in 1 .. N loop
sequence of statements
end loop;
– The loop executes N times, so the sequence of
statements also executes N times.
– If we assume the statements are O(1), the total
time for the for loop is N * O(1), which is O(N)
overall.
• Nested loops
for I in 1 .. N loop
for J in 1 .. M loop
sequence of statements
end loop;
end loop
– The outer loop executes N times. Every time the outer loop
executes, the inner loop executes M times. As a result, the
statements in the inner loop execute a total of N * M times.
Thus, the complexity is O(N * M).
– Special case: when the stopping condition of the inner loop is J
< N instead of J < M, the total complexity of the two loops is O(N^2).
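
• A C version of the nested loop pattern, counting iterations to confirm that
the body runs N * M times (the values of N and M are chosen for illustration):

#include <stdio.h>

int main(void)
{
    int N = 4, M = 3, steps = 0;
    for (int i = 0; i < N; i++)        /* outer loop: N times           */
        for (int j = 0; j < M; j++)    /* inner loop: M times per outer */
            steps++;                   /* body executes N * M times     */
    printf("%d\n", steps);             /* prints 12, i.e. N * M         */
    return 0;
}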
• Statements with function/ procedure calls
– When a statement involves a function/ procedure call, the complexity
of the statement includes the complexity of the function/ procedure.
– Assume that you know that function/ procedure f takes constant
time, and that function/procedure g takes time proportional to
(linear in) the value of its parameter k.
– Then the statements below have the time complexities indicated.
– f(k) takes O(1); g(k) takes O(k).
– When a loop is involved, the same rule applies. For example:
for J in 1 .. N loop
g(J);
end loop;
– This has complexity O(N^2). The loop executes N times, and each
function/procedure call g(J) costs at most O(N).
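
• A C sketch of this rule (the helpers f and g are hypothetical, with the
running times assumed in the text):

void f(int k) { (void)k; }             /* constant time: O(1)           */

void g(int k)
{
    for (int i = 0; i < k; i++)        /* time proportional to k: O(k)  */
        ;
}

void caller(int N)
{
    for (int J = 1; J <= N; J++)       /* the loop runs N times         */
        g(J);                          /* each call costs O(J) <= O(N), */
}                                      /* so the total is O(N^2)        */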
Asymptotic Notation
• The main idea of asymptotic analysis is to have a
measure of the efficiency of algorithms that doesn't depend
on machine-specific constants, mainly because this
analysis doesn't require algorithms to be implemented
or the time taken by programs to be compared.
• It is often used to describe how the size of the input data
affects an algorithm's usage of computational resources.
• The running time of an algorithm is described as a function
of the input size n, for large n.
• Asymptotic analysis of an algorithm refers to
defining the mathematical framing of its run-time
performance.
• Usually, time required by an algorithm falls under
three types:
– Best Case − Minimum time required for program
execution.
– Average Case − Average time required for program
execution.
– Worst Case − Maximum time required for program
execution.
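
• Linear search gives a simple illustration of the three cases (a sketch; the
comparison counts assume the key is equally likely to be at any position):

/* Returns the index of key in A[0..n-1], or -1 if it is absent.
   Best case: key at A[0], 1 comparison (minimum time).
   Worst case: key absent or at A[n-1], n comparisons (maximum time).
   Average case: about n/2 comparisons. */
int linear_search(const int A[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (A[i] == key)
            return i;
    return -1;
}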
