
Definition of Algorithm

The word Algorithm means "a set of finite rules or instructions to be
followed in calculations or other problem-solving operations", or "a
procedure for solving a mathematical problem in a finite number of steps
that frequently involves recursive operations".

 Language Independent: The algorithm designed must be language-
independent, i.e. it must consist of plain instructions that can be
implemented in any programming language, and yet the output will be the
same, as expected.

Properties of Algorithm:
 Input: An algorithm takes zero or more well-defined inputs.
 Output: An algorithm produces at least one well-defined output.
 Definiteness: All instructions in an algorithm must be unambiguous,
precise, and easy to interpret. By referring to any of the instructions in an
algorithm one can clearly understand what is to be done. Every
fundamental operator in an instruction must be defined without any
ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in
all cases. Every instruction which contains a fundamental operator
must complete within a finite amount of time. Infinite loops or
recursive functions without base conditions do not possess finiteness.
 Effectiveness: An algorithm must be developed using very basic,
simple, and feasible operations, so that one can trace it out using just
paper and pencil.
In short, an algorithm:
 should terminate after a finite time;
 should produce at least one output;
 should take zero or more inputs;
 should be deterministic, meaning it gives the same output for the same
input;
 should be effective at every step, i.e. every step should do some work.
Advantages of Algorithms:
 An algorithm is easy to understand.
 An algorithm is a step-wise representation of a solution to a given
problem.
 In an algorithm, the problem is broken down into smaller pieces or steps;
hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm takes a long time, so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
To write an algorithm, the following things are needed as prerequisites:
1. The problem that is to be solved by the algorithm, i.e. a clear problem
definition.
2. The constraints of the problem that must be considered while solving the
problem.
3. The input to be taken to solve the problem.
4. The output expected when the problem is solved.
5. The solution to the problem, within the given constraints.
Then the algorithm is written with the help of the above parameters such that
it solves the problem.
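As a small illustration of these design steps, consider the classic beginner
problem of adding three numbers. The following is a minimal sketch in C
written for this note, not taken from the text above; the function name sum3
and the use of standard input/output are assumptions made for the example.

#include <stdio.h>

/* Inputs (step 3): three integers a, b, c.
   Output (step 4): their sum. */
int sum3(int a, int b, int c)
{
    return a + b + c;
}

int main(void)
{
    int a, b, c;
    /* read the three inputs */
    if (scanf("%d %d %d", &a, &b, &c) == 3)
        printf("%d\n", sum3(a, b, c));
    return 0;
}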
Time Complexity: The time complexity of an algorithm refers to the amount
of time required by the algorithm to execute and produce the result. This
covers normal operations, conditional if-else statements, loop statements, etc.
How to Calculate Time Complexity?
The time complexity of an algorithm is calculated by determining the
following 2 components:
 Constant time part: Any instruction that is executed just once comes in
this part. For example, input, output, if-else, switch, arithmetic operations,
etc.
 Variable time part: Any instruction that is executed more than once, say
n times, comes in this part. For example, loops, recursion, etc.
Therefore the time complexity of any algorithm P is T(P) = C + TP(I),
where C is the constant time part and TP(I) is the variable part of the
algorithm, which depends on the instance characteristic I.
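As a hedged illustration of this split (the function below, including the name
max_of, is an example constructed for this note, not from the text): the
statements outside the loop form the constant part C, while the loop body
contributes the variable part TP(n) that grows with the input size n.

/* Finds the largest of the n elements of a[] (assumes n >= 1).
   The statements outside the loop run once each: the constant part C.
   The loop body runs n - 1 times: the variable part TP(n). */
int max_of(int a[], int n)
{
    int best = a[0];              /* constant part */
    for (int i = 1; i < n; i++)   /* variable part, about n iterations */
        if (a[i] > best)
            best = a[i];
    return best;                  /* constant part */
}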
How to express an Algorithm?
1. Natural Language: Here we express the algorithm in natural English. An
algorithm written this way is often ambiguous and hard to follow.
2. Flow Chart: Here we express the algorithm by making a
graphical/pictorial representation of it. It is easier to understand than
natural language.
3. Pseudo Code: Here we express the algorithm in the form of annotations
and informative text written in plain English, which is very similar to
real code but, as it has no syntax like any of the programming
languages, can't be compiled or interpreted by the computer. It is the
best way to express an algorithm because it can be understood by even a
layman with some school-level knowledge.

Algorithm vs. Pseudocode:
 Algorithm: an organized, logical sequence of actions or an approach
towards a particular problem. A programmer implements an algorithm to
solve a problem. Algorithms are expressed using natural, verbal, but
somewhat technical annotations.
 Pseudocode: simply an expression of an algorithm in the form of
annotations and informative text written in plain English. It has no
syntax like any programming language and thus can't be compiled or
interpreted by the computer.
How to write a Pseudo-code?
1. Arrange the sequence of tasks and write the pseudocode accordingly.
2. Start the pseudocode with a statement that establishes the main goal or
aim. Example:
   This program will allow the user to check
   whether a number is even or odd.
3. Indent the statements the way if-else, for, and while blocks are indented
in a program, as this helps to comprehend the decision control and
execution mechanism. Indentation also improves readability to a great
extent. Example:
   if "1"
       print response
           "I am case 1"
   if "2"
       print response
           "I am case 2"

4. Use appropriate naming conventions. The human tendency is to follow
what we see. If a programmer goes through a pseudocode, his approach
will be the same as per it, so the naming must be simple and distinct.
5. Use appropriate sentence casing, such as CamelCase for methods,
upper case for constants and lower case for variables.
6. Elaborate everything which is going to happen in the actual code. Don't
make the pseudocode abstract.
7. Use standard programming structures such as 'if-then', 'for', 'while' and
'cases' the way we use them in programming.
8. Check whether all the sections of the pseudocode are complete, finite and
clear to understand and comprehend.
9. Don't write the pseudocode in a completely programmatic manner. It
needs to be simple to understand even for a layman or client, so don't
incorporate too many technical terms. (A complete example follows this
list.)
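Putting these rules together, a complete pseudocode for the even/odd check
mentioned above might look like this. This is a sketch written for
illustration; the exact wording and structure are assumptions, not taken from
the text.

   This program will allow the user to check
   whether a number is even or odd.

   READ number
   IF number MOD 2 = 0 THEN
       PRINT "The number is even"
   ELSE
       PRINT "The number is odd"
   ENDIF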

Advantages of Pseudocode

 Improves the readability of any approach. It's one of the best ways to
start implementing an algorithm.
 Acts as a bridge between the program and the algorithm or flowchart. It
also works as rough documentation, so one developer's program can be
understood easily when a pseudocode is written out. In industry,
documentation is essential, and that is where pseudocode proves vital.
 The main goal of pseudocode is to explain what exactly each line of a
program should do, hence making the code-construction phase easier for
the programmer.

What is Performance Analysis of an algorithm?

If we want to go from city "A" to city "B", there can be many ways of doing this: we can go by
flight, by bus, by train and also by bicycle. Depending on availability and convenience,
we choose the one which suits us. Similarly, in computer science, there are multiple
algorithms to solve a problem. When we have more than one algorithm to solve a problem,
we need to select the best one. Performance analysis helps us to select the best algorithm
from multiple algorithms to solve a problem.
When there are multiple alternative algorithms to solve a problem, we analyze them and
pick the one which is best suited to our requirements.

The formal definition is as follows:

Performance analysis of an algorithm is the process of making evaluative judgements
about algorithms.

It can also be defined as follows:

Performance analysis of an algorithm means predicting the resources required by
the algorithm to perform its task.

That means when we have multiple algorithms to solve a problem, we need to select a
suitable algorithm to solve that problem.
We compare algorithms which solve the same problem with each other, to select the
best one. To compare algorithms, we use a set of parameters such as the memory
required by the algorithm, its execution speed, how easy it is to understand, how
easy it is to implement, etc.

Generally, the performance of an algorithm depends on the following elements:

1. Whether the algorithm provides the exact solution for the problem?
2. Whether it is easy to understand?
3. Whether it is easy to implement?
4. How much space (memory) does it require to solve the problem?
5. How much time does it take to solve the problem? Etc.

When we want to analyse an algorithm, we consider only the space and time required by
that particular algorithm and we ignore all the remaining elements.
Based on this information, performance analysis of an algorithm can also be defined as
follows:

Performance analysis of an algorithm is the process of calculating the space and time
required by that algorithm.

Performance analysis of an algorithm is performed using the following measures:

1. Space required to complete the task of that algorithm (Space Complexity). It
includes program space and data space.
2. Time required to complete the task of that algorithm (Time Complexity).

Whenever a solution to a problem is written, some memory is required for it
to complete. For any algorithm, memory may be used for the following:

1. Variables (this includes constant values and temporary values)

2. Program instructions

3. Execution

Space complexity is the amount of memory used by the algorithm
(including the input values to the algorithm) to execute and produce
the result.
Sometimes Auxiliary Space is confused with Space Complexity, but
Auxiliary Space is the extra or temporary space used by the algorithm
during its execution.

Space Complexity = Auxiliary Space + Input Space
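As a hedged illustration of this distinction (the two functions below and
their names are examples written for this note, not from the text): both
receive the same input array of n ints (the input space), but the first uses
only a constant amount of extra memory, while the second allocates a second
array of n ints as auxiliary space.

#include <stdlib.h>

/* Reverses a[] in place: auxiliary space is O(1),
   only the few scalar variables below. */
void reverse_in_place(int a[], int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--)
    {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

/* Returns a reversed copy of a[]: auxiliary space is O(n),
   because a second array of n ints is allocated. */
int *reverse_copy(int a[], int n)
{
    int *b = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++)
        b[i] = a[n - 1 - i];
    return b; /* caller must free() */
}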

Memory Usage while Execution

While executing, an algorithm uses memory space for three reasons:

1. Instruction Space

It's the amount of memory used to save the compiled version of the
instructions.

2. Environmental Stack

Sometimes an algorithm (function) may be called inside another
algorithm (function). In such a situation, the current variables are
pushed onto the system stack, where they wait for further execution,
and then the call to the inside algorithm (function) is made.

For example, if a function A() calls function B() inside it, then all
the variables of the function A() will get stored on the system
stack temporarily while the function B() is called and executed
inside the function A().

3. Data Space

Amount of space used by the variables and constants.

But while calculating the space complexity of any algorithm, we
usually consider only the Data Space and neglect the Instruction
Space and Environmental Stack.

Calculating the Space Complexity

For calculating the space complexity, we need to know the amount of
memory used by variables of different datatypes, which generally
varies for different operating systems, but the method for calculating
the space complexity remains the same.

Type                                                    Size
bool, char, unsigned char, signed char, __int8          1 byte
__int16, short, unsigned short, wchar_t, __wchar_t      2 bytes
float, __int32, int, unsigned int, long, unsigned long  4 bytes
double, __int64, long double, long long                 8 bytes

Now let's learn how to compute space complexity by taking a few
examples:

int sum(int a, int b, int c)
{
    int z = a + b + c;
    return z;
}

In the above code, variables a, b, c and z are all integer types,
hence they will take up 4 bytes each, so the total memory requirement
will be (4(4) + 4) = 20 bytes, where the additional 4 bytes are for the
return value. Because this space requirement is fixed regardless of the
input, it is called Constant Space Complexity.

Let's have another example, this time a bit more complex:

// n is the length of array a[]
int sum(int a[], int n)
{
    int x = 0;                  // 4 bytes for x
    for (int i = 0; i < n; i++) // 4 bytes for i
        x = x + a[i];
    return x;
}

 In the above code, 4*n bytes of space are required for the
array a[] elements.

 4 bytes each for x, n, i and the return value.

Hence the total memory requirement is (4n + 12) bytes, which
increases linearly with the increase in the input value n; hence it is
called Linear Space Complexity.

Similarly, we can have quadratic and other more complex space
complexities as the complexity of an algorithm increases.

But we should always focus on writing algorithm code in such a way
that we keep the space complexity to a minimum.
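For instance, here is a hedged sketch of quadratic space (the function and
its name multiplication_table are constructed for this note): it allocates an
n x n table, so with 4-byte ints it needs roughly 4*n*n bytes.

#include <stdlib.h>

/* Builds an n x n multiplication table.
   Allocates n*n ints, i.e. about 4*n*n bytes:
   Quadratic Space Complexity. */
int *multiplication_table(int n)
{
    int *t = malloc(n * n * sizeof(int));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            t[i * n + j] = (i + 1) * (j + 1);
    return t; /* caller must free() */
}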

For any defined problem, there can be any number of solutions. This is
true in general. If I have a problem and I discuss the problem with all
of my friends, they will all suggest different solutions, and I am the
one who has to decide which solution is the best based on the
circumstances.

Similarly, for any problem which must be solved using a program, there
can be an infinite number of solutions. Let's take a simple example to
understand this. Below we have two different algorithms to find the
square of a number (for a moment, forget that the square of any
number n is n*n):

One solution to this problem can be: run a loop n times, starting with an
accumulator of 0 and adding n to it every time.

/*
    we have to calculate the square of n
*/
square = 0
for i = 1 to n
    square = square + n
// when the loop ends, square will hold n squared
return square

Or, we can simply use the mathematical operator * to find the square.

/*
    we have to calculate the square of n
*/
return n*n

In the above two simple algorithms, you saw how a single problem can have many
solutions. While the first solution required a loop which executes n times,
the second solution used the mathematical operator * to return the result in one
line. So which one is the better approach? Of course, the second one.
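To make the comparison concrete, here are the two approaches as C functions.
This is a sketch written for this note; the function names are invented for
illustration, and the addition version assumes n >= 0.

/* Approach 1: square by repeated addition.
   The loop runs n times: O(n) time. Assumes n >= 0. */
int square_by_addition(int n)
{
    int square = 0;
    for (int i = 1; i <= n; i++)
        square = square + n; /* add n to the accumulator, n times */
    return square;
}

/* Approach 2: square by multiplication, a single step: O(1) time. */
int square_by_multiplication(int n)
{
    return n * n;
}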

What is Time Complexity?

Time complexity of an algorithm signifies the total time required by the program to
run till its completion.

The time complexity of algorithms is most commonly expressed using big O
notation. It's an asymptotic notation to represent the time complexity. We will study
it in detail below.
Time complexity is most commonly estimated by counting the number of
elementary steps performed by an algorithm to finish execution. As in the example
above, for the first code the loop will run n times, so the time complexity will be
at least n, and as the value of n increases the time taken will also increase. For
the second code, the time complexity is constant, because it is never dependent on
the value of n; it will always give the result in 1 step.

And since an algorithm's performance may vary with different types of input data,
we usually use the worst-case time complexity of an algorithm, because that is the
maximum time taken for any input size.

Calculating Time Complexity

Now let's move on to the next big topic related to time complexity: how to
calculate time complexity. It can be confusing at times, but we will try to
explain it in the simplest way.

The most common metric for calculating time complexity is Big O notation. This
removes all constant factors so that the running time can be estimated in relation
to N as N approaches infinity. In general, you can think of it like this:

statement;

Above we have a single statement. Its time complexity is constant: the running
time of the statement will not change in relation to N.

for(i=0; i < N; i++)
    statement;

The time complexity of the above code is linear. The running time of the
loop is directly proportional to N; when N doubles, so does the running time.
for(i=0; i < N; i++)
    for(j=0; j < N; j++)
        statement;

This time, the time complexity of the above code is quadratic. The running
time of the two nested loops is proportional to the square of N; when N doubles,
the running time increases roughly fourfold.

/* low and high start as the first and last indices of the sorted list */
while (low <= high)
{
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else
        break;   /* target found at index mid */
}

This is an algorithm that repeatedly halves a set of numbers to search for a
particular value (we will study this in detail later). This algorithm has
Logarithmic Time Complexity. The running time of the algorithm is proportional to
the number of times N can be divided by 2 (N is high - low here). This is because
the algorithm divides the working area in half with each iteration.

void quicksort(int list[], int left, int right)
{
    if (left >= right) return;  /* base case: range is empty or a single element */
    int pivot = partition(list, left, right);
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}

Taking the previous algorithm forward, above we have the core logic of Quick Sort
(we will study this in detail later). In Quick Sort, we divide the list around a
pivot each time, which on average halves the working area, and this work is
repeated across all N elements. Hence the time complexity is N*log(N): the running
time consists of N amounts of work (iterative or recursive) spread over
logarithmically many levels of division, so the algorithm is a combination of
linear and logarithmic behaviour.

NOTE: In general, doing something with every item in one dimension is linear, doing
something with every item in two dimensions is quadratic, and dividing the working
area in half is logarithmic.

Types of Notations for Time Complexity

Now we will discuss and understand the various notations used for Time Complexity.

1. Big Oh denotes "fewer than or the same as" <expression> iterations.

2. Big Omega denotes "more than or the same as" <expression> iterations.

3. Big Theta denotes "the same as" <expression> iterations.

4. Little Oh denotes "fewer than" <expression> iterations.

5. Little Omega denotes "more than" <expression> iterations.

Understanding Notations of Time Complexity with Example

O(expression) is the set of functions that grow slower than or at the same rate as
the expression. It indicates the maximum time required by an algorithm for all input
values. It represents the worst case of an algorithm's time complexity.

Omega(expression) is the set of functions that grow faster than or at the same rate
as the expression. It indicates the minimum time required by an algorithm for all
input values. It represents the best case of an algorithm's time complexity.

Theta(expression) consists of all the functions that lie in both O(expression) and
Omega(expression). It indicates an exact bound on an algorithm. It is often used to
describe the average case of an algorithm's time complexity.

Suppose you've calculated that an algorithm takes f(n) operations, where

f(n) = 3*n^2 + 2*n + 4. // n^2 means square of n

Since this polynomial grows at the same rate as n^2, you could say that the
function f lies in the set Theta(n^2). (It also lies in the sets O(n^2) and
Omega(n^2) for the same reason.)

The simplest explanation is that Theta denotes the same rate of growth as the
expression. Hence, as f(n) grows on the order of n^2, its time complexity can be
best represented as Theta(n^2).
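To see the definition in action, here is a worked check. The constants below
are one valid choice picked for this note, not the only possibility: to show
f(n) = 3n^2 + 2n + 4 is Theta(n^2), we can exhibit c1 = 3, c2 = 4 and n0 = 4.

Lower bound:  3n^2 ≤ 3n^2 + 2n + 4                      holds for all n ≥ 0
Upper bound:  3n^2 + 2n + 4 ≤ 4n^2  iff  2n + 4 ≤ n^2,  true for all n ≥ 4
Hence 3n^2 ≤ f(n) ≤ 4n^2 for all n ≥ 4, so f(n) = Theta(n^2).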


Asymptotic Notations:
 Asymptotic notations are mathematical tools that allow you to analyze an
algorithm's running time by identifying its behavior as its input size grows.
 This is also referred to as an algorithm's growth rate.
 You can't compare two algorithms head to head directly.
 Instead, you compare their space and time complexity using asymptotic analysis.
 It compares two algorithms based on changes in their performance as the input size
is increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents both
the upper and the lower bound of the running time of an algorithm, it is often used
for analyzing the average-case complexity of an algorithm.
 Theta (average case): you add the running times for each possible input combination
and take the average in the average case.
Let g and f be functions from the set of natural numbers to itself. The function f
is said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such
that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.

[Figure: Theta notation, showing f(n) lying between c1*g(n) and c2*g(n) for n ≥ n0]

Mathematical Representation of Theta notation:

Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n)
≤ f(n) ≤ c2*g(n) for all n ≥ n0}
Note: Θ(g) is a set.
The above expression can be read as: if f(n) is theta of g(n), then the value f(n)
is always between c1*g(n) and c2*g(n) for large values of n (n ≥ n0). The definition
of theta also requires that f(n) must be non-negative for values of n greater
than n0.
The execution time serves as both a lower and an upper bound on the algorithm's
time complexity.
A simple way to get the Theta notation of an expression is to drop low-order terms
and ignore leading constants. For example, consider the expression 3n^3 + 6n^2 +
6000 = Θ(n^3). Dropping the lower-order terms is always fine because there will
always be a number n after which n^3 has higher values than n^2, irrespective of
the constants involved. For a given function g(n), we denote by Θ(g(n)) the
following set of functions.
Examples:
{ 100 , log(2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
2. Big-O Notation (O-notation):
Big-O notation represents an upper bound on the running time of an algorithm.
Therefore, it gives the worst-case complexity of an algorithm.
 It is the most widely used notation for asymptotic analysis.
 It specifies an upper bound of a function.
 It gives the maximum time required by an algorithm, i.e. the worst-case time
complexity.
 It returns the highest possible output value (big-O) for a given input.
 Big-O (worst case) is defined as the condition that allows an algorithm to
complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist
a positive constant c and an n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm's time complexity.

Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤
c*g(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best
case and quadratic time in the worst case. We can safely say that the time
complexity of Insertion Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to
use two statements for the best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n^2).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time
complexity of an algorithm. Many times we can easily find an upper bound simply by
looking at the algorithm.
Examples:
{ 100 , log(2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union. We can write it in this manner because O provides
exact or upper bounds.
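A similar worked check for Big-O, with constants picked here for illustration
(other choices work too): to show f(n) = 5n + 3 is O(n), we can exhibit c = 6
and n0 = 3.

f(n) ≤ c*g(n):  5n + 3 ≤ 6n  iff  3 ≤ n,  true for all n ≥ 3
Hence 0 ≤ 5n + 3 ≤ 6n for all n ≥ 3, so f(n) = O(n).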
3. Omega Notation (Ω-Notation):
Omega notation represents a lower bound on the running time of an algorithm. Thus,
it provides the best-case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm's time complexity.
It is defined as the condition that allows an algorithm to complete statement
execution in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f
is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that
c*g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation:

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤
f(n) for all n ≥ n0 }
Let us consider the same Insertion Sort example here. The time complexity of
Insertion Sort can be written as Ω(n), but this is not very useful information
about Insertion Sort, as we are generally interested in the worst case and
sometimes in the average case.
Examples:
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log(2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union. We can write it in this manner because Ω provides
exact or lower bounds.
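And a matching worked check for Omega, again with one valid choice of
constants made for this note: to show f(n) = 3n^2 + 2n + 4 is Ω(n^2), we can
exhibit c = 3 and n0 = 1.

c*g(n) ≤ f(n):  3n^2 ≤ 3n^2 + 2n + 4,  true for all n ≥ 1
Hence 0 ≤ 3n^2 ≤ f(n) for all n ≥ 1, so f(n) = Ω(n^2).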
