DAAnotes
Properties of Algorithm:
Input: An algorithm has zero or more inputs. Each instruction that contains a
fundamental operator must accept zero or more inputs.
Output: An algorithm produces at least one output. Every instruction that
contains a fundamental operator must produce at least one output.
Definiteness: All instructions in an algorithm must be unambiguous,
precise, and easy to interpret. By referring to any of the instructions in an
algorithm one can clearly understand what is to be done. Every
fundamental operator in instruction must be defined without any
ambiguity.
Finiteness: An algorithm must terminate after a finite number of steps in
all test cases. Every instruction which contains a fundamental operator
must be terminated within a finite amount of time. Infinite loops or
recursive functions without base conditions do not possess finiteness.
Effectiveness: An algorithm must be developed by using very basic,
simple, and feasible operations so that one can trace it out by using just
paper and pencil.
Properties of Algorithm:
It should terminate after a finite time.
It should produce at least one output.
It should take zero or more input.
It should be deterministic, i.e., it should give the same output for the same
input.
Every step in the algorithm must be effective, i.e., every step should do
some work.
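To make these properties concrete, here is a small C sketch of our own (not part of the original notes) that finds the largest element of an array: it takes input, produces exactly one output, every step is unambiguous, the loop terminates after a finite number of iterations, and only basic operations are used.
int findMax(int a[], int n)
{
    int max = a[0];              // definite: a clear starting point
    for (int i = 1; i < n; i++)  // finite: exactly n - 1 iterations
    {
        if (a[i] > max)          // effective: a simple comparison
            max = a[i];
    }
    return max;                  // output: at least one result
}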
Advantages of Algorithms:
It is easy to understand.
An algorithm is a step-wise representation of a solution to a given
problem.
In an Algorithm the problem is broken down into smaller pieces or steps
hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
Writing an algorithm takes a long time, so it is time-consuming.
Understanding complex logic through algorithms can be very difficult.
Branching and looping statements are difficult to show in
algorithms (important).
How to Design an Algorithm?
To write an algorithm, the following things are needed as prerequisites:
1. The problem that is to be solved by this algorithm i.e. clear problem
definition.
2. The constraints of the problem must be considered while solving the
problem.
3. The input to be taken to solve the problem.
4. The output to be expected when the problem is solved.
5. The solution to this problem is within the given constraints.
Then the algorithm is written with the help of the above parameters such that
it solves the problem.
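As a small worked illustration (our own example, not from the notes), applying the five points above to the problem "find the sum of the first n natural numbers" gives: a clear problem definition (compute 1 + 2 + ... + n), a constraint (n is a non-negative integer), the input (n), the expected output (the sum), and a solution within the constraints, for example:
int sumOfFirstN(int n)
{
    int sum = 0;                  // running total
    for (int i = 1; i <= n; i++)  // add each natural number up to n
        sum = sum + i;
    return sum;                   // the expected output
}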
Time Complexity: The time complexity of an algorithm refers to the amount
of time required by the algorithm to execute and get the result. This can be
for normal operations, conditional if-else statements, loop statements, etc.
How to Calculate Time Complexity?
The time complexity of an algorithm is calculated by determining the
following 2 components:
Constant time part: Any instruction that is executed just once comes in
this part. For example, input, output, if-else, switch, arithmetic operations,
etc.
Variable Time Part: Any instruction that is executed more than once, say
n times, comes in this part. For example, loops, recursion, etc.
Therefore Time complexity of any algorithm P is T(P) = C + TP(I),
where C is the constant time part and TP(I) is the variable part of the
algorithm, which depends on the instance characteristic I.
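As a rough illustration (our own sketch, not from the notes), in the function below the initialisation and the return statement form the constant part C, while the loop forms the variable part TP(I) because it executes n times, so T(P) grows with the instance size n.
int sumArray(int a[], int n)
{
    int total = 0;                // constant part: executed once
    for (int i = 0; i < n; i++)   // variable part: executed n times
        total = total + a[i];
    return total;                 // constant part: executed once
}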
How to express an Algorithm?
1. Natural Language:- Here we express the Algorithm in natural English
language. However, it is hard to understand the exact algorithm from it.
2. Flow Chart:- Here we express the Algorithm by making
a graphical/pictorial representation of it. It is easier to understand than
Natural Language.
3. Pseudo Code:- Here we express the Algorithm in the form of annotations
and informative text written in plain English which is very much similar to
the real code but as it has no syntax like any of the programming
languages, it can’t be compiled or interpreted by the computer. It is the
best way to express an algorithm because it can be understood by even a
layman with some school-level knowledge.
Advantages of Pseudocode
Improves the readability of any approach. It’s one of the best approaches
to start implementation of an algorithm.
Acts as a bridge between the program and the algorithm or flowchart.
Also works as a rough documentation, so the program of one developer
can be understood easily when a pseudo code is written out. In industries,
the approach of documentation is essential. And that’s where a pseudo-
code proves vital.
The main goal of a pseudo code is to explain what exactly each line of a
program should do, hence making the code construction phase easier for
the programmer.
How to write a Pseudo-code?
1. Arrange the sequence of tasks and write the pseudocode accordingly.
2. Start with the statement of a pseudo code which establishes the main
goal or the aim. Example:
This program will allow the user to check
whether a number is even or odd.
3. The way the if-else, for, while loops are indented in a program, indent the
statements likewise, as it helps to comprehend the decision control and
execution mechanism. They also improve the readability to a great extent.
Example:
if "1"
print response
"I am case 1"
if "2"
print response
"I am case 2"
When we have multiple algorithms to solve a problem, we need to select a
suitable algorithm to solve that problem.
We compare algorithms that solve the same problem with each other in order to select the
best one. To compare algorithms, we use a set of parameters such as the memory
required by the algorithm, its execution speed, how easy it is to understand,
how easy it is to implement, and so on. In particular, we ask:
1. Does the algorithm provide an exact solution to the problem?
2. Is it easy to understand?
3. Is it easy to implement?
4. How much space (memory) does it require to solve the problem?
5. How much time does it take to solve the problem? Etc.
When we want to analyse an algorithm, we consider only the space and time required by
that particular algorithm and we ignore all the remaining elements.
Based on this information, performance analysis of an algorithm can also be defined as
the process of calculating the space and time required by that algorithm.
2. Program Instruction
3. Execution
Space complexity is the amount of memory used by the algorithm
(including the input values to the algorithm) to execute and produce
the result.
Sometimes Auxiliary Space is confused with Space Complexity. But
Auxiliary Space is the extra space or the temporary space used by
the algorithm during its execution.
While calculating the space complexity of an algorithm, the memory used is
usually considered under the following heads:
1. Instruction Space (the memory used to store the compiled program instructions)
2. Environmental Stack
For example, if a function A() calls function B() inside it, then all
the variables of the function A() will get stored on the system
stack temporarily while the function B() is called and executed
inside the function A().
3. Data Space (the memory used to store the variables and constant values)
(A table of data types and their sizes appeared here; for example, an int typically takes 4 bytes.)
int sum(int a, int b, int c) {
    int z = a + b + c;   // a, b, c and z are all ints: 4 bytes each
    return(z);
}
In the above code, the variables a, b, c and z are all integer types,
hence they will take up 4 bytes each, so the total memory requirement
will be (4(4) + 4) = 20 bytes; the additional 4 bytes are for the return value.
And because this space requirement is fixed for the above example,
it is called Constant Space Complexity.
int sum(int a[], int n) {
    int x = 0;
    for(int i = 0; i < n; i++)
        x = x + a[i];
    return(x);
}
In the above code, 4*n bytes of space is required for the
array a[] elements, plus a few fixed bytes for x, n and i, so the space
requirement grows linearly with n (Linear Space Complexity).
Consider the problem of calculating the square of a number n. One solution can be
to run a loop n times, starting from 0 and adding n to the result each time.
/* calculate the square of n by repeated addition */
sum = 0
for i = 1 to n
    do sum = sum + n
return sum
Another solution is to compute it directly:
/* calculate the square of n using multiplication */
return n*n
In the above two simple algorithms, you saw how a single problem can have many
solutions. While the first solution required a loop which executes n
times, the second solution used the mathematical operator * to return the result in one
line. So which one is the better approach? Of course, the second one.
Time complexity of an algorithm signifies the total time required by the program to
run till its completion.
The time complexity of algorithms is most commonly expressed using the big O
notation. It is an asymptotic notation to represent the time complexity. We will study
it in detail in the next tutorial.
Time Complexity is most commonly estimated by counting the number of
elementary steps performed by an algorithm to finish execution. Like in the example
above, for the first code the loop will run n times, so the time complexity
will be at least proportional to n, and as the value of n increases the time taken will also increase.
For the second code, the time complexity is constant, because it never
depends on the value of n; it always gives the result in 1 step.
And since the algorithm's performance may vary with different types of input data,
hence for an algorithm we usually use the worst-case Time complexity of an
algorithm because that is the maximum time taken for any input size.
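For example, in a simple linear search (a sketch of our own, shown below), the best case finds the key at the very first position in one step, while the worst case examines all n elements; it is the worst-case bound that we normally quote.
int linearSearch(int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
    {
        if (a[i] == key)
            return i;   // best case: found on the first iteration
    }
    return -1;          // worst case: all n elements were examined
}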
Now let's move on to the next big topic related to Time Complexity, which is How to
Calculate Time Complexity. It becomes very confusing sometimes, but we will try to
explain it in the simplest way.
Now the most common metric for calculating time complexity is Big O notation. This
removes all constant factors so that the running time can be estimated in relation
to N, as N approaches infinity. In general you can think of it like this :
statement;
Above we have a single statement. Its Time Complexity will be Constant. The running
time of the statement will not change in relation to N.
for(i=0; i < N; i++)
    statement;
The time complexity for the above algorithm will be Linear. The running time of the
loop is directly proportional to N. When N doubles, so does the running time.
for(i=0; i < N; i++)
    for(j=0; j < N; j++)
        statement;
This time, the time complexity for the above code will be Quadratic. The running
time of the two nested loops is proportional to the square of N. When N doubles, the
running time increases by a factor of four.
while(low <= high) {
    mid = (low + high) / 2;
    if(target < list[mid]) high = mid - 1;
    else if(target > list[mid]) low = mid + 1;
    else break;
}
This is an algorithm to break a set of numbers into halves, to search for a particular
value (we will study this in detail later). Now, this algorithm will have
a Logarithmic Time Complexity. The running time of the algorithm is proportional to
the number of times N can be divided by 2 (N is high - low here). This is because the
algorithm divides the working area in half with each iteration.
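The notes refer below to "a small logic of Quick Sort", but the snippet itself did not survive; the following is a minimal sketch of that recursive structure (our own code, using the common Lomuto partition scheme, not necessarily the exact snippet the notes had):
// Places the pivot (last element) in its final position and returns that index.
int partition(int list[], int left, int right)
{
    int pivot = list[right], i = left;
    for (int j = left; j < right; j++)
    {
        if (list[j] < pivot)
        {
            int tmp = list[i]; list[i] = list[j]; list[j] = tmp;
            i++;
        }
    }
    int tmp = list[i]; list[i] = list[right]; list[right] = tmp;
    return i;
}

// Recursively sorts the two halves on either side of the pivot.
void quicksort(int list[], int left, int right)
{
    if (left < right)
    {
        int pivot = partition(list, left, right);
        quicksort(list, left, pivot - 1);
        quicksort(list, pivot + 1, right);
    }
}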
Taking the previous algorithm forward, above we have a small logic of Quick Sort(we
will study this in detail later). Now in Quick Sort, we divide the list into halves every
time, but we repeat the iteration N times(where N is the size of list). Hence time
complexity will be N*log( N ). The running time consists of N loops (iterative or
recursive) that are logarithmic, thus the algorithm is a combination of linear and
logarithmic.
NOTE: In general, doing something with every item in one dimension is linear, doing
something with every item in two dimensions is quadratic, and dividing the working
area in half is logarithmic.
Now we will discuss and understand the various notations used for Time Complexity.
1. Big O denotes "fewer than or the same as" <expression> iterations.
2. Big Omega denotes "more than or the same as" <expression> iterations.
3. Big Theta denotes "the same as" <expression> iterations.
O(expression) is the set of functions that grow slower than or at the same rate as
the expression. It indicates the maximum time required by an algorithm for all input
values. It represents the worst case of an algorithm's time complexity.
Omega(expression) is the set of functions that grow faster than or at the same rate
as expression. It indicates the minimum time required by an algorithm for all input
values. It represents the best case of an algorithm's time complexity.
Theta(expression) consists of all the functions that lie in both O(expression) and
Omega(expression). It indicates the average bound of an algorithm. It represents the
average case of an algorithm's time complexity.
For a function f(n) that is a polynomial whose highest-order term is n², the polynomial
grows at the same rate as n², so you could say that the function f lies in the
set Theta(n²). (It also lies in the sets O(n²) and Omega(n²) for the same reason.)
The simplest explanation is that Theta denotes the same rate of growth as the expression.
Hence, as f(n) grows on the order of n², the time complexity can be best represented
as Theta(n²).
Now that we have learned the Time Complexity of Algorithms, you should also learn
about Space Complexity of Algorithms and its importance.
Asymptotic Notations:
Asymptotic Notations are mathematical tools that allow you to analyze an algorithm's
running time by identifying its behavior as its input size grows.
This is also referred to as an algorithm's growth rate.
You can't compare two algorithms head to head.
You compare their space and time complexity using asymptotic analysis.
It compares two algorithms based on changes in their performance as the input size is
increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the
upper and the lower bound of the running time of an algorithm, it is used for
analyzing the average-case complexity of an algorithm.
Theta (Average Case): You add the running times for each possible input combination
and take the average.
Let g and f be functions from the set of natural numbers to itself. The function f is
said to be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that
c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
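For example (an illustrative function of our own, not one from the notes), take f(n) = 3n² + 5n + 4 and g(n) = n². Choosing c1 = 3, c2 = 4 and n0 = 6 satisfies the definition, since for all n ≥ 6 we have n² ≥ 5n + 4, and therefore
3n² ≤ 3n² + 5n + 4 ≤ 4n²,
so f(n) is Θ(n²).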
[Figure: Theta notation, showing f(n) bounded between c1*g(n) and c2*g(n) for n ≥ n0]
2. Big-O Notation (O-notation):
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist
positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
It returns the highest possible output value (big-O) for a given input.
The execution time serves as an upper bound on the algorithm's time complexity.
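As a quick check of this definition (again with an illustrative function of our own), take f(n) = 2n + 3. Choosing c = 5 and n0 = 1 gives 0 ≤ 2n + 3 ≤ 5n for all n ≥ 1 (at n = 1 both sides equal 5), so f(n) is O(n): the linear function 5n is an upper bound on the running time.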