AOP UNIT-1


 Role of algorithms in computing:

 An algorithm is a step-by-step procedure used for solving a problem
(or) performing a computation.

 Algorithms play a crucial role in computing by providing a set of
instructions for a computer to perform a specific task.

 They are used to solve problems and carry out tasks in computer
systems, such as sorting data, searching for information, image
processing and much more.

 Algorithms are widely used in various industrial areas to improve
efficiency, accuracy, and decision making. For example:

1. Finance: Algorithms are used to analyze financial data and
make predictions, enabling traders and investors to make
informed decisions.
2. Healthcare: Algorithms are used to process and analyze
medical images and to assist in diagnosing diseases.
3. Security: Algorithms are used to detect and prevent security
threats such as hacking, fraud and cyber attacks.
4. Data processing: Algorithms are used to process and analyze
large amounts of data, such as sorting and searching
algorithms.
5. Database management: Algorithms are used to manage and
organize large amounts of data in databases, such as indexing
algorithms and query optimization algorithms.
6. Network communication: Algorithms are used for efficient
communication and data transfer in networks, such as routing
algorithms.

 Algorithm as a technology:

Computers may be fast, but they are not infinitely fast.

And memory may be cheap, but it is not free.

Therefore, computer time is a bounded resource, and so is space in memory.

These resources should be used wisely, and algorithms that are efficient in
terms of time or space help us to use them wisely.

Efficiency: Algorithms devised to solve the same problem often differ
dramatically in their efficiency.
Example: we will compare two algorithms for sorting.

 Insertion sort takes time roughly equal to c1*n^2 (c1 is a constant that
does not depend on n) to sort n items.
 Merge sort takes time roughly equal to c2*n*lg n (c2 is another constant
that does not depend on n), where lg n stands for log2 n.

 Insertion sort usually has a smaller constant factor than merge sort, so
that c1 < c2.

Suppose computer A executes 10^9 instructions per second and runs insertion
sort, while computer B executes only 10^7 instructions per second and runs
merge sort, each sorting n = 10^6 numbers.

Computer A:
2*(10^6)^2 instructions / (10^9 instructions/second) = 2000 seconds

Computer B:
50*(10^6)*lg(10^6) instructions / (10^7 instructions/second) ≈ 100 seconds

By using an algorithm whose running time grows more slowly, even with a
poor compiler,
 computer B runs 20 times faster than computer A.
 Algorithm analysis:
Algorithm analysis is the process of evaluating the performance of an
algorithm, usually in terms of its time and space complexity.
 Time complexity: Time complexity is a measure of how long an algorithm
takes to run, based on the size of the input.

 Space complexity: Space complexity is a measure of how much memory
an algorithm uses, based on the size of the input.

 Both time and space complexity are expressed using big-O notation,
which captures the worst-case scenario of an algorithm's behavior.

 Time and space complexity are important because they affect the
scalability and feasibility of an algorithm.

 If an algorithm has a high time complexity, it may take too long to run on
large inputs.

 If an algorithm has a high space complexity, it may use too much
memory and cause issues such as swapping, paging, etc.

 Therefore, we want to design our algorithms to have the lowest possible
time and space complexity.

 Algorithm design techniques (or methods):

1. Greedy method: In the greedy method, at each step, a decision is
made to choose the local optimum, without thinking about the
future consequences.
Ex: fractional knapsack.
2. Divide and conquer: The divide and conquer strategy involves
dividing the problem into subproblems, recursively solving them,
and then combining them for the final answer.
Ex: merge sort, quick sort.

 Fractional knapsack.

Input: arr[] = {{60, 10}, {100, 20}, {120, 30}}, W = 50

Output: 240

Explanation: take the items of weight 10 kg and 20 kg whole, and a 2/3
fraction of the 30 kg item.
Hence the total value will be 60 + 100 + (2/3)*120 = 240.
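
A minimal C sketch of the greedy strategy for this example (the struct name,
helper names and sample values are illustrative assumptions, not part of the
original notes): sort items by value/weight ratio, take whole items while they
fit, then take a fraction of the next item.

#include <stdio.h>
#include <stdlib.h>

/* One knapsack item: a value and a weight. */
struct Item { double value, weight; };

/* qsort comparator: highest value/weight ratio first. */
static int by_ratio_desc(const void *a, const void *b)
{
    const struct Item *x = a, *y = b;
    double rx = x->value / x->weight, ry = y->value / y->weight;
    return (rx < ry) - (rx > ry);
}

/* Greedy fractional knapsack: returns the maximum total value. */
double fractional_knapsack(struct Item items[], int n, double capacity)
{
    qsort(items, n, sizeof items[0], by_ratio_desc);
    double total = 0.0;
    for (int i = 0; i < n && capacity > 0; i++) {
        if (items[i].weight <= capacity) {   /* take the whole item */
            total += items[i].value;
            capacity -= items[i].weight;
        } else {                             /* take the fitting fraction */
            total += items[i].value * (capacity / items[i].weight);
            capacity = 0;
        }
    }
    return total;
}

int main(void)
{
    struct Item arr[] = {{60, 10}, {100, 20}, {120, 30}};
    printf("Maximum value = %.0f\n", fractional_knapsack(arr, 3, 50)); /* 240 */
    return 0;
}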

 Merge sort.
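
A short C sketch of merge sort, the classic divide and conquer example (the
helper names are my own): divide the array, recursively sort both halves, then
merge the two sorted halves.

#include <stdio.h>

/* Merge the two sorted halves arr[lo..mid] and arr[mid+1..hi]. */
static void merge(int arr[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi)  tmp[k++] = arr[j++];
    for (k = 0; k < hi - lo + 1; k++)
        arr[lo + k] = tmp[k];
}

/* Divide: sort each half, then combine by merging. Runs in O(n log n). */
void merge_sort(int arr[], int lo, int hi)
{
    if (lo >= hi) return;            /* zero or one element: already sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort(arr, lo, mid);
    merge_sort(arr, mid + 1, hi);
    merge(arr, lo, mid, hi);
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 6};
    merge_sort(a, 0, 4);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 1 2 5 6 9 */
    return 0;
}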
 Dynamic programming:
The approach of dynamic programming is similar to divide and conquer. The
difference is that whenever we have recursive function calls with the same
result, instead of computing them again we store the result in a data
structure (in the form of a table) and retrieve it from the table. Thus, the
overall time complexity is reduced.

"Dynamic" means we dynamically decide whether to call a function or
retrieve the value from the table, as in the sketch below.
Ex: 0-1 knapsack.
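
A minimal memoized 0-1 knapsack in C, as a sketch of the idea described above
(the item values, array names and table layout are illustrative assumptions):
before recursing, look the answer up in a table; compute and store it only on
the first call.

#include <stdio.h>
#include <string.h>

#define N 3      /* number of items (assumed for this sketch) */
#define W 50     /* knapsack capacity */

int value[N]  = {60, 100, 120};
int weight[N] = {10, 20, 30};
int memo[N][W + 1];  /* memo[i][w] = best value from item i on, capacity w */

/* Best value achievable from item i onward with remaining capacity w. */
int knapsack(int i, int w)
{
    if (i == N) return 0;
    if (memo[i][w] >= 0) return memo[i][w];  /* retrieve from the table */

    int best = knapsack(i + 1, w);           /* skip item i */
    if (weight[i] <= w) {                    /* or take item i whole */
        int take = value[i] + knapsack(i + 1, w - weight[i]);
        if (take > best) best = take;
    }
    return memo[i][w] = best;                /* store before returning */
}

int main(void)
{
    memset(memo, -1, sizeof memo);           /* -1 marks "not computed yet" */
    printf("Maximum value = %d\n", knapsack(0, W));  /* 220: items 2 and 3 */
    return 0;
}

Note that on the same input as the fractional knapsack example the answer
drops from 240 to 220, because here items cannot be split.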

 Linear programming:
In linear programming, there are inequalities in terms of the inputs, and we
maximize or minimize some linear function of the inputs; see the general form
below.
Ex: maximum flow in a directed graph.
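
The general form of a linear program, written in LaTeX (the coefficients c_j,
a_ij and b_i are placeholders, not from the original notes): maximize a linear
function of the inputs subject to linear inequalities.

\begin{align*}
\text{maximize}   \quad & c_1 x_1 + c_2 x_2 + \dots + c_n x_n \\
\text{subject to} \quad & a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n \le b_i
                          \quad (i = 1, \dots, m) \\
                        & x_1, x_2, \dots, x_n \ge 0
\end{align*}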

 Reduction (transform and conquer):
In this method, we solve a difficult problem by transforming it into a known
problem for which we have an optimal solution.

Basically, the goal is to find a reducing algorithm whose complexity is not
dominated by the complexity of the resulting reduced algorithm.

 Backtracking:
This technique is very useful in solving combinatorial problems that have a
single unique solution.

It finds a solution by building it step by step, increasing levels over time,
using recursive calls.

In a backtracking algorithm, the algorithm seeks a path to a feasible
solution that includes some intermediate checkpoints. If a checkpoint does
not lead to a viable solution, the algorithm can return to the checkpoint and
take another path to find a solution, as in the sketch below.
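
A small C sketch of backtracking on the subset-sum problem (the set and the
target value are illustrative assumptions): extend a partial solution one
element at a time, and back up as soon as a branch cannot succeed.

#include <stdio.h>

#define N 5
int set[N] = {3, 34, 4, 12, 5};
int target = 9;

/* Does some subset of set[i..N-1] sum to exactly 'remaining'? */
int subset_sum(int i, int remaining)
{
    if (remaining == 0) return 1;            /* feasible solution found */
    if (i == N || remaining < 0) return 0;   /* dead end: backtrack */
    /* Branch 1: include set[i]; branch 2: exclude it. */
    return subset_sum(i + 1, remaining - set[i])
        || subset_sum(i + 1, remaining);
}

int main(void)
{
    printf(subset_sum(0, target) ? "Found (4 + 5 = 9)\n" : "Not found\n");
    return 0;
}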

 Branch and bound:
This technique is very useful in solving combinatorial optimization problems
that have multiple solutions, where we are interested in finding the most
optimal solution. In this approach, the entire solution space is represented
in the form of a state space tree, as in the sketch below.
Ex: travelling salesman problem.
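
A minimal C sketch of branch and bound on a tiny travelling salesman instance
(the 4-city distance matrix is an illustrative assumption, and the bound used
is simply the cost of the partial tour): explore the state space tree city by
city, and prune any branch whose partial cost already reaches the best
complete tour found so far.

#include <stdio.h>

#define N 4
int dist[N][N] = {
    {0, 10, 15, 20},
    {10, 0, 35, 25},
    {15, 35, 0, 30},
    {20, 25, 30, 0},
};
int visited[N] = {1, 0, 0, 0};   /* the tour starts at city 0 */
int best = 1000000;              /* cost of the best complete tour so far */

/* Branch: try each unvisited city next.  Bound: prune when the partial
 * cost already reaches the best known tour cost. */
void tsp(int city, int count, int cost)
{
    if (cost >= best) return;                /* bound: prune this branch */
    if (count == N) {                        /* all cities seen: close tour */
        if (cost + dist[city][0] < best)
            best = cost + dist[city][0];
        return;
    }
    for (int next = 0; next < N; next++) {
        if (!visited[next]) {
            visited[next] = 1;
            tsp(next, count + 1, cost + dist[city][next]);
            visited[next] = 0;               /* undo: explore other branches */
        }
    }
}

int main(void)
{
    tsp(0, 1, 0);
    printf("Shortest tour cost = %d\n", best);   /* 80 for this matrix */
    return 0;
}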

 Asymptotic notation:

Asymptotic notations are mathematical tools that allow us to analyze an
algorithm's running time by identifying its behavior as its input size grows.
This is also referred to as an algorithm's growth rate.

We compare space and time complexity using asymptotic analysis.

There are three types of asymptotic notation:

1. Big O notation:
 Big O notation represents the upper bound of the running time of an
algorithm. Therefore, it gives the worst-case complexity of an
algorithm.

 It is the most widely used notation for asymptotic analysis.

 It specifies the upper bound of a function: the maximum time required
by an algorithm, or the worst-case time complexity.

 It returns the highest possible output value (big-O) for a given input.

 Big-O (worst case) is defined as the condition that allows an
algorithm to complete statement execution in the longest amount of
time possible.

[Figure: big-O notation]

 Mathematically,
O(g(n)) = { f(n): there exist positive constants c and n0 such
that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }.

 If f(n) describes the running time of an algorithm, then f(n) is O(g(n))
if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n)
for all n ≥ n0.

 The execution time serves as an upper bound on the algorithm's time
complexity.

Ex: Linear search: O(N), where N is the number of elements in the given
array.

Binary search: O(log N), where N is the number of elements in the given
array.

Bubble sort: O(N^2), where N is the number of elements in the given array.

O(N^2) + O(N^3) + O(N log N) = O(N^3), where N is the number of elements
in the given array.
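
As a concrete illustration of an O(N) upper bound, here is linear search in C
(a standard sketch, not taken from the original notes): in the worst case the
loop examines all N elements before giving up.

#include <stdio.h>

/* Linear search: scans up to n elements, so the worst case is O(n). */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;       /* found: return its index */
    return -1;              /* not found after n comparisons: worst case */
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    printf("index of 1 = %d\n", linear_search(a, 5, 1));   /* prints 3 */
    return 0;
}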
2. Theta notation (Θ-Notation):

 Theta notation encloses the function from above and below. Since it
represents the upper and lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm.
 Let g and f be functions from the set of natural numbers to itself. The
function f is said to be Θ(g) if there are constants c1, c2 > 0 and a
natural number n0 such that c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.
[Figure: theta notation]

 Mathematically,
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}.

 The execution time serves as both a lower and upper bound on the
algorithm’s time complexity.

Ex: N^2, N^2 + N and 3*N^2 all give average time complexity Θ(N^2).

N, N + log N and 5*N all give average time complexity Θ(N).


3. Omega notation (Ω-notation):
 Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best-case complexity of an algorithm.
 The execution time serves as a lower bound on the algorithm’s time
complexity.

 It is defined as the condition that allows an algorithm to complete
statement execution in the shortest amount of time.

 Let g and f be functions from the set of natural numbers to itself. The
function f is said to be Ω(g) if there is a constant c > 0 and a natural
number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.
[Figure: omega notation]

 Mathematically,
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0 }.

Ex: {(N^2 + N), (4*N), (N^2 + log N)} gives the best time complexity as Ω(N).
 Growth of functions:

 Given functions f and g, we wish to show how to quantify the
statement: "g grows as fast as f".
 The growth of a function is directly related to the complexity of an
algorithm.
 An algorithm's rate of growth enables us to figure out its efficiency,
along with the ability to compare its performance against other
algorithms.

Big O notation:
Ex: n^2 + n = O(n^3)
Here f(n) = n^2 + n and g(n) = n^3.
Notice that if n ≥ 1, then n ≤ n^3.
Also notice that if n ≥ 1, then n^2 ≤ n^3.
Therefore n^2 + n ≤ n^3 + n^3 = 2*n^3 for all n ≥ 1, so with c = 2 and
n0 = 1 we have 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0; hence n^2 + n = O(n^3).

 Fundamental algorithms:

Exchange the values of two variables:

Algorithm:
1. Start.
2. Read value of a
3. Read value of b
4. c = a
5. a = b
6. b = c
7. print a
8. print b
9. end
Program:
#include <stdio.h>
int main()
{
    int a, b, temp;
    a = 11;
    b = 99;
    printf("Values before swapping:\n a = %d, b = %d\n\n", a, b);
    temp = a;   /* save a so its value is not lost */
    a = b;
    b = temp;
    printf("Values after swapping:\n a = %d, b = %d\n", a, b);
    return 0;
}

Summation of a set of numbers:

Algorithm

Step 1: Start.
Step 2: Get the value of n, the count of numbers to be summed.
Step 3: Initialize sum to zero.
Step 4: Initialize i to 1.
Step 5: While i is less than or equal to n, do the following steps:
a) Read the i-th number.
b) Add the i-th number to the current value of sum.
c) Increment i by 1.
Step 6: Print the value of sum.
Step 7: Stop.

Program:
#include <stdio.h>
int main()
{
    int n, num, sum = 0;
    printf("How many numbers? ");
    scanf("%d", &n);
    for (int i = 1; i <= n; i++) {   /* read and add each number */
        scanf("%d", &num);
        sum = sum + num;
    }
    printf("Sum = %d\n", sum);
    return 0;
}

 Factorial computation:
The product of all consecutive integers up to n is called the factorial of
the number and is denoted by n!.
Ex: 5! = 120

Algorithm
Step 1: Start.
Step 2: Declare variables n, fact, i.
Step 3: Read the number n from the user.
Step 4: Initialize fact = 1 and i = 1.
Step 5: Repeat until i <= n:
a) fact = fact * i.
b) i = i + 1.
Step 6: Print fact.
Step 7: Stop.
Program
#include <stdio.h>
int main()
{
    int i, n, fact = 1;
    printf("Enter a number: ");
    scanf("%d", &n);
    for (i = 1; i <= n; i++) {
        fact = fact * i;   /* multiply the running product by each i */
    }
    printf("Factorial of %d is: %d\n", n, fact);
    return 0;
}

 Generating the Fibonacci sequence:

The Fibonacci series generates each subsequent number by adding the two
previous numbers.

Algorithm
Step 1: Start.
Step 2: Take integer variables a, b, c.
Step 3: Set a = 0, b = 1.
Step 4: Display a, b.
Step 5: c = a + b.
Step 6: Display c.
Step 7: Set a = b, b = c.
Step 8: Repeat steps 5 to 7, n - 2 times.
Step 9: Stop.
Program
#include <stdio.h>
int main()
{
    int a, b, c, i, n;
    n = 4;               /* how many terms to print */
    a = 0;
    b = 1;
    printf("%d %d", a, b);
    for (i = 1; i <= n - 2; i++) {
        c = a + b;       /* next term is the sum of the previous two */
        printf(" %d", c);
        a = b;
        b = c;
    }
    return 0;
}

 Reversing the digits of an integer:

Algorithm
Step 1: Start.
Step 2: Read a number num.
Step 3: Set sum = 0.
Step 4: While num > 0, continue; else go to step 8.
Step 5: Set rem = num % 10.
Step 6: Set sum = sum * 10 + rem.
Step 7: Set num = num / 10 and go to step 4.
Step 8: Print sum, which is the reversed number.
Step 9: Stop.
Program
#include <stdio.h>
int main()
{
    int n, rev = 0, remainder;
    printf("Enter an integer: ");
    scanf("%d", &n);
    while (n != 0) {
        remainder = n % 10;          /* peel off the last digit */
        rev = rev * 10 + remainder;  /* append it to the reversed number */
        n = n / 10;                  /* drop the last digit */
    }
    printf("Reversed number = %d\n", rev);
    return 0;
}
