Data Structure and Algorithms


Assignment No. 2

Semester: 3rd (Mor-B)
Class: BS(SE)
Subject: Data Structure and Algorithms (CSI-401)
Submitted By: Muhammad Salman Razzaq (4532)
Syed Saleeh Shah (4521)
Muhammad Umar Bashart (4539)
Submitted To: Dr. Awais
Q.1. Provide the O time complexities of the following problems along
with the arguments supporting the provided time complexities.

Ans:

1.
Complexity of traversing a sparse matrix.
Sparse matrices are those matrices in which the majority of the elements are equal to zero. Computing time can be saved by designing a data structure that traverses only the non-zero elements: if an m x n matrix has k non-zero entries, such a traversal takes O(k) time rather than the O(m*n) time needed to visit every cell.
A two-dimensional array with three rows is used to represent a sparse matrix:
Row: index of the row where a non-zero element is located
Column: index of the column where a non-zero element is located
Value: value of the non-zero element located at that index
EXAMPLE:
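A minimal sketch in C of this triplet representation (the matrix values below are assumed purely for illustration):

#include <stdio.h>

int main() {
    /* A 4x5 matrix in which only 4 of the 20 elements are non-zero. */
    int matrix[4][5] = {
        {0, 0, 3, 0, 0},
        {0, 0, 0, 0, 0},
        {0, 5, 0, 0, 7},
        {0, 0, 0, 2, 0}
    };
    int i, j;
    /* Print only the non-zero elements as (row, column, value) triplets.
       Once built, the triplet list can be traversed in O(k) time,
       where k is the number of non-zero elements. */
    printf("Row Column Value\n");
    for (i = 0; i < 4; i++)
        for (j = 0; j < 5; j++)
            if (matrix[i][j] != 0)
                printf("%d   %d      %d\n", i, j, matrix[i][j]);
    return 0;
}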
2.
Complexity of matrix multiplication.
The complexity of matrix multiplication is determined by counting the number of multiplications required to solve the problem. For two n x n matrices, each of the n^2 entries of the product requires n multiplications, giving n * n^2 = n^3 multiplications in total.
Thus the complexity of Algorithm 4.7 is O(n^3).
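A minimal illustration of this counting argument in C (the matrix size and values below are assumed purely for the example):

#include <stdio.h>
#define N 2

int main() {
    int A[N][N] = {{1, 2}, {3, 4}};
    int B[N][N] = {{5, 6}, {7, 8}};
    int C[N][N] = {{0}};
    int i, j, k;
    /* Three nested loops, each running N times: N*N*N multiplications
       in total, which is why the algorithm is O(n^3). */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            for (k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++)
            printf("%d ", C[i][j]);
        printf("\n");
    }
    return 0;
}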
3.
Complexity of address calculation in an array.
In this sorting algorithm, a hash function f with the Order Preserving property is used, which states that if x1 < x2, then f(x1) ≤ f(x2).
Hash Function:
f(x) = floor( (x / maximum) * (SIZE - 1) )
where maximum is the maximum value in the array,
SIZE is the size of the address table (10 in our case, so addresses range from 0 to 9),
and floor is the floor function.
This algorithm uses an address table to store the values, which is simply a list (or array) of linked lists. Each of the n input values is hashed and appended to its list; assuming the values are fairly uniformly distributed, each insertion takes O(1) on average, so the sort runs in O(n) time on average.
Examples:
Input : arr = [29, 23, 14, 5, 15, 10, 3, 18, 1]
Output:
After inserting all the values in the address table, the address table looks like this:
ADDRESS 0: 1 --> 3
ADDRESS 1: 5
ADDRESS 2:
ADDRESS 3: 10
ADDRESS 4: 14 --> 15
ADDRESS 5: 18
ADDRESS 6:
ADDRESS 7: 23
ADDRESS 8:
ADDRESS 9: 29
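A minimal sketch in C of just the address computation for the input above (the full sort, including the linked lists and the final in-order read-out, is omitted; the helper name address is my own):

#include <stdio.h>
#define SIZE 10   /* size of the address table */

/* Order-preserving hash: floor((x / maximum) * (SIZE - 1)),
   computed here in integer arithmetic. */
int address(int x, int maximum) {
    return (x * (SIZE - 1)) / maximum;
}

int main() {
    int arr[] = {29, 23, 14, 5, 15, 10, 3, 18, 1};
    int n = sizeof(arr) / sizeof(arr[0]);
    int maximum = 29;   /* largest value in arr */
    int i;
    for (i = 0; i < n; i++)
        printf("%2d -> ADDRESS %d\n", arr[i], address(arr[i], maximum));
    return 0;
}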
4.
Insertion into an array.
Insertion Operation
The insert operation inserts one or more data elements into an array. Based on the requirement, a new element can be added at the beginning, at the end, or at any given index k of the array. Inserting at index k requires shifting the n − k elements that follow it one position to the right, so the worst case (insertion at the beginning) takes O(n) time.
Example
Following is an implementation of the above algorithm −
#include <stdio.h>

int main() {
    int LA[6] = {1, 3, 5, 7, 8};   /* one spare slot for the new element */
    int item = 10, k = 3, n = 5;
    int i, j = n - 1;
    printf("The original array elements are :\n");
    for (i = 0; i < n; i++) {
        printf("LA[%d] = %d \n", i, LA[i]);
    }
    /* Shift elements from index k onwards one position to the right. */
    while (j >= k) {
        LA[j + 1] = LA[j];
        j = j - 1;
    }
    LA[k] = item;   /* place the new element at index k */
    n = n + 1;
    printf("The array elements after insertion :\n");
    for (i = 0; i < n; i++) {
        printf("LA[%d] = %d \n", i, LA[i]);
    }
    return 0;
}
When we compile and execute the above program, it produces the following result −
Output
The original array elements are :
LA[0] = 1
LA[1] = 3
LA[2] = 5
LA[3] = 7
LA[4] = 8
The array elements after insertion :
LA[0] = 1
LA[1] = 3
LA[2] = 5
LA[3] = 10
LA[4] = 7
LA[5] = 8
5.
The algorithm sorts the array A with N elements.
The algorithm implementation:
#include <stdio.h>

void insertion_sort(int a[], int N);

int main()
{
    int a[10], i, N;
    N = 10;
    printf("\n Enter the ten elements to sort :\n");
    for (i = 0; i < N; i++)
        scanf("%d", &a[i]);
    insertion_sort(a, N);
    printf("\n\nThe sorted elements are:\n");
    for (i = 0; i < N; i++)
        printf("%d\n", a[i]);
    return 0;
}

void insertion_sort(int a[], int N)
{
    int i, j, temp;
    /* Insert a[i] into the already-sorted prefix a[0..i-1]. */
    for (i = 1; i < N; i++) {
        temp = a[i];
        /* Shift elements larger than temp one position to the right. */
        for (j = i - 1; j >= 0; j--)
            if (a[j] > temp)
                a[j + 1] = a[j];
            else
                break;
        a[j + 1] = temp;
    }
}
OUTPUT:
Enter the ten elements to sort:
55
66
9
8
7
41
23
69
33
22
The sorted elements are:
7
8
9
22
23
33
41
55
66
69
Complexity of insertion sort
The number f(n) of comparisons in the insertion sort algorithm can easily be computed. First of all, the worst case occurs when the array A is in reverse order and the inner loop must use the maximum number K − 1 of comparisons. Hence,
f(n) = 1 + 2 + … + (n − 1) = n(n − 1)/2 = O(n^2)
Similarly, on average the inner loop uses (K − 1)/2 comparisons, so
f(n) = 1/2 + 2/2 + … + (n − 1)/2 = n(n − 1)/4 = O(n^2)
Hence the insertion sort algorithm is a very slow algorithm when n is large.
Time can be saved by using binary search rather than linear search to find the location at which to insert A[K]: binary search uses approximately log K comparisons rather than (K − 1)/2. Note that binary search only reduces the comparisons; the elements still have to be shifted, so the worst case remains O(n^2). When n is small, linear search is more efficient than binary search.
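A minimal sketch of this binary-search variant (binary insertion sort), reusing the same ten-element input as above; the function names are my own for illustration:

#include <stdio.h>

/* Find, by binary search, the index in a[0..high] where item belongs. */
int find_position(int a[], int high, int item)
{
    int low = 0, mid;
    while (low <= high) {
        mid = (low + high) / 2;
        if (a[mid] > item)
            high = mid - 1;
        else
            low = mid + 1;
    }
    return low;
}

void binary_insertion_sort(int a[], int N)
{
    int i, j, pos, temp;
    for (i = 1; i < N; i++) {
        temp = a[i];
        /* About log K comparisons to locate the slot ... */
        pos = find_position(a, i - 1, temp);
        /* ... but shifting still costs O(K) moves in the worst case. */
        for (j = i - 1; j >= pos; j--)
            a[j + 1] = a[j];
        a[pos] = temp;
    }
}

int main()
{
    int a[] = {55, 66, 9, 8, 7, 41, 23, 69, 33, 22};
    int i, N = 10;
    binary_insertion_sort(a, N);
    for (i = 0; i < N; i++)
        printf("%d\n", a[i]);
    return 0;
}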

Q.2 What is the difference between O and o, and ω?

The difference between Big O notation and Big Ω notation is that Big O is used to describe the worst-case running time of an algorithm, whereas Big Ω notation is used to describe the best-case running time of a given algorithm.

Big O gives only an asymptotic upper bound, while big Theta (Θ) gives both an upper and a lower bound. Everything that is Θ(f(n)) is also O(f(n)), but not the other way around. For this reason big-Theta is more informative than big-O notation, so if we can say something is big-Theta, that is usually preferred.

But if we need to make a blanket statement that covers all cases, we use Big O and say, for example, that insertion sort is O(n^2).
- f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus there exists some constant c such that f(n) is always ≤ c · g(n) for large enough n (i.e., n ≥ n0 for some constant n0).
- f(n) = Ω(g(n)) means c · g(n) is a lower bound on f(n). Thus there exists some constant c such that f(n) is always ≥ c · g(n) for all n ≥ n0.
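As a quick worked instance of both definitions, take f(n) = 3n + 5. Then f(n) = O(n), since 3n + 5 ≤ 4n whenever n ≥ 5 (take c = 4 and n0 = 5), and f(n) = Ω(n), since 3n + 5 ≥ 3n for all n ≥ 1 (take c = 3 and n0 = 1). Together these give f(n) = Θ(n).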
Q.3 Discuss the following equations with respect to Time complexity.
Ans:
1st part.
Θ-notation
In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n^2). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that it can be "sandwiched" between c1·g(n) and c2·g(n) for sufficiently large n. Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. This abuse of equality to denote set membership may at first appear confusing, but it has advantages.
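As a brief worked instance of the definition: (1/2)n^2 − 3n = Θ(n^2), because for all n ≥ 7 we have (1/14)n^2 ≤ (1/2)n^2 − 3n ≤ (1/2)n^2, so the constants c1 = 1/14, c2 = 1/2, and n0 = 7 satisfy the definition.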

2nd part.

O-notation
The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n to the right of n0, the value of the function f(n) is on or below c·g(n).

We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an^2 + bn + c, where a > 0, is in Θ(n^2) also shows that any such quadratic function is in O(n^2). What may be more surprising is that when a > 0, any linear function an + b is in O(n^2), which is easily verified by taking c = a + |b| and n0 = max(1, −b/a).
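To check that choice of constants: for n ≥ 1 we have n ≤ n^2 and 1 ≤ n^2, so an + b ≤ an + |b| ≤ a·n^2 + |b|·n^2 = (a + |b|)·n^2, while n ≥ −b/a guarantees an + b ≥ 0, as the definition requires.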

3rd part.
Ω-notation
Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

The intuition behind Ω-notation is shown in Figure 3.1(c). For all values n to the right of n0, the value of f(n) is on or above c·g(n). For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).

As an application of this equivalence, our proof that an^2 + bn + c = Θ(n^2) for any constants a, b, and c, where a > 0, immediately implies that an^2 + bn + c = Ω(n^2) and an^2 + bn + c = O(n^2). In practice, rather than using the theorem to obtain asymptotic upper and lower bounds from asymptotically tight bounds, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.
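As a brief worked instance of the Ω definition: f(n) = 4n^2 + n is Ω(n^2), since with c = 4 and n0 = 1 we have 0 ≤ 4·n^2 ≤ 4n^2 + n for all n ≥ 1.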

The end
