
ALD ASSIGNMENT 5

AMAN AGGARWAL (2018327)

DIVIDE AND CONQUER

A-1. This problem can be solved by dividing the problem into two equal halves and solving them then merging
solutions of subproblems to get solution to original problem.
Now, suppose the two n-bit integers are X and Y. Now, divide the integers into two equal halves. Let us denote it
with X1 and X2 & Y1 and Y2. Now, each of these four have n/2 bits. Dividing the multiplication mathematically,

X * Y = (X1 * 2^(n/2) + X2) * (Y1 * 2^(n/2) + Y2)
      = 2^n * X1*Y1 + 2^(n/2) * (X1*Y2 + X2*Y1) + X2*Y2         -- (1)

X1*Y2 + X2*Y1 = (X1 + X2) * (Y1 + Y2) - X1*Y1 - X2*Y2           -- (2)

It can be seen that the left side of Equation (2) can be obtained from the products appearing on its right side.
Let X1 + X2 = X3 and Y1 + Y2 = Y3.
Therefore,

X1*Y2 + X2*Y1 = X3*Y3 - X1*Y1 - X2*Y2                           -- (3)

Substituting this into Equation (1),

X * Y = 2^n * X1*Y1 + 2^(n/2) * (X3*Y3 - X1*Y1 - X2*Y2) + X2*Y2 -- (4)

Using Equation (4) to find the product of two n-bit integers requires three n/2-bit multiplications (each sub-product is computed once, stored, and reused) plus some additions and shifts.
Multiplying two n/2-bit integers is a subproblem of size n/2, and each addition takes linear time, i.e.
O(n).

Recurrence Relation -
T(n) = 3*T(n/2) + O(n)

Using the Master Theorem with a = 3, b = 2, and f(n) = O(n),

log_b(a) = log_2(3) ≈ 1.58 > 1 = c,

T(n) = O(n^(log_b a)) = O(n^(log_2 3)) = O(n^1.58)
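
A minimal Python sketch of this recurrence (the function name and the split by bit length are illustrative choices, not part of the assignment text): it performs the three recursive multiplications of Equation (4).

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with three recursive multiplications."""
    if x < 10 or y < 10:                            # small operands: multiply directly
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x2 = x >> half, x & ((1 << half) - 1)       # X = X1*2^half + X2
    y1, y2 = y >> half, y & ((1 << half) - 1)       # Y = Y1*2^half + Y2
    a = karatsuba(x1, y1)                           # X1*Y1
    b = karatsuba(x2, y2)                           # X2*Y2
    c = karatsuba(x1 + x2, y1 + y2) - a - b         # X3*Y3 - X1*Y1 - X2*Y2, Equation (3)
    return (a << (2 * half)) + (c << half) + b      # assemble as in Equation (4)

print(karatsuba(123456789, 987654321))  # 121932631112635269
```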

A-2. Given a base and exponent we need to find base^exp.

Mathematically,

base^exp = (base^2)^(exp/2)               if exp is even
         = base * (base^2)^((exp-1)/2)    if exp is odd



Using this mathematical formula, we can define an iterative algorithm,

find_exponent( base, exp ):
    result = 1
    While exp > 0:
        If exp is odd:
            result = result * base
        base = base * base
        exp = floor( exp / 2 )
    Return result

We are just using the mathematical formula in iterative way. If exponent is odd, we multiply the base to the
current result. Then change our base to its square and reduce exponent by half as defined by the mathematical
formula.
T(n) = T(n/2) + O(1)
T(n) = O(log n) using the Master Theorem
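
The pseudocode above translates directly to Python (a sketch; the function name is illustrative):

```python
def find_exponent(base, exp):
    """Iterative fast exponentiation: O(log exp) multiplications."""
    result = 1
    while exp > 0:
        if exp % 2 == 1:        # odd exponent: fold one factor of base into result
            result *= base
        base *= base            # square the base
        exp //= 2               # halve the exponent (integer division)
    return result

print(find_exponent(3, 13))  # 1594323
```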

A-3. The Tower of Hanoi problem is based on the divide and conquer strategy. The algorithm is,

Tower_of_hanoi( source, aux, destination, n ):
    If n > 0:
        Tower_of_hanoi( source, destination, aux, n-1 )
        Move_disk( source, destination )     // the only move performed at this level: the nth disk
        Tower_of_hanoi( aux, source, destination, n-1 )

Recurrence Relation : for algorithm as well as number of moves

T(n) = 2*T(n-1) + 1

This is equivalent to T(n) = 2^n - 1


We know T (0) = 0
Proof by induction -
Base Case - T(1) = 1
T(1) = 2*T(0) + 1 = 1. True.

Induction Hypothesis - assume the formula holds for n = k:

T(k) = 2^k - 1

Induction Step - show the formula holds for n = k + 1:

T(k+1) = 2*T(k) + 1 = 2*(2^k - 1) + 1 = 2^(k+1) - 1
Hence Proved.



The total number of moves required to transfer n disks from one rod to another is 2^n - 1.
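
The recursion can be sketched in Python, recording the moves to confirm the 2^n - 1 count (rod labels are illustrative):

```python
def tower_of_hanoi(n, source, aux, destination, moves):
    """Append the (from_rod, to_rod) moves for n disks to the moves list."""
    if n > 0:
        tower_of_hanoi(n - 1, source, destination, aux, moves)
        moves.append((source, destination))              # move the nth (largest) disk
        tower_of_hanoi(n - 1, aux, source, destination, moves)

moves = []
tower_of_hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 7, i.e. 2^3 - 1
```
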
A-4. If there is a cost associated with each move, we can modify the current algorithm to accommodate the cost
factor.

find_cost( src, dest, aux, n, cost[][] ):
    If n == 0:
        Return 0
    cost1 = find_cost( src, aux, dest, n-1, cost ) + cost[src][dest]
            + find_cost( aux, dest, src, n-1, cost )
    cost2 = find_cost( src, dest, aux, n-1, cost ) + cost[src][aux]
            + find_cost( dest, src, aux, n-1, cost ) + cost[aux][dest]
            + find_cost( src, dest, aux, n-1, cost )
    Return min( cost1, cost2 )

The time complexity is currently exponential, of the order O(5^n).


The above algorithm can use memoisation to reduce the complexity.
We can create a DP table of dimension [n][3][3]. DP[k][i][j] will store the cost to move k discs from peg i to peg j; this is well defined because the cost to move k discs depends only on the (source, destination) pair.
Each entry takes constant time to compute, given that the entries it depends on are already calculated, reducing the complexity to the order of the table's size, i.e. O(n * 9).

T(n) = O(9n) = O(n)
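
A memoized sketch of the two strategies (peg numbering 0..2 and the cost matrix are illustrative assumptions):

```python
from functools import lru_cache

def min_hanoi_cost(n, cost):
    """Minimum cost to move n disks from peg 0 to peg 2.

    cost[i][j] is the cost of moving one disk from peg i to peg j
    (an illustrative 3x3 matrix; pegs are numbered 0, 1, 2).
    """
    @lru_cache(maxsize=None)
    def solve(k, src, dest):
        if k == 0:
            return 0
        aux = 3 - src - dest   # the remaining peg
        # Strategy 1: k-1 discs to aux, kth disc src->dest, k-1 discs aux->dest
        c1 = solve(k - 1, src, aux) + cost[src][dest] + solve(k - 1, aux, dest)
        # Strategy 2: k-1 to dest, kth src->aux, k-1 back to src,
        #             kth aux->dest, k-1 src->dest
        c2 = (solve(k - 1, src, dest) + cost[src][aux] + solve(k - 1, dest, src)
              + cost[aux][dest] + solve(k - 1, src, dest))
        return min(c1, c2)

    return solve(n, 0, 2)

uniform = ((0, 1, 1), (1, 0, 1), (1, 1, 0))
print(min_hanoi_cost(3, uniform))  # 7: with unit costs this is just 2^3 - 1 moves
```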

DYNAMIC PROGRAMMING

A-2. We need to find the longest alternating subsequence that starts with increasing order.
Given - A sequence of integers in form of an Array ‘A’.
Algorithm-
Create a dp table of size nx2.
Now, dp[ i ][ 0 ] will contain the length of longest alternating sequence using the elements from 1 to i only and the
last element of the resultant sequence is greater than its previous element.
dp [ i ][ 1 ] will contain the length of longest alternating sequence using the elements from 1 to i only and the last
element of the resultant sequence is smaller than its previous element.

Recurrence relation-
Dp[ i ][ 0 ] = Max over k = 1 to i-1 of ( Dp[ k ][ 1 ] + 1 )  where A[ k ] < A[ i ]
If no k satisfies the condition, then Dp[ i ][ 0 ] = 0, because we cannot take a sequence of length 1 and label it as ending with an increase: if we did, the next element appended would have to be smaller, violating the constraint that the sequence must start with an increasing step.

Dp[ i ][ 1 ] = Max over k = 1 to i-1 of ( Dp[ k ][ 0 ] + 1 )  where A[ k ] > A[ i ]



If no such k exists, then Dp[ i ][ 1 ] = 1, because the sequence is now labelled as ending with a decrease (a single element qualifies). A greater element can be appended next, which satisfies the original constraints.
Time Complexity -
T (n) = O(n2 )
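
The n x 2 table can be sketched as two arrays (`up` plays the role of Dp[i][0] and `down` of Dp[i][1]; names are illustrative):

```python
def longest_alternating(A):
    """Length of the longest alternating subsequence that starts with an increase."""
    n = len(A)
    if n == 0:
        return 0
    up = [0] * n     # Dp[i][0]: ends at i with an increasing step
    down = [1] * n   # Dp[i][1]: ends at i, ready for an increasing step next
    for i in range(n):
        for k in range(i):
            if A[k] < A[i]:
                up[i] = max(up[i], down[k] + 1)
            elif A[k] > A[i]:
                down[i] = max(down[i], up[k] + 1)
    return max(max(up), max(down))

print(longest_alternating([1, 5, 3, 4, 2]))  # 5: the whole sequence alternates
```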

A-3. We want to go from 1 to n in minimum cost. The Recurrence relation,


Given cost[ i ][ j ] = cost to rent canoe from i to j
Min_cost ( i ) = minimum cost to travel from i to n

Make an array “val” of size n


Initialize all values to be infinity

Min_cost ( i ):
    If i == n:
        Return 0
    If val[ i ] != infinity:      // result already memoized
        Return val[ i ]
    ans = infinity
    For j = i+1 to n:
        cur_cost = cost[ i ][ j ] + Min_cost( j )
        If cur_cost < ans:
            ans = cur_cost
    val[ i ] = ans
    Return ans

This algorithm will use memoization and will use val array to store intermediate results. Val [ i ] will store the
minimum cost to travel from i to n.
Recurrence Relation-

min_cost( i ) = min over k = i+1 to n of ( min_cost( k ) + cost[ i ][ k ] )

Time Complexity - O(n2 )

Proof by contradiction-
Suppose the above algorithm yields a sub-optimal solution S for going from i to n while the optimal solution is S', and let i be the first such point. By our assumption, cost(S') < cost(S). Suppose the optimal path from i to n goes directly to some j and then from j to n. But our algorithm checks this path, and the only reason it would not choose it is that it found another path of lesser cost. This contradicts the assumption that S' was optimal and had minimum cost. Therefore, our algorithm yields an optimal solution.
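
A bottom-up version of the same recurrence can be sketched as follows (the fare matrix is an illustrative input; cost[i][j] is only meaningful for i < j):

```python
def min_total_cost(cost):
    """cost[i][j]: rental cost from post i to post j (i < j); posts are 0..n-1.
    Returns the minimum cost to travel from post 0 to post n-1."""
    n = len(cost)
    INF = float('inf')
    val = [INF] * n
    val[n - 1] = 0                      # already at the destination
    for i in range(n - 2, -1, -1):      # fill right to left: val[i] depends on later posts
        val[i] = min(cost[i][j] + val[j] for j in range(i + 1, n))
    return val[0]

fares = [
    [0, 1, 10, 10],
    [0, 0, 1, 10],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]
print(min_total_cost(fares))  # 3: hop 0 -> 1 -> 2 -> 3
```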

A-4. The idea is to pick each character of Z and check both possibilities: taking that character from X or from Y.
X_k, Y_k, Z_k denote the kth characters of the respective strings (1-indexed).
Algorithm-
Create a DP table of dimension (m+1) x (n+1).
Now, DP[ i ][ j ] will store whether it is possible to interleave the first i characters of X and the first j characters of Y into the first i+j characters of Z.
Initialize all values to false.



DP[ 0 ][ 0 ] = true      // the base case is always true
Initialize the first row (i = 0) with the condition,
    DP[ 0 ][ j ] = DP[ 0 ][ j-1 ] AND Y[ j ] == Z[ j ]
Initialize the first column (j = 0) with the condition,
    DP[ i ][ 0 ] = DP[ i-1 ][ 0 ] AND X[ i ] == Z[ i ]
In the general case, follow the Recurrence Relation-
    DP[ i ][ j ] = DP[ i-1 ][ j ] OR DP[ i ][ j-1 ]   if X[ i ] == Z[ i+j ] and Y[ j ] == Z[ i+j ]
                 = DP[ i-1 ][ j ]                     if only X[ i ] == Z[ i+j ]
                 = DP[ i ][ j-1 ]                     if only Y[ j ] == Z[ i+j ]
                 = false                              otherwise

Time Complexity -
To create and fill the DP table - O(mn)
T (n) = O(mn)
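
A 0-indexed sketch of this table fill (function name is illustrative):

```python
def is_interleaving(X, Y, Z):
    """True iff Z is an interleaving of X and Y."""
    m, n = len(X), len(Y)
    if m + n != len(Z):
        return False
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True
    for j in range(1, n + 1):                                # first row: Z built from Y alone
        dp[0][j] = dp[0][j - 1] and Y[j - 1] == Z[j - 1]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] and X[i - 1] == Z[i - 1]     # first column: X alone
        for j in range(1, n + 1):
            dp[i][j] = ((dp[i - 1][j] and X[i - 1] == Z[i + j - 1]) or
                        (dp[i][j - 1] and Y[j - 1] == Z[i + j - 1]))
    return dp[m][n]

print(is_interleaving("ab", "cd", "cadb"))  # True
print(is_interleaving("ab", "cd", "abdc"))  # False: d cannot precede c
```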

A-7. Assumption - We cannot rotate the boxes. B_i can be stacked on B_j iff l_i <= l_j and w_i <= w_j.
Algorithm- Sort the boxes in decreasing order of their dimensions, breaking ties in the order area of base > length > width. Let this sorted set be S.
Now, define a max_height array of size n. max_height( i ) will store the maximum height of a stack that can be made using boxes B_k (possibly none) with k < i in S, such that the last box on the stack is B_i.

Recurrence Relation -

max_height( i ) = Max over k = 1 to i-1 of ( max_height( k ) + h_i )  satisfying l_i <= l_k and w_i <= w_k
If no k satisfies the length and width conditions, max_height( i ) = h_i.

The solution to the original problem is the maximum value stored over all positions of max_height. The idea is to check whether there exists a stack on which the current box can be placed while satisfying the given constraints. If such stacks exist, choose the one that results in a stack of maximum height. If no such stack exists so far, the only option is to start a new stack with the current box alone, in which case the stack height equals the height of the current box.

Time Complexity -
T(n) = O(n^2)
For each of the n entries in max_height, we do at most O(n) work.
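
The approach can be sketched as follows (box tuples and the function name are illustrative):

```python
def max_stack_height(boxes):
    """boxes: list of (length, width, height); no rotations allowed."""
    # Sort in decreasing order: base area, then length, then width.
    boxes = sorted(boxes, key=lambda b: (b[0] * b[1], b[0], b[1]), reverse=True)
    best = [h for (_, _, h) in boxes]        # max_height(i) starts as h_i alone
    for i in range(len(boxes)):
        for k in range(i):
            # B_i may sit on B_k only if its base fits on B_k's base
            if boxes[i][0] <= boxes[k][0] and boxes[i][1] <= boxes[k][1]:
                best[i] = max(best[i], best[k] + boxes[i][2])
    return max(best) if boxes else 0

print(max_stack_height([(3, 3, 1), (2, 2, 2), (1, 1, 3)]))  # 6: all three boxes stack
```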

STABLE MARRIAGE AND RECURSION

A-1. a) To prove that a given matching is not a stable matching, we need to find a pair which is not stable. The
given matching is
B1:G1 B2:G2 B3:G3 B4:G4

B1 has G1 as his first preference, so there exists no other girl with whom he could form a blocking pair.
B2 is matched to G2, his first preference, so he also cannot form a more preferred pair with any girl.



B3 prefers every other girl over G3. If B3 proposes to G1 or G2, they will definitely reject him because B3 ranks lower in their preferences than their current partners. If B3 proposes to G4, she will also reject him because her current partner B4 is a better partner for her. So, B3 cannot form a blocking pair.
B4 prefers every other girl over G4. If B4 proposes to G1 or G2, they will definitely reject him because he ranks lower than their current partners. If B4 proposes to G3, she will also reject him because she prefers B3 over B4. So, B4 cannot form a blocking pair either.
Since no man can obtain a better partner, the above matching is stable.

b) Proof 1- The above matching is not man-optimal. Consider another stable matching S' that is identical to S except that B3:G4 and B4:G3. As proved earlier, B3 cannot form a blocking pair with any other girl, and the same goes for B4, so S' is a stable matching. But in S', B3 and B4 get better partners than they get in S. Therefore, some men matched in S have better partners available in another stable matching, so not every man gets his best valid partner, which means this matching is not man-optimal.

Proof 2- The above matching is not man-pessimal. Consider another stable matching S' that is identical to S except that B1:G2 and B2:G1. Now, B1 can propose to G1 but she will reject him, as B2 is her top priority. Similarly, B2 can propose to G2 but she will reject him, as B1 is her top priority. And as proved earlier, B3:G3 and B4:G4 do not form blocking pairs. So, S' is a stable matching. But in S', B1 and B2 get less preferred partners than they get in S. Therefore, some men matched in S have worse partners available in another stable matching, so not every man gets his worst valid partner, which means this matching is not man-pessimal.

c) Man-optimal - B1:G1 B2:G2 B3:G4 B4:G3


As proved in b), this matching is stable. B1 and B2 have already got their best valid partners, as they are matched to their top-priority girls. B3 could not have got a better partner than G4, since both G1 and G2 prefer their current partners more; so G4 is the best valid partner for B3. Similarly, B4 has got his best valid partner. Therefore, this matching is man-optimal, as every man has got his best valid partner.

Man-pessimal - B1:G2 B2:G1 B3:G3 B4:G4


This matching is stable, as proved in b). Now, B3 and B4 have already got their worst partners. B1 and B2 both rank G3 and G4 low in their preferences, but no stable matching exists with B1 or B2 paired with G3 or G4: that would surely create a blocking pair, because B1 or B2 (whichever is in question) prefers G1 or G2 and would leave G3 or G4 (whichever he is paired with). Now, B1 and B2 have as their current partners G2 and G1 respectively, the least preferred girls achievable for them in any stable matching, which means these are the worst valid partners for B1 and B2. This concludes that this is a man-pessimal matching, because each man has got his worst valid partner.

A-2. Idea - Take the input string as original and its reverse. Try to make the two strings equal with the minimum number of character removals. The remaining string would be a palindrome, because we are finding the longest common subsequence of the two strings, and such a common subsequence of a string and its reverse can only be a palindrome.

Recursive solution -

min_palin( str1, str2, a, b ):      // a is the length of str1 and b is the length of str2
    If a == 0 || b == 0:
        Return a + b      // if one string is empty, all characters of the other must be removed
                          // to make them equal
    If last character of str1 == last character of str2:
        Return min_palin( str1, str2, a-1, b-1 )
    sub_ans = min( min_palin( str1, str2, a-1, b ), min_palin( str1, str2, a, b-1 ) )
    Return 1 + sub_ans

Time Complexity- O(2^n)

The algorithm is not efficient, but it satisfies the condition that the approach should be recursive.
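
The same recursion, with memoization bolted on as a sketch, can be written as follows; note that it counts the total characters removed from the two strings (the original and its reverse) to make them equal:

```python
from functools import lru_cache

def min_palin(s):
    """Minimum total removals to make s and its reverse equal
    (memoized version of the recursion above)."""
    t = s[::-1]

    @lru_cache(maxsize=None)
    def rec(a, b):                      # a, b: lengths of the prefixes considered
        if a == 0 or b == 0:
            return a + b                # one string empty: remove the rest of the other
        if s[a - 1] == t[b - 1]:
            return rec(a - 1, b - 1)    # matching last characters cost nothing
        return 1 + min(rec(a - 1, b), rec(a, b - 1))

    return rec(len(s), len(t))

print(min_palin("aba"))  # 0: already a palindrome
print(min_palin("ab"))   # 2: one character removed from each copy
```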

A-4. Idea - Given i and j and the input string (1-indexed), we can find the longest repeating subsequence using recursion.

Recursive solution-

max_lrs( str, i, j ):
    If i == 0 || j == 0:
        Return 0
    If i != j and str[ i ] == str[ j ]:
        Return 1 + max_lrs( str, i-1, j-1 )
    Else:
        Return max( max_lrs( str, i-1, j ), max_lrs( str, i, j-1 ) )

The method can be called like max_lrs( str, n, n) where n is the length of string str.
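
A memoized sketch of this recursion (0-indexed internally; function name is illustrative):

```python
from functools import lru_cache

def max_lrs(s):
    """Length of the longest repeating subsequence of s."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == 0 or j == 0:
            return 0
        if i != j and s[i - 1] == s[j - 1]:   # same character, different positions
            return 1 + rec(i - 1, j - 1)
        return max(rec(i - 1, j), rec(i, j - 1))

    n = len(s)
    return rec(n, n)

print(max_lrs("aabb"))  # 2: "ab" occurs twice as a subsequence
```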

A-5. The answer can be found with a recursive algorithm. Given a node as root, to find the size of the largest BST we check whether both subtrees are binary search trees and whether they are placed correctly with respect to the current root; if so, the whole tree rooted at the current node is a BST.

Assumption - If one of the subtrees of the given root does not satisfy the BST property then the tree with given
root will not be considered a valid BST.

Algorithm pseudocode-

Largest_BST( root ):
    If root == null:
        Return 0
    If root.left == null and root.right == null:
        Return 1
    If root, root.left and root.right satisfy the BST property:  // left child smaller than root, right child
                                                                 // greater than root (null children allowed)
        Return 1 + Largest_BST( root.left ) + Largest_BST( root.right )
    Return max( Largest_BST( root.left ), Largest_BST( root.right ) )  // root violates the property,
                                                                       // so the largest BST lies in a subtree

GREEDY ALGORITHM

A-1. Algorithm- We are given the points in non-decreasing order of their positions.

Initialize k with the 1st point in the set.
Step 1 - Choose a unit-length interval with starting point x_k and ending point x_k + 1.
Step 2 - Add this interval to the selected set of intervals.
Step 3 - Traverse the set of points and skip all points that lie inside this interval.
Step 4 - Let m be the first point that does not lie in any selected interval.
Step 5 - Make m the new k and repeat from Step 1.



Time Complexity-
T(n) = O(n)
In this algorithm the set of points is traversed only once.
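
The steps above can be sketched as (function name and sample points are illustrative):

```python
def unit_intervals(points):
    """Greedy interval selection; points must be sorted in non-decreasing order."""
    intervals = []
    i, n = 0, len(points)
    while i < n:
        start = points[i]                     # first uncovered point opens an interval
        intervals.append((start, start + 1))
        while i < n and points[i] <= start + 1:
            i += 1                            # skip every point inside this interval
    return intervals

print(unit_intervals([0.5, 0.9, 1.2, 3.0, 3.5]))  # [(0.5, 1.5), (3.0, 4.0)]
```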

Proof by contradiction - Let the solution given by the above greedy approach be a sub-optimal solution S. Consider an optimal solution S', and let the first interval on which the two solutions differ be [d_k, d_k + 1] in S and [d_k', d_k' + 1] in S'. Up to this point, by our assumption, both solutions have covered exactly the same set of points with the same number of intervals. Now, there are two cases:

d_k < d_k' - This means there exists a point x_i = d_k, because greedy always chooses the interval whose starting point is the first uncovered point. So S' misses x_i, which contradicts S' being a valid solution. Therefore, this case cannot happen.

d_k > d_k' - This means the interval starting at d_k' ends earlier than the interval chosen by greedy. There may be some point x_j with d_k' + 1 < x_j <= d_k + 1; the optimal solution S' has not covered such a point with its current interval, whereas the greedy interval covers it along with every point covered by the interval in S'. Hence, while choosing subsequent intervals, S never requires more intervals than S'. This contradicts our assumption that S is sub-optimal.

Hence, the assumption that the greedy approach gives a sub-optimal solution is wrong.

A-2. Algorithm-
Initialize A with St. Louis.
Step 1 - Start at A.
Step 2 - Refill at the farthest reachable gas station G from A (i.e. the gas station at maximum distance from the current position, with that distance less than or equal to m, the capacity of the tank).
Step 3 - Make G the new A.
Step 4 - Repeat the above steps until G becomes B.

Proof by contradiction - Assume our greedy approach gives a sub-optimal solution S, and there exists an optimal solution S' with fewer gas stations. Let d_k in S and d_k' in S' be the first point where the selections of the next gas station differ. There are two cases:

d_k < d_k' - Since greedy chooses the reachable gas station at maximum distance from the current position, this case cannot happen, so it is neglected.

d_k > d_k' - In this case, greedy chose a station farther than the one in the optimal solution. Now consider d_{k+1}', the next station in S'; it must be reachable from d_k'. But then it is also reachable from d_k, which is nearer to it, so greedy can continue with an equal or smaller number of gas stations. This contradicts our assumption that S was sub-optimal.

Hence, the greedy approach will always work.

Time Complexity-
T(n) = O(n)
We traverse the array once, finding the gas stations one by one from left to right.
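
A sketch of the greedy choice (the station list, destination distance, and function name are illustrative assumptions):

```python
def refuel_stops(stations, destination, m):
    """Greedy choice of refuelling stops.

    stations: sorted distances of gas stations from the start;
    destination: distance of B from the start; m: miles per full tank.
    Returns the list of chosen stations, or None if B is unreachable."""
    stops = []
    pos = 0
    while pos + m < destination:
        reachable = [s for s in stations if pos < s <= pos + m]
        if not reachable:
            return None               # stuck: no station within a tank of fuel
        pos = max(reachable)          # farthest reachable station
        stops.append(pos)
    return stops

print(refuel_stops([3, 5, 9], 12, 5))  # [5, 9]
```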

A-3. Greedy approach in this problem can give non-optimal solution in many cases. Consider a case where you
want to make change for 30 cents. Greedy algorithm will suggest the solution {25,1,1,1,1,1}. Total coins used are
6. But a better solution exists, {10,10,10}. In this solution, we only use 3 coins to change 30 cents. With this
counterexample, it is proved that a greedy approach can fail in coin change problem.
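
The counterexample above (which implicitly assumes the coin set {25, 10, 1}) is easy to check with a small sketch:

```python
def greedy_change(amount, coins):
    """Greedy coin change: coins must be given in decreasing order."""
    picked = []
    for c in coins:
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked

# Coin set {25, 10, 1}: greedy uses 6 coins for 30 cents, but 3 dimes suffice.
print(greedy_change(30, [25, 10, 1]))  # [25, 1, 1, 1, 1, 1]
```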



A-5. Approach - Sort the given events in decreasing order of profit. Then take the events one by one and find the free slot closest to t_i that is at or before t_i. Allot this slot to the event and continue the process.

Pseudocode-

Sort events in non-increasing order of their profit values. Let this set be S.
For each i in S:
    T = maximum allowed time for scheduling the ith event
    Find the maximum j <= T such that slot j is not alloted to another event
    Allot the jth slot to i

Time Complexity-
T(n) = O(n^2)
Quadratic, because for each event we have to find the most suitable slot, which means traversing the free slots in reverse direction in linear time.
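
The pseudocode can be sketched as follows (event tuples and the function name are illustrative):

```python
def max_profit_schedule(events):
    """events: list of (deadline, profit); each event takes one unit-time slot.
    Greedily place high-profit events in the latest free slot at or before
    their deadline."""
    events = sorted(events, key=lambda e: -e[1])          # non-increasing profit
    max_t = max(d for d, _ in events)
    free = [True] * (max_t + 1)                           # slots 1..max_t
    total = 0
    for deadline, profit in events:
        for j in range(deadline, 0, -1):                  # latest free slot <= deadline
            if free[j]:
                free[j] = False
                total += profit
                break
    return total

print(max_profit_schedule([(2, 100), (1, 19), (2, 27), (1, 25), (3, 15)]))  # 142
```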

ASYMPTOTIC ANALYSIS

A-1. Big-Theta notation is used when we have an asymptotically tight bound. A tight bound means we can sandwich the running time within constant factors from above as well as below.

Given f(n) = O(g(n)) and f(n) = Ω(g(n)).

Using the definition of Big-Omega, there is a constant k1 > 0 such that, for all sufficiently large n,
f(n) >= k1 * g(n)
Using the definition of Big-O, there is a constant k2 > 0 such that, for all sufficiently large n,
f(n) <= k2 * g(n)

Merging both inequalities,

k1 * g(n) <= f(n) <= k2 * g(n)

This is in accordance with the definition of Big-Theta: the given function is bounded by constant factors of g(n) from both sides. This means that f(n) = Θ(g(n)).

Similarly, if we are given that f(n) = Θ(g(n)), we can write

k1 * g(n) <= f(n) <= k2 * g(n)

This can be split into two independent inequalities,

f(n) >= k1 * g(n)      // by the definition of Big-Omega, f(n) = Ω(g(n))
f(n) <= k2 * g(n)      // by the definition of Big-O, f(n) = O(g(n))

Having proved both directions, we can say that for any two functions f(n) and g(n), f(n) = Θ(g(n)) if and only if
f(n) = O(g(n)) and f(n) = Ω(g(n)).

A-4.



a) T(n) = O(n) because even in the worst case this algorithm runs n times with constant work per iteration. Also, T(n) = Ω(n) because even in the best case it runs n times with constant work per iteration. Therefore,
T(n) = Θ(n)

b) Pseudocode -

ans = 0
For i = 0 to n:
    local = a[ i ]
    For j = 1 to i:
        local = local * x
    ans = ans + local

Time Complexity-
T(n) = O(n^2)

The running time of this algorithm is much higher than that of Horner's rule.

c) We are given that at the start of an iteration of the for loop,

y = Σ_{k=0}^{n-(i+1)} a_{k+i+1} * x^k

Initially, i = n. Using the formula above,

y = Σ_{k=0}^{n-(n+1)} a_{k+n+1} * x^k = Σ_{k=0}^{-1} a_{k+n+1} * x^k
  // there are no terms in this summation, therefore y = 0, satisfying the base case

Suppose the formula holds at the start of the iteration with i = j:

y = Σ_{k=0}^{n-(j+1)} a_{k+j+1} * x^k

The next line in the loop updates the value of y,

y = a_j + x * Σ_{k=0}^{n-(j+1)} a_{k+j+1} * x^k
  = a_j + Σ_{k=0}^{n-(j+1)} a_{k+j+1} * x^(k+1)
  = Σ_{k=0}^{n-j} a_{k+j} * x^k

This is exactly the summation giving the value of y at the start of the iteration with i = j-1, i.e. the next iteration.
At the end, i becomes -1, so we can substitute i = -1 in the summation to get the final value:

y = Σ_{k=0}^{n-(-1+1)} a_{k+(-1)+1} * x^k



y = Σ_{k=0}^{n} a_k * x^k

The above proof follows the structure of a proof by induction.

d) As proved in part c), y equals the above summation at the end of the code fragment. Our goal was to calculate

y = a_0 + a_1*x + a_2*x^2 + a_3*x^3 + ... + a_{n-1}*x^(n-1) + a_n*x^n

Converting this equation into a summation,

y = Σ_{k=0}^{n} a_k * x^k

This is exactly the value obtained at the end. Hence, the code fragment correctly evaluates the value of a polynomial.
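
The loop analysed above is Horner's rule; a direct sketch:

```python
def horner(a, x):
    """Evaluate a[0] + a[1]*x + ... + a[n]*x^n with one multiply-add per coefficient."""
    y = 0
    for i in range(len(a) - 1, -1, -1):   # i runs from n down to 0, as in the fragment
        y = a[i] + x * y
    return y

print(horner([1, 2, 3], 2))  # 1 + 2*2 + 3*4 = 17
```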

A-5. To prove: for the recurrence

T(n) = 2                 if n = 2
T(n) = 2*T(n/2) + n      if n = 2^k, for k > 1

the solution is T(n) = n * log2(n).

Base Case - T(2) = 2

The closed form gives 2 * log2(2) = 2.

T(4) = 2*T(2) + 4 = 2*2 + 4 = 4 + 4 = 8
The closed form gives 4 * log2(4) = 4*2 = 8.

The base cases are true.

Induction Hypothesis - the formula holds for 2^k, i.e. T(2^k) = 2^k * k.

Induction Step - prove the formula for 2^(k+1), assuming it for 2^k:

T(2^(k+1)) = 2*T(2^k) + 2^(k+1) = 2*(2^k * k) + 2^(k+1) = 2^(k+1) * (k + 1)

The closed form gives 2^(k+1) * log2(2^(k+1)) = 2^(k+1) * (k + 1).

LHS = RHS.
Hence proved.

The above recurrence relation evaluates to n * log2(n) for all n of the form 2^k.

A-8. We are given a polynomial a_d*n^d + a_{d-1}*n^(d-1) + ... + a_1*n + a_0. Also, it is given that a_d is always positive, which means the degree-d term is the leading term no matter what values the other coefficients take.
a) When k >= d, p(n) = O(n^k).
Big-O provides an upper bound with a "<=" relation, so the bound may be greater than or equal to the actual growth rate. Here p(n) = O(n^d), and since n^k grows at least as fast as n^d for k >= d, p(n) = O(n^k); because k can be equal to d, Big-O (rather than small-o) is the notation to use.



b) When k <= d, p(n) = Ω(n^k).
Big-Omega provides a lower bound with a ">=" relation, so the bound may be smaller than or equal to the actual growth rate. Since p(n) has n^d as its leading term, the polynomial is at least a_d * n^d for large n, so any n^k with k <= d is a valid lower bound; because k can be equal to d, Big-Omega (rather than small-omega) is the notation to use.

c) When k = d, p(n) = Θ(n^k).

p(n) has leading term a_d * n^d. Even after adding all the smaller terms, we can give an upper bound of the form k2 * n^d, and we can give a lower bound because a_d * n^d is, up to a constant, the minimum value the polynomial takes for large n. Ignoring the constants in both bounds, a single function f(n) = n^d bounds p(n) from above and below; therefore, by definition, p(n) = Θ(n^d), and since k = d, p(n) = Θ(n^k).

d) When k > d, p(n) = o(n^k).

Small-o notation is used for a strict upper bound, meaning the actual growth is always strictly smaller than the stated order; it states a "<" relation. Since p(n) = O(n^d) and k cannot be equal to d, n^k grows strictly faster than p(n), so we use small-o.

e) When k < d, p(n) = ω(n^k).

Small-omega notation provides a lower bound with a ">" relation, meaning the stated bound grows strictly slower than the actual value. Since p(n) has n^d as its leading term, p(n) is at least a_d * n^d for large n, and because k cannot be equal to d, n^k grows strictly slower than p(n), so we use small-omega.

