CS260: Algorithms

First exercise sheet (to be discussed in week 2 seminars)


October 2020

Reading assignment for weeks 1–2


Introductory material and revision (required):
[KT] Preface and Section 2.
[S] Preface, and Sections 1 and 2.
Check that everything looks simple and familiar. This is meant as revision material, and it will not be
discussed in detail in the lectures. Please make sure that you understand it well.
A good understanding of the concepts involved in reasoning about efficiency is essential for confident
performance in this module. This first exercise sheet gives you the opportunity to practice.

[KT] Algorithm Design, by J. Kleinberg and É. Tardos


[S] The Algorithm Design Manual, by S. S. Skiena

Seminar exercises
1. Suppose you have algorithms with the six running times listed below. (Assume that these are the
exact numbers of operations performed as a function of the input size n.) Suppose you have a
computer that can perform 10^10 operations per second, and you need to compute a result in
at most an hour of computation. For each of the algorithms, what is the largest input size n
for which you would be able to get the result within an hour?

n^2,  n^3,  100 n^2,  n log n,  2^n,  2^(2^n)
Explain how you obtained your results. If you cannot obtain the exact number analytically,
provide estimates (lower and upper bounds). For the function n log n, choose the base of
the logarithm to be a number between 2 and 10 that you find the most convenient for your
calculations.
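Beyond the analytic estimates the exercise asks for, you can sanity-check your answers numerically: for any concrete running-time function, the largest feasible n can be bracketed by repeated doubling and then pinned down by binary search. A sketch (the helper name and the choice of log base 2 are ours, not part of the exercise):

```python
import math

BUDGET = 10**10 * 3600  # operations available in one hour at 10^10 ops/sec

def largest_input(f, budget=BUDGET):
    """Largest n with f(n) <= budget: double to bracket, then binary search."""
    if f(1) > budget:
        return 0
    hi = 1
    while f(2 * hi) <= budget:
        hi *= 2
    lo, hi = hi, 2 * hi       # now f(lo) <= budget < f(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

running_times = {
    "n^2":     lambda n: n * n,
    "n^3":     lambda n: n ** 3,
    "100n^2":  lambda n: 100 * n * n,
    "n log n": lambda n: n * math.log2(n) if n > 1 else 0,
    "2^n":     lambda n: 2 ** n,
    "2^(2^n)": lambda n: 2 ** (2 ** n),
}

for name, f in running_times.items():
    print(name, largest_input(f))
```

This is only a check on your calculations, not a substitute for them: the exercise still expects analytic bounds.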

2. Take the following list of functions and arrange them in ascending order of growth rate.

f1(n) = 2^n,  f2(n) = n + 10 log n,  f3(n) = n^3,

f4(n) = log n,  f5(n) = 2^π,  f6(n) = n^2 + 5n

Justify your answer by arguing that if function g(n) immediately follows function f(n) in your
list, then it is the case that f(n) is O(g(n)).

3. Consider the following basic problem. You’re given an array A consisting of n integers
A[1], A[2], . . . , A[n]. You’d like to output a two-dimensional n-by-n array B in which B[i, j]
(for i < j) contains the sum of array entries A[i] through A[j]—that is, the sum
∑_{k=i}^{j} A[k] = A[i] + A[i + 1] + · · · + A[j].

(The value of array entry B[i, j] is left unspecified whenever i ≥ j, so it doesn’t matter what
is output for these values.)
Here’s a simple algorithm to solve this problem.
for i = 1, 2, . . . , n do
for j = i + 1, i + 2, . . . , n do
Add up array entries A[i] through A[j]
Store the result in B[i, j]
end for
end for
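As a sanity check before analysing it, here is a direct (0-indexed) Python transcription of the pseudocode above; the function name is illustrative:

```python
def all_range_sums(A):
    """Direct transcription of the pseudocode above, 0-indexed.

    B[i][j] holds A[i] + ... + A[j] for i < j; entries with i >= j stay None.
    """
    n = len(A)
    B = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # Add up array entries A[i] through A[j] from scratch each time.
            B[i][j] = sum(A[i:j + 1])
    return B

B = all_range_sums([1, 2, 3, 4])
print(B[0][3])  # prints 10, i.e. 1 + 2 + 3 + 4
```

Note how the inner `sum(A[i:j + 1])` re-adds the same entries repeatedly; that repeated work is exactly what parts (a)–(c) ask you to quantify and then avoid.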

(a) For some function f that you should choose, give and justify a bound of the form O(f (n))
on the running time of this algorithm on an input of size n (i.e., a bound on the number
of operations performed by the algorithm).
(b) For this same function f , show that the running time of the algorithm on an input of
size n is also Ω(f (n)). (This shows an asymptotically tight bound of Θ(f (n)) on the
running time of the algorithm.)

(c) Although the algorithm you analyzed in parts (a) and (b) is the most natural way to
solve the problem—after all, it just iterates through the relevant entries of the array B,
filling in a value for each—it contains some highly unnecessary sources of inefficiency.
Give a different algorithm to solve this problem, with an asymptotically better running
time. In other words, you should design an algorithm with running time O(g(n)), where
lim_{n→∞} g(n)/f(n) = 0.

Revision exercises
For the revision, self-study, and supplementary exercises below, you may need to use the fact that
for all constants a, b > 1 we have that log_a n is Θ(log_b n). In other words, failing to explicitly state
what the base of the logarithm is will not make the statements of the problems below ambiguous.
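This fact follows in one line from the change-of-base identity for logarithms:

```latex
\log_a n \;=\; \frac{\log_b n}{\log_b a}
\qquad\text{so}\qquad
\log_a n = c \cdot \log_b n \quad\text{with the constant } c = \tfrac{1}{\log_b a} > 0,
\quad\text{hence}\quad \log_a n = \Theta(\log_b n).
```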

4. Take the following list of functions and arrange them in ascending order of growth rate.

f1(n) = n^2.5,  f2(n) = √(2n),  f3(n) = n + 10,

f4(n) = 10^n,  f5(n) = 100^n,  f6(n) = n^2 log n

Justify your answer by arguing that if function g(n) immediately follows function f(n) in your
list, then it is the case that f(n) is O(g(n)).

5. Determine whether the following statements are true or false:

(a) 3n is O(n^2)
(b) n^2 log n is O(n^2)
(c) 2^n is O(n^2)
(d) ∑_{i=1}^{n} i is Θ(n^2)
(e) (n^2 − n log n) is Ω(n^2)

Provide justifications for your answers; in parts 5(c) and 5(e) you may use the facts from 9(c)
and 9(e).

6. Let f (n), g(n) and h(n) be non-negative and monotonically non-decreasing functions. Prove
or disprove the following statements.

(a) If f (n) is Θ(h(n)) and g(n) = O(h(n)) then f (n) + g(n) is O(h(n)).
(b) If f (n) is Θ(h(n)) and g(n) = O(h(n)) then f (n) + g(n) is Θ(h(n)).
(c) If f (n) is Ω(h(n)) and g(n) = Θ(k(n)) then f (n) · g(n) is Ω(h(n) · k(n)).
(d) If h(n) is Ω(f (n)) and g(n) = Θ(k(n)) then f (n) · g(n) is O(h(n) · k(n)).

Self-study and supplementary exercises


7. Consider the Fibonacci numbers f_n, defined by the rule

f_0 = 0,  f_1 = 1,  f_n = f_{n−1} + f_{n−2}.

In this problem we will confirm that this sequence grows exponentially fast and obtain some
bounds on its growth.

(a) Use induction to prove that f_n ≥ 2^(0.5n) for n ≥ 6.
(b) Find a constant c < 1 such that f_n ≤ 2^(cn) for all n ≥ 0. Show that your answer is correct.
(c) What is the largest c you can find for which f_n = Ω(2^(cn))?
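The recurrence is easy to evaluate directly, so a short loop can be used to spot-check the exponential lower bound from part (a) on small values before attempting the induction. A sketch:

```python
def fib(n):
    """Iteratively compute the Fibonacci number f_n (f_0 = 0, f_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Spot-check the claimed lower bound f_n >= 2^(0.5 n) for a few values n >= 6.
for n in range(6, 40):
    assert fib(n) >= 2 ** (0.5 * n)
```

A numeric check of course proves nothing for all n; the exercise still requires the inductive argument.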

8. Let f (n) and g(n) be non-negative and monotonically non-decreasing functions such that f (n)
is O(g(n)). For each of the following statements, decide whether you think it is true or false,
and justify your answers for each statement, by either giving a proof or a counterexample.

(a) log_2(f(n)) is O(log_2(g(n)))
(b) 2^(f(n)) is O(2^(g(n)))
(c) f(n)^2 is O(g(n)^2)

9. Prove that for arbitrary constants a > 1 and b > 0, we have that:

(a) n^(2+b) is not O(n^2)

(b) e^n is not O(n^2020), where e is the base of the natural logarithm
Hint: Consider n ≥ 2020 and use the identity f(n) = [∏_{k=2020}^{n−1} f(k + 1)/f(k)] · f(2020).
Alternatively, use the Maclaurin series expansion of the exponential function: e^n = ∑_{i=0}^{∞} n^i / i!.

(c) (*) a^n is not O(n^b)
Hint: Generalize the argument from (b).

(d) n√n is not O(n (log n)^b)
You may assume that for all c > 0, we have that n^c is not O(log n).

(e) (*) n^b is not O(log n)
Hint: Use (c).

10. (*) You’re doing some stress-testing on various models of glass jars to determine the height
from which they can be dropped and still not break. The setup for this experiment, on a
particular type of jar, is as follows. You have a ladder with n rungs, and you want to find the
highest rung from which you can drop a copy of the jar and not have it break. We call this
the highest safe rung.
It might be natural to try binary search: drop a jar from the middle rung, see if it breaks,
and then recursively try from rung n/4 or 3n/4 depending on the outcome. But this has the
drawback that you could break a lot of jars in finding the answer.
If your primary goal were to conserve jars, on the other hand, you could try the following
strategy. Start by dropping a jar from the first rung, then the second rung, and so forth,
climbing one higher each time until the jar breaks. In this way, you only need a single jar—at
the moment it breaks you have the correct answer—but you may have to drop it n times
(rather than log_2 n times as in the binary search solution).
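For intuition, the two baseline strategies described above can be simulated and their drop/break counts compared. A sketch; `highest_safe` is a hypothetical parameter known only to the simulation, not to the strategy:

```python
def linear_strategy(n, highest_safe):
    """One jar: drop from rung 1, 2, ... until it breaks (at most n drops)."""
    drops = 0
    for rung in range(1, n + 1):
        drops += 1
        if rung > highest_safe:  # the jar breaks on this drop
            break
    return drops

def binary_strategy(n, highest_safe):
    """Binary search: about log2(n) drops, but possibly that many broken jars."""
    lo, hi = 0, n  # invariant: rung lo is known safe, rungs above hi are unsafe
    drops = broken = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        drops += 1
        if mid > highest_safe:   # the jar breaks
            broken += 1
            hi = mid - 1
        else:                    # the jar survives
            lo = mid
    return drops, broken         # lo now equals the highest safe rung

print(linear_strategy(100, 37), binary_strategy(100, 37))
```

Running this for various rungs makes the trade-off concrete: the linear strategy breaks at most one jar but may need n drops, while binary search needs only about log_2 n drops yet may break a jar on almost every one of them.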
So here is the trade-off: it seems you can perform fewer drops if you’re willing to break more
jars. To understand better how this trade-off works in a quantitative sense, let’s consider how
to run this experiment given a fixed “budget” of k ≥ 1 jars. In other words, you have to
determine the correct answer—the highest safe rung—and can use at most k jars in doing so.

(a) Suppose you are given a budget of k = 2 jars. Describe a strategy for finding the highest
safe rung that requires you to drop a jar at most f(n) times, for some function f(n) that
grows slower than linearly. (In other words, it should be the case that lim_{n→∞} f(n)/n = 0.)

(b) Now suppose you have a budget of k > 2 jars, for some given k. Describe a strategy
for finding the highest safe rung using at most k jars. If f_k(n) denotes the number of
times you need to drop a jar according to your strategy, then the functions f_1, f_2, f_3, . . .
should have the property that each grows asymptotically slower than the previous one:
lim_{n→∞} f_k(n)/f_{k−1}(n) = 0 for each k.
