
Advanced Algorithms

Homework 1: Solutions
March 1, 2016

Problem 1. (amortized analysis)


Consider the following binary counter that supports both incrementation and decrementation.
We use two bit strings, X and Y; the value of the integer is the value represented by X minus
that represented by Y. X and Y must satisfy the condition that, for every index i, we never
have X[i] = Y[i] = 1. These strings are stored as singly-linked lists, with a pointer to the least
significant bit of the list; for convenience, we speak of a bit as having an index within the string,
which is just the number of links (including the external pointer) that must be traversed to reach
that bit.
Increment: Find the least significant bit of X that is equal to 0, i.e., the rightmost index k
with X[k] = 0. Set X[i] ← 0 for all 1 ≤ i < k. If Y[k] = 1 then set Y[k] ← 0; else set X[k] ← 1.
Decrement: Find the least significant bit of Y that is equal to 0, i.e., the rightmost index k with
Y[k] = 0. Set Y[i] ← 0 for all 1 ≤ i < k. If X[k] = 1 then set X[k] ← 0; else set Y[k] ← 1.
For example, 3 = (1001, 0110) →(+) 4 = (1000, 0100) →(+) 5 = (1001, 0100) →(+)
6 = (1010, 0100) →(−) 5 = (1010, 0101), where (+) marks an Increment and (−) a Decrement.
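As a sanity check, the two procedures can be sketched in Python, using plain lists with the least significant bit at index 0 in place of the singly-linked lists (the class and helper names are my own, not part of the problem statement):

```python
class TwoStringCounter:
    """Value = X - Y; bit lists store the least significant bit first."""

    def __init__(self):
        self.X = []
        self.Y = []

    @staticmethod
    def _get(s, k):
        return s[k] if k < len(s) else 0

    @staticmethod
    def _set(s, k, b):
        while len(s) <= k:       # grow the string on demand
            s.append(0)
        s[k] = b

    def increment(self):
        k = 0
        while self._get(self.X, k) == 1:  # scan past the suffix of 1s in X
            self.X[k] = 0                 # X[i] <- 0 for all i < k
            k += 1
        if self._get(self.Y, k) == 1:
            self._set(self.Y, k, 0)       # cancel a 1 of Y instead
        else:
            self._set(self.X, k, 1)

    def decrement(self):                  # symmetric, with X and Y swapped
        k = 0
        while self._get(self.Y, k) == 1:
            self.Y[k] = 0
            k += 1
        if self._get(self.X, k) == 1:
            self._set(self.X, k, 0)
        else:
            self._set(self.Y, k, 1)

    def value(self):
        val = lambda s: sum(b << i for i, b in enumerate(s))
        return val(self.X) - val(self.Y)
```

Starting from 3 = (1001, 0110) and performing three Increments followed by a Decrement reproduces the worked example above.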
1. Propose a potential function to prove that both Increment and Decrement run in constant
amortized time.
The real cost of either operation is the index of the least significant bit set to 0 in one or the
other string. It is therefore tempting to define the potential as the sum of the index of that
bit in X plus the index of that bit in Y , but there is a problem: the potential could increase
abruptly. If a string has a suffix of length m of the form 1…10, then switching that final 0
to a 1 (which happens when the other string ends with a 0) increases the potential from 1
to at least m. This is a familiar problem: it comes from wanting too much precision from
the potential function. So we switch instead to the “quick and dirty” version: count the
number of 1 bits in the two sequences. If the real cost of either operation is i, then it had
to go past a suffix of i − 1 bits set to 1 before encountering the least significant bit set to 0.
Each of these i − 1 bits gets set to 0, decreasing the potential by i − 1; in the case where
X[k] (or Y[k]) is set to 1, the potential goes up again by 1, so the minimum decrease in
potential is i − 2, which suffices to pay for the real cost and leaves a constant amortized cost.
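This accounting can be checked mechanically. The sketch below (a step function of my own devising; bit lists again stand in for the linked strings) runs random operations and asserts that the real cost plus the change in the number-of-1s potential never exceeds 2:

```python
import random

def step(X, Y, inc):
    """Increment (inc=True) or Decrement; returns the real cost, i.e. the
    1-based index of the least significant 0 that was found."""
    A, B = (X, Y) if inc else (Y, X)      # A is scanned for its lowest 0
    k = 0
    while k < len(A) and A[k] == 1:       # clear the suffix of 1s
        A[k] = 0
        k += 1
    for s in (A, B):                      # make position k addressable
        while len(s) <= k:
            s.append(0)
    if B[k] == 1:
        B[k] = 0                          # cancel a 1 in the other string
    else:
        A[k] = 1
    return k + 1

random.seed(1)
X, Y = [], []
phi = 0                                   # potential: number of 1 bits
for _ in range(10000):
    cost = step(X, Y, random.random() < 0.5)
    new_phi = sum(X) + sum(Y)
    assert cost + (new_phi - phi) <= 2    # amortized cost is at most 2
    phi = new_phi
```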
2. Verify that the value 0 can be represented only by pairs of strings of 0s.
That a pair of all-0 strings represents the value 0 is obvious. Conversely, if X − Y = 0,
then X and Y represent the same integer, and since binary representations are unique (up
to leading 0s) the two strings carry the same bits; if they contain any bit set to 1, then
both have a 1 at the same position and so violate our constraint. Hence the value 0 forces
both strings to consist entirely of 0s.
3. Add one more operation, Zero, which tests whether the integer stored is 0. To that end,
modify the algorithms above and devise one for Zero, such that all three operations run
in constant amortized time (use the potential method to verify this claim).
Hint: maintain pointers to the most significant bit set to 1 in each string.
From the previous part, Zero must verify that no bit of either string is set to 1. Since we
can access the strings only from the least significant end, we must avoid traversing an
arbitrarily long prefix made entirely of 0s, hence the
hint: the pointer to the most significant bit set to 1 serves as a high-water mark. We can
maintain this pointer at no extra cost: if the index k computed in the procedures is larger
than the current pointer, then we either increase the pointer (when we set X[k] or Y[k]
to 1) or reset it to 0 (when the string becomes all 0s). With the help of these two pointers, the real cost of a Zero
operation equals the sum of the values of the pointers, so we define our new potential to
be the number of bits set to 1 in the two strings plus the values of the two pointers. Now
running the Zero operation resets both pointers to 0, decreasing the potential by the real
cost of the operation.
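One way to realize the hint, sketched below with bit lists in place of linked lists (the names and the exact bookkeeping are my own reading of the intended scheme): each string carries a high-water mark that never underestimates the position of its most significant 1, and Zero scans only up to the marks, tightening them as it goes.

```python
class MarkedCounter:
    """Counter of Problem 1 with high-water marks h[0], h[1]: 1-based upper
    bounds on the most significant 1 of X and Y (0 = string is all 0s)."""

    def __init__(self):
        self.S = [[], []]                   # S[0] = X, S[1] = Y, LSB first
        self.h = [0, 0]

    def _step(self, a):                     # a = 0: Increment, a = 1: Decrement
        A, B = self.S[a], self.S[1 - a]
        k = 0
        while k < len(A) and A[k] == 1:     # clear the suffix of 1s in A
            A[k] = 0
            k += 1
        for s in (A, B):
            while len(s) <= k:
                s.append(0)
        if B[k] == 1:
            B[k] = 0                        # cancel a 1 in the other string
            if k + 1 > self.h[a]:
                self.h[a] = 0               # A just became all 0s
        else:
            A[k] = 1
            self.h[a] = max(self.h[a], k + 1)

    def increment(self):
        self._step(0)

    def decrement(self):
        self._step(1)

    @staticmethod
    def _top(s, mark):
        for i in range(mark, 0, -1):        # true most significant 1
            if s[i - 1] == 1:
                return i
        return 0

    def zero(self):
        for j in (0, 1):                    # scan up to the marks, tighten them
            self.h[j] = self._top(self.S[j], self.h[j])
        return self.h == [0, 0]

    def value(self):
        val = lambda s: sum(b << i for i, b in enumerate(s))
        return val(self.S[0]) - val(self.S[1])

# random exercise: zero() must always agree with the true value
import random
random.seed(3)
c, v = MarkedCounter(), 0
for _ in range(2000):
    op = random.randrange(3)
    if op == 0:
        c.increment(); v += 1
    elif op == 1:
        c.decrement(); v -= 1
    else:
        assert c.zero() == (v == 0)
```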
Problem 2. (amortized analysis)
Consider augmenting a meldable priority queue (of any type) with a linked list. To insert, we
simply add the new item to the linked list. We will need a unification operation that puts all
items from the linked list into the priority queue proper; that operation proceeds in two steps:
first it builds a new meldable priority queue from the elements of the linked list—we know
how to do this in linear worst-case time for any of the meldable heaps; second, it melds this
newly constructed priority queue with the original one, something we know takes (amortized)
logarithmic time. Now, DeleteMin first does a unification, then calls the original DeleteMin.
Similarly, Meld for two such structures first does unification on each, then calls the original
Meld.
Verify that this new structure still requires logarithmic amortized time to run Meld and
DeleteMin, but now takes constant amortized time to run Insert.
Most of the problem is solved in the statement, but we need to show it all amortizes properly.
Since unification can take linear time (after a linear number of insertions), we will need a
potential that reflects not just what happens with the original meldable PQ, but also with the
linked list. This is simpler than it may sound: we just add (a multiple of) the size of the linked
list to the potential of the original meldable PQ.
Now Insert runs in constant real time and only adds one to the potential (the linked list
grows by one item), so its amortized time is constant. Unification takes time linear in the size
of the linked list to build the new meldable PQ, but in the process zeroes out the contribution
to the potential from the linked list, so that the two cancel out (pick the right coefficient for
the potential contribution of the size of the linked list). Meld and DeleteMin now are the reg-
ular Meld and DeleteMin for the original meldable PQ, preceded by unification on each data
structure. But we have seen that the net effect of unification in amortized terms is zero, so we
are in fact left with the analysis of the original Meld and DeleteMin and we already know that
these run in logarithmic amortized time. The total additional contribution to the potential from
the linked list cannot exceed the total number of items in the data structure, so that the overall
amortization works. Hence we are done.
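The buffering scheme can be illustrated in Python. One caveat: heapq is not a meldable heap, so the meld step below is linear rather than logarithmic; the point of the sketch (names mine) is the insert-to-list / unify-on-demand pattern.

```python
import heapq

class LazyPQ:
    """Priority queue with buffered inserts: new items go to a plain list
    (standing in for the linked list); unification moves them into the
    heap only when DeleteMin or Meld needs them."""

    def __init__(self):
        self.heap = []      # the priority queue proper
        self.buffer = []    # pending inserts

    def insert(self, x):
        self.buffer.append(x)                  # O(1) real time

    def _unify(self):
        if self.buffer:
            heapq.heapify(self.buffer)         # linear-time heap build
            if len(self.buffer) > len(self.heap):
                self.heap, self.buffer = self.buffer, self.heap
            for x in self.buffer:              # meld the two heaps
                heapq.heappush(self.heap, x)
            self.buffer = []

    def delete_min(self):
        self._unify()
        return heapq.heappop(self.heap)

    def meld(self, other):
        self._unify()
        other._unify()
        for x in other.heap:                   # linear with heapq; a true
            heapq.heappush(self.heap, x)       # meldable heap melds in O(log n)
        other.heap = []
```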

Problem 5. (basic probability)


1. Consider the following scenario. There are three identical bank safes in the room, all
locked. You can choose to open one and will then receive what is stored in it. One of the
safes contains a large gold ingot; each of the other two contains a large chocolate bar.
The choice works as follows: first you designate a safe—your initial choice; then an
assistant (who knows which safe contains what) opens one of the other two safes to reveal
a chocolate bar (if both of the other safes contain a chocolate bar, the assistant chooses
one at random). Finally, you are asked whether you want to keep your original choice or
switch to the other unopened safe, and upon your answer the designated safe is opened.
Which is the correct decision, if any? Work out the probabilities.
This is a famous problem, usually called the Monty Hall problem, after the host of the
1963 US TV show “Let’s Make a Deal”, which featured games based on elementary
probability in which the player usually made a choice, received some additional informa-
tion in a disguised form (here the assistant’s opening of a safe that contains a chocolate
bar), and then had to decide to switch choices or stick with the original choice.
You should switch. When you first picked a safe, you had a 1/3 probability of picking the
one with the gold ingot in it, and a 2/3 probability of picking one with a chocolate bar in it.
If you switch no matter what, then you lose in the first case—you’ll just get a chocolate
bar, but you win in the second; since the second case is twice as likely as the first, you
should always switch.
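A quick Monte Carlo check of the 1/3 vs. 2/3 claim (simulation code of my own devising, not part of the solution):

```python
import random

def trial(switch):
    """One round: 3 safes, one holds the gold ingot; the assistant opens a
    chocolate-bar safe other than the player's pick."""
    gold = random.randrange(3)
    pick = random.randrange(3)
    opened = random.choice([s for s in range(3) if s != pick and s != gold])
    if switch:  # move to the remaining unopened safe
        pick = next(s for s in range(3) if s != pick and s != opened)
    return pick == gold

random.seed(0)
n = 100_000
win_switch = sum(trial(True) for _ in range(n)) / n   # close to 2/3
win_stay = sum(trial(False) for _ in range(n)) / n    # close to 1/3
```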
2. In a roll of 10 standard six-sided dice, what is the probability that the sum of the 10 values
is divisible by 6?
Let us use the principle of deferred decisions and look at the situation after rolling 9 of
the 10 dice. The sum of these 9 rolls is some integer k, 9 ≤ k ≤ 54; now we are going to

add to k a random integer between 1 and 6, obtaining a total between k + 1 and k + 6. Of
the six consecutive values k + 1, k + 2, . . . , k + 6, exactly one is divisible by 6, so the
probability that our 10 rolls (or 1,000,000 rolls!) sum to a multiple of 6 is just 1/6.
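The same answer falls out of a small dynamic program over residues mod 6 (a helper of my own): after the first die every residue class is equally represented, and convolving with a uniform die keeps it that way, so exactly 6^(n−1) of the 6^n outcomes have a sum divisible by 6.

```python
def outcomes_divisible_by_6(num_dice):
    """Count outcomes of num_dice six-sided dice whose sum is 0 (mod 6),
    by dynamic programming over residue classes."""
    counts = [1, 0, 0, 0, 0, 0]          # zero dice: the empty sum is 0
    for _ in range(num_dice):
        counts = [sum(counts[(r - face) % 6] for face in range(1, 7))
                  for r in range(6)]     # roll one more die
    return counts[0]
```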
