
Lecture 2: Ski-Rental / Online Rent-Or-Buy Problem

The ski-rental or online rent-or-buy problem is one of the poster children of online algorithms. Although it looks like a toy example, it has been studied intensively because it demonstrates many essential features of the online computational framework, including the power of randomization.
Definition The input is a sequence σ1 , σ2 , · · · σt , · · · , where each σt is a Boolean indicating whether it is snowing on day t (indicated by 1) or not (indicated by 0). The sequence is monotone in the sense that if σi = 0, then σj = 0 for all j > i. On each snowy day, the algorithm has to decide between two choices -

1. Rent a ski for one buck for that specific day

2. Buy a ski for D bucks and use it forever (assume D is an integer)

The goal is to optimize the total cost without prior knowledge of the sequence σ.
Clearly, the offline optimum with knowledge of the entire sequence σ is to check whether σD = 1 (equivalently, whether the snow lasts at least D days), in which case we buy on day 1; otherwise we rent every snowy day. Writing T for the number of snowy days, the optimum cost is min(T, D). Now let us see what can be achieved online.

1 Deterministic Algorithms
We start off with a negative result.

Theorem 1. No online deterministic algorithm for ski-rental can be better than 2-competitive.

This is an opportunity for us to introduce the general method of proving results of the above nature, which we will usually term ‘lower bound’ results. Essentially, these results put fundamental barriers on what can be achieved for a certain problem with online algorithms. What we need to do is, for any deterministic online algorithm, exhibit one bad example, that is, an instance of the problem on which the competitive ratio of that algorithm is close to 2. Now let us proceed with the formal proof.

Proof. Any online deterministic algorithm for the problem can be characterized by an integer parameter B - we name such an algorithm AB . This algorithm rents the ski for B − 1 days and buys on day B (if it is still snowing). We claim that the family of algorithms {AB : B ≥ 1} is exactly the set of all possible online deterministic algorithms for the problem.
Now consider AB for a specific value of B. We create a bad example for this algorithm as follows : the sequence snows for exactly B days and stops snowing on day B + 1, so AB rents for B − 1 days and then buys just before the snow stops. Let us see the performance of AB on this instance. We need a case analysis.

• Case 1 (B > D) : In this case it snows for B ≥ D + 1 days, hence the offline optimum is D. The cost paid by AB is B − 1 + D ≥ 2D. Hence the competitive ratio is at least 2. (Note that it can be much worse if B ≫ D, but of course that is a foolish algorithm anyway!)

• Case 2 (B ≤ D) : In this case, the optimum is clearly B. On the other hand, AB pays B − 1 + D ≥ 2B − 1. Hence the competitive ratio is at least 2 − 1/B, which approaches 2 as B (and hence D) grows.

Note the important fact that for every algorithm there is a different bad example. In fact, an example that is bad for one value of B need not be bad for an algorithm that uses a different value of B.
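The lower-bound argument above is easy to check mechanically. Here is a minimal Python sketch (not part of the lecture; the helper names cost_AB and offline_opt are my own) that evaluates each A_B on its adversarial instance:

```python
# Sketch of the lower-bound argument: on the instance that snows for exactly
# B days, A_B (rent B-1 days, buy on day B) pays (B - 1) + D, while the
# offline optimum pays min(B, D).

def cost_AB(B, T, D):
    """Cost of A_B when it snows for T days."""
    if T < B:            # snow stops before the buy day: A_B only rents
        return T
    return (B - 1) + D   # rent for B-1 days, then buy

def offline_opt(T, D):
    """Offline optimum: buy upfront iff the snow lasts at least D days."""
    return min(T, D)

D = 100
for B in [1, 50, 100, 150]:
    T = B  # adversary: snow lasts exactly until A_B's buy day
    print(B, cost_AB(B, T, D) / offline_opt(T, D))  # always at least 2 - 1/D
```

The ratio is minimized at B = D, where it equals 2 − 1/D, matching the theorem.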
On the positive side, we have a very simple deterministic algorithm that actually achieves the above
bound.
Theorem 2. The algorithm AD is 2-competitive.
Proof. Again, there are two cases.

• Case 1 (Snows for fewer than D days) : AD only rents, so its cost equals the offline optimum.

• Case 2 (Snows for D or more days) : In this case, the offline optimum is D (just buy on day 1). But AD pays D − 1 + D = 2D − 1. Hence the c.r. is (2D − 1)/D ≤ 2.
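The upper bound can be checked the same way; a small sketch (helper names are mine, not from the lecture) that scans all snow lengths T:

```python
# Sketch checking Theorem 2: A_D never pays more than (2 - 1/D) times
# the offline optimum min(T, D), for any number of snow days T.

def cost_AD(T, D):
    """A_D just rents while T < D; otherwise it rents D-1 days and buys."""
    return T if T < D else (D - 1) + D

D = 10
worst = max(cost_AD(T, D) / min(T, D) for T in range(1, 5 * D))
print(worst)  # (2D - 1) / D = 1.9
```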

2 Randomized Algorithm
The idea of a randomized algorithm is, at its core, relatively simple : keep a bag of several different algorithms at your disposal, and every time you need to run, pick one according to some probability distribution. The reason this idea often works (in expectation) is that a single bad instance cannot be bad for all the algorithms. Let us take the ski-rental example. Suppose D = 10 and T = 10 (the number of snowing days). Clearly, the optimum cost here is 10. Now consider the algorithm A10 . This one pays 19. On the other hand, consider the algorithm A5 . This one pays 14 ! Now suppose you design a meta-algorithm which picks one of the above two with probability 0.5 each. Then the expected cost of such an algorithm is 16.5, and hence the competitive ratio in expectation is 1.65. Now we exploit this idea to beat the lower bound result for deterministic algorithms.
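The arithmetic in this example is quick to verify; a tiny sketch (the helper name cost is mine):

```python
# Verifying the D = 10, T = 10 example: A_10 pays 19, A_5 pays 14,
# and a fair coin flip between them pays 16.5 in expectation.

def cost(B, T, D):
    """Cost of A_B (rent B-1 days, buy on day B) with T snow days."""
    return T if T < B else (B - 1) + D

D, T = 10, 10
c10, c5 = cost(10, T, D), cost(5, T, D)
expected = 0.5 * c10 + 0.5 * c5
print(c10, c5, expected, expected / min(T, D))  # 19 14 16.5 1.65
```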

2.1 Randomized Ski-Rental I


The algorithm is simple - run AD/2 w.p. 0.1 and AD w.p. 0.9 (assume D is even). We need a few case analyses to bound the expected competitive ratio. Let T be the number of snowing days.

• T < D/2 : In this case, the algorithm's cost is T with probability 1 and the optimum is T . Hence the expected c.r. = 1.

• D/2 ≤ T < D : This is a more interesting case. The optimum cost is still T . If the algorithm chooses AD/2 , then the cost is D + D/2 − 1 < 3D/2, whereas if it chooses AD , the cost is T , since AD just rents (I will always ignore the −1 terms). Hence the expected c.r. is

(0.1 × 1.5D + 0.9 × T ) / T

The above is maximized at T = D/2, with value (0.15D + 0.45D)/(D/2) = 1.2.

• T ≥ D : In this case, the expected cost is at most 0.1 × 1.5D + 0.9 × 2D = 1.95D, whereas the optimum cost is D. Hence the expected c.r. is 1.95.
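The three cases can be cross-checked by brute force; a sketch (assuming D even, helper names mine):

```python
# Worst-case expected competitive ratio of the mixture
# "A_{D/2} with prob. 0.1, A_D with prob. 0.9", over all snow lengths T.

def cost(B, T, D):
    return T if T < B else (B - 1) + D

D = 1000  # assume D is even
worst = max(
    (0.1 * cost(D // 2, T, D) + 0.9 * cost(D, T, D)) / min(T, D)
    for T in range(1, 3 * D)
)
print(worst)  # 1.95 - 1/D, attained at any T >= D
```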

As we can see, this is already an improvement over the deterministic competitive ratio of 2. In the next
section, we will generalize this idea and improve upon the competitive ratio even more.

2.2 Randomized Ski-Rental II
We will keep a family of algorithms A1 , A2 , · · · , AD . Recall that the algorithm Ai rents for min{T, i − 1} days and buys on day i (if it is still snowing). The actual randomized algorithm runs Ai w.p. pi for all i = 1, 2, · · · , D. We will figure out the right values of the pi ’s that help us achieve an optimal competitive ratio, say α. To this end, we will essentially write down a set of inequalities and minimize α subject to those. Here are the inequalities :

• T = 1 : In this case, the optimum pays 1. The algorithm, in expectation pays

p1 D + (p2 + p3 + · · · + pD )
since w.p. p1 it chooses A1 and hence pays D, while with all the other probabilities it just rents. Thus
we have the desirable inequality

p1 D + (p2 + p3 + · · · + pD ) ≤ α × 1

• T = 2 : In this case, the optimum pays 2. The algorithm, in expectation, pays

p1 D + p2 (1 + D) + 2(p3 + · · · + pD )

since w.p. p1 it buys on day 1 (paying D), w.p. p2 it rents one day and buys on day 2 (paying 1 + D), while with all the other probabilities it just rents for both days. Thus we have the desirable inequality

p1 D + p2 (1 + D) + 2(p3 + · · · + pD ) ≤ α × 2

• In general, we have for any T = i < D

p1 D + p2 (1 + D) + p3 (2 + D) + · · · + pi (i − 1 + D) + i(pi+1 + · · · + pD ) ≤ α × i

• T ≥ D :
p1 D + p2 (1 + D) + p3 (2 + D) + · · · + pD (2D − 1) ≤ α × D

Note that the inequalities for all T ≥ D are exactly the same, since both the algorithm and the optimum behave in the same way. Finally, we have one more constraint: p1 + p2 + · · · + pD = 1. Hence, we need to minimize α subject to the above constraints, where α ≥ 0. This can be achieved through a linear programming (LP) solver. However, in this particular case, since we have D + 1 constraints and D + 1 unknowns, from the theory of LPs we conclude that the optimum solution is obtained by solving the set of equations corresponding to the above inequalities (this will become clearer once we introduce LPs. For now, just believe it.).
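As a sanity check, one can set up these D + 1 equations and solve them numerically. A sketch with numpy (my own construction, not from the lecture):

```python
# Solve the D+1 linear equations: for each T = i, the expected cost equals
# alpha * i, plus the normalization sum(p) = 1.
import numpy as np

D = 100
A = np.zeros((D + 1, D + 1))   # unknowns: p_1, ..., p_D, alpha
b = np.zeros(D + 1)
for i in range(1, D + 1):      # equation for T = i
    for j in range(1, D + 1):
        # A_j pays (j - 1) + D if it buys (j <= i), else it rents i days
        A[i - 1, j - 1] = (j - 1 + D) if j <= i else i
    A[i - 1, D] = -i           # move alpha * i to the left-hand side
A[D, :D] = 1.0                 # probabilities sum to 1
b[D] = 1.0

sol = np.linalg.solve(A, b)
p, alpha = sol[:D], sol[D]
print(alpha)  # equals 1/(1 - (1 - 1/D)^D), approaching e/(e-1) as D grows
```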
Now there is a nice trick to actually solve the above set of equations. Denote by Si the left-hand side of the ith equation, so that Si = α × i. Now let Si′ = Si+1 − Si for all i = 1, 2, · · · , D − 1. Then, for all such i,

Si′ = pi+1 (i + D) + (pi+2 + pi+3 + · · · + pD ) − ipi+1 = pi+1 D + (pi+2 + pi+3 + · · · + pD ) = α

Since all the Si′ are equal to α, taking Si′ − Si−1′ = 0 (and S1′ − S1 = 0 for the base case) yields

pi+1 (D − 1) = pi D

=⇒ pi = pi+1 × (D − 1)/D = pi+1 (1 − 1/D)

for all i = 1, 2, · · · , D − 1. Hence we see that the probabilities of picking Ai increase exponentially from i = 1 to i = D. We obtain

p1 = (1 − 1/D)^(D−1) pD , p2 = (1 − 1/D)^(D−2) pD , and in general pi = (1 − 1/D)^(D−i) pD .

Further, using p1 + p2 + · · · + pD = 1, we get:

pD = (1/D) × [1 − (1 − 1/D)^D]^(−1)

Taking D → ∞:

pD ≈ (1/D) × 1/(1 − 1/e)

Moreover, S′D−1 = pD D = α, hence α = [1 − (1 − 1/D)^D]^(−1). From this, we get α → e/(e − 1) ≈ 1.58 as D grows, and pi ≈ (1 − 1/D)^(D−i) × (1/D) × 1/(1 − 1/e).
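Finally, the closed-form solution can be verified end to end; a sketch (the helper name cost is mine) checking that the distribution sums to 1 and that the expected competitive ratio is the same for every snow length T:

```python
# Check the closed form: p_i = (1 - 1/D)^(D-i) * p_D with
# p_D = 1 / (D * (1 - (1 - 1/D)^D)); the expected competitive ratio then
# equals alpha = 1 / (1 - (1 - 1/D)^D) for every T.

D = 200
pD = 1 / (D * (1 - (1 - 1 / D) ** D))
p = [(1 - 1 / D) ** (D - i) * pD for i in range(1, D + 1)]  # p[i-1] = p_i

def cost(B, T):
    """Cost of A_B (rent B-1 days, buy on day B) with T snow days."""
    return T if T < B else (B - 1) + D

worst = max(
    sum(p[B - 1] * cost(B, T) for B in range(1, D + 1)) / min(T, D)
    for T in range(1, 2 * D)
)
print(sum(p), worst)  # sum(p) = 1; worst ratio -> e/(e-1) ~ 1.58 as D grows
```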
Hence we obtain the final theorem.

Theorem 3. The generalized randomized algorithm is e/(e − 1)-competitive (≈ 1.6) for ski-rental
