
1 Half-plane intersection

(Chapter 4, “Computational Geometry: Algorithms and Applications” by Mark de Berg et al.)


The problem that we study today is: find all points that satisfy n given constraints: a_i x + b_i y ≤ c_i, i = 1, ..., n.
A line in the plane is represented as ax + by = c. The constraint or inequality ax + by ≤ c represents
a (closed) half-plane bounded by this line. Geometrically, the points that satisfy all the n constraints
lie in the common intersection of the n half-planes. So the problem that we consider here is: given a
set of n half-planes (as n constraints) compute their intersection.

Figure 1: The arrows denote which side of the line is our given half-plane. The shaded region is the
intersection of the given half-planes.

Each half-plane is convex and the intersection of convex sets is again convex. So the intersection of
n half-planes is a convex set. The boundary of the region consists of edges contained in the bounding
lines. Since this is a convex set, each bounding line can contribute at most one edge to the intersection.
Hence, it follows that the intersection of n half-planes is a convex polygonal region bounded by at most
n edges. The region could also be unbounded, or degenerate: a single line, a point, or even empty.
We will describe this region by listing in clockwise order the bounding lines that contribute to the
edges of the intersection.

Divide-and-Conquer Algorithm: We sketch a divide-and-conquer algorithm for computing the


intersection of n half-planes. The basic approach is very simple:

1. If n = 1, then return the half-plane itself (a region bounded by its single line) as the answer.

2. Else, split the set H of n half-planes into subsets H_1 and H_2 of sizes ⌊n/2⌋ and ⌈n/2⌉ respectively.

3. Call this procedure recursively to compute the intersections C1 and C2 of H1 and H2 respectively.

4. Intersect C1 and C2 into a single convex polygon C and return C. (C might be unbounded also.)

In order to complete the algorithm, we still need to describe how to compute the intersection of 2
convex polygonal regions. In the last lecture, we saw a linear time plane sweep algorithm to compute
the intersection of two convex polygons. The only difference between that problem and the one here
is that here the regions might be unbounded. But this is not a big problem and the same algorithm will
work for this problem too.
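To make the recursion concrete, here is a runnable sketch in Python (the names and the merge step are mine, not from the chapter): unbounded regions are avoided by clipping everything to a large bounding box, and two convex polygons are merged by repeated half-plane clipping (Sutherland–Hodgman), which costs O(|C1|·|C2|) per merge rather than the linear time achieved by the plane-sweep merge.

```python
# Half-plane: a triple (a, b, c) representing the region a*x + b*y <= c.
EPS = 1e-9
BOX = [(-1e6, -1e6), (1e6, -1e6), (1e6, 1e6), (-1e6, 1e6)]  # big CCW square

def clip(poly, hp):
    """Clip a convex CCW polygon by a single half-plane (Sutherland-Hodgman)."""
    a, b, c = hp
    inside = lambda p: a * p[0] + b * p[1] <= c + EPS
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if inside(p):
            out.append(p)
        if inside(p) != inside(q):
            # edge p->q crosses the bounding line: keep the crossing point
            t = (c - a * p[0] - b * p[1]) / (a * (q[0] - p[0]) + b * (q[1] - p[1]))
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def edge_halfplanes(poly):
    """The half-planes whose intersection is the given convex CCW polygon."""
    for i in range(len(poly)):
        (px, py), (qx, qy) = poly[i], poly[(i + 1) % len(poly)]
        dx, dy = qx - px, qy - py
        yield (dy, -dx, dy * px - dx * py)  # interior lies left of the edge

def intersect_halfplanes(H):
    if len(H) == 1:
        return clip(BOX, H[0])          # base case: one half-plane, boxed
    mid = len(H) // 2
    C1 = intersect_halfplanes(H[:mid])  # recurse on the two halves
    C2 = intersect_halfplanes(H[mid:])
    if not C1 or not C2:
        return []
    for hp in edge_halfplanes(C2):      # merge: clip C1 by C2's edges
        C1 = clip(C1, hp)
        if not C1:
            return []                   # empty intersection
    return C1
```

For the four half-planes 0 ≤ x ≤ 2, 0 ≤ y ≤ 1 this returns the rectangle with corners (0,0), (2,0), (2,1), (0,1). With the linear-time merge in place of the quadratic one, the recurrence below gives O(n log n) overall.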

So this gives us the following recurrence for the running time of the divide-and-conquer algorithm.
        T(n) = O(1)               if n = 1
        T(n) = 2T(n/2) + O(n)     if n > 1

This solves to T (n) = O(n log n).

Theorem 1 The common intersection of a set of n half-planes in the plane can be computed in
O(n log n) time.

2 Linear Programming in 2 Dimensions


We just saw an algorithm to compute the intersection of n half-planes. This problem is closely related
to the problem of linear programming: given n constraints, find the point that maximizes a given linear
function. In 2 dimensions this problem is:

maximize c_x x + c_y y
subject to a_i x + b_i y ≤ c_i for i = 1 to n.

Feasible region: The common intersection of the n constraints (let us call them h_1, h_2, ..., h_n) is called the feasible region. It might be the case that the feasible region is unbounded in the direction of ~c = (c_x, c_y), or that the feasible region is empty.
We now look at an incremental algorithm for this problem. We want to compute a point p = (p_x, p_y) in the feasible region that maximizes the value of c_x p_x + c_y p_y among all the points that lie in the intersection of the given half-planes. p is called the optimum feasible vertex.
Let us assume for simplicity that the feasible region is bounded. We can find, in O(n) time, two half-planes h_1 and h_2 whose intersection is bounded with respect to ~c.

[Figure: two half-planes h_1 and h_2 whose intersection is bounded in the direction of ~c.]

We will add the half-planes one by one: h_3, h_4, ..., and with each addition we update the current optimum. If the intersection of h_1, h_2, ..., h_i is not empty, then the optimum is always a vertex of the region in the intersection of these i half-planes.
Let v_{i-1} denote the optimal feasible vertex after the addition of {h_1, ..., h_{i-1}}. We need to understand how to update v_{i-1} to obtain v_i when h_i is added. The crucial observation is:

• If v_{i-1} ∈ h_i, then v_i = v_{i-1}.

• If v_{i-1} ∉ h_i, then either v_i ∈ ℓ_i or the intersection of {h_1, ..., h_i} is empty (where ℓ_i is the line bounding h_i).

So the question now is how to find the optimal vertex lying on a line ℓ_i. This is a 1-dimensional LP problem, and it is easy to solve: each of the earlier half-planes restricts the feasible part of ℓ_i to a half-line (or excludes the whole line), so the feasible set on ℓ_i is an intersection of intervals, computable in O(i) time. If this interval is empty, then the feasible region is empty. Otherwise, the optimal vertex is the left endpoint or the right endpoint of the interval, depending upon ~c.
So, the total running time of the algorithm is a constant times ∑_{i=3}^n i, which is O(n^2).
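The 1-dimensional step can be sketched as follows (the parametrisation and the names are mine, not from the notes; the sketch also assumes the optimum on the line is bounded):

```python
# 1-dimensional LP: maximise the objective c = (cx, cy) over the part of the
# line a*x + b*y = c0 that satisfies constraints (a_j, b_j, c_j), each
# meaning a_j*x + b_j*y <= c_j.

def optimum_on_line(line, constraints, c):
    """Return the optimal point on `line`, or None if it is infeasible there."""
    a, b, c0 = line
    # parametrise the line as p(t) = p0 + t*d
    p0 = (a * c0 / (a * a + b * b), b * c0 / (a * a + b * b))
    d = (-b, a)
    lo, hi = float("-inf"), float("inf")
    for aj, bj, cj in constraints:
        alpha = aj * p0[0] + bj * p0[1]   # constraint value at t = 0
        beta = aj * d[0] + bj * d[1]      # rate of change along the line
        if abs(beta) < 1e-12:
            if alpha > cj + 1e-9:
                return None               # constraint excludes the whole line
        elif beta > 0:
            hi = min(hi, (cj - alpha) / beta)   # upper bound on t
        else:
            lo = max(lo, (cj - alpha) / beta)   # lower bound on t
    if lo > hi + 1e-9:
        return None                       # the interval is empty
    t = hi if c[0] * d[0] + c[1] * d[1] > 0 else lo
    return (p0[0] + t * d[0], p0[1] + t * d[1])
```

For example, maximising y on the line x + y = 4 subject to 0 ≤ x ≤ 3 yields the point (0, 4).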

3 Randomized LP
The above O(n^2) algorithm is not efficient, but we studied it because it leads us to a very elegant O(n) randomized algorithm.
The deterministic incremental algorithm can take Ω(n^2) time when we add the half-planes in an order such that the optimum vertex changes at every step, that is, when v_{i-1} ∉ h_i for each i. Suppose instead we had added those half-planes in the reverse order h_n, h_{n-1}, ..., h_3. Then

the optimal vertex would not change and we find that our algorithm has an O(n) running time. So,
this tells us that for every set of half-planes there is indeed a good order to add them such that our
incremental algorithm turns out to be very efficient.
This idea is not very helpful since we do not know how to find that good order. How about a
random order? By a random order we mean an order in which each permutation of {h_3, ..., h_n} is equally
likely.
So what we do here is the following: we again assume that h_1 and h_2 are two half-planes such that their intersection is bounded w.r.t. ~c. Then we compute a random permutation of {h_3, ..., h_n} and store it in an array H: for convenience, H[1] = h_1, H[2] = h_2, and the random order of {h_3, ..., h_n} is stored in H[3] to H[n].
We will add the half-planes one by one: H[3], H[4], ..., instead of h_3, h_4, .... In each step the algorithm is the same as the deterministic one; that is, we update v_{i-1} into v_i by checking whether v_{i-1} ∈ H[i], and so on.
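The whole randomised algorithm then looks as follows (a sketch under assumptions: the helper names are mine, the 1-dimensional step is written inline, and h_1, h_2 are assumed to make the optimum of the first two half-planes a bounded vertex, as in the text):

```python
import random

def line_intersection(l1, l2):
    """Intersection point of lines a*x + b*y = c (assumed non-parallel)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def optimum_on_line(line, constraints, c):
    """1-d LP: maximise c over the part of `line` satisfying `constraints`."""
    a, b, c0 = line
    p0 = (a * c0 / (a * a + b * b), b * c0 / (a * a + b * b))
    d = (-b, a)                           # direction along the line
    lo, hi = float("-inf"), float("inf")
    for aj, bj, cj in constraints:
        alpha = aj * p0[0] + bj * p0[1]
        beta = aj * d[0] + bj * d[1]
        if abs(beta) < 1e-12:
            if alpha > cj + 1e-9:
                return None
        elif beta > 0:
            hi = min(hi, (cj - alpha) / beta)
        else:
            lo = max(lo, (cj - alpha) / beta)
    if lo > hi + 1e-9:
        return None                       # infeasible on this line
    t = hi if c[0] * d[0] + c[1] * d[1] > 0 else lo
    return (p0[0] + t * d[0], p0[1] + t * d[1])

def randomized_lp(h1, h2, rest, c):
    """Maximise c over h1, h2 and the half-planes in `rest`."""
    rest = list(rest)
    random.shuffle(rest)                  # random insertion order
    H = [h1, h2]
    v = line_intersection(h1, h2)         # optimum for the first two
    for h in rest:
        a, b, cc = h
        if a * v[0] + b * v[1] > cc + 1e-9:   # v_{i-1} violates h_i:
            v = optimum_on_line(h, H, c)      # new optimum lies on l_i
            if v is None:
                return None                   # feasible region is empty
        H.append(h)
    return v
```

For example, maximising x + y subject to x ≤ 4, y ≤ 3, x + 2y ≤ 6, x ≥ 0, y ≥ 0 returns the vertex (4, 1), whichever random order the shuffle produces.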
This algorithm always returns the correct answer. What is “random” about it is its running time.
What is its running time? We express the running time in terms of its expected value. This is NOT an average-case analysis: the expectation is over the random choices made by the algorithm, so the expected running time bound we compute holds for every input set of half-planes.
Let X_i be a random variable which is 1 if v_{i-1} ∉ H[i] and 0 otherwise. The algorithm spends O(1) time if X_i = 0 and O(i) time if X_i = 1. So the time spent on H[i] is O(1) + X_i·O(i). The total running time of the algorithm is ∑_{i=3}^n (O(1) + X_i·O(i)). To bound its expected value, we use the linearity of expectation: E[∑_{i=3}^n (O(1) + X_i·O(i))] = O(n) + ∑_{i=3}^n E[X_i]·O(i).
E[X_i] is the probability that v_{i-1} ∉ H[i], or equivalently, that v_i ≠ v_{i-1}. How do we estimate this value?
We will do this with a technique called backward analysis. Fix the subset consisting of the first i half-planes; this fixes v_i. To analyse when v_i ≠ v_{i-1}, we look at the algorithm "backwards". Instead of adding H[i] to H[3], ..., H[i-1] and asking when v_i ≠ v_{i-1}, we ask: when does v_i change if we remove one of the half-planes from H[3], H[4], ..., H[i]? This event happens for at most 2 of the half-planes of our fixed set {H[3], H[4], ..., H[i]}, namely the (at most two) half-planes whose bounding lines meet at v_i (see Figure 2). Since the half-planes are added in random order, the probability that H[i] is one of these two special half-planes is at most 2/(i-2). We derived this probability under the condition that the first i half-planes are

Figure 2: v_i ≠ v_{i-1} only if H[i] is h′ or h′′.

some fixed subset. But since the derived bound holds for any fixed subset, it holds unconditionally.
Hence E[X_i] ≤ 2/(i-2).
The expected running time of the algorithm is O(n) + ∑_{i=3}^n (2/(i-2))·O(i) = O(n).
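As a quick numeric sanity check (mine, not from the notes), the sum ∑_{i=3}^n (2/(i-2))·i equals 2(n-2) + 4·H_{n-2}, where H_k is the k-th harmonic number, i.e. 2n + O(log n):

```python
import math

# Numeric sanity check that the expected extra work is linear:
# sum_{i=3}^{n} (2/(i-2)) * i = 2(n-2) + 4*H_{n-2} = 2n + O(log n).
def expected_extra_work(n):
    return sum((2.0 / (i - 2)) * i for i in range(3, n + 1))
```

For n = 1000 this evaluates to about 2026, comfortably below 3n, in line with the O(n) bound.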

Theorem 2 Linear Programming in 2 dimensions can be solved in O(n) expected time.
