function subject to certain conditions known as constraints. If the objective function is a linear
function, say from R^n to R, and the constraints are also given by linear functions, then the problem
is called a linear programming problem.
Given c ∈ R^n, a column vector with n components, b ∈ R^m, a column vector with m components, and A ∈ R^{m×n}, a matrix with m rows and n columns,
a linear programming problem is given by:
Max or Min c^T x
subject to Ax ≤ b (or Ax ≥ b),
x ≥ 0.
The function f(x) = c^T x is called the objective function, and the constraints x ≥ 0 are called the
nonnegativity constraints.
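As a concrete sketch, a problem of this form can be solved numerically, for instance with SciPy's linprog (NumPy/SciPy availability and the data below are assumptions for illustration). Since linprog minimizes, a Max problem is passed as Min of the negated objective:

```python
import numpy as np
from scipy.optimize import linprog

# Max c^T x  subject to  Ax <= b, x >= 0,
# solved as  Min (-c)^T x  because linprog minimizes.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
print(res.x, -res.fun)  # an optimal solution and the optimal value
```

Here res.x is an optimal (feasible) solution and -res.fun is the optimal value of the Max problem.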
Note that the above problem can also be written as:
Max or Min c^T x
subject to a_i^T x ≤ b_i (or ≥ b_i) for i = 1, 2, . . . , m,
x_j ≥ 0 for j = 1, 2, . . . , n, or equivalently e_j^T x ≥ 0 for all j = 1, 2, . . . , n,
where a_i^T is the i-th row of the matrix A, and e_j is the j-th column of the identity matrix of order
n, I_n.
Note that each of the functions a_i^T x, for i = 1, 2, . . . , m, e_j^T x, for j = 1, 2, . . . , n,
and c^T x is a linear function (refer to (*)) from R^n to R (check this); hence the name linear
programming problem.
((*) A function T : R^n → R is called a linear map if T(x + y) = T(x) + T(y) for all x, y ∈ R^n and
T(λx) = λT(x) for all λ ∈ R and all x ∈ R^n.)
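The claimed linearity of x ↦ c^T x can be spot-checked numerically; the following is a sketch assuming NumPy, with randomly chosen vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.standard_normal(5)
T = lambda x: c @ x          # T(x) = c^T x

x, y = rng.standard_normal(5), rng.standard_normal(5)
lam = 2.7
# Additivity and homogeneity, as in the definition (*):
print(np.isclose(T(x + y), T(x) + T(y)))   # additivity holds
print(np.isclose(T(lam * x), lam * T(x)))  # homogeneity holds
```

Of course, a finite check is only a sanity test; the identities hold exactly by the algebra of the dot product.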
An x ≥ 0 satisfying the constraints Ax ≤ b (or Ax ≥ b) is called a feasible solution of the
linear programming problem (LPP). The set of all feasible solutions of an LPP is called the feasible
solution set or the feasible region of the LPP.
Hence the feasible region of an LPP, denoted by Fea(LPP), is given by
Fea(LPP) = {x ∈ R^n : x ≥ 0, Ax ≤ b}.
A feasible solution of an LPP is said to be an optimal solution if it minimizes or maximizes the
objective function, depending on the nature of the problem. The set of all optimal solutions is called
the optimal solution set of the LPP. If the LPP has an optimal solution, then the value of the objective function c^T x, when x is an optimal solution of the LPP, is called the optimal value of the LPP.
Note that, as discussed before, the feasible region can also be written as
Fea(LPP) = {x ∈ R^n : x_j ≥ 0 for j = 1, 2, . . . , n, a_i^T x ≤ b_i for i = 1, 2, . . . , m}, or as
Fea(LPP) = {x ∈ R^n : e_j^T x ≥ 0 for j = 1, . . . , n, a_i^T x ≤ b_i for i = 1, . . . , m},
where a_i^T is the i-th row of the matrix A, and e_j is the j-th standard unit vector, that is, the j-th column
of the identity matrix I_n.
Definition: A subset H of R^n is called a hyperplane if it can be written as
H = {x ∈ R^n : a^T x = d} for some nonzero a ∈ R^n and d ∈ R, or equivalently as
H = {x ∈ R^n : a^T (x − x0) = 0} for some nonzero a ∈ R^n, d ∈ R, and x0 satisfying a^T x0 = d.
(**)
So geometrically a hyperplane in R is just an element of R (a single point), in R^2 it is just a straight
line, and in R^3 it is just the usual plane we are familiar with.
The vector a is called a normal to the hyperplane H, since it is orthogonal (or perpendicular) to
each of the vectors on the hyperplane starting from (with tail at) x0 (for precision refer to (**)).
Associated with the hyperplane H are two closed half spaces H1 = {x ∈ R^n : a^T x ≤ d} and
H2 = {x ∈ R^n : a^T x ≥ d}.
Note that the hyperplane H and the two half spaces H1, H2 are all closed subsets of R^n (since
each of these sets contains all its boundary points, the boundary points being the x ∈ R^n satisfying
the condition a^T x = d).
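As a small sketch (NumPy assumed, with made-up a and d), a point can be placed relative to H and its two half spaces just by comparing a^T x with d; a shift orthogonal to the normal a stays on H:

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0])   # a normal vector to H
d = 3.0

def classify(x):
    """Place x relative to H = {x : a^T x = d} and its half spaces."""
    v = a @ x
    if np.isclose(v, d):
        return "on H"
    return "in H1 (a^T x < d side)" if v < d else "in H2 (a^T x > d side)"

x0 = np.array([3.0, 0.0, 0.0])            # a^T x0 = 3 = d, so x0 lies on H
shift = np.array([0.0, 1.0, 2.0])         # a^T shift = 0: orthogonal to a
print(classify(x0))                        # on H
print(classify(x0 + shift))                # still on H
print(classify(np.zeros(3)))               # a^T 0 = 0 < d: in H1
```

This illustrates why a is called a normal: adding any vector orthogonal to a keeps the point on the hyperplane.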
Hence the feasible region of an LPP is just the intersection of a finite number of closed half spaces.
Such a set is called a polyhedral set.
Definition: A set which is the intersection of a finite number of closed half spaces is called a
polyhedral set.
Hence the feasible region of an LPP is a polyhedral set.
Since the intersection of any collection of closed subsets of R^n is again a closed subset of R^n,
Fea(LPP) is always a closed subset of R^n (geometrically you can also see, or guess, that the feasible region of an LPP contains all its boundary points).
Let us consider two very familiar real-life problems which can be formulated as linear programming problems.
Problem 1: (The Diet Problem) Let there be m nutrients N1, N2, . . . , Nm and n food products, F1, F2, . . . , Fn, available in the market which can supply these nutrients. For healthy survival
a human being requires, say, b_i units of the i-th nutrient, i = 1, 2, . . . , m. Let a_ij be
the amount of the i-th nutrient (N_i) present in a unit amount of the j-th food product (F_j), and
let c_j, j = 1, 2, . . . , n, be the cost of a unit amount of F_j. So now the problem is to decide on a diet
of minimum cost consisting of the n food products (in various quantities) so that one gets the
required amount of each of the nutrients.
Let x_j be the amount of the j-th food product to be used in the diet (so x_j ≥ 0 for j = 1, 2, . . . , n),
and the problem reduces to (under certain simplifying assumptions):
Min Σ_{j=1}^{n} c_j x_j = c^T x
subject to
Σ_{j=1}^{n} a_ij x_j ≥ b_i, for all i = 1, 2, . . . , m,
x_j ≥ 0 for all j = 1, 2, . . . , n,
or as
Ax ≥ b (or alternatively as (−A)x ≤ −b), x ≥ 0,
where A is an m × n matrix (a matrix with m rows and n columns), the (i, j)-th entry of A is given
by a_ij, b = [b1, b2, . . . , bm]^T and x = [x1, x2, . . . , xn]^T.
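A tiny made-up instance of the diet problem (2 nutrients, 3 foods; all numbers and the use of SciPy's linprog are illustrative assumptions) can be solved directly in this form. Since linprog takes only ≤ constraints, the requirement Ax ≥ b is passed as −Ax ≤ −b:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: m = 2 nutrients, n = 3 foods.
# A[i, j] = units of nutrient i per unit of food j.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])        # required units of each nutrient
c = np.array([3.0, 2.0, 1.0])    # cost per unit of each food

# Min c^T x  s.t.  Ax >= b, x >= 0; the >= rows become -A x <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 3)
print(res.x, res.fun)            # cheapest diet and its cost
```

The solver returns the amounts of each food to buy and the minimum total cost.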
Problem 2: (The Transportation Problem) Let there be m supply stations, S1, S2, . . . , Sm,
for a particular product and n destination stations, D1, D2, . . . , Dn, where the product is to be transported. Let c_ij be the cost of transporting a unit amount of the product from S_i to D_j. Let s_i
be the amount of the product available at S_i and let d_j be the corresponding demand at D_j. The
problem is to find x_ij, i = 1, 2, . . . , m, j = 1, 2, . . . , n, where x_ij is the amount of the product to be
transported from S_i to D_j, such that the cost of transportation is minimum.
The problem can be modeled as (under certain simplifying assumptions):
Min Σ_{i,j} c_ij x_ij
subject to
Σ_{j=1}^{n} x_ij ≤ s_i, for all i = 1, 2, . . . , m,
Σ_{i=1}^{m} x_ij ≥ d_j, for all j = 1, 2, . . . , n,
x_ij ≥ 0 for all i = 1, 2, . . . , m, j = 1, 2, . . . , n.
Note that the above LPP can again be written in the form:
Min c^T x
subject to Ax ≤ b, x ≥ 0,
for suitable choices of the vector c, the variable vector x, the matrix A and the vector b (the ≥ constraints can be multiplied by −1).
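A small made-up instance (2 supply stations, 2 destinations; the data and the use of SciPy are assumptions) can be put in exactly this Min c^T x, Ax ≤ b form by flattening the x_ij into one vector and negating the demand rows:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: m = 2 supply stations, n = 2 destinations.
cost = np.array([[1.0, 2.0],
                 [3.0, 1.0]])    # cost[i, j]: unit cost S_i -> D_j
s = np.array([20.0, 30.0])       # supply at S_1, S_2
d = np.array([25.0, 25.0])       # demand at D_1, D_2

# Decision vector x = (x11, x12, x21, x22), row by row.
A_supply = np.array([[1, 1, 0, 0],    # x11 + x12 <= s_1
                     [0, 0, 1, 1]], dtype=float)
A_demand = np.array([[1, 0, 1, 0],    # x11 + x21 >= d_1
                     [0, 1, 0, 1]], dtype=float)
A_ub = np.vstack([A_supply, -A_demand])   # >= rows negated for linprog
b_ub = np.concatenate([s, -d])

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x.reshape(2, 2), res.fun)       # shipment plan and minimum cost
```

Reshaping res.x back to an m × n array recovers the shipment plan x_ij.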
Example 3: Note that in the above problem, keeping the feasible region the same, if we just change
the objective function to Min x, then the changed problem has a unique optimal solution.
Example 4: Also in Example 2, if we change the objective function to Min x + 2y,
then the changed problem has infinitely many optimal solutions, although the set of optimal solutions is bounded.
Example 5: Note that in Example 2, keeping the feasible region the same, if we just change the
objective function to Min y, then the changed problem has infinitely many optimal solutions and
the optimal solution set is unbounded.
Example 6: Max x + 2y
subject to
x + 2y ≤ −1,
x + y ≤ 1,
x ≥ 0, y ≥ 0.
Clearly the feasible region of this problem is the empty set (no x ≥ 0, y ≥ 0 can satisfy x + 2y ≤ −1). So this problem is called infeasible,
and since this problem does not have a feasible solution it obviously does not have an optimal
solution.
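Infeasibility is also detected by a solver; as a sketch (SciPy assumed, and the relation signs of Example 6 as reconstructed above, so treat the data as illustrative), linprog reports it through its status code rather than returning a solution:

```python
from scipy.optimize import linprog

# Max x + 2y  ->  Min -(x + 2y), over x + 2y <= -1, x + y <= 1, x, y >= 0.
# The first constraint is unsatisfiable with x, y >= 0, so the region is empty.
res = linprog([-1.0, -2.0],
              A_ub=[[1.0, 2.0], [1.0, 1.0]],
              b_ub=[-1.0, 1.0],
              bounds=[(0, None), (0, None)])
print(res.success, res.status)   # status 2 means "problem appears infeasible"
```

So an infeasible LPP simply has no feasible (hence no optimal) solution, and the solver says so.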
So we observed from the previous examples that a linear programming problem may not have a
feasible solution (Example 6). Even if it has a feasible solution, it may not have an optimal solution
(Example 2). If it has an optimal solution, then it can have a unique optimal solution (Example 3), or it
can have infinitely many optimal solutions (Examples 4 and 5). A few more questions which
naturally arise are the following:
Question 1: Can there be exactly 2, or 5, or say exactly 100, optimal solutions of an LPP?
And does the set of optimal solutions have some nice geometric structure? That is, if we have two
optimal solutions, then what about the points in between, on the line segment joining these two
solutions?
Question 2: If x1 and x2 are two optimal solutions of an LPP, then is every y of the form
y = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1, also an optimal solution?
Sets satisfying the above property are called convex sets;
in other words, is the set of optimal solutions of an LPP necessarily a convex set?
Definition: A nonempty set S ⊆ R^n is said to be a convex set if for all x1, x2 ∈ S,
λx1 + (1 − λ)x2 ∈ S, for all 0 ≤ λ ≤ 1.
Here λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1, is called a convex combination of x1 and x2.
If in the above expression 0 < λ < 1, then the convex combination is said to be a strict convex
combination of x1 and x2.
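A quick numerical illustration of the idea behind Question 2 (NumPy assumed; the LPP Max x1 + x2 over x1 + x2 ≤ 1, x ≥ 0 is a made-up example for which both (1, 0) and (0, 1) are optimal): every convex combination of the two optima is again feasible and attains the same objective value.

```python
import numpy as np

# Feasible region {x : Ax <= b, x >= 0} with objective Max c^T x.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 1.0])

def feasible(x):
    return np.all(A @ x <= b + 1e-12) and np.all(x >= -1e-12)

x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # two optimal solutions
for lam in np.linspace(0.0, 1.0, 11):
    y = lam * x1 + (1 - lam) * x2                     # a convex combination
    print(lam, feasible(y), c @ y)                    # feasible, same value
```

Every point of the segment is feasible with objective value 1, so the whole segment is optimal, which is exactly the convexity phenomenon the definition captures.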
Let us first try to answer Question 2, since if the answer to this question is a YES, then that
would imply that if an LPP has more than one optimal solution then it has infinitely many optimal solutions,
so the answer to Question 1 would be a NO.
So let x1 and x2 be two optimal solutions of the LPP given by
Max c^T x
subject to Ax ≤ b, x ≥ 0.
Definition: A point x ∈ S is called an extreme point of S if x = λx1 + (1 − λ)x2, for some x1, x2 ∈ S and 0 < λ < 1, implies
x1 = x2 = x.
Let S = Fea(LPP) and define
b̄_i = b_i for i = 1, 2, . . . , m,
b̄_i = 0 for i = m + 1, m + 2, . . . , m + n.
Hence the constraints Ax ≤ b and x ≥ 0 can together be written as
Āx ≤ b̄,
where Ā is the (m + n) × n matrix whose i-th row is ā_i^T, that is,
Ā = [A ; −I_n] (the m rows of A followed by the n rows of −I_{n×n}),
and b̄ = [b̄1, . . . , b̄_{m+n}]^T, where the b̄_i are as defined before.
Hence x0 satisfies the following conditions,
aj T x0 = bj for j = 1, . . . , n for some n LI vectors {a1 , . . . , an } (we have taken WLOG the first n ),
where aj could be either one of the ai s or one of the ei s and the bj s are either equal to one of
the bi s or is equal to 0.
We then show that x0 cannot be written as a strict convex combination of two distinct points of S.
Suppose x0 = λx1 + (1 − λ)x2 for some x1, x2 ∈ S and some λ, 0 < λ < 1; we show that then
x0 = x1 = x2.
Since x1, x2 ∈ S, note that ā_j^T x1 ≤ b̄_j and ā_j^T x2 ≤ b̄_j for all j = 1, 2, . . . , n, and
ā_j^T x0 = λ ā_j^T x1 + (1 − λ) ā_j^T x2.
Hence if for any j = 1, 2, . . . , n,
ā_j^T x1 < b̄_j or ā_j^T x2 < b̄_j, then since 0 < λ < 1, ā_j^T x0 would also be strictly less than b̄_j, which is a
contradiction; hence each of x1 and x2 must also lie in each of these n LI hyperplanes.
Hence each of x0, x1, x2 must satisfy the following system of equations:
ā_j^T x = b̄_j, j = 1, 2, . . . , n,
OR
Bx = p, where B is the n × n matrix with the n linearly independent rows ā_j^T, j = 1, 2, . . . , n,
and p is the vector with the n components b̄_j, j = 1, 2, . . . , n.
Since B is a nonsingular matrix, the system Bx = p has the unique solution x = B^{−1}p.
Hence x0 = x1 = x2; in other words, x0 is an extreme point of S.
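A small numerical sketch of this characterization (NumPy assumed; the region {x : x1 + x2 ≤ 4, 2x1 + 2x2 is replaced here by 2x1 + x2 ≤ 6, x ≥ 0} is made up): choosing n = 2 LI tight rows of Ā and solving Bx = p gives the candidate point B^{−1}p, which is an extreme point exactly when it is also feasible.

```python
import numpy as np

# Feasible region in R^2: Ax <= b, x >= 0, written as  Abar x <= bbar.
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])
Abar = np.vstack([A, -np.eye(2)])        # rows a_i^T followed by -e_j^T
bbar = np.concatenate([b, np.zeros(2)])

# Pick n = 2 linearly independent tight constraints, here rows 0 and 1:
B = Abar[[0, 1]]
p = bbar[[0, 1]]
x0 = np.linalg.solve(B, p)               # the unique solution B^{-1} p
print(x0)                                # candidate extreme point
print(np.all(Abar @ x0 <= bbar + 1e-9))  # extreme point iff also feasible
```

Choosing a different pair of LI rows gives a different candidate; some candidates fall outside the region and are discarded.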
To show the converse, that is, if x is an extreme point of S (that is, it cannot be written as a
strict convex combination of 2 distinct points of S) then it lies at the point of intersection of
n LI hyperplanes defining the feasible region, we show that if x0 is an element of S which lies in exactly k, 0 ≤ k < n, LI hyperplanes defining the feasible region, then x0 is not an extreme point of S.
If x0 ∈ S is one such point, then let us assume (without loss of generality) that
ā_j^T x0 = b̄_j for j = 1, 2, . . . , k,
where ā_j^T = a_j^T for j = 1, . . . , m and ā_j^T = −e_{j−m}^T for j = m + 1, . . . , m + n.
Let, as before, Ā be the (m + n) × n matrix with the i-th row equal to a_i^T, for i = 1, . . . , m, and
equal to −e_{i−m}^T, for i = m + 1, . . . , m + n.
Let B be the k × n matrix (which is a submatrix of Ā) whose rows are given by ā_i^T, i = 1, . . . , k.
Then x0 satisfies the equation B_{k×n} x = p, where p is the column vector with k components obtained
from b̄ by removing the last (m + n − k) components.
Since rank(B) = k < n, the columns of B, denoted by say b′_1, . . . , b′_n, are linearly dependent.
Hence there exist real numbers d_1, . . . , d_n, not all zero, such that
Σ_{i=1}^{n} d_i b′_i = Bd = 0,
where d ≠ 0 is the column vector with the n components d_1, d_2, . . . , d_n, and 0 is the column vector with k
components, all of which are equal to zero.
Since Bd = 0, we have B(x0 + εd) = Bx0 + εBd = Bx0, and hence
Ā(x0 + εd) = (Bx0 + εBd, ā_{k+1}^T x0 + ε ā_{k+1}^T d, . . . , ā_{m+n}^T x0 + ε ā_{m+n}^T d)^T
= (Bx0, ā_{k+1}^T x0 + ε ā_{k+1}^T d, . . . , ā_{m+n}^T x0 + ε ā_{m+n}^T d)^T.
Note that for any i = k + 1, . . . , m + n, if the set {ā_i, ā_1, . . . , ā_k} is LI, then ā_i^T x0 < b̄_i
(since x0 lies in exactly k of the LI hyperplanes defining the feasible region).
Hence we can choose an ε > 0 sufficiently small such that ā_i^T x0 + ε ā_i^T d < b̄_i for all such i's
(since there are only finitely many, at most m + n − k, such i's).
If however for some i = k + 1, . . . , m + n, the set {ā_i, ā_1, . . . , ā_k} is linearly dependent, then
since the set {ā_1, . . . , ā_k} is LI, it implies that ā_i can be expressed as a linear combination of the
vectors ā_1, . . . , ā_k,
that is, there exist real numbers u_1, u_2, . . . , u_k such that ā_i = u_1 ā_1 + u_2 ā_2 + . . . + u_k ā_k.
Hence ā_i^T d = u_1 ā_1^T d + u_2 ā_2^T d + . . . + u_k ā_k^T d = 0 (since Bd = 0 gives ā_j^T d = 0 for j = 1, . . . , k).
Hence ā_i^T x0 + ε ā_i^T d = ā_i^T x0 ≤ b̄_i.
Hence (x0 + εd) ∈ S. By choosing ε > 0 sufficiently small, we can similarly make (x0 − εd) ∈ S.
Hence for ε > 0 sufficiently small, we have both (x0 + εd) ∈ S and (x0 − εd) ∈ S.
Since d ≠ 0, note that (x0 + εd) and (x0 − εd) are two distinct points of S.
But x0 = (1/2)(x0 + εd) + (1/2)(x0 − εd), which implies that x0 is not an extreme point of S = Fea(LPP).
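The perturbation argument can itself be sketched numerically (NumPy assumed; the same made-up region as before is redefined here so the snippet is self-contained): a feasible x0 that is tight on only k = 1 < n = 2 rows admits a d ≠ 0 with Bd = 0, and both x0 ± εd stay feasible, so x0 is not extreme.

```python
import numpy as np

# Region Ax <= b, x >= 0 in R^2, as  Abar x <= bbar.
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])
Abar = np.vstack([A, -np.eye(2)])
bbar = np.concatenate([b, np.zeros(2)])

x0 = np.array([1.0, 0.0])   # feasible, tight only on -e_2^T x <= 0 (k = 1)
B = Abar[[3]]               # the single tight row
d = np.array([1.0, 0.0])    # B d = 0 and d != 0
eps = 0.5

for x in (x0 + eps * d, x0 - eps * d):
    # both perturbed points remain feasible, so x0 is not an extreme point
    print(x, np.all(Abar @ x <= bbar + 1e-9))
```

Writing x0 as the midpoint of the two perturbed points exhibits it as a strict convex combination of two distinct feasible points.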
Remark: From the above equivalent description of the extreme points of the feasible region of an
LPP, it is clear that the total number of extreme points of the feasible region is at most (m+n)C_n = (m + n)!/(m! n!),
the number of ways of choosing n of the m + n defining hyperplanes.
Exercise: Think of an LPP such that the number of extreme points of Fea(LPP) is equal
to that given by the upper bound.
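For a small made-up region, one can enumerate all (m+n)C_n choices of n rows of Ā, keep the nonsingular and feasible intersections, and so both list the extreme points and confirm the count stays within the bound (NumPy assumed; this brute force is only a sketch, not how the simplex method works):

```python
import numpy as np
from itertools import combinations
from math import comb

A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([4.0, 6.0])
Abar = np.vstack([A, -np.eye(2)])
bbar = np.concatenate([b, np.zeros(2)])
m, n = A.shape

vertices = set()
for rows in combinations(range(m + n), n):   # at most C(m+n, n) choices
    B, p = Abar[list(rows)], bbar[list(rows)]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                             # rows not linearly independent
    x = np.linalg.solve(B, p)
    if np.all(Abar @ x <= bbar + 1e-9):      # keep feasible intersections only
        vertices.add(tuple(np.round(x, 9)))

print(sorted(vertices), "bound:", comb(m + n, n))
```

Here only 4 of the C(4, 2) = 6 candidate intersection points are feasible, illustrating that the bound need not be attained.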
Question: Does the feasible region of an LPP (where the feasible region is of the form given
before) always have an extreme point?
Answer: Yes, we will see this later.
Exercise: Think of a convex set defined by only one hyperplane. Will it have any extreme
point?