
Simplex algorithm

In mathematical optimization theory, the simplex algorithm, created by the American mathematician George
Dantzig in 1947, is a popular algorithm for numerically solving linear programming problems. The journal
Computing in Science and Engineering listed it as one of the top 10 algorithms of the century.[1]

The method uses the concept of a simplex, which is a polytope of N + 1 vertices in N dimensions: a line
segment in one dimension, a triangle in two dimensions, a tetrahedron in three-dimensional space and so forth.

Overview

A system of linear inequalities defines a polytope as a feasible region. The simplex algorithm begins at a
starting vertex and moves along the edges of the polytope until it reaches the vertex of the optimum solution.

Consider a linear programming problem,

maximize cTx
subject to Ax ≤ b, x ≥ 0

with x = (x1, ..., xn) the variables of the problem, c a vector representing the linear form to
optimize, A a rectangular p × n matrix, and b the constant terms of the p linear constraints.

In geometric terms, each inequality specifies a half-space in n-dimensional Euclidean space, and their
intersection is the set of all feasible values the variables can take. The region is convex and either empty,
unbounded, or a polytope.

The set of points where the objective function obtains a given value v is defined by the hyperplane cTx = v. We
are looking for the largest v such that the hyperplane still intersects the feasible region. As v increases, the
hyperplane translates in the direction of the vector c. Intuitively, and indeed it can be shown by convexity, the
last hyperplane to intersect the feasible region will either just graze a vertex of the polytope, or a whole edge or
face. In the latter two cases, it is still the case that the endpoints of the edge or face will achieve the optimum
value. Thus, the optimum value will always be achieved on one of the vertices of the polytope.

The simplex algorithm applies this insight by walking along edges of the (possibly unbounded) polytope to
vertices with higher objective function value. When a local maximum is reached, by convexity it is also the
global maximum and the algorithm terminates. It also finishes when it reaches an unbounded edge along which
the objective increases without limit, concluding that the problem has no finite optimum. The algorithm always
terminates because the number of vertices of the polytope is finite; moreover, since we always jump between
vertices in the same general direction (that of the objective function), we hope that the number of vertices
visited will be small. Usually there is more than one adjacent vertex that improves the objective function, so a
pivot rule must be specified to determine which vertex to pick. The choice of this rule has a great impact on the
runtime of the algorithm.

In 1972, Klee and Minty[2] gave an example of a linear programming problem in which the polytope P is a
distortion of an n-dimensional cube. They showed that the simplex method as formulated by Dantzig visits all
2^n vertices before arriving at the optimal vertex, so the worst-case complexity of the algorithm is exponential.
Since then it has been shown that for almost every deterministic pivot rule there is a family of linear programs
on which it performs badly. It is an open question whether there is a pivot rule with polynomial, or even
sub-exponential, worst-case complexity.

Nevertheless, the simplex method is remarkably efficient in practice. It has been known since the 1970s that it
has polynomial-time average-case complexity under various distributions. These results on "random" matrices
still did not quite capture the desired intuition that the method works well on "typical" matrices. In 2001
Spielman and Teng introduced the notion of smoothed complexity to provide a more realistic analysis of the
performance of algorithms.[3]

Other algorithms for solving linear programming problems are described in the linear programming article.

Algorithm

The simplex algorithm requires the linear programming problem to be in augmented form, so that the
inequalities are replaced by equalities. The problem can then be written as follows in matrix form:

Maximize Z in:

    [ 1  -cT  0 ] [ Z  ]   [ 0 ]
    [ 0   A   I ] [ x  ] = [ b ],    x, xs ≥ 0
                  [ xs ]

where xs are the slack variables introduced by the augmentation process, i.e. the non-negative distances
between the point and the hyperplanes representing the linear constraints A.

By definition, the vertices of the feasible region are each at the intersection of n hyperplanes (either from A or
from the constraints x ≥ 0, xs ≥ 0). This corresponds to n zeros among the n + p variables of (x, xs). Thus two
feasible vertices are adjacent when they share n − 1 zeros in the variables of (x, xs). This is the advantage of the
augmented-form notation: vertices, and jumps along edges of the polytope, are easy to write down.
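The augmentation step itself is mechanical and can be sketched in a few lines of Python. The constraint data below are illustrative, not taken from the text:

```python
def augment(A, b):
    """Turn A x <= b into [A | I](x, xs) = b by appending slack columns."""
    p = len(A)                      # number of constraints
    rows = []
    for i, row in enumerate(A):
        slack = [1 if j == i else 0 for j in range(p)]   # identity column
        rows.append(list(row) + slack)
    return rows, list(b)

A = [[3, 2, 1],
     [2, 5, 3]]
b = [10, 15]
A_aug, b_aug = augment(A, b)
# Each of the p = 2 inequalities is now an equality in n + p = 5 variables.
```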

The simplex algorithm goes as follows:

• Start off by finding a feasible vertex. It will have at least n zeros among the variables of (x, xs); those n
variables are called the current non-basic variables.
• Write Z as an affine function of the n current non-basic variables. This is done by transvections (row
operations) that clear the other non-zero coefficients from the first line of the matrix above.
• Now we want to jump to an adjacent feasible vertex to increase Z's value. That means increasing the
value of one of the non-basic variables, while keeping the n − 1 others at zero. Among the n candidates, we
choose the one with the greatest positive rate of increase in Z, also called the direction of
highest gradient.
o The entering variable is therefore easily identified as the one with the maximum positive
coefficient in Z's affine expression.
o If there are no positive coefficients left in Z's affine expression, then the solution vertex
has been found and the algorithm terminates.
• Next we need to compute the coordinates of the new vertex we jump to. That vertex will have one of the
p current basic variables set to 0; that leaving variable must be found among the p candidates. By convexity
of the feasible polytope, the correct one is the basic variable that reaches 0 first as the entering variable
increases. This is easily found by computing p ratios of coefficients and taking the lowest. The entering
variable then replaces the leaving one among the p basic variables, and the algorithm loops back to writing Z
as a function of the new non-basic variables.
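The steps above can be sketched as a small tableau implementation. This is a simplified illustration, not production code: it uses Dantzig's largest-coefficient pivot rule, assumes b ≥ 0 so that x = 0 is a feasible starting vertex, and does no degeneracy handling. The sample problem data are illustrative:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (assumes b >= 0)."""
    n, p = len(c), len(A)
    # Constraint rows of the tableau: [A | I | b]; the slacks start basic.
    T = [list(map(float, A[i]))
         + [1.0 if j == i else 0.0 for j in range(p)]
         + [float(b[i])] for i in range(p)]
    z = [-float(ci) for ci in c] + [0.0] * p + [0.0]   # row Z - c.x = 0
    basis = list(range(n, n + p))
    while True:
        # Entering variable: most negative tableau entry, i.e. the largest
        # positive coefficient in Z's affine expression.
        q = min(range(n + p), key=lambda j: z[j])
        if z[q] >= -1e-9:
            break                                      # optimum reached
        # Leaving variable: minimum-ratio test over positive pivot entries.
        ratios = [(T[i][-1] / T[i][q], i) for i in range(p) if T[i][q] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, r = min(ratios)
        piv = T[r][q]
        T[r] = [t / piv for t in T[r]]                 # normalise pivot row
        for i in range(p):                             # clear the column
            if i != r:
                f = T[i][q]
                T[i] = [a - f * s for a, s in zip(T[i], T[r])]
        f = z[q]
        z = [a - f * s for a, s in zip(z, T[r])]       # update objective row
        basis[r] = q
    x = [0.0] * (n + p)
    for i, j in enumerate(basis):
        x[j] = T[i][-1]
    return z[-1], x[:n]                                # (optimal Z, optimal x)

Z, x = simplex([2, 3, 4], [[3, 2, 1], [2, 5, 3]], [10, 15])
# Z == 20.0 and x == [0.0, 0.0, 5.0] for this illustrative problem.
```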

Example

Consider the problem

maximize Z = 2x + 3y + 4z
subject to
3x + 2y + z ≤ 10
2x + 5y + 3z ≤ 15
x, y, z ≥ 0

which, after introducing the slack variables s and t, becomes

Z − 2x − 3y − 4z = 0
3x + 2y + z + s = 10
2x + 5y + 3z + t = 15
x, y, z, s, t ≥ 0

x = y = z = 0 is clearly a feasible vertex, so we start off with it: the 3 current non-basic variables are x, y and z.
Luckily, Z is already expressed as an affine function of x, y, z, so no transvections need to be done at this step.
Here the most negative coefficient in Z's row is −4, that of z, so the direction with highest gradient is z. We
then need to compute the coordinates of the vertex we jump to by increasing z. That will result in setting either
s or t to 0, and the correct one must be found. For s, the bound on the increase of z equals 10/1 = 10; for t it is
15/3 = 5. t is the correct one because it minimizes that change. The new vertex is thus x = y = t = 0. Rewriting
Z as an affine function of these new non-basic variables gives

Z + (2/3)x + (11/3)y + (4/3)t = 20

Now all coefficients on the first row of the matrix have become nonnegative. That means Z cannot be improved
by increasing any of the current non-basic variables. The algorithm terminates and the solution is the vertex
x = y = t = 0. At that vertex Z = 20, and this is its maximum value. Usually the coordinates of the solution
vertex are needed in the standard variables x, y, z; a little substitution yields x = 0, y = 0 and z = 5.
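As a sanity check, the example's conclusion can be verified numerically. The problem data below (maximize Z = 2x + 3y + 4z subject to 3x + 2y + z ≤ 10 and 2x + 5y + 3z ≤ 15) are reconstructed from the coefficients and ratios quoted in the text, so treat them as an assumption:

```python
x, y, z = 0, 0, 5
s = 10 - (3 * x + 2 * y + z)      # slack of the first constraint
t = 15 - (2 * x + 5 * y + 3 * z)  # slack of the second constraint
Z = 2 * x + 3 * y + 4 * z

assert s >= 0 and t >= 0          # the vertex is feasible
assert (x, y, t) == (0, 0, 0)     # three zeros: a vertex in n = 3 variables
# The rewritten objective row, Z = 20 - (2/3)x - (11/3)y - (4/3)t, only
# subtracts nonnegative multiples, so no feasible move can improve Z.
assert Z == 20
```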

Implementation

The tableau form used above to describe the algorithm lends itself to an immediate implementation in which the
tableau is maintained as a rectangular (m+1)-by-(m+n+1) array. It is straightforward to avoid storing the m
explicit columns of the identity matrix that occur within the tableau, by virtue of B being a subset of the
columns of [A I]. This implementation is referred to as the standard simplex method. The storage and
computation overhead is such that the standard simplex method is a prohibitively expensive approach to
solving large linear programming problems.
In each simplex iteration, the only data required are the first row of the tableau, the (pivotal) column of the
tableau corresponding to the entering variable, and the right-hand side. The latter can be updated using the
pivotal column and the first row of the tableau can be updated using the (pivotal) row corresponding to the
leaving variable. Both the pivotal column and pivotal row may be computed directly using the solutions of
linear systems of equations involving the matrix B and a matrix-vector product using A. These observations
motivate the revised simplex method, for which implementations are distinguished by their invertible
representation of B.

In large linear programming problems A is typically a sparse matrix and, when the resulting sparsity of B is
exploited when maintaining its invertible representation, the revised simplex method is a vastly more efficient
solution procedure than the standard simplex method. Commercial simplex solvers are based on the primal (or
dual) revised simplex method.
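The key computation of the revised method, obtaining the pivotal column as the solution of B d = a_q rather than reading it from a stored tableau, can be illustrated on a toy 2 × 2 basis. Real implementations keep a factorization (e.g. LU) of B and exploit sparsity; the numbers here are illustrative:

```python
def solve2(B, v):
    """Solve B d = v for a 2x2 matrix B via Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(v[0] * B[1][1] - B[0][1] * v[1]) / det,
            (B[0][0] * v[1] - v[0] * B[1][0]) / det]

B = [[1.0, 1.0],    # the two current basis columns (illustrative values)
     [0.0, 3.0]]
a_q = [2.0, 3.0]    # column of A for the entering variable
d = solve2(B, a_q)  # pivotal column: how the basic variables change
# d == [1.0, 1.0]
```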
“A project is a series of activities directed to the accomplishment of a desired objective.”
Plan your work first... then work your plan

Network analysis
Introduction

Network analysis is the general name given to certain specific techniques which can be used for the planning,
management and control of projects.
One definition of a project:
“A project is a temporary endeavour undertaken to create a "unique" product or service”

History

 Developed in the 1950s
 CPM by DuPont for chemical plants
 PERT by U.S. Navy for Polaris missile

CPM was developed by Du Pont and the emphasis was on the trade-off between the cost of the project and its
overall completion time (e.g. for certain activities it may be possible to decrease their completion times by
spending more money - how does this affect the overall completion time of the project?)

PERT was developed by the US Navy for the planning and control of the Polaris missile program and the
emphasis was on completing the program in the shortest possible time. In addition PERT had the ability to cope
with uncertain activity completion times (e.g. for a particular activity the most likely completion time is 4 weeks
but it could be anywhere between 3 weeks and 8 weeks).
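The uncertain-duration example above (most likely 4 weeks, anywhere between 3 and 8) maps directly onto PERT's standard three-point estimate, t_e = (optimistic + 4 × most likely + pessimistic) / 6:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classical PERT three-point (beta) estimate of an activity duration."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6   # conventional spread estimate
    return expected, std_dev

expected, std_dev = pert_estimate(3, 4, 8)
# expected == 4.5 weeks: the skew towards 8 pulls the estimate above the
# most likely value of 4 weeks.
```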

CPM - Critical Path Method


 Definition: In CPM activities are shown as a network of precedence relationships using activity-on-node
network construction
– Single estimate of activity time
– Deterministic activity times

USED IN: Production management - for jobs of a repetitive nature, where activity time estimates can
be predicted with considerable certainty from past experience.

PERT - Program Evaluation and Review Technique
 Definition: In PERT activities are shown as a network of precedence relationships using activity-on-
arrow network construction
– Multiple time estimates
– Probabilistic activity times

USED IN : Project management - for non-repetitive jobs (research and development work), where the time
and cost estimates tend to be quite uncertain. This technique uses probabilistic time estimates.

Gantt chart
Advantages
- Gantt charts are quite commonly used. They provide an easy graphical representation of when activities
(might) take place.

Limitations
- Do not clearly indicate details regarding the progress of activities
- Do not give a clear indication of the interrelationships between the separate activities

CPM/PERT

These deficiencies can be eliminated to a large extent by showing the interdependence of the various activities
by means of connecting arrows; the result is called a network technique.

 Over time, CPM and PERT merged into one technique

 ADVANTAGES:
– Precedence relationships
– large projects
– more efficient

The Project Network


 Use of nodes and arrows

Arrows: An arrow leads from tail to head directionally.

- Indicates an ACTIVITY, a time-consuming effort that is required to perform a part of the work.

Nodes: A node is represented by a circle.

- Indicates an EVENT, a point in time where one or more activities start and/or finish.
Activity on Node

- The completion of an activity is represented by a node

Activity on Arrow

- An arrow represents a task, while a node represents the completion of a task
- Arrows represent the order of events

Activity Slack
Each event has two important times associated with it:
- Earliest time, TE, the calendar time at which an event can occur, given that all its predecessor events
have been completed at their earliest possible times
- Latest time, TL, the latest time at which the event can occur without delaying subsequent events and
the completion of the project

 The difference between the latest time and the earliest time of an event is the slack time for that event

Positive slack: slack is the amount of time an event can be delayed without delaying project completion

Critical Path
 The sequence of activities and events with no slack, i.e. zero slack

 The longest path through the network

 Determines the minimum project completion time
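The earliest-time, latest-time and slack computations behind the critical path can be sketched as a forward pass and a backward pass over the activity network. The small network below is hypothetical, not the example discussed later in the text:

```python
def critical_path(durations, predecessors):
    """Return (project length, slack per activity) for an acyclic network.

    `durations` must list activities in an order where every predecessor
    appears before its successors (a topological order).
    """
    earliest = {}                                  # earliest finish times
    for a in durations:
        start = max((earliest[p] for p in predecessors[a]), default=0)
        earliest[a] = start + durations[a]
    project = max(earliest.values())
    latest = {a: project for a in durations}       # latest finish times
    for a in reversed(list(durations)):
        for p in predecessors[a]:
            latest[p] = min(latest[p], latest[a] - durations[a])
    slack = {a: latest[a] - earliest[a] for a in durations}
    return project, slack

# Hypothetical network: A precedes B and D; B and D both precede C.
dur = {"A": 3, "B": 5, "D": 1, "C": 2}
pred = {"A": [], "B": ["A"], "D": ["A"], "C": ["B", "D"]}
length, slack = critical_path(dur, pred)
# length == 10; slack == {"A": 0, "B": 0, "D": 4, "C": 0} — the zero-slack
# chain A -> B -> C is the critical (longest) path through the network.
```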

Benefits of CPM/PERT
 Useful at many stages of project management
 Mathematically simple
 Give critical path and slack time
 Provide project documentation
 Useful in monitoring costs

Example
Illustration of network analysis of a minor redesign of a product and its associated packaging.
The key question is: How long will it take to complete this project ?
For clarity, this list is kept to a minimum by specifying only immediate relationships, that is relationships
involving activities that "occur near to each other in time".

Before starting any of the above activities, the questions to ask would be:
• "What activities must be finished before this activity can start?"
• Could we complete this project in 30 weeks?
• Could we complete this project in 2 weeks?

One answer could be, if we first do activity 1, then activity 2, then activity 3, ...., then activity 10, then activity
11 and the project would then take the sum of the activity completion times, 30 weeks.

“What is the minimum possible time in which we can complete this project ? “

We shall see below how the network analysis diagram/picture we construct helps us to answer this question.
The critical path takes 24 weeks for the completion of the project.

Limitations to CPM/PERT
 Clearly defined, independent and stable activities
 Specified precedence relationships
 Overemphasis on critical paths
First-come, first-served
From Wikipedia, the free encyclopedia


First-come, first-served (sometimes first-in, first-served; first-come, first choice; or simply FCFS) is a
service policy whereby the requests of customers or clients are attended to in the order that they arrived, without
other biases or preferences. The policy can be employed when processing sales orders, in determining restaurant
seating, or at a taxi stand, for example.
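As a minimal illustration, the FCFS discipline is exactly the behaviour of a FIFO queue: requests are served strictly in arrival order, with no other bias or preference. The customer names are illustrative:

```python
from collections import deque

queue = deque()
for customer in ["Ana", "Ben", "Chi"]:   # arrival order
    queue.append(customer)               # join the back of the line

served = []
while queue:
    served.append(queue.popleft())       # always serve the front of the line

# served == ["Ana", "Ben", "Chi"]: exactly the arrival order.
```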

Festival seating (also known as general seating and stadium seating) is seating done on a FCFS basis. See
Riverfront Coliseum for details on a December 1979 disaster involving "festival seating" at a concert by The
Who in Cincinnati, Ohio.

The practice is also becoming common among low-cost airlines in Europe, where seats cannot be reserved either
in advance or at check-in. These airlines allow passengers to board in small groups based upon their order of
check-in and to sit in whatever seat on the aircraft they wish. On a first-come, first-served basis, the earlier you
check in, the earlier you board the aircraft and the wider your choice of seat.

Southwest Airlines and major European low-cost airlines such as EasyJet also apply first come, first served
seating. Passengers are sequentially (on a first come, first served basis) assigned into one of three "boarding
groups". The passengers then are boarded onto the plane in group-order.
