
1. Introduction
1.1 Introduction
Management Science (MS) can be defined as “A problem-solving process used by an
interdisciplinary team to develop mathematical models that represent simple-to-complex
functional relationships and provide management with a basis for decision-making and a means
of uncovering new problems for quantitative analysis”. Management science encompasses,
however, more than just the development of models for specific problems. It makes a substantial
contribution in a much broader area: the application of the output from management science
models for decision-making at the lower, middle, and top management levels. Management
science is the application of the scientific method to the study of the operations of large, complex
organizations or activities. Two disciplines intimately associated with management science are
industrial engineering and operations research.
There are four major characteristics of management science:
(1) Examine Functional Relationships from a Systems Overview: The activity of any one
function of a company will have some effect on the activity of each of the other functions.
Therefore it is necessary to identify all important interactions and determine their impact on the
company as a whole. Initially, the functional relationships in a management science project are
expanded deliberately so that all the significantly interacting parts and their related components
are contained in a statement of the problem. A systems overview examines the entire area under
the manager’s control. This approach provides a basis for initiating inquiries into problems that
seem to be affecting performance at all levels.
(2) Use the Interdisciplinary Approach: Management science makes good use of a simple
principle: it looks at a problem from different angles and with different approaches. For example, a
mathematician might look at an inventory problem and formulate some type of mathematical
relationship between the manufacturing departments and customer demand. A chemical
engineer might look at the same problem and formulate it in terms of flow theory. A cost
accountant might conceive of the inventory problem in terms of its component costs (e.g., direct
material cost, direct labor cost, overheads) and how such costs can be controlled and
reduced. Management science therefore emphasizes the interdisciplinary approach,
because each individual aspect of a problem is best understood and solved by experts in
different fields such as accounting, biology, economics, engineering, mathematics, physics,
psychology, sociology and statistics.
(3) Uncover New Problems for Study: The third characteristic of management science,
which is often overlooked, is that the solution of an MS problem brings new problems to light.
Not all of the interrelated problems uncovered by the MS approach have to be solved at the same
time. However, each must be solved with consideration for the others if maximum benefits
are to be obtained.

(4) Use a Modeling-Process Approach to Problem Solving: Management science takes a
systematic approach to problem solving. It may use a modeling-process approach, with the help
of mathematical models.
Linear Programming (LP) is one of the most frequently applied OR techniques in real-world
problems. Traditional LP requires the decision maker to have deterministic and precise data
available, but this assumption is not realistic in many cases, for several reasons: (a) many real-life
problems and models contain linguistic and/or vague variables and constraints; (b) collecting
precise data is often challenging, because the environment of the system is unstable or collecting
precise data results in high information costs; and (c) decision makers might not be able to express
goals or constraints precisely because their utility functions are not precisely defined. One of the
most important discoveries in the early development of linear programming was the concept of
duality and its many important ramifications. This discovery revealed that every linear
programming problem has associated with it another linear programming problem called the
dual. The relationships between the dual problem and the original problem prove to be extremely
useful in a variety of ways. LP theory includes valuable insights that are based on duality. The
usefulness of duality theory lies not only in its algorithmic benefits (e.g., the dual simplex
algorithm) and its mathematical benefits (e.g., the weak and strong duality theorems), but also in
its explanatory power for economic interpretation.

1.2 Objectives of the study:

1. To gain a general idea about Management Science and its use in decision making.
2. To gain a clear understanding of duality in LP and the primal-dual relationship.
3. To gain a clear understanding of the dual simplex method and sensitivity analysis.

1.3 Methodology: The data and information for preparing this term paper have been
collected from secondary sources. In collecting this secondary information, I read annual
reports and websites and studied relevant reports, documents and different manuals.

1.4 Limitations of the study:

Although this study was able to reach its aim there were some unavoidable limitations. They are
given below:
• Lack of available and/or reliable data.
• Lack of prior research studies on the topic.
• Short research time period.

2. Literature Review
Singh (1972) made a feasibility study of crop insurance in U.P. using cross sectional data from
Tarai farms for the year 1970-71, with the help of Linear Programming technique. He studied the
crop variability in U.P. during 1951-70 and examined the feasibility of crop insurance
programme. He also evaluated two alternative courses of action, namely crop insurance and
diversification, which would reduce income variance or minimize the probability of loss so as to
achieve a more stable farm income. It was concluded that fluctuating crop production is a chronic
problem in U.P. and that diversification stabilizes farm incomes at a higher level than the crop
insurance programme.
Van de Paane and Stangeland (1974) used linear programming technique to study the optimum
concentration of cattle feed supplements, which were prepared by feed mills and sold to cattle
feeders. Two problems were studied: in the first, the ratio in which supplement and main
feed are utilized was taken as given, while in the second this ratio was chosen optimally. The
relationships between these two problems, as well as their dual problems, were analyzed. It was
concluded that feed mill profit margins for supplements, based on the quantity of the supplement
and the input costs, generally lead to supplements which are too concentrated. K. Srinivas Raju
and D. Nagesh Kumar (2000) studied irrigation planning of the Sri Ram Sagar Project using
multi-objective Fuzzy Linear Programming. A Fuzzy Linear Programming (FLP)
irrigation planning model is developed for the evaluation of management strategies for the case
study of the Sri Ram Sagar Project, Andhra Pradesh, India. Three conflicting objectives, namely
net benefits, crop production and labor employment, are considered in the irrigation planning
scenario. The paper demonstrates how vagueness and imprecision in the objective function values
can be quantified by membership functions in a fuzzy multi-objective framework. Uncertainty in
the flow is handled by stochastic programming. The Fuzzy Linear Programming (FLP) solution
yields net benefits of 1,633 million Rupees, 0.70 million tons of crop production and 42.89 million
man-days of labor employment, with a degree of truth of 0.69. Analysis of the results indicated
that net benefits, crop production and
labor employment in FLP have decreased by 2.38%, 9.6% and 7.22% as compared to ideal
values in the crisp Linear Programming (LP) model. Comparison of results indicated that the
methodology can be extended to other similar situations.
Kanti Swarup (1968 A) gave a paper on duality with nonlinear constraints but did not give the
proof of converse duality. Sharma and Kanti Swarup (1972) proved this converse duality result by
making use of Dorn's (1960) technique. Kaska (1969) also gave some results on duality involving
primal variables in the dual. Kyland (1972) gave an approach to duality based on the work of
Wolfe (1961). Various research workers used the strict condition of differentiability while working
on duality. Some works on duality in L.F.P.P.'s are available in Chadha (1971), Carven and Mond
(1973), Kanti Swarup (1967, 1968 A, B) and Bector (1974). For the solution of L.F.P.P.'s, some
other methods were also developed by Birtran and Novaes (1973), Kanti Swarup (1970) and
Gilmore and Gomory (1963). Kanti Swarup (1970) also developed a technique for solving
L.F.P.P.'s with upper bound variables. Attempts have also been made at the solution of Nonlinear
Fractional Programming Problems (N.L.F.P.P.'s).

According to Bector (1968), the problems which come under the category of convex programs
can be solved by the usual techniques. Various methods are available for solving convex
programs. Rosen (1960, 1961) gave a method for the solution of nonlinear programming, called
Rosen's gradient projection method. Zoutendzik (1959) gave a method of feasible directions for
its solution. Cheney and Goldsteen (1959) also gave a method, called Newton's method of convex
programming. Killey (1960) introduced a new method, called the cutting plane method, for
solving convex programs. Jagannathan (1973) used a parametric approach to duality in
N.L.F.P.P.'s, Bector (1973) used a fractional Lagrangian approach, and Schaible (1983, 1974,
1976 A, 1976 B) used a variable transformation technique in duality problems. Aggarwal
and Saxena (1975) established duality results for a standard error fractional program. The work in
all these papers is based on the duality theory of Chandra and Gulati (1976). However, Mond
(1978) further extended the duality theory of non-differentiable fractional programming by
including the case of non-linear constraints. Bector, Chandra and Husain (1992) considered
generalized continuous fractional programming duality through a parametric approach. Using this
parametric approach,
duality is presented for a continuous minimax fractional programming problem that involves
several ratios in the objective. The duality results presented in that paper can be regarded as
dynamic generalizations of those for the finite-dimensional nonlinear programming problems
explored recently. G.J. Zalmai (1996) studied continuous-time multi-objective fractional
programming. Both parametric and semi-parametric necessary and sufficient proper efficiency
conditions are established for a class of continuous-time multi-objective fractional programming
problems. Based on the forms and contents of these proper efficiency results, two parametric and
four semi-parametric duality models are constructed; in each case, weak and strong duality
theorems are proved. These
proper efficiency and duality results contain, as special cases, similar results for continuous time
programming problems with multiple non-fractional, single fractional, and conventional
objective functions. These results improve and generalize a number of existing results in the area
of continuous-time programming and, moreover, provide continuous-time analogues of various
kindred results previously obtained for certain classes of finite dimensional nonlinear
programming problems. A. Chandra, V.Kumar, I. Husain (1996): Symmetric duality for
multiplicatively separable fractional mixed integer programming problem. A pair of symmetric
dual fractional mixed integer programming problems is formulated and an appropriate duality
theorem is established under suitable and multiplicative separability assumptions on the kenel
function. A self-duality theorem and the extension of the formulation to convex cone domains
are also discussed.
De and Yadav (2011) provided a mathematical model for the multi-criteria transportation problem
under a fuzzy environment, considering the exponential membership function instead of the
linear membership function. However, in contrast with the vast literature on modelling and
solution procedures for a linear program in a fuzzy environment (Lai and Hwang, 1993; Lai,
1995; Zimmermann, 1978, 1991), the studies in duality are rather scarce. The most basic results
on duality in FLP are due to Rodder and Zimmermann (1980) and Hamacher et al. (1978). In
Rodder and Zimmermann (1980), a generalisation of maxmin and minmax problems in a fuzzy

environment is presented and thereby a pair of fuzzy dual linear programming problems is
constructed. An economic interpretation of this duality in terms of market and industry is also
discussed in that paper. In Bector and Chandra (2002), a pair of primal-dual linear programming
problems is introduced under a fuzzy environment, and appropriate results are proved to establish
the duality relationship between them. In Liu et al. (1995), a constructive approach has been
proposed to duality for fuzzy multiple criteria and multiple constraints level linear programming
problems. Samuel and Venkatachalapathy (2012) proposed a new algorithm for solving a special
type of transportation problem by assuming that a decision maker is uncertain about the precise
values of transportation fuzzy cost only but there is no uncertainty about the supply and demand
of the product. A new dual-based approach has been proposed to apply on real life transportation
problems. Zhong and Yong (2002) give a parametric approach to duality in the fuzzy multi-criteria
and multi-constraint level linear programming problem. In Gupta and Mehlawat (2009), a study of
a pair of fuzzy primal-dual linear programming problems has been presented, with duality results
calculated using an aspiration-level approach based on an exponential membership function,
while a discussion of the fuzzy primal-dual linear programming problem with fuzzy coefficients
has been presented in Wu (2003, 2004). In
Mahadavi-Ameri and Nasseri (2007), a new dual algorithm for solving linear programming with
fuzzy variables has been explained. In Gupta and Danger (2012), the authors established the
duality results for second order symmetric multi-objective programming with cone constraints.
Ebrahimnejad and Nasseri (2012b) generalised the crisp dual simplex method for obtaining the
fuzzy optimal solution. Their method begins with a dual basic solution and proceeds by pivoting
through a series of dual basic fuzzy solutions until the associated complementary primal basic
solution is feasible. Ebrahimnejad and Nasseri (2012a), however, give a fuzzified version of the
conventional primal-dual method for linear programming problems, in which any dual feasible
solution, whether basic or not, is adequate to initiate the method.

3. Findings & Analysis
3.1 Duality in Linear Programming, the Dual Form of the Problem and the Primal-Dual
Relationship
Duality is a unifying theory that develops the relationships between a given linear program and
another related linear program stated in terms of variables with a shadow-price interpretation.
This unified theory is important:
• Because it allows a full understanding of the shadow-price interpretation of the optimal
simplex multipliers, which can prove very useful in understanding the implications of a
particular linear-programming model.
• Because it is often possible to solve the related linear program with the shadow prices as
the variables, in place of, or in conjunction with, the original linear program, thereby
taking advantage of some computational efficiencies.
For example, consider a small company in Melbourne which has recently become engaged in the
production of office furniture. The company manufactures tables, desks and chairs. The
production of a table requires 8 kg of wood and 5 kg of metal and is sold for $80; a desk uses 6
kg of wood and 4 kg of metal and is sold for $60; and a chair requires 4 kg of both wood and
metal and is sold for $50. We would like to determine the revenue-maximizing strategy for this
company, given that its resources are limited to 100 kg of wood and 60 kg of metal.
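As a small illustration (not part of the original example), this revenue-maximization problem could be solved numerically with an off-the-shelf LP solver. The sketch below assumes SciPy's linprog is available and, for simplicity, treats the production quantities as continuous rather than integer.

```python
# A minimal sketch showing how the furniture example could be solved with SciPy.
# linprog minimizes, so the revenue objective is negated.
from scipy.optimize import linprog

c = [-80, -60, -50]        # revenue per table, desk, chair (negated)

A_ub = [[8, 6, 4],         # wood used per table, desk, chair (kg)
        [5, 4, 4]]         # metal used per table, desk, chair (kg)
b_ub = [100, 60]           # available wood and metal (kg)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("production plan (tables, desks, chairs):", res.x)
print("maximum revenue:", -res.fun)
```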
There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual)
the dual of a dual linear program is the original primal linear program. Additionally, every
feasible solution for a linear program gives a bound on the optimal value of the objective
function of its dual. The weak duality theorem provides a bound on the optimal
value of the objective function of either the primal or the dual. Simply stated, the value of the
objective function for any feasible solution to the primal maximization problem is bounded from
above by the value of the objective function for any feasible solution to its dual. Similarly, the
value of the objective function for the dual is bounded from below by the value of the objective
function of the primal. Pictorially, the situation can be summarized by the inequality
cTx ≤ bTy for any primal feasible x and any dual feasible y.

The strong duality theorem states that if the primal has an optimal solution, x*, then the dual also
has an optimal solution, y*, and cTx*=bTy*.
A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is
unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is
unbounded, then the primal must be infeasible. However, it is possible for both the dual and the
primal to be infeasible.
Duality in Linear Programming states that every linear programming problem has another
linear programming problem related to it, which can be derived from it. The original linear
programming problem is called the "primal", while the derived linear problem is called the "dual".
Before forming the dual, the original linear programming problem must be written in its
standard form. Standard form means that all the variables in the problem are non-negative and
that the "≥" sign is used for the constraints in the minimization case and the "≤" sign in the
maximization case.
The concept of Duality can be well understood through a problem given below:
Maximize P = 50X1+30X2
Subject to: 4X1 + 3X2 ≤ 100
3X1 + 5X2 ≤ 150
X1, X2 ≥ 0
The duality can be applied to the above original linear programming problem as:
Minimize C = 100Y1 + 150Y2
Subject to: 4Y1 + 3Y2 ≥ 50
3Y1 +5Y2 ≥ 30
Y1, Y2 ≥ 0
The following observations were made while forming the dual linear programming problem:
1. The primal or original linear programming problem is of the maximization type while the
dual problem is of minimization type.
2. The constraint values 100 and 150 of the primal problem have become the coefficients of
the dual variables Y1 and Y2 in the objective function of the dual problem, while the
coefficients of the variables in the objective function of the primal problem have become
the constraint values in the dual problem.
3. The first column of coefficients in the constraint inequalities of the primal problem has
become the first row in the dual problem, and similarly the second column of constraint
coefficients has become the second row in the dual problem.

4. The directions of the inequalities have also changed: in the dual problem, the signs are the
reverse of those in the primal problem, so where the primal problem had "≤" constraints,
the dual problem has "≥" constraints.
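For this primal-dual pair the relationship can also be checked numerically. The following sketch (assuming SciPy's linprog is available) solves both problems and confirms that their optimal objective values coincide, as the strong duality theorem discussed earlier asserts.

```python
# A small numerical check of the primal-dual pair above (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

c = np.array([50, 30])            # primal objective coefficients
A = np.array([[4, 3],
              [3, 5]], dtype=float)
b = np.array([100, 150])          # primal right-hand sides

# Primal: maximize c.x subject to A x <= b, x >= 0 (negate c: linprog minimizes).
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual: minimize b.y subject to A^T y >= c, y >= 0 (rewrite ">=" as "<=").
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print("primal optimum P =", -primal.fun)
print("dual optimum   C =", dual.fun)
assert np.isclose(-primal.fun, dual.fun)   # strong duality: P* = C*
```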
The dual model of a Linear Programming problem is an alternative modeling instance
that allows us to recover the information of the original problem, commonly known as the primal
model. Therefore it is sufficient to solve one of them (primal or dual) to obtain the optimal
solution and the optimal value of the equivalent problem (primal or dual as applicable). The
number of variables in the dual problem is equal to the number of constraints in the original
(primal) problem. The number of constraints in the dual problem is equal to the number of
variables in the original problem. The coefficients of the objective function in the dual problem
come from the right-hand sides of the original problem. If the original problem is a max model,
the dual is a min model; if the original problem is a min model, the dual is a max model. The
coefficients of the first constraint of the dual problem are the coefficients of the first variable in
the constraints of the original problem, and similarly for the other constraints. The right-hand
sides of the dual constraints come from the objective function coefficients in the original
problem. The primal-dual relationships can be summarized as follows:

Primal (original) problem            Dual problem
Maximization                         Minimization
Number of variables                  Number of constraints
Number of constraints                Number of variables
Objective function coefficients      Constraint right-hand sides
Constraint right-hand sides          Objective function coefficients
"≤" constraints, variables ≥ 0       "≥" constraints, variables ≥ 0
Column of constraint coefficients    Row of constraint coefficients
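These transposition rules are mechanical, so they can be expressed in a few lines of code. The sketch below (illustrative only, using NumPy) builds the dual data (objective coefficients, constraint matrix, right-hand sides) from the primal data for the symmetric case of a maximization problem with "≤" constraints; the furniture example from earlier in this section is used as input.

```python
import numpy as np

def dual_of(c, A, b):
    """Return the data (objective, constraint matrix, right-hand side) of the dual
    min b.y subject to A^T y >= c, y >= 0
    for the primal max c.x subject to A x <= b, x >= 0."""
    A = np.asarray(A)
    return np.asarray(b), A.T, np.asarray(c)

# Example: the furniture problem described earlier (wood and metal constraints).
c_dual, A_dual, b_dual = dual_of([80, 60, 50], [[8, 6, 4], [5, 4, 4]], [100, 60])
print(c_dual)   # [100  60] -> dual objective: minimize 100*Y1 + 60*Y2
print(A_dual)   # each primal column becomes a dual row, e.g. 8*Y1 + 5*Y2 >= 80
print(b_dual)   # [80 60 50] -> dual right-hand sides
```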

The dual of the dual problem is again the primal problem. Either of the two problems has an
optimal solution if and only if the other does. If one problem is feasible but unbounded, then the
other is infeasible. If one is infeasible, then the other is either infeasible or feasible/unbounded.
In the weak duality theorem, the objective function value of the primal (to be maximized),
evaluated at any primal feasible solution, cannot exceed the objective function value of the dual
evaluated at any dual feasible solution: cTx ≤ bTy (in the standard form). In the strong duality
theorem, when there is an optimal solution, the optimal objective value of the primal is the same
as the optimal objective value of the dual, that is, cTx* = bTy*.

3.2 Dual Simplex Method
The dual simplex method maintains a non-negative row 0 (dual feasibility) and eventually
obtains a tableau in which each right-hand side is non-negative (primal feasibility). The dual
simplex method for a max problem proceeds as follows.
Step 1: Is the right-hand side of each constraint non-negative? If so, an optimal solution has been
found; if not, go to Step 2.
Step 2: Choose the basic variable with the most negative value as the variable to leave the basis.
The row it is in will be the pivot row. To select the variable that enters the basis, compute the
following ratio for each variable xj that has a negative coefficient in the pivot row:

ratio = (coefficient of xj in row 0) / (coefficient of xj in the pivot row)

Choose the variable whose ratio is smallest in absolute value as the entering variable. Now use
elementary row operations (EROs) to make the entering variable a basic variable in the pivot row.
Step 3: If there is any constraint in which the right-hand side is negative and every variable has a
non-negative coefficient, then the LP has no feasible solution. If no such infeasible constraint is
found, return to Step 1.
The dual simplex method is often used to find the new optimal solution to an LP after a
constraint is added.
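Steps 1-3 can be condensed into a short tableau routine. The sketch below is an illustration rather than code from the paper; it assumes a NumPy tableau whose constraint rows come first, whose last row is row 0, and whose last column holds the right-hand sides, and it performs one dual simplex pivot according to the rules above.

```python
import numpy as np

def dual_simplex_step(T):
    """Perform one dual simplex pivot on tableau T (a float NumPy array).
    Assumed layout: constraint rows first, the objective row ("row 0") last,
    right-hand sides in the last column; row 0 is already non-negative.
    Returns False when the tableau is already primal feasible, True after a
    pivot, and raises ValueError in the Step 3 (infeasible) case."""
    rhs = T[:-1, -1]
    if np.all(rhs >= 0):
        return False                              # Step 1: already optimal
    r = int(np.argmin(rhs))                       # Step 2: most negative RHS leaves
    row = T[r, :-1]
    if np.all(row >= 0):
        raise ValueError("LP has no feasible solution (Step 3)")
    cols = np.where(row < 0)[0]
    ratios = T[-1, cols] / np.abs(row[cols])      # |row-0 coeff / pivot-row coeff|
    j = int(cols[np.argmin(ratios)])              # entering variable: smallest ratio
    T[r, :] /= T[r, j]                            # EROs: make the pivot element 1 ...
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, j] * T[r, :]          # ... and zero out the rest of column j
    return True
```

Starting from a dual feasible tableau (for instance, an optimal tableau to which a new constraint row has been added), calling dual_simplex_step repeatedly until it returns False drives every right-hand side non-negative, which is exactly the stopping condition of Step 1.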
Example (1.0): The dual of the minimization problem is the following maximization problem.
Maximize P subject to the following constraints:
P = 12Y1 + 16Y2
Y1 + 2Y2 ≤ 16
Y1 + Y2 ≤ 9
3Y1 + Y2 ≤ 21
Y1, Y2 ≥ 0
Introducing the slack variables X1, X2 and X3 into the dual problem gives:
Y1 + 2Y2 + X1 = 16
Y1 + Y2 + X2 =9
3Y1 + Y2 + X3 = 21
– 12Y1 – 16Y2 + P = 0

Form the simplex tableau for the dual (maximization) problem and determine the pivot element.
The first pivot element is the 2 in the Y2 column of the X1 row, because that column has the most
negative number in the bottom row (-16) and, when its entries are divided into the rightmost
constants, the smallest quotient occurs in that row (16 divided by 2 is 8).

Basic   Y1   Y2   X1   X2   X3   RHS
X1       1    2    1    0    0    16
X2       1    1    0    1    0     9
X3       3    1    0    0    1    21
P      -12  -16    0    0    0     0
Divide row 1 by the pivot element (2); the entering variable Y2 replaces X1 as the basic variable
of that row. Result:

Basic   Y1   Y2   X1   X2   X3   RHS
Y2      .5    1   .5    0    0     8
X2       1    1    0    1    0     9
X3       3    1    0    0    1    21
P      -12  -16    0    0    0     0
Perform row operations to get zeros in the remaining entries of the pivot column:
-1*row 1 + row 2 = row 2, -1*row 1 + row 3 = row 3, and 16*row 1 + row 4 = row 4. In the
resulting tableau, the next pivot element (0.5) is located in the Y1 column of the X2 row.

Basic   Y1   Y2   X1   X2   X3   RHS
Y2      .5    1   .5    0    0     8
X2      .5    0  -.5    1    0     1
X3     2.5    0  -.5    0    1    13
P       -4    0    8    0    0   128
Variable Y1 becomes the new entering variable. Now divide row 2 by 0.5 to obtain a 1 in the pivot
position.

Basic   Y1   Y2   X1   X2   X3   RHS
Y2      .5    1   .5    0    0     8
Y1       1    0   -1    2    0     2
X3     2.5    0  -.5    0    1    13
P       -4    0    8    0    0   128

Performing the row operations -0.5*row 2 + row 1 = row 1, -2.5*row 2 + row 3 = row 3 and
4*row 2 + row 4 = row 4 gives the final tableau:

Basic   Y1   Y2   X1   X2   X3   RHS
Y2       0    1    1   -1    0     7
Y1       1    0   -1    2    0     2
X3       0    0    2   -5    1     8
P        0    0    4    8    0   136
We know that an optimal solution to a minimization problem can always be read from the
bottom row of the final simplex tableau of its dual problem. So, in this problem the minimum
value of the original objective is 136, and it occurs at X1 = 4, X2 = 8, X3 = 0 (the entries of the
bottom row under the slack-variable columns).
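As a cross-check (a sketch, assuming SciPy is available), the maximization problem of example (1.0) and the minimization problem it is dual to, recovered by transposing its data as min 16X1 + 9X2 + 21X3 subject to X1 + X2 + 3X3 ≥ 12 and 2X1 + X2 + X3 ≥ 16, can both be solved directly; each should return the optimal value 136.

```python
# Numerical verification of example (1.0): both problems reach the value 136.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 2], [1, 1], [3, 1]], dtype=float)   # "<=" constraint rows
b = np.array([16, 9, 21], dtype=float)                # right-hand sides
c = np.array([12, 16], dtype=float)                   # objective 12*Y1 + 16*Y2

max_prob = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
min_prob = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

print("maximization problem:", -max_prob.fun, "at Y =", max_prob.x)  # expect 136
print("minimization problem:", min_prob.fun, "at X =", min_prob.x)   # expect 136
```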

Example (2.0): Form the dual problem. Suppose
Minimize C = 3X1 + 2X2
Subject to: 2X1 + X2 ≥ 6
X1 + X2 ≥ 4
X1, X2 ≥ 0

Step 1. Form the matrix A from the constraint coefficients and right-hand sides of the primal
problem, with the objective coefficients in the bottom row:

    [ 2  1  6 ]
A = [ 1  1  4 ]
    [ 3  2  1 ]

Step 2. Find the transpose of A, AT:

     [ 2  1  3 ]
AT = [ 1  1  2 ]
     [ 6  4  1 ]
Step 3. State the dual problem.
Maximize P = 6Y1 + 4Y2
Subject to: 2Y1 + Y2 ≤ 3
Y1 + Y2 ≤ 2
Y1, Y2 ≥ 0
After writing the dual problem in standard form, we now apply the simplex method to it. Let Y3
and Y4 be the slack variables for the respective functional constraints. We obtain the initial
tableau.
Tableau 1
Basic   Y1   Y2   Y3   Y4   RHS
P       -6   -4    0    0     0
Y3       2    1    1    0     3
Y4       1    1    0    1     2

P a g e | 12
We see from the tableau that the pivot column is the Y1 column. The quotients are 3/2 = 1.5 and
2/1 = 2. Hence the Y3 row is the pivot row. Thus Y1 is the entering variable, which replaces Y3,
the leaving variable. The pivot element at the intersection of the pivot row and pivot column is 2.
To update the tableau we perform the Gauss reductions and obtain Tableau 2 given below.
Tableau 2
Basic   Y1   Y2    Y3   Y4   RHS
P        0   -1     3    0     9
Y1       1   1/2   1/2   0   3/2
Y4       0   1/2  -1/2   1   1/2
We deduce that the current solution is not optimal. Updating once more, we obtain Tableau 3
given below.
Tableau 3
Basic   Y1   Y2   Y3   Y4   RHS
P        0    0    2    2    10
Y1       1    0    1   -1     1
Y2       0    1   -1    2     1
The current solution is optimal since all the coefficients in the first row (the P row) are
non-negative.
We are now going to extract the solution of the primal problem from the final simplex tableau of
the dual problem. The optimal objective value is P = C = 10. Since the above final tableau is for
the dual problem, we recall that in transposing the primal problem the objective coefficients of
the original variables became the right-hand side values of the constraints. This means that each
original variable now corresponds to a slack variable of the dual, and the optimal values of the
original variables are read from the bottom row of the final tableau under the slack-variable
columns Y3 and Y4: X1 = 2 and X2 = 2. The values of the dual variables themselves appear in
the right-hand-side column: Y1 = 1, Y2 = 1.
Note that if we substitute the basic variables of the dual problem into the dual objective function
we have P = 6Y1 + 4Y2 = (6)(1) + (4)(1) = 10.
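The same pair of problems can also be verified with a standard LP solver. The brief sketch below (assuming SciPy is available) solves the primal minimization problem and the dual stated in Step 3, and should reproduce C = P = 10 with X1 = X2 = 2 and Y1 = Y2 = 1.

```python
# Numerical check of example (2.0): primal and dual optima should both be 10.
from scipy.optimize import linprog

# Primal: min 3X1 + 2X2  s.t.  2X1 + X2 >= 6,  X1 + X2 >= 4,  X >= 0.
primal = linprog([3, 2], A_ub=[[-2, -1], [-1, -1]], b_ub=[-6, -4],
                 bounds=[(0, None)] * 2)

# Dual: max 6Y1 + 4Y2  s.t.  2Y1 + Y2 <= 3,  Y1 + Y2 <= 2,  Y >= 0.
dual = linprog([-6, -4], A_ub=[[2, 1], [1, 1]], b_ub=[3, 2],
               bounds=[(0, None)] * 2)

print("primal minimum C =", primal.fun, "at X =", primal.x)   # expect 10 at (2, 2)
print("dual maximum  P =", -dual.fun, "at Y =", dual.x)       # expect 10 at (1, 1)
```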

3.3 Dual Graphical Method
The graphical method of solving a linear programming problem is used when there are only two
decision variables. If the problem has three or more variables, the graphical method is not
suitable. There are some important definitions and concepts that are used in the methods of
solving linear programming problems.
1. Solution: A set of values of decision variables satisfying all the constraints of a linear
programming problem is called a solution to that problem.
2. Feasible solution: Any solution which also satisfies the non-negativity restrictions of
the problem is called a feasible solution.
3. Optimal feasible solution: Any feasible solution which maximizes or minimizes the
objective function is called an optimal feasible solution.
4. Feasible region: The common region determined by all the constraints and non-
negativity restrictions of an LPP is called the feasible region.
5. Corner point: A corner point of a feasible region is a point in the feasible region that is
the intersection of two boundary lines.
In example (2.0), the decision variables X1 and X2 of the primal problem correspond to the
slack variables of the dual problem; since the primal has only two decision variables, it can also
be solved graphically as a check. The objective function is Minimize C = 3X1 + 2X2, the
constraints are 2X1 + X2 ≥ 6 and X1 + X2 ≥ 4, and the non-negativity constraints are X1, X2 ≥ 0.
The boundary of the feasible region consists of the lines obtained by changing the inequalities
into equalities. The lines are:
2X1 + X2 = 6……………….. (1)
X1 + X2 = 4………………… (2)
In equation (1), Let X1 = 0 then X2 = 6
Let X2 = 0 then X1 = 3
In equation (2), Let X1 = 0 then X2 = 4
Let X2 = 0 then X1 = 4

So, the corner points (or extreme points) of the feasible region and their corresponding objective
function values are:

Extreme point   Cost (C = 3X1 + 2X2)
(0, 6)          12
(2, 2)          10
(4, 0)          12

We therefore deduce that the optimal solution is X1 = 2, X2 = 2, corresponding to a minimum
cost C = 10. Thus the cost is minimized when X1 = 2 and X2 = 2, in agreement with the optimal
value P = C = 10 obtained from the final tableau of the dual problem.
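The corner-point evaluation underlying the graphical method can also be carried out numerically. The following sketch (illustrative only; the function names are arbitrary) computes the intersection of the two boundary lines, keeps the feasible candidate corner points, and evaluates the cost at each.

```python
# Corner-point calculation behind the graphical method for example (2.0).
import numpy as np

def cost(x):
    return 3 * x[0] + 2 * x[1]

def feasible(x):
    return (2 * x[0] + x[1] >= 6 - 1e-9 and x[0] + x[1] >= 4 - 1e-9
            and x[0] >= -1e-9 and x[1] >= -1e-9)

# Candidate corner points: the relevant axis intercepts and the intersection
# of the two boundary lines 2X1 + X2 = 6 and X1 + X2 = 4.
intersection = np.linalg.solve([[2.0, 1.0], [1.0, 1.0]], [6.0, 4.0])   # -> (2, 2)
candidates = [np.array([0.0, 6.0]), intersection, np.array([4.0, 0.0])]

for point in candidates:
    if feasible(point):
        print(point, "cost =", cost(point))
# The smallest cost, 10, is attained at X1 = 2, X2 = 2.
```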

4. Conclusion
Above, I have discussed duality theory in linear programming. The theory of duality is a very
elegant and important concept within the field of operations research. This theory was first
developed in relation to linear programming, but it has many applications, and perhaps even a
more natural and intuitive interpretation, in several related areas such as nonlinear programming
and network models. The notion of duality within linear programming asserts that every linear program has
associated with it a related linear program called its dual. The original problem in relation to its
dual is termed the primal. It is the relationship between the primal and its dual, on both a
mathematical and an economic level, that is truly the essence of duality theory. Every linear
programming problem has associated with it a dual linear programming problem. There are a
number of very useful relationships between the primal problem and its dual problem that
enhance the ability to analyze the primal problem. For example, the economic interpretation of
the dual problem gives shadow prices that measure the marginal value of the resources in the
primal problem and provides an interpretation of the simplex method. Because the simplex
method can be applied directly to either problem in order to solve both of them simultaneously,
considerable computational effort sometimes can be saved by dealing directly with the dual
problem. Duality theory, including the dual simplex method for working with superoptimal
basic solutions, also plays a major role in sensitivity analysis. The values used for the parameters
of a linear programming model generally are just estimates. Therefore, sensitivity analysis needs
to be performed to investigate what happens if these estimates are wrong. The general objectives
are to identify the sensitive parameters that affect the optimal solution, to try to estimate these
sensitive parameters more closely, and then to select a solution that remains good over the range
of likely values of the sensitive parameters. This analysis is a very important part of most linear
programming studies.

