Topic No 1:
Introduction to Operations Research:
Operations Research (OR) is a field of study that uses mathematical
modelling, statistical analysis, and optimization techniques to aid decision-
making and problem-solving in complex systems. It originated during World
War II when military planners faced complex logistical problems. Since then,
it has expanded to various sectors including business, healthcare,
transportation, manufacturing, and finance.
Key components of Operations Research include:
1. Mathematical Modeling:
OR involves translating real-world problems into mathematical
representations. This often involves defining decision variables, constraints,
and objectives.
2. Optimization Techniques:
OR employs optimization algorithms to find the best possible solution given
a set of constraints. These techniques include linear programming, integer
programming, dynamic programming, and nonlinear programming.
3. Simulation:
Simulation involves creating computer models to mimic real-world systems.
This allows researchers to study the behavior of complex systems under
different conditions and make predictions about their performance.
4. Probability and Statistics:
OR utilizes probabilistic models and statistical analysis to deal with
uncertainty and variability in real-world systems. This includes techniques
such as queuing theory, inventory modeling, and statistical forecasting.
5. Decision Analysis:
OR provides frameworks for making decisions in situations involving multiple
objectives, uncertainties, and trade-offs. Decision analysis techniques help
decision-makers choose the best course of action by considering various
possible outcomes and their associated risks.
Features of Operations Research:
1. Quantitative Analysis:
OR employs mathematical models and statistical techniques to analyze and
solve problems. This ensures that decisions are based on rigorous analysis
and empirical evidence rather than intuition or guesswork.
2. Interdisciplinary Approach:
OR draws upon principles from mathematics, statistics, computer science,
economics, engineering, and other disciplines. This interdisciplinary nature
allows it to address a diverse array of problems across different industries
and sectors.
3. Optimization:
A central feature of OR is optimization, which involves finding the best
possible solution given constraints and objectives. This may include
maximizing profits, minimizing costs, optimizing resource allocation, or
achieving other desired outcomes.
4. Decision Support:
OR provides decision-makers with tools and methodologies to evaluate
alternative courses of action and make informed decisions. This includes
techniques such as decision analysis, simulation, and probabilistic
modeling.
5. Risk Management:
OR includes techniques for managing risk and uncertainty in decision-
making. This may involve probabilistic modeling, scenario analysis, or
optimization under uncertainty to mitigate potential risks and ensure
robustness in decision-making.
Scope of Operations Research:
1. Logistics and Supply Chain Management:
OR is widely used in logistics and supply chain management to optimize
transportation, inventory management, warehousing, and distribution
processes. This helps companies reduce costs, improve efficiency, and
enhance customer service.
2. Manufacturing and Production:
OR techniques are applied in manufacturing and production to optimize
production schedules, resource allocation, and facility layout. This leads to
increased productivity, reduced lead times, and improved resource
utilization.
3. Healthcare:
OR is used in healthcare to optimize patient flow, hospital scheduling, staff
rostering, and resource allocation. This helps healthcare providers improve
the quality of care, reduce waiting times, and manage resources more
effectively.
4. Finance:
OR techniques are employed in finance for portfolio optimization, risk
management, asset allocation, and algorithmic trading. This enables
investors to make better-informed decisions, manage risk more effectively,
and optimize their investment strategies.
5. Transportation and Logistics:
OR plays a crucial role in optimizing transportation networks, route planning,
vehicle scheduling, and traffic management. This helps transportation
companies improve efficiency, reduce costs, and minimize environmental
impact.
Importance of Operations Research:
1. Efficiency Improvement:
OR helps organizations optimize processes, allocate resources efficiently,
and streamline operations. This leads to cost savings, increased productivity,
and improved performance.
2. Better Decision-Making:
OR provides decision-makers with quantitative tools and techniques to
evaluate alternatives, assess risks, and make informed decisions. This
reduces uncertainty and improves the quality of decision-making.
3. Competitive Advantage:
Organizations that effectively utilize OR techniques gain a competitive
advantage by improving efficiency, reducing costs, and delivering superior
products or services to customers.
4. Resource Optimization:
OR enables organizations to optimize the use of resources, whether it's
manpower, materials, equipment, or financial assets. This leads to improved
resource utilization and better overall performance.
5. Innovation and Problem-Solving:
OR encourages innovation and creativity in problem-solving by providing
systematic approaches to tackle complex problems and identify optimal
solutions.
Phases of Operation Research:
1. Formulate the problem:
This is the most important phase; it is generally lengthy and time-consuming. The activities that constitute this step are visits, observations,
research, etc. With the help of such activities, the O.R. scientist gets
sufficient information and support to proceed and is better prepared to formulate the problem.
This process starts with an understanding of the organizational climate, its
objectives and expectations. Further, the alternative courses of action are
discovered in this step.
2. Develop a model:
Once a problem is formulated, the next step is to express it as a mathematical model that represents the system, process, or environment in the form of equations, relationships, or formulas. We have to identify both the static and dynamic structural elements and devise mathematical formulas to represent the interrelationships among elements. The proposed model
may be field tested and modified in order to work under stated environmental
constraints. A model may also be modified if the management is not satisfied
with the answer that it gives.
3. Select appropriate data input:
"Garbage in, garbage out" is a famous saying. No model will work properly if the input data are inappropriate. The purpose of this step is to have sufficient input to operate and test the model.
4. Solution of the model:
After selecting the appropriate data input, the next step is to find a solution.
If the model is not behaving properly, then updating and modification is
considered at this stage.
5. Validation of the model:
A model is said to be valid if it can provide a reliable prediction of the system’s
performance. A model must remain applicable over a long period and should be updated from time to time, taking into consideration the past, present, and future aspects of the problem.
6. Implement the solution:
The implementation of the solution involves many behavioural issues, and the implementing authority is responsible for resolving them. The gap
between one who provides a solution and one who wishes to use it should be
eliminated. To achieve this, O.R. scientist as well as management should
play a positive role. A properly implemented solution obtained through O.R.
techniques results in improved working and wins the management support.
Topic No 2:
Introduction to Foundation Mathematics and Statistics:
Foundation mathematics and statistics form the backbone of Operations
Research (OR), providing the essential tools and techniques for modeling,
analyzing, and solving complex problems. In this introduction, we'll explore
the key concepts in foundation mathematics and statistics that are
fundamental to OR.
Mathematical Foundations:
1. Linear Algebra:
Linear algebra is indispensable in OR for representing and manipulating
systems of linear equations. Matrices and vectors are used to model decision
variables, constraints, and objective functions in optimization problems.
Techniques such as matrix operations, determinants, and eigenvalues play a
crucial role in solving linear programming and other optimization problems.
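For instance, the small linear systems that arise when two constraint boundaries intersect can be solved directly with basic linear algebra. A minimal Python sketch using Cramer's rule (the system x + y = 4, 2x + y = 5 is an illustrative example, not from any particular problem):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by Cramer's rule."""
    det = a11 * a22 - a12 * a21  # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("singular system: no unique solution")
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# Intersection of the lines x + y = 4 and 2x + y = 5:
x, y = solve_2x2(1, 1, 2, 1, 4, 5)  # → (1.0, 3.0)
```

A nonzero determinant is exactly the condition for the two boundary lines to meet in a single point.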
2. Calculus:
Calculus provides the mathematical machinery for optimization,
differentiation, and integration. In OR, calculus is used to find optimal
solutions by analyzing the behavior of objective functions and constraints.
Techniques such as derivatives and gradients are employed in optimization
algorithms to find local or global optima.
3. Probability Theory:
Probability theory is essential for dealing with uncertainty and randomness in
OR. Probability distributions, such as the normal distribution and the Poisson
distribution, are used to model random variables and uncertainties in
decision-making processes. Probabilistic models and statistical techniques
are employed to analyze uncertain outcomes and make probabilistic
forecasts.
Statistical Foundations:
• Descriptive Statistics:
Descriptive statistics are used to summarize and describe the characteristics
of data sets. Measures such as mean, median, variance, and standard
deviation provide insights into the central tendency, variability, and
distribution of data. Descriptive statistics are essential for understanding the
properties of data and identifying patterns or trends.
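These measures can be computed directly with Python's standard `statistics` module; the data set below is a made-up illustration:

```python
import statistics

data = [12, 15, 11, 14, 13, 15, 18, 12]  # hypothetical sample observations

mean = statistics.mean(data)          # central tendency: 13.75
median = statistics.median(data)      # middle value of the sorted data: 13.5
variance = statistics.variance(data)  # sample variance (divides by n - 1)
stdev = statistics.stdev(data)        # sample standard deviation
```

Note that `variance` and `stdev` here are the sample versions; `pvariance` and `pstdev` give the population versions.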
• Inferential Statistics:
Inferential statistics are used to draw conclusions or make inferences about
populations based on sample data. Techniques such as hypothesis testing,
confidence intervals, and regression analysis are employed to make
predictions, test hypotheses, and estimate parameters. Inferential statistics
play a crucial role in analyzing data, drawing conclusions, and making
decisions in OR.
• Probability Distributions:
Probability distributions describe the likelihood of different outcomes in a
random process. Common probability distributions used in OR include the
normal distribution, the binomial distribution, the Poisson distribution, and
the exponential distribution. These distributions are used to model random
variables and uncertainties in decision-making processes.
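As a small illustration, the Poisson probability mass function can be evaluated straight from its formula P(X = k) = e^(−λ) λ^k / k! using only the standard library (the arrival-rate numbers are made up):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events when events occur at average rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Probability of exactly 2 arrivals in a period with an average of 3 arrivals:
p = poisson_pmf(2, 3.0)
```

Summing the pmf over all k should give (approximately) 1, which is a handy sanity check on such a function.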
• Statistical Modeling:
Statistical modeling involves developing mathematical models that describe
the relationship between variables in a system. Techniques such as
regression analysis, time series analysis, and multivariate analysis are used
to build statistical models based on observed data. Statistical modeling is
essential for analyzing complex systems, identifying relationships between
variables, and making predictions about future outcomes.
Topic NO 3:
Linear Programming (LP):
Linear Programming (LP) is a mathematical optimization technique used to
maximize or minimize a linear objective function subject to a set of linear
equality and inequality constraints. In LP, decision variables are manipulated
within linear relationships to achieve an optimal solution that meets certain
criteria.
The key components of a linear programming problem include:
1. Decision Variables:
These represent the quantities to be determined, controlled, or optimized. Decision variables are typically denoted by symbols such as x1, x2, …, xn.
2. Objective Function:
This is a linear expression that represents the goal to be optimized, such as
maximizing profit, minimizing costs, or maximizing resource utilization. The
objective function is usually formulated as a linear combination of decision
variables, with coefficients representing the contribution of each variable to
the objective.
3. Constraints:
These are linear relationships that impose limitations or restrictions on the
decision variables. Constraints can represent resource availability, capacity
constraints, demand requirements, or other constraints that must be
satisfied. Constraints are typically formulated as linear inequalities or
equalities involving decision variables.
The general form of a linear programming problem can be expressed as
follows:
Maximize cᵀx
Subject to Ax ≤ b
Where x ≥ 0
Where:
• c is the vector of coefficients of the objective function.
• x is the vector of decision variables.
• A is the matrix of coefficients of the constraints.
• b is the vector of constants on the right-hand side of the constraints.
• ≤ denotes "less than or equal to" in the constraints, and x ≥ 0 requires every decision variable to be non-negative.
The goal of solving a linear programming problem is to find values for the
decision variables that maximize or minimize the objective function while
satisfying all constraints. This is typically achieved using optimization
algorithms such as the simplex method, interior-point method, or other
specialized algorithms designed for linear programming problems.
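In code, the data (c, A, b) of this standard form can be held as plain lists, and any candidate point can be checked against Ax ≤ b and x ≥ 0. A minimal sketch with made-up numbers (the function names are ours, not a library API):

```python
c = [3.0, 2.0]        # objective coefficients (maximize c·x)
A = [[1.0, 1.0],      # constraint coefficient matrix
     [2.0, 1.0]]
b = [4.0, 5.0]        # right-hand-side constants

def is_feasible(x):
    """True if x satisfies A x <= b and x >= 0 componentwise."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi + 1e-9
               for row, bi in zip(A, b))

def objective(x):
    """Value of the objective function c·x at the point x."""
    return sum(ci * xi for ci, xi in zip(c, x))
```

Separating the data (c, A, b) from the solution procedure is exactly what lets general-purpose LP solvers accept any problem in this standard form.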
Topic No 4:
Linear Programming (LP) and the allocation of resources.
Linear Programming (LP) is a powerful optimization technique used in
Operations Research to allocate limited resources efficiently and achieve
optimal outcomes. LP is particularly well-suited for problems where the
objective and constraints can be represented as linear relationships.
In the context of resource allocation, LP can be applied to various
scenarios, including:
1. Production Planning:
LP can help optimize production plans by determining the optimal allocation
of resources such as labor, raw materials, and machine time to maximize
production output while minimizing costs or meeting specific demand
requirements.
2. Inventory Management:
LP can aid in optimizing inventory levels by determining the optimal quantities
to order or produce at different points in time to minimize holding costs while
ensuring sufficient stock to meet demand.
3. Supply Chain Management:
LP can optimize the flow of goods and materials through a supply chain by
determining the most cost-effective transportation routes, warehouse
locations, and inventory levels to minimize transportation costs and meet
customer demand.
4. Project Management:
LP can assist in project scheduling and resource allocation by determining
the optimal assignment of resources such as personnel, equipment, and
budget allocations to minimize project duration or costs while meeting
project objectives and constraints.
5. Financial Planning:
LP techniques can be used in financial planning and portfolio optimization to
allocate investment funds across different assets or securities to maximize
returns while minimizing risk.
In LP, the problem is typically formulated as follows:
• Decision Variables:
These represent the quantities to be determined, such as the amount of
resources allocated to different activities or the quantities of goods produced
or purchased.
• Objective Function:
This represents the goal to be optimized, such as maximizing profit,
minimizing costs, or maximizing resource utilization.
• Constraints:
These represent the limitations or restrictions on the decision variables, such
as resource availability, capacity constraints, demand requirements, or
regulatory requirements.
Linearity requirement, maximization, and minimization problems.
The linearity requirement in linear programming (LP) refers to the condition
that both the objective function and the constraints must be linear functions
of the decision variables. This means that the coefficients of the decision
variables in both the objective function and the constraints must be
constants and appear only in linear terms (i.e., not raised to any power other
than 1).

Maximization Problems:
In maximization problems, the objective is to find the values of the decision
variables that maximize the objective function while satisfying all constraints.
The objective function is typically of the form:
Maximize z = c1x1 + c2x2 + … + cnxn
Where:

• x1, x2, …, xn are the decision variables,
• c1, c2, …, cn are the coefficients of the objective function representing the contribution of each variable to the objective, and
• z is the objective function value.

The constraints are linear inequalities or equalities of the form:

a11x1 + a12x2 + … + a1nxn ≤ b1
a21x1 + a22x2 + … + a2nxn ≤ b2
.
.
.
am1x1 + am2x2 + … + amnxn ≤ bm

Where:
• aij are the coefficients of the constraints,
• bi are the constants on the right-hand side of the constraints, and
• m is the number of constraints.

Minimization Problems:
In minimization problems, the objective is to find the values of the decision
variables that minimize the objective function while satisfying all constraints.
The objective function is similar to maximization problems but is minimized
instead:
Minimize z = c1x1 + c2x2 + … + cnxn
Constraints are the same as in maximization problems.


Both maximization and minimization problems in linear programming can be
solved using optimization algorithms such as the simplex method, interior-
point method, or other specialized algorithms designed for linear
programming problems. These algorithms find the values of the decision
variables that optimize the objective function subject to the constraints while
adhering to the linearity requirement.
Topic No 5:
Introduction to Graphical LP Minimization Solution.
Graphical method is a visual approach used to solve linear programming (LP)
problems with two decision variables. It provides an intuitive way to find the
optimal solution by graphically representing the feasible region, objective
function, and optimal solution on a two-dimensional graph.
Steps to Solve a Graphical LP Minimization Problem:
1. Formulate the Problem:
Start by formulating the LP problem with two decision variables and linear
constraints. Write down the objective function to be minimized and the
constraints in standard form.
2. Graph the Feasible Region:
Plot each constraint on a graph to define the feasible region. Each constraint
corresponds to a linear inequality, so plot the boundary lines (equalities) and
shade the feasible region that satisfies all constraints.
3. Identify the Corner Points:
The vertices or corner points of the feasible region represent the intersection
points of the constraint lines. These are the points where one or more
constraints become binding (equality holds).
4. Evaluate the Objective Function:
Calculate the value of the objective function at each corner point. Substitute
the coordinates of each corner point into the objective function and
determine the corresponding objective function value.
5. Find the Optimal Solution:
Identify the corner point that yields the minimum value of the objective
function. This corner point represents the optimal solution to the
minimization problem.
6. Verify the Solution:
Once you have identified the corner point with the minimum objective
function value, verify that it satisfies all constraints. If the optimal solution lies
within the feasible region and satisfies all constraints, it is the optimal
solution to the minimization problem.
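The corner-point procedure above can be automated for two variables: intersect each pair of constraint boundary lines, discard infeasible intersection points, and evaluate the objective at the rest. A sketch under the convention that every constraint is written as a1·x + a2·y ≤ b (a ≥ constraint is negated first); the example problem and function names are ours:

```python
from itertools import combinations

def graphical_minimize(obj, cons):
    """obj = (c1, c2); cons = list of (a1, a2, b) meaning a1*x + a2*y <= b.
    Returns the best corner point and its objective value."""
    corners = []
    for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no intersection point
        x = (b * c2 - a2 * d) / det
        y = (a1 * d - b * c1) / det
        if all(p * x + q * y <= r + 1e-9 for p, q, r in cons):
            corners.append((x, y))       # keep only feasible corner points
    best = min(corners, key=lambda p: obj[0] * p[0] + obj[1] * p[1])
    return best, obj[0] * best[0] + obj[1] * best[1]

# Minimize 2x + 3y subject to x + y >= 4, x >= 1, y >= 0
# (each >= constraint rewritten as -a1*x - a2*y <= -b):
cons = [(-1, -1, -4), (-1, 0, -1), (0, -1, 0)]
point, value = graphical_minimize((2, 3), cons)  # → (4.0, 0.0), value 8.0
```

This only works for two decision variables, which is exactly the limitation of the graphical method itself; for more variables the simplex method of the next topics is needed.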
Topic No 6:
Simplex Method:
The simplex method is a widely used algorithm for solving linear programming
(LP) problems. Developed by George Dantzig in the late 1940s, the simplex
method efficiently finds the optimal solution to LP problems by moving from
one feasible solution to another along the edges of the feasible region until it
reaches the optimal solution.
Key Components of the Simplex Method:
1. Feasible Region:
The feasible region is the set of all feasible solutions that satisfy the
constraints of the LP problem. It is defined by the intersection of the
constraint inequalities.
2. Vertices:
The vertices of the feasible region are the extreme points where the
constraints intersect. These vertices correspond to basic feasible solutions,
where a subset of variables takes on non-zero values, while the remaining
variables are set to zero.
3. Objective Function:
The objective function is the linear function to be optimized, either maximized
or minimized. The goal is to find the values of the decision variables that
optimize the objective function while satisfying all constraints.
4. Pivot:
In each iteration of the simplex method, a pivot operation is performed to
move from one basic feasible solution to another along the edges of the
feasible region. The pivot operation involves selecting a pivot element and
using elementary row operations to transform the current tableau (matrix
representation of the LP problem).
5. Optimality Test:
At each iteration, the optimality of the current solution is checked by
examining the coefficients of the objective function in the tableau. If all
coefficients in the objective row are non-negative (for maximization problems, under the tableau convention used here) or non-positive (for minimization problems), the current solution is optimal, and the algorithm terminates.
Steps Involved in the Simplex Method:
1. Initialization:
Convert the LP problem into standard form, initialize the tableau, and identify
the initial basic feasible solution.
2. Select Entering Variable:


Choose the entering variable (column) with the most negative coefficient in
the objective function row.
3. Select Exiting Variable:
Determine the exiting variable (row) by selecting the pivot row using the ratio
test.
4. Pivot Operation:
Perform the pivot operation to make the pivot element equal to 1 and all other
elements in the pivot column equal to 0.
5. Update Tableau:
Update the tableau by applying the pivot operation and adjusting the
coefficients accordingly.
6. Iterate:
Repeat steps 2-5 until an optimal solution is reached.
7. Read Solution:
Once the optimal solution is found, read the values of the decision variables
from the tableau.
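The steps above can be sketched as a compact tableau-based implementation for problems of the form maximize c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slack variables give the initial basic feasible solution). This is an illustrative sketch, not production code, and the function name is ours:

```python
def simplex_maximize(c, A, b):
    """Maximize c·x subject to A·x <= b, x >= 0, assuming every b[i] >= 0
    so the slack variables form the initial basic feasible solution."""
    m, n = len(A), len(c)
    # Constraint rows: [A | I | b]; objective row: [-c | 0 | 0].
    tab = [list(map(float, A[i])) +
           [1.0 if j == i else 0.0 for j in range(m)] +
           [float(b[i])] for i in range(m)]
    tab.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = [n + i for i in range(m)]  # indices of the basic variables
    while True:
        # Entering variable: most negative coefficient in the objective row.
        col = min(range(n + m), key=lambda j: tab[m][j])
        if tab[m][col] >= -1e-9:
            break  # no negative coefficients left: current solution optimal
        # Leaving variable: minimum-ratio test over positive column entries.
        ratios = [(tab[i][-1] / tab[i][col], i)
                  for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, row = min(ratios)
        # Pivot: normalize the pivot row, then clear the rest of the column.
        piv = tab[row][col]
        tab[row] = [v / piv for v in tab[row]]
        for i in range(m + 1):
            if i != row:
                f = tab[i][col]
                tab[i] = [v - f * w for v, w in zip(tab[i], tab[row])]
        basis[row] = col
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tab[i][-1]
    return x, tab[m][-1]

# Example: maximize 3x + 2y s.t. x + y <= 4, 2x + y <= 5, x, y >= 0.
x_opt, z_opt = simplex_maximize([3, 2], [[1, 1], [2, 1]], [4, 5])
# → x_opt == [1.0, 3.0], z_opt == 9.0
```

Problems with ≥ or = constraints need artificial variables (Big-M or two-phase) before this slack-basis start applies.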
Formulating the Simplex Model:
Formulating a simplex model involves defining the decision variables, the
objective function, and the constraints in a standard form that the simplex
algorithm can operate on. Here's a step-by-step guide to formulating a
simplex model:
Step 1: Define Decision Variables:
Identify the decision variables that represent the quantities to be determined
or optimized in the problem. Assign symbols to these variables, typically
denoted by x1,x2,…, xn .
Step 2: Formulate the Objective Function:
Define the objective function to be maximized or minimized. The objective
function is typically a linear combination of the decision variables, where
each variable is multiplied by a coefficient representing its contribution to the
objective. The general form of the objective function is:
Maximize z=c1x1+c2x2+…+cn xn
or
Minimize z=c1x1+c2x2+…+cnxn
where c1,c2,…,cn are the coefficients of the objective function.
Step 3: Formulate the Constraints:


Identify the constraints that limit the values of the decision variables.
Constraints are typically linear inequalities or equalities involving the
decision variables. Write down each constraint in standard form, where the
left-hand side is a linear expression of the decision variables, and the right-
hand side is a constant. For example:
a11x1 + a12x2+ … +a1n xn ≤ b1
a21x1 + a22x2+ … +a2n xn ≤ b2
.
.
.
am1x1 + am2x2 + … + amnxn ≤ bm
where aij are the coefficients of the constraints,
b1,b2,…,bm are the constants on the right-hand side, and m is the number
of constraints.
Step 4: Convert to Standard Form:
Convert the LP problem into standard form by ensuring that all constraints are
of the form ≤, ≥, or = and all decision variables are non-negative (i.e., xi ≥ 0).
Step 5: Set Up the Initial Simplex Tableau:
Construct the initial simplex tableau by organizing the coefficients of the
decision variables and the constraints into a matrix form. Include additional
columns for the slack or surplus variables and the right-hand side constants.
Once the simplex model is formulated in standard form and the initial simplex
tableau is set up, it is ready to be solved using the simplex algorithm to find
the optimal solution to the linear programming problem.
Topic No 7:
Linear Programming – Simplex method for Maximizing.
Linear programming is a mathematical method used to determine the best
possible outcome in a given mathematical model for a set of constraints
represented by linear equations. The simplex method is one of the primary
techniques used to solve linear programming problems. It's an iterative
algorithm that moves from one feasible solution to another, aiming to improve
the objective function value at each step until reaching an optimal solution.
To maximize a linear objective function using the simplex method, you
typically follow these steps:
1. Formulate the linear programming problem:
Write down the objective function to be maximized and the set of linear
constraints that the solution must satisfy. These constraints are typically
represented as a system of linear inequalities or equations.
2. Convert inequalities to equations:
If any constraints are in the form of inequalities (≤ or ≥), convert them to
equations. This might involve adding slack or surplus variables.
3. Set up the initial simplex tableau:
Convert the equations into a tableau, which is a matrix representation of the
problem. Include the objective function and the coefficients of the decision
variables.
4. Select a pivot column:
Identify the most negative coefficient in the bottom row (the objective
function row) of the tableau. This column will be the pivot column.
5. Select a pivot row:
Determine the pivot row by finding the minimum ratio of the right-hand side
value to the corresponding coefficient in the pivot column.
6. Perform pivot operation:
Use the pivot element (intersection of the pivot row and pivot column) to
create zeros in the pivot column, making the pivot element one. Adjust the
tableau accordingly.
7. Repeat:
Continue iterating through steps 4-6 until there are no negative values in the
objective function row, indicating the optimal solution has been reached.
Topic No 8:
Simplex maximization example with similar limitations
Let's consider a simple example to illustrate the simplex method for
maximizing a linear objective function subject to linear constraints.
Problem Statement:
Maximize: Z=3x+2y
Subject to:
1. x+y ≤ 4
2. 2x+y ≤ 5
3. x,y ≥ 0
Solution:
1. Convert inequalities to equations (if necessary):
Both constraints are inequalities, so introduce slack variables s1 and s2 to obtain the equations x + y + s1 = 4 and 2x + y + s2 = 5.
2. Set up the initial simplex tableau:

Equation x y s1 s2 RHS
1. x + y + s1 1 1 1 0 4
2. 2x + y + s2 2 1 0 1 5
Z. -3x - 2y -3 -2 0 0 0

Here, s1 and s2 are slack variables introduced to convert the inequalities into equations. The coefficients of the decision variables (x and y) and slack variables (s1 and s2) are arranged in the tableau along with the right-hand side (RHS) values of the constraints.
3. Select a pivot column:
The most negative coefficient in the bottom row (the objective function row) is -3, corresponding to the variable x. So, the pivot column is x.
4. Select a pivot row:
Calculate the ratio of the RHS to the coefficient of x for each constraint:
• For the first constraint: 4/1 = 4
• For the second constraint: 5/2 = 2.5
The minimum ratio is 2.5, corresponding to the second constraint. So, the pivot row is the row associated with the second constraint.
5. Perform pivot operation:
Divide the pivot row by the pivot element (2) and use it to create zeros elsewhere in the pivot column:

Equation x y s1 s2 RHS
1. x + y + s1 0 1/2 1 -1/2 3/2
2. 2x + y + s2 1 1/2 0 1/2 5/2
Z. -3x - 2y 0 -1/2 0 3/2 15/2

6. Repeat:
The objective row still contains a negative coefficient (-1/2 under y), so pivot again, this time on the y column. The ratios are (3/2)/(1/2) = 3 and (5/2)/(1/2) = 5, so the first row is the pivot row. After this pivot, no negative coefficients remain in the objective row.
7. Interpret the solution:
At the final tableau, the values of the decision variables are:
• x = 1
• y = 3
The maximum value of the objective function Z is Z = 3(1) + 2(3) = 3 + 6 = 9.
So, the optimal solution is x = 1, y = 3 with a maximum value of Z = 9.
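As a quick numerical check, the objective can be evaluated at every corner point of this feasible region (the point (1, 3) is where x + y = 4 and 2x + y = 5 intersect):

```python
# Corner points of the region x + y <= 4, 2x + y <= 5, x >= 0, y >= 0.
corners = [(0.0, 0.0), (2.5, 0.0), (0.0, 4.0), (1.0, 3.0)]

def feasible(x, y):
    """Check all four constraints of the example problem."""
    return (x >= 0 and y >= 0
            and x + y <= 4 + 1e-9
            and 2 * x + y <= 5 + 1e-9)

best = max((p for p in corners if feasible(*p)),
           key=lambda p: 3 * p[0] + 2 * p[1])
z_max = 3 * best[0] + 2 * best[1]
# Note that (3, 1) is not feasible here, since 2*3 + 1 = 7 > 5.
```

Enumerating corners like this is only practical for tiny problems, but it is a useful way to verify a simplex calculation by hand.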
Topic No 9:
Mixed Limitations Examples containing mixed constraints:
Let's consider an example with mixed constraints, including both equality and inequality constraints.
Problem Statement:
Maximize: Z = 5x + 3y

Subject to:
1. x + 2y ≤ 10
2. 2x + y ≥8
3. x - y = 2
4. x, y ≥ 0

Solution:
1. Convert inequalities to equations (if necessary):
Add a slack variable s1 to the first constraint and subtract a surplus variable s2 from the second. The ≥ constraint and the equality constraint provide no natural starting basic variable, so artificial variables are also introduced and handled with the Big-M (or two-phase) method.
2. Set up the initial simplex tableau:

Equation x y s1 s2 a1 a2 RHS
1. x + 2y + s1 1 2 1 0 0 0 10
2. 2x + y - s2 + a1 2 1 0 -1 1 0 8
3. x - y + a2 1 -1 0 0 0 1 2
Z. -5x - 3y -5 -3 0 0 M M 0

Here, s1 is a slack variable for the ≤ constraint, s2 is a surplus variable for the ≥ constraint, and a1 and a2 are artificial variables for the ≥ and = constraints. In the Big-M method each artificial variable carries a large penalty coefficient M in the objective row so that the algorithm drives it out of the basis.
3. Select a pivot column:
The most negative coefficient in the bottom row (the objective function row)
is -5, corresponding to the variable x. So, the pivot column is x.
4. Select a pivot row:
Calculate the ratio of the RHS to the (positive) coefficient of x for each constraint:
• For the first constraint: 10/1 = 10
• For the second constraint: 8/2 = 4
• For the third constraint: 2/1 = 2
The minimum ratio is 2, corresponding to the third constraint. So, the pivot row is the row associated with the equality constraint x - y = 2.
5. Perform pivot operation:
Use the pivot element (1 in this case) to create zeros elsewhere in the x column, and update the tableau accordingly, continuing the iterations until the artificial variables leave the basis.
6. Repeat:
Repeat steps 3-5 until there are no negative values in the objective function
row.
7. Interpret the solution:
At the final tableau, the binding constraints are x + 2y = 10 and x - y = 2, giving the values of the decision variables:
• x = 14/3
• y = 8/3
The maximum value of the objective function Z is Z = 5(14/3) + 3(8/3) = 70/3 + 24/3 = 94/3 ≈ 31.33.
So, the optimal solution is x = 14/3, y = 8/3 with a maximum value of Z = 94/3.
Topic No 10:
Minimizing example with similar limitations
Problem Statement:
Minimize: Z = 5x + 3y
Subject to:
1. x + 2y ≤ 10
2. 2x + y ≥ 8
3. x - y = 2
4. x, y ≥ 0

Solution:
1. Convert inequalities to equations (if necessary):
As in the maximization example, add a slack variable s1 to the first constraint, subtract a surplus variable s2 from the second, and introduce artificial variables a1 and a2 for the ≥ and = constraints, penalized with the Big-M (or two-phase) method.
2. Set up the initial simplex tableau:

Equation x y s1 s2 a1 a2 RHS
1. x + 2y + s1 1 2 1 0 0 0 10
2. 2x + y - s2 + a1 2 1 0 -1 1 0 8
3. x - y + a2 1 -1 0 0 0 1 2
Z. 5x + 3y 5 3 0 0 M M 0

3. Iterate:
For a minimization problem, select the pivot column with the most positive coefficient in the objective row, choose the pivot row by the usual minimum-ratio test, and pivot. Repeat until no positive coefficients remain in the objective row and the artificial variables have left the basis.
4. Interpret the solution:
At the final tableau, the binding constraints are 2x + y = 8 and x - y = 2, giving the values of the decision variables:
• x = 10/3
• y = 4/3
The minimum value of the objective function Z is Z = 5(10/3) + 3(4/3) = 50/3 + 12/3 = 62/3 ≈ 20.67.
So, the optimal solution is x = 10/3, y = 4/3 with a minimum value of Z = 62/3.
Topic No 11:
Sensitivity Analysis:
Sensitivity analysis is a technique used in linear programming to assess how
changes in the coefficients of the objective function or the constraints affect
the optimal solution. It helps decision-makers understand the robustness of
the solution and provides insights into the impact of parameter variations.
Here's how sensitivity analysis can be conducted in linear programming:
1. Objective Function Coefficients:
• If the coefficients of the objective function change, you can analyze
how the optimal solution and the optimal value of the objective
function are affected.
• In a maximization problem, if the coefficient of a decision variable
that is positive in the optimal solution increases, the optimal value
of the objective function increases, and vice versa.
• The shadow price (also known as dual price or marginal value)
associated with each constraint indicates how much the optimal value
of the objective function would increase if the right-hand side of the
constraint were increased by one unit. Shadow prices provide
information on the value of relaxing or tightening constraints.
2. Right-Hand Side (RHS) Values of Constraints:
• If the RHS values of the constraints change, you can analyze how the
optimal solution and the optimal value of the objective function are
affected.

• For each constraint, you can perform sensitivity analysis to determine
the allowable range (or interval) of the RHS values within which the
current optimal solution remains feasible and optimal. This range is
called the allowable increase or decrease for that constraint.
• The range of values for the shadow prices can also indicate the
sensitivity of the solution to changes in the RHS values.
3. Range of Optimality:
• This analysis determines how much the coefficients of the objective
function can change without changing the optimal solution. It helps in
understanding the stability of the solution concerning changes in the
objective function coefficients.
• It's typically conducted by introducing a small perturbation (change) to
each objective function coefficient and observing whether the optimal
solution remains unchanged within a certain range.
4. Degeneracy and Multiple Optimal Solutions:
• Sensitivity analysis can also help identify situations of degeneracy,
where multiple basic feasible solutions correspond to the same
optimal value of the objective function.
• By exploring the shadow prices associated with constraints, one can
identify redundant or nonbinding constraints and assess the impact of
relaxing these constraints.
When conducting sensitivity analysis in linear programming, changes in the
objective function coefficients and changes in the right-hand side (RHS)
values of constraints can have significant impacts on the optimal solution.
Let's explore each of these scenarios in detail:

Topic No 12:
Changes in Objective Function Coefficients:
Suppose we have the following linear programming problem:
Maximize: Z = c1x1 + c2x2 + … + cnxn

Subject to:
a11x1 + a12x2 + … + a1nxn ≤ b1
a21x1 + a22x2 + … + a2nxn ≤ b2
⋮
am1x1 + am2x2 + … + amnxn ≤ bm

To analyze the effect of changes in the objective function coefficients:
• Increase or decrease each objective function coefficient ci by a
small amount and re-solve the linear programming problem.
• Observe the change in the optimal solution (if any) and the optimal
objective function value.
• If the changed coefficient belongs to a basic (nonzero) variable, the
optimal objective value changes in proportion to that variable's value;
if it belongs to a non-basic variable, the current solution remains
optimal as long as the change stays within that variable's reduced-cost
range.
Changes in RHS Values of Constraints:
To analyze the effect of changes in the RHS values of constraints:
• Increase or decrease the RHS value of each constraint bi by a small
amount and re-solve the linear programming problem.
• Observe how the optimal solution (if any) and the optimal objective
function value change.
• The range within which the RHS value of a constraint can vary without
changing the optimal solution is called the allowable increase or
decrease.
• The shadow price associated with each constraint indicates how
much the optimal objective function value would increase if the RHS
of that constraint were increased by one unit (or decrease if decreased
by one unit).
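The effect of an RHS change can be seen numerically on a small LP. The sketch below is entirely illustrative (the problem data and the solve_lp helper are made up, not taken from the text): it solves a 2-variable LP by brute-force vertex enumeration, then re-solves with one RHS increased by one unit; the change in Z approximates that constraint's shadow price.

```python
# RHS sensitivity sketch for:  maximize Z = 3x + 2y
#   subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
from itertools import combinations

def solve_lp(c, A, b):
    """Maximize c·x over {x >= 0 : A x <= b} by checking every vertex
    (intersection of two constraint lines) for feasibility."""
    n = len(c)
    lines = [(row[:], rhs) for row, rhs in zip(A, b)]
    # Encode x_i >= 0 as the extra constraint -x_i <= 0.
    lines += [([-1.0 if j == i else 0.0 for j in range(n)], 0.0) for i in range(n)]
    best, best_x = None, None
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel lines: no unique intersection point
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if x < -1e-9 or y < -1e-9:
            continue  # violates non-negativity
        if any(row[0] * x + row[1] * y > rhs + 1e-9 for row, rhs in zip(A, b)):
            continue  # violates a structural constraint
        z = c[0] * x + c[1] * y
        if best is None or z > best:
            best, best_x = z, (x, y)
    return best, best_x

c = [3.0, 2.0]
A = [[1.0, 1.0], [1.0, 3.0]]
b = [4.0, 6.0]
z0, _ = solve_lp(c, A, b)
z1, _ = solve_lp(c, A, [b[0] + 1, b[1]])  # relax the first constraint by 1 unit
print(z0, z1 - z0)  # 12.0 and 3.0: the first constraint's shadow price is 3
```

The second constraint is nonbinding at the optimum (4, 0), so its shadow price would come out as 0, matching the discussion of redundant constraints above.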

Topic No 13:
What do you mean by Transportation Model? Basic Assumptions.
Transportation Model:
A Transportation Model is a mathematical optimization technique used to
determine the most cost-effective way to transport goods from multiple
sources (e.g., factories or warehouses) to multiple destinations (e.g., retailers
or customers). It helps in optimizing transportation routes, minimizing
transportation costs, and allocating available resources efficiently. The
transportation model is widely used in logistics, supply chain management,
distribution planning, and inventory control.
Basic Assumptions of Transportation Model:
1. Fixed Supply and Demand:
The transportation model assumes that the total supply of goods from
sources and the total demand for goods at destinations are fixed and known
in advance.
2. Single Commodity:
The transportation model typically deals with the transportation of a single
homogeneous commodity. This simplifies the modeling process by focusing
on the transportation of one type of product.
3. Cost Proportionality:
The transportation cost per unit of goods transported between any pair of
source-destination locations is assumed to be constant and proportional to
the quantity of goods transported.
4. Linear Relationships:
The transportation model assumes linear relationships between the
quantities of goods transported, transportation costs, and other relevant
factors. This allows for the use of linear programming techniques to optimize
transportation routes and costs.
5. Non-Negative Supply and Demand:
The transportation model assumes that both the supply at sources and the
demand at destinations are non-negative. That is, there are no negative
supplies or demands.

Topic No 14:
Feasible Solution: The Northwest Method, the Lowest Cost
Method.
The Northwest Method:
This is one of the basic methods for finding an initial feasible solution to
transportation problems. It starts with the cell in the northwest corner of the
transportation table and allocates shipments as much as possible in a step-
by-step manner, moving either horizontally or vertically. It's called the
"northwest" method because it starts at the northwest corner of the
transportation matrix.
1. Initialization:
Start at the northwest corner (top-left) of the transportation matrix.
2. Allocation:
Allocate as much as possible to the cell identified in the northwest corner.
This allocation is based on either the supply available at that row or the
demand required at that column, whichever is smaller.
3. Updating:
After allocation, update the supply and demand values accordingly. Subtract
the allocated quantity from the respective supply and demand values. If
either supply or demand becomes zero, eliminate the corresponding row or
column from further consideration.
4. Move to Next Cell:
Move to the adjacent cell either horizontally or vertically, depending on
whether the supply or demand has been exhausted in the current row or
column. Repeat the allocation and updating steps until all supply and
demand requirements are met.
5. Termination:
The process terminates when all supply and demand requirements are
fulfilled.
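The five steps above can be sketched in a few lines of Python (the supply and demand figures are illustrative, not from the text):

```python
# Northwest Corner rule: start at the top-left cell, allocate as much
# as possible, then move right or down as supply/demand is exhausted.
def northwest_corner(supply, demand):
    """Return an allocation matrix built by the Northwest Corner rule."""
    supply, demand = supply[:], demand[:]          # work on copies
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])              # allocate as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:                         # row exhausted: move down
            i += 1
        else:                                      # column exhausted: move right
            j += 1
    return alloc

# Example: 2 sources (supply 20, 30) and 3 destinations (demand 10, 25, 15).
print(northwest_corner([20, 30], [10, 25, 15]))    # [[10, 10, 0], [0, 15, 15]]
```

Note that the rule never looks at transportation costs, which is why the resulting solution is feasible but usually not optimal.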

The Least Cost Method:
The Least Cost Method (also called the Lowest Cost Method) is another
approach to finding an initial feasible solution for transportation problems.
It works by identifying the cell with the lowest cost in the transportation
table and allocating as much as possible to that cell. This process is
repeated until all supplies and demands are fulfilled.
Here's a step-by-step explanation of how it works:
1. Initialization:
Start by identifying the cell with the lowest transportation cost in the
transportation matrix. This cell represents the lowest cost option for shipping
goods from a supplier to a destination.
2. Allocation:
Allocate as much as possible to the cell identified with the lowest cost. This
allocation is based on either the supply available at that row or the demand
required at that column, whichever is smaller.
3. Updating:
After allocation, update the supply and demand values accordingly. Subtract
the allocated quantity from the respective supply and demand values. If
either supply or demand becomes zero, eliminate the corresponding row or
column from further consideration.
4. Repetition:
Repeat steps 1 to 3 until all supply and demand requirements are met or until
no feasible allocation can be made.
5. Termination:
The process terminates when all supply and demand requirements are
fulfilled.
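The Least Cost steps can be sketched the same way (the cost matrix, supply, and demand are illustrative):

```python
# Least Cost method: visit cells in order of increasing unit cost and
# allocate as much as possible at each one.
def least_cost(cost, supply, demand):
    """Return an initial feasible allocation built by the Least Cost method."""
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    cells = sorted(
        (cost[i][j], i, j)
        for i in range(len(supply))
        for j in range(len(demand))
    )
    for _, i, j in cells:
        q = min(supply[i], demand[j])   # 0 if the row or column is exhausted
        if q:
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
    return alloc

cost = [[4, 2, 8],
        [5, 1, 9]]
print(least_cost(cost, [20, 30], [10, 25, 15]))   # [[10, 0, 10], [0, 25, 5]]
```

Because it favors cheap routes first, this method usually starts closer to the optimum than the Northwest Corner rule, at the cost of a sort over all cells.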

Topic No 15:
Optimal Solution: The Stepping Stone Method, the Modified Distribution (MODI) Method.
The Stepping Stone Method:
The Stepping Stone Method is an optimization technique used to find the
optimal solution to transportation problems. It is particularly useful for
improving upon an initial feasible solution obtained from methods like the
Northwest Corner Method or the Lowest Cost Method.
Here's how it works:
1. Initial Feasible Solution:
Start with an initial feasible solution where all supply and demand constraints
are satisfied. This solution can be obtained using methods like the Northwest
Corner Method or the Lowest Cost Method.
2. Identify Basic Variables:
In the initial solution, identify the basic variables, which are the allocated
cells (non-zero cells). These cells form the basis of the solution.
3. Find Closed Loops:
For each non-basic cell (empty cell), attempt to find a closed loop of basic
cells (allocated cells) that starts and ends at the non-basic cell. This loop is
formed by moving horizontally and vertically through the allocated cells,
never revisiting the same cell twice.
4. Calculate Improvement Potential:
Once a closed loop is found, calculate the improvement potential by
considering the change in cost associated with moving along the loop. This
change in cost is determined by the difference between the transportation
cost of entering the loop and leaving the loop.
5. Identify Improving Loops:
Identify the loop with the greatest improvement potential. This loop is referred
to as the improving loop.
6. Update Solution:
Increase the flow along the forward arcs (edges) of the improving loop and
decrease the flow along the reverse arcs by the minimum amount necessary
to maintain feasibility. This adjustment improves the solution and reduces the
total transportation cost.
7. Repeat:
Continue this process, identifying improving loops and updating the solution
until no further improvement can be made.

8. Optimality Test:
Once no further improvements are possible, perform an optimality test to
verify if the current solution is optimal. This test involves checking whether all
reduced costs (the difference between the actual cost and the shadow price
of a cell) are non-negative.
9. Termination:
If the optimality test confirms that the current solution is optimal, the process
terminates. Otherwise, return to step 3 and continue iterating until an optimal
solution is reached.
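The improvement potential of a single closed loop (step 4 above) is just the alternating sum of unit costs around the loop, starting with a plus sign at the empty cell. A small sketch with made-up costs and a made-up loop:

```python
# Improvement index of one closed loop in a transportation tableau.
# Corners alternate +, -, +, - starting at the empty (entering) cell.
cost = {(0, 1): 2, (0, 0): 4, (1, 0): 3, (1, 1): 6}   # unit costs (illustrative)
loop = [(0, 1), (0, 0), (1, 0), (1, 1)]               # empty cell listed first
index = sum(cost[c] * (1 if k % 2 == 0 else -1) for k, c in enumerate(loop))
print(index)  # 2 - 4 + 3 - 6 = -5: negative, so shifting flow lowers total cost
```

A negative index means shifting one unit of flow around the loop reduces the total transportation cost by that amount, which is exactly the signal the Stepping Stone Method looks for.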
Modified Distribution (MODI) Method:
The Modified Distribution Method, also known as the MODI method or the
(u, v) method, provides a minimum-cost solution to transportation problems.
The MODI method is an improvement over the Stepping Stone Method. This model
studies the minimization of the cost of transporting a commodity from a
number of sources to several destinations. The supply at each source and the
demand at each destination are known. The objectives are to develop and
review an integral transportation schedule that meets all demands from the
inventory at a minimum total transportation cost.
Here's a simplified overview of how the MODI method works:
1. Begin with an initial feasible solution, often obtained using the Northwest
Corner Rule, Least Cost Method, or Vogel's Approximation Method.
2. Calculate the opportunity costs for each non-basic variable (i.e., empty
cells in the transportation tableau). Opportunity cost represents the amount
by which the objective function value would change if one unit of the
corresponding variable is moved from its current route to another route with
a lower cost.
3. Select the cell with the most negative opportunity cost. This identifies
the cell that should be added to the basic feasible solution to improve
its optimality.
4. Trace a closed loop starting from the selected cell through the cells that are
currently in the basic feasible solution. This loop is formed by alternating
between cells that are currently in the solution and those that are not.
5. Once the loop is identified, determine the minimum value among the
allocations in the cells of the loop where flow will be reduced (the negative
corners). This quantity represents how much flow can be moved along the loop
without driving any allocation below zero.

6. Update the basic feasible solution by adjusting the flow along the loop
according to the minimum value found.
7. Recalculate the opportunity costs for all non-basic variables, and repeat
steps 3 to 6 until all opportunity costs are non-negative, indicating optimality.

Topic No 16:
The Assignment Model:
The Assignment Model is a mathematical framework used to solve
optimization problems where a set of tasks needs to be assigned to a set of
agents or resources in a manner that minimizes or maximizes a certain
objective function, such as cost, time, or distance. This model is
characterized by its focus on one-to-one assignments, where each task is
assigned to exactly one agent and each agent is assigned to exactly one task.
Key components of the Assignment Model include:
1. Tasks and Agents:
There are two sets of entities involved in the assignment problem: tasks and
agents. Tasks represent the items or activities that need to be assigned, while
agents represent the individuals or resources capable of performing those
tasks.
2. Objective Function:
The objective of the assignment problem is to optimize a certain criterion,
typically represented by a cost matrix or a benefit matrix. This criterion could
be the cost of completing the tasks, the time required to complete them, or
some other measure associated with the assignment.
3. Cost or Benefit Matrix:
The costs (or benefits) associated with assigning each task to each agent are
provided in a matrix format. Each cell (i, j) in the matrix contains the cost (or
benefit) of assigning task i to agent j. The matrix is usually square, with rows
representing tasks and columns representing agents.
4. Constraints:
In the basic assignment problem, the primary constraint is that each task
must be assigned to exactly one agent, and each agent must be assigned to
exactly one task. This constraint ensures that all tasks are completed and all
agents are utilized.
5. Optimization Techniques:
Various optimization techniques can be employed to solve assignment
problems, including the Hungarian algorithm, the auction algorithm, and

linear programming methods such as the simplex method. These techniques
systematically explore different task-agent assignments to find the optimal
assignment that minimizes (or maximizes) the objective function.
Basic assumptions of Assignment Model:
Here are the basic assumptions of the Assignment Model:
1. One-to-One Assignment:
Each task is assigned to exactly one agent, and each agent is assigned to
exactly one task. There are no unassigned tasks or agents.
2. Complete Assignment:
Every task must be assigned, and every agent must be assigned to a task. In
other words, there are no leftover tasks or agents.
3. Objective Function:
The objective is to minimize or maximize a certain criterion, such as cost,
time, distance, or some other measure associated with the assignment. This
criterion is typically represented by a cost matrix or a benefit matrix.
4. Cost Matrix:
The costs (or benefits) associated with assigning each task to each agent are
known and represented in a matrix format. The matrix is usually square, with
rows representing tasks and columns representing agents. Each cell (i, j) in
the matrix contains the cost (or benefit) of assigning task i to agent j.
5. Uniqueness of Assignments:
The costs are assumed to be such that there is a unique optimal assignment,
meaning there is only one assignment that minimizes (or maximizes) the total
cost (or benefit).
6. No Restrictions on Assignment:
There are no restrictions or constraints on the assignments, such as capacity
constraints or precedence constraints. Each task can be assigned to any
agent, subject only to minimizing the total cost (or maximizing the total
benefit).
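For small instances, the one-to-one assignment described above can be solved by brute force over all permutations of agents (the cost matrix here is illustrative):

```python
# Brute-force assignment: try every permutation of agents and keep the
# cheapest. Only viable for small n (n! permutations), which is why
# algorithms like the Hungarian Method exist.
from itertools import permutations

def best_assignment(cost):
    """Return (min_total_cost, assignment), where assignment[i] is the
    agent given task i."""
    n = len(cost)
    return min(
        (sum(cost[i][p[i]] for i in range(n)), p)
        for p in permutations(range(n))
    )

cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
total, assign = best_assignment(cost)
print(total, assign)   # 9 (1, 0, 2): task 0 -> agent 1, 1 -> 0, 2 -> 2
```

The factorial blow-up of this approach motivates the polynomial-time Hungarian Method covered in Topic 18.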

Topic No 17:
Solution Methods: Different Combination Methods in
operation Research
In operations research, combination methods are often used to solve
optimization problems where the goal is to find the best combination of
decision variables that optimize a certain objective function, subject to
constraints. Here are some common combination methods used in
operations research:
1. Linear Programming (LP):
LP is a method used to maximize or minimize a linear objective function
subject to linear equality and inequality constraints. It's widely used in
various industries for resource allocation, production planning,
transportation, and more.
2. Integer Programming (IP):
IP extends linear programming by adding the requirement that some or all of
the variables must be integers. This is useful when decision variables
represent quantities that must be whole numbers, such as the number of
units produced or the number of machines used.
3. Mixed Integer Programming (MIP):
MIP is a generalization of both LP and IP, where some variables are restricted
to be integers while others can take any real value. This is useful when a
problem involves both discrete and continuous decision variables.
4. Dynamic Programming (DP):
DP is a method for solving complex problems by breaking them down into
simpler subproblems and solving each subproblem only once. It's
particularly useful for problems with overlapping subproblems and optimal
substructure, such as the knapsack problem or the traveling salesman
problem.
5. Genetic Algorithms (GA):
GAs are optimization algorithms inspired by the process of natural selection
and genetics. They work by evolving a population of candidate solutions over
multiple generations, with selection, crossover, and mutation operators
mimicking the processes of selection, reproduction, and genetic variation.
6. Simulated Annealing (SA):
SA is a probabilistic optimization technique inspired by the annealing process
in metallurgy. It starts with an initial solution and iteratively explores the

solution space by allowing "worse" solutions with a decreasing probability,
simulating the annealing process of cooling a material.
7. Tabu Search (TS):
TS is a local search method that aims to explore the solution space by
iteratively moving from one solution to a neighboring solution, while avoiding
revisiting previously visited solutions (tabu list). It's effective for finding near-
optimal solutions in large solution spaces.
8. Constraint Programming (CP):
CP is a declarative programming paradigm for modeling and solving
combinatorial problems with constraints. It allows expressing complex
constraints and relationships between variables, making it suitable for
problems with intricate constraints.
9. Heuristic Methods:
Heuristic methods are problem-solving strategies that prioritize finding good
solutions quickly over guaranteeing optimality. These methods include
techniques like greedy algorithms, constructive algorithms, and local search
algorithms.
Topic No 18:
Short-cut Method (Hungarian Method).
The Hungarian Method is a combinatorial optimization algorithm used to
solve the assignment problem in operations research. The assignment
problem involves assigning a set of tasks to a set of agents in such a way that
the total cost or time is minimized or total profit is maximized. The Hungarian
Method is particularly efficient for solving assignment problems with square
cost matrices (equal number of rows and columns).
Here's a simplified overview of the Hungarian Method:
1. Step 1: Reduce the Matrix:
• Subtract the smallest element in each row from all the elements in that
row. Then, do the same for each column. This step ensures that at least
one zero is present in each row and each column.
2. Step 2: Find Minimum Number of Lines to Cover All Zeros:
• Cover all the zeros in the matrix by drawing horizontal and vertical
lines through rows and columns, using the minimum possible number of
lines.
3. Step 3: Test for Optimality:
• If the number of lines drawn is equal to the matrix size, an optimal
assignment has been found. If not, proceed to the next step.

4. Step 4: Find Minimum Uncovered Element:
• Find the smallest uncovered element in the matrix. Subtract this value
from all uncovered elements, and add it to all elements covered by two
lines. Then, go back to Step 2.
5. Step 5: Repeat:
• Repeat Steps 2-4 until an optimal assignment is found.
6. Step 6: Assign Tasks:
• Once an optimal assignment is found, assign tasks based on the
positions of the zeros in the matrix.
The Hungarian Method guarantees finding the optimal solution for the
assignment problem with a time complexity of O(n^3), where n is the number
of rows or columns in the cost matrix. This makes it particularly efficient for
small to medium-sized problems but might become computationally
expensive for very large problems.
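In practice, the steps above rarely need to be hand-coded; SciPy ships an implementation of this minimum-cost assignment solver. A short sketch, assuming NumPy and SciPy are installed (the cost matrix is illustrative):

```python
# Solving the assignment problem with SciPy's built-in solver, which
# implements a Hungarian-style algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[9, 2, 7],
                 [6, 4, 3],
                 [5, 8, 1]])
rows, cols = linear_sum_assignment(cost)   # minimizes total assignment cost
print(cost[rows, cols].sum())              # 2 + 6 + 1 = 9
```

For maximization problems, the same call accepts `maximize=True`, mirroring the profit-maximizing variant mentioned above.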

Topic No 19:
Shortest Route Algorithms: Dijkstra's Algorithm and Floyd's Algorithm
1. Dijkstra's Algorithm:
Dijkstra's algorithm is a greedy algorithm used to find the shortest path from
a source node to all other nodes in a weighted graph with non-negative edge
weights. It works as follows:
1.1 Initialize the distance of the source node to 0 and all other nodes to
infinity.
1.2 Create a priority queue to store nodes with their distances from the
source node.
1.3 Repeat until the priority queue is empty:
• Extract the node with the minimum distance from the priority queue.
• Update the distances of its neighbors by considering the edge weights
and the distance to the current node. If the new distance is shorter
than the current distance, update it.
1.4 After processing all nodes, the distances stored for each node represent
the shortest paths from the source node.
• Dijkstra's algorithm is efficient for finding the shortest paths in graphs
with non-negative edge weights. However, it does not work for graphs
with negative edge weights or cycles.
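The steps above can be sketched compactly using Python's heapq module as the priority queue (the graph is illustrative):

```python
# Dijkstra's algorithm with a binary heap as the priority queue.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}; returns shortest distances
    from source to every node."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    pq = [(0, source)]                      # (distance, node) priority queue
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                        # stale entry: already improved
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

The heap makes the extract-minimum step in 1.3 efficient, giving O((V + E) log V) overall rather than scanning all nodes each iteration.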
2. Floyd's Algorithm:
Floyd's algorithm, also known as Floyd-Warshall algorithm, is a dynamic
programming algorithm used to find the shortest paths between all pairs of
nodes in a weighted graph, including graphs with negative edge weights (as
long as there are no negative cycles). It works as follows:
2.1 Initialize a distance matrix where each element represents the shortest
distance between each pair of nodes.
2.2 For each intermediate node k from 1 to n:
• Update the distance matrix by considering whether the path from node
i to node j through node k is shorter than the current distance from i to
j.
2.3 After processing all intermediate nodes, the distance matrix contains
the shortest paths between all pairs of nodes.

• Floyd's algorithm is efficient for finding shortest paths in dense graphs
or graphs where the number of nodes is relatively small, as it has a time
complexity of O(n^3), where n is the number of nodes.
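The triple loop described in steps 2.1-2.3 can be sketched directly on an adjacency matrix (the matrix is illustrative; INF marks a missing edge):

```python
# Floyd-Warshall: all-pairs shortest paths via dynamic programming.
INF = float("inf")

def floyd_warshall(dist):
    """dist[i][j]: edge weight (INF if absent, 0 on the diagonal).
    Returns the all-pairs shortest-distance matrix."""
    n = len(dist)
    d = [row[:] for row in dist]            # don't mutate the input
    for k in range(n):                      # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

dist = [[0, 3, INF],
        [INF, 0, 1],
        [2, INF, 0]]
print(floyd_warshall(dist))   # e.g. shortest 0 -> 2 becomes 3 + 1 = 4
```

Each pass of the outer loop answers the question in step 2.2: is the route from i to j through intermediate node k shorter than the best route found so far?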
In summary, Dijkstra's algorithm is suitable for finding the shortest path
from a single source to all other nodes in a graph with non-negative edge
weights, while Floyd's algorithm is suitable for finding the shortest paths
between all pairs of nodes in a graph, including graphs with negative edge
weights.

Topic No 20:
Difference Between Transportation Model and Assignment Model:

The transportation model and assignment model are both types of
optimization models used in operations research, but they serve different
purposes and are applied to different types of problems. Here's a comparison
between the two:
1. Transportation Model:
The transportation model is used to determine the optimal way to
transport goods from multiple sources to multiple destinations while
minimizing transportation costs or maximizing profit. It is typically
applied in scenarios such as distribution, logistics, supply chain
management, and transportation planning.
• In the transportation model, the objective is to minimize the total
transportation cost by deciding how much to transport from each
source to each destination, subject to constraints such as supply and
demand at each source and destination, capacity constraints on
transportation routes, and non-negativity constraints.
• The decision variables in the transportation model represent the
amount of goods transported from each source to each destination,
and the objective function represents the total transportation cost.
• Common solution methods for the transportation model include the
transportation simplex method and the North-West Corner method.
2. Assignment Model:
• The assignment model is used to determine the optimal assignment of
a set of tasks or resources to a set of agents or jobs in such a way that
the overall cost or time is minimized or overall profit is maximized. It is
commonly applied in personnel assignment, project management, job
scheduling, and machine assignment problems.
• In the assignment model, the objective is to minimize the total
assignment cost by deciding which agent is assigned to each task or
which job is assigned to each resource, subject to constraints such as
the availability of agents and tasks, capacity constraints on agents or
tasks, and exclusivity constraints.
• The decision variables in the assignment model represent the
assignment of agents to tasks, and the objective function represents
the total assignment cost.
• Common solution methods for the assignment model include the
Hungarian algorithm, the shortest path algorithm, and linear
programming techniques.
