Operations Research by AD
Topic No 1:
Introduction to Operations Research:
Operations Research (OR) is a field of study that uses mathematical
modelling, statistical analysis, and optimization techniques to aid decision-
making and problem-solving in complex systems. It originated during World
War II when military planners faced complex logistical problems. Since then,
it has expanded to various sectors including business, healthcare,
transportation, manufacturing, and finance.
Key components of Operations Research include:
1. Mathematical Modeling:
OR involves translating real-world problems into mathematical
representations. This often involves defining decision variables, constraints,
and objectives.
2. Optimization Techniques:
OR employs optimization algorithms to find the best possible solution given
a set of constraints. These techniques include linear programming, integer
programming, dynamic programming, and nonlinear programming.
3. Simulation:
Simulation involves creating computer models to mimic real-world systems.
This allows researchers to study the behavior of complex systems under
different conditions and make predictions about their performance.
4. Probability and Statistics:
OR utilizes probabilistic models and statistical analysis to deal with
uncertainty and variability in real-world systems. This includes techniques
such as queuing theory, inventory modeling, and statistical forecasting.
5. Decision Analysis:
OR provides frameworks for making decisions in situations involving multiple
objectives, uncertainties, and trade-offs. Decision analysis techniques help
decision-makers choose the best course of action by considering various
possible outcomes and their associated risks.
Features of Operations Research:
1. Quantitative Analysis:
OR employs mathematical models and statistical techniques to analyze and
solve problems. This ensures that decisions are based on rigorous analysis
and empirical evidence rather than intuition or guesswork.
2. Interdisciplinary Approach:
OR draws upon principles from mathematics, statistics, computer science,
economics, engineering, and other disciplines. This interdisciplinary nature
allows it to address a diverse array of problems across different industries
and sectors.
3. Optimization:
A central feature of OR is optimization, which involves finding the best
possible solution given constraints and objectives. This may include
maximizing profits, minimizing costs, optimizing resource allocation, or
achieving other desired outcomes.
4. Decision Support:
OR provides decision-makers with tools and methodologies to evaluate
alternative courses of action and make informed decisions. This includes
techniques such as decision analysis, simulation, and probabilistic
modeling.
5. Risk Management:
OR includes techniques for managing risk and uncertainty in decision-
making. This may involve probabilistic modeling, scenario analysis, or
optimization under uncertainty to mitigate potential risks and ensure
robustness in decision-making.
Scope of Operations Research:
1. Logistics and Supply Chain Management:
OR is widely used in logistics and supply chain management to optimize
transportation, inventory management, warehousing, and distribution
processes. This helps companies reduce costs, improve efficiency, and
enhance customer service.
2. Manufacturing and Production:
OR techniques are applied in manufacturing and production to optimize
production schedules, resource allocation, and facility layout. This leads to
increased productivity, reduced lead times, and improved resource
utilization.
3. Healthcare:
OR is used in healthcare to optimize patient flow, hospital scheduling, staff
rostering, and resource allocation. This helps healthcare providers improve
the quality of care, reduce waiting times, and manage resources more
effectively.
4. Finance:
OR techniques are employed in finance for portfolio optimization, risk
management, asset allocation, and algorithmic trading. This enables
investors to make better-informed decisions, manage risk more effectively,
and optimize their investment strategies.
5. Transportation and Logistics:
OR plays a crucial role in optimizing transportation networks, route planning,
vehicle scheduling, and traffic management. This helps transportation
companies improve efficiency, reduce costs, and minimize environmental
impact.
Importance of Operations Research:
1. Efficiency Improvement:
OR helps organizations optimize processes, allocate resources efficiently,
and streamline operations. This leads to cost savings, increased productivity,
and improved performance.
2. Better Decision-Making:
OR provides decision-makers with quantitative tools and techniques to
evaluate alternatives, assess risks, and make informed decisions. This
reduces uncertainty and improves the quality of decision-making.
3. Competitive Advantage:
Organizations that effectively utilize OR techniques gain a competitive
advantage by improving efficiency, reducing costs, and delivering superior
products or services to customers.
4. Resource Optimization:
OR enables organizations to optimize the use of resources, whether it's
manpower, materials, equipment, or financial assets. This leads to improved
resource utilization and better overall performance.
5. Innovation and Problem-Solving:
OR encourages innovation and creativity in problem-solving by providing
systematic approaches to tackle complex problems and identify optimal
solutions.
Phases of Operations Research:
1. Formulate the problem:
This is the most important phase; it is generally lengthy and time-consuming.
The activities that constitute this step are visits, observations, research,
etc. With the help of such activities, the O.R. scientist gains enough
understanding of the problem to define its objectives, decision variables,
and constraints.
Topic No 2:
Introduction to Foundation Mathematics and Statistics:
Foundation mathematics and statistics form the backbone of Operations
Research (OR), providing the essential tools and techniques for modeling,
analyzing, and solving complex problems. In this introduction, we'll explore
the key concepts in foundation mathematics and statistics that are
fundamental to OR.
Mathematical Foundations:
1. Linear Algebra:
Linear algebra is indispensable in OR for representing and manipulating
systems of linear equations. Matrices and vectors are used to model decision
variables, constraints, and objective functions in optimization problems.
Techniques such as matrix operations, determinants, and eigenvalues play a
crucial role in solving linear programming and other optimization problems.
2. Calculus:
Calculus provides the mathematical machinery for optimization,
differentiation, and integration. In OR, calculus is used to find optimal
solutions by analyzing the behavior of objective functions and constraints.
Techniques such as derivatives and gradients are employed in optimization
algorithms to find local or global optima.
3. Probability Theory:
Probability theory is essential for dealing with uncertainty and randomness in
OR. Probability distributions, such as the normal distribution and the Poisson
distribution, are used to model random variables and uncertainties in
decision-making processes. Probabilistic models and statistical techniques
are employed to analyze uncertain outcomes and make probabilistic
forecasts.
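As a small illustration of how such distributions are used (the numbers here are hypothetical, not from the notes), the Poisson probability mass function can be evaluated directly:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson random variable with rate lam."""
    return (lam ** k) * math.exp(-lam) / math.factorial(k)

# If customers arrive at an average rate of 3 per hour, the probability
# of observing exactly 5 arrivals in a given hour is:
p = poisson_pmf(5, 3.0)
```

Probabilities like this feed directly into the queuing and inventory models mentioned above.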
Statistical Foundations:
• Descriptive Statistics:
Descriptive statistics are used to summarize and describe the characteristics
of data sets. Measures such as mean, median, variance, and standard
deviation provide insights into the central tendency, variability, and
distribution of data. Descriptive statistics are essential for understanding the
properties of data and identifying patterns or trends.
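The measures above can be computed with Python's standard statistics module; the data set below is hypothetical:

```python
import statistics

data = [12, 15, 11, 18, 15, 14, 20, 13]   # e.g. daily demand observations

mean = statistics.mean(data)          # central tendency
median = statistics.median(data)      # middle value of the sorted data
variance = statistics.variance(data)  # sample variance
stdev = statistics.stdev(data)        # sample standard deviation
```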
• Inferential Statistics:
Inferential statistics are used to draw conclusions or make inferences about
populations based on sample data. Techniques such as hypothesis testing,
confidence intervals, and regression analysis are employed to make
predictions, test hypotheses, and estimate parameters. Inferential statistics
play a crucial role in analyzing data, drawing conclusions, and making
decisions in OR.
• Probability Distributions:
Probability distributions describe the likelihood of different outcomes in a
random process. Common probability distributions used in OR include the
normal distribution, the binomial distribution, the Poisson distribution, and
the exponential distribution. These distributions are used to model random
variables and uncertainties in decision-making processes.
• Statistical Modeling:
Statistical modeling involves developing mathematical models that describe
the relationship between variables in a system. Techniques such as
regression analysis, time series analysis, and multivariate analysis are used
to build statistical models based on observed data. Statistical modeling is
essential for analyzing complex systems, identifying relationships between
variables, and making predictions about future outcomes.
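As a sketch of the simplest statistical model described above, an ordinary least-squares line y ≈ a + bx can be fitted in a few lines of pure Python (the data are hypothetical):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y ≈ a + b*x; returns (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx                 # slope
    a = mean_y - b * mean_x       # intercept
    return a, b

# Hypothetical data: advertising spend vs. units sold
a, b = linear_fit([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```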
Topic No 3:
Linear Programming (LP):
Linear Programming (LP) is a mathematical optimization technique used to
maximize or minimize a linear objective function subject to a set of linear
equality and inequality constraints. In LP, decision variables are manipulated
within linear relationships to achieve an optimal solution that meets certain
criteria.
The key components of a linear programming problem include:
1. Decision Variables:
These represent the quantities to be determined, controlled, or optimized.
Decision variables are typically denoted by symbols such as x1, x2, …, xn.
2. Objective Function:
This is a linear expression that represents the goal to be optimized, such as
maximizing profit, minimizing costs, or maximizing resource utilization. The
objective function is usually formulated as a linear combination of decision
variables, with coefficients representing the contribution of each variable to
the objective.
3. Constraints:
These are linear relationships that impose limitations or restrictions on the
decision variables. Constraints can represent resource availability, capacity
constraints, demand requirements, or other constraints that must be
satisfied. Constraints are typically formulated as linear inequalities or
equalities involving decision variables.
The general form of a linear programming problem can be expressed as
follows:
Maximize cᵀx
Subject to Ax ≤ b
Where x ≥ 0
Where:
• c is the vector of coefficients of the objective function.
• x is the vector of decision variables.
• A is the matrix of coefficients of the constraints.
• b is the vector of constants on the right-hand side of the constraints.
• ≤ denotes "less than or equal to" in the constraints, and x ≥ 0 requires
all decision variables to be non-negative.
The goal of solving a linear programming problem is to find values for the
decision variables that maximize or minimize the objective function while
satisfying all constraints. This is typically achieved using optimization
algorithms such as the simplex method, interior-point method, or other
specialized algorithms designed for linear programming problems.
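Because an optimum of a bounded LP (when one exists) occurs at a vertex of the feasible region, a tiny two-variable problem can even be solved by brute-force vertex enumeration. The sketch below is for illustration only; it is not how the simplex or interior-point methods work, and the example data are hypothetical:

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Maximize c.x subject to A x <= b and x >= 0 for a two-variable LP,
    by enumerating vertices of the feasible region (illustration only)."""
    # Treat x >= 0 as two extra constraints: -x1 <= 0 and -x2 <= 0.
    rows = [list(row) for row in A] + [[-1, 0], [0, -1]]
    rhs = list(b) + [0, 0]
    best, best_x = None, None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundary lines: no unique intersection
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        # Keep the intersection point only if it satisfies every constraint.
        if all(r[0] * x1 + r[1] * x2 <= v + 1e-9 for r, v in zip(rows, rhs)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best:
                best, best_x = z, (x1, x2)
    return best, best_x

# Hypothetical example: maximize 3x1 + 5x2
# subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
z, x = solve_2var_lp([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
```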
Topic No 4:
Linear Programming (LP) and the allocation of resources.
Linear Programming (LP) is a powerful optimization technique used in
Operations Research to allocate limited resources efficiently and achieve
optimal outcomes. LP is particularly well-suited for problems where the
objective and constraints can be represented as linear relationships.
In the context of resource allocation, LP can be applied to various
scenarios, including:
1. Production Planning:
LP can help optimize production plans by determining the optimal allocation
of resources such as labor, raw materials, and machine time to maximize
production output while minimizing costs or meeting specific demand
requirements.
2. Inventory Management:
LP can aid in optimizing inventory levels by determining the optimal quantities
to order or produce at different points in time to minimize holding costs while
ensuring sufficient stock to meet demand.
3. Supply Chain Management:
LP can optimize the flow of goods and materials through a supply chain by
determining the most cost-effective transportation routes, warehouse
locations, and inventory levels to minimize transportation costs and meet
customer demand.
4. Project Management:
LP can assist in project scheduling and resource allocation by determining
the optimal assignment of resources such as personnel, equipment, and
budget allocations to minimize project duration or costs while meeting
project objectives and constraints.
5. Financial Planning:
LP techniques can be used in financial planning and portfolio optimization to
allocate investment funds across different assets or securities to maximize
returns while minimizing risk.
In LP, the problem is typically formulated as follows:
• Decision Variables:
These represent the quantities to be determined, such as the amount of
resources allocated to different activities or the quantities of goods produced
or purchased.
• Objective Function:
This represents the goal to be optimized, such as maximizing profit,
minimizing costs, or maximizing resource utilization.
• Constraints:
These represent the limitations or restrictions on the decision variables, such
as resource availability, capacity constraints, demand requirements, or
regulatory requirements.
Linearity Requirement, Maximization and Minimization
Problems.
The linearity requirement in linear programming (LP) refers to the condition
that both the objective function and the constraints must be linear functions
of the decision variables. This means that the coefficients of the decision
variables in both the objective function and the constraints must be
constants and appear only in linear terms (i.e., not raised to any power other
than 1).
Maximization Problems:
In maximization problems, the objective is to find the values of the decision
variables that maximize the objective function while satisfying all constraints.
The objective function is typically of the form:
Maximize z = c1x1 + c2x2 + … + cnxn
Subject to:
ai1x1 + ai2x2 + … + ainxn ≤ bi,   i = 1, 2, …, m
x1, x2, …, xn ≥ 0
Where:
• cj are the coefficients of the objective function,
• aij are the coefficients of the constraints,
• bi are the constants on the right-hand side of the constraints, and
• m is the number of constraints.
Minimization Problems:
In minimization problems, the objective is to find the values of the decision
variables that minimize the objective function while satisfying all constraints.
The objective function is similar to maximization problems but is minimized
instead:
Minimize z = c1x1 + c2x2 + … + cnxn
subject to the same form of constraints. If a solution minimizes the
objective function within the feasible region and satisfies all constraints,
it is the optimal solution to the minimization problem.
Topic No 6:
Simplex Method:
The simplex method is a widely used algorithm for solving linear programming
(LP) problems. Developed by George Dantzig in the late 1940s, the simplex
method efficiently finds the optimal solution to LP problems by moving from
one feasible solution to another along the edges of the feasible region until it
reaches the optimal solution.
Key Components of the Simplex Method:
1. Feasible Region:
The feasible region is the set of all feasible solutions that satisfy the
constraints of the LP problem. It is defined by the intersection of the
constraint inequalities.
2. Vertices:
The vertices of the feasible region are the extreme points where the
constraints intersect. These vertices correspond to basic feasible solutions,
where a subset of variables takes on non-zero values, while the remaining
variables are set to zero.
3. Objective Function:
The objective function is the linear function to be optimized, either maximized
or minimized. The goal is to find the values of the decision variables that
optimize the objective function while satisfying all constraints.
4. Pivot:
In each iteration of the simplex method, a pivot operation is performed to
move from one basic feasible solution to another along the edges of the
feasible region. The pivot operation involves selecting a pivot element and
using elementary row operations to transform the current tableau (matrix
representation of the LP problem).
5. Optimality Test:
At each iteration, the optimality of the current solution is checked by
examining the coefficients of the objective function in the tableau. If all
coefficients are non-negative (for minimization problems) or non-positive (for
maximization problems), the current solution is optimal, and the algorithm
terminates.
Steps Involved in the Simplex Method:
1. Initialization:
Convert the LP problem into standard form, initialize the tableau, and identify
the initial basic feasible solution.
Topic No 7:
Linear Programming – Simplex method for Maximizing.
Linear programming is a mathematical method used to determine the best
possible outcome in a given mathematical model for a set of constraints
represented by linear equations. The simplex method is one of the primary
techniques used to solve linear programming problems. It's an iterative
algorithm that moves from one feasible solution to another, aiming to improve
the objective function value at each step until reaching an optimal solution.
To maximize a linear objective function using the simplex method, you
typically follow these steps:
1. Formulate the linear programming problem:
Write down the objective function to be maximized and the set of linear
constraints that the solution must satisfy. These constraints are typically
represented as a system of linear inequalities or equations.
2. Convert inequalities to equations:
If any constraints are in the form of inequalities (≤ or ≥), convert them to
equations. This might involve adding slack or surplus variables.
3. Set up the initial simplex tableau:
Convert the equations into a tableau, which is a matrix representation of the
problem. Include the objective function and the coefficients of the decision
variables.
4. Select a pivot column:
Identify the most negative coefficient in the bottom row (the objective
function row) of the tableau. This column will be the pivot column.
5. Select a pivot row:
Determine the pivot row by finding the minimum ratio of the right-hand side
value to the corresponding coefficient in the pivot column.
6. Perform pivot operation:
Use the pivot element (intersection of the pivot row and pivot column) to
create zeros in the pivot column, making the pivot element one. Adjust the
tableau accordingly.
7. Repeat:
Continue iterating through steps 4-6 until there are no negative values in the
objective function row, indicating the optimal solution has been reached.
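The steps above can be sketched as a small tableau-based implementation. It is a minimal textbook sketch, assuming a maximization problem whose constraints are all of the form ≤ with non-negative right-hand sides, and it omits the degeneracy and cycling safeguards a production solver would need:

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b[i] >= 0).
    Returns (optimal value, x). Minimal sketch, no degeneracy handling."""
    m, n = len(A), len(c)
    # Step 3: build the tableau; each constraint row gets a slack variable.
    tab = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]] for i in range(m)]
    tab.append([-ci for ci in c] + [0] * m + [0])  # objective row
    basis = [n + i for i in range(m)]              # slacks start in the basis
    while True:
        # Step 4: pivot column = most negative coefficient in objective row.
        col = min(range(n + m), key=lambda j: tab[-1][j])
        if tab[-1][col] >= -1e-9:
            break  # Step 7: no negative coefficients left, so optimal
        # Step 5: pivot row = minimum ratio test.
        ratios = [(tab[i][-1] / tab[i][col], i) for i in range(m) if tab[i][col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded problem")
        _, row = min(ratios)
        # Step 6: pivot (make the pivot element 1, zero out its column).
        piv = tab[row][col]
        tab[row] = [v / piv for v in tab[row]]
        for i in range(m + 1):
            if i != row and abs(tab[i][col]) > 1e-12:
                factor = tab[i][col]
                tab[i] = [v - factor * w for v, w in zip(tab[i], tab[row])]
        basis[row] = col
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = tab[i][-1]
    return tab[-1][-1], x

# Hypothetical example: maximize 5x1 + 4x2
# subject to 6x1 + 4x2 <= 24, x1 + 2x2 <= 6
z, x = simplex_max([5, 4], [[6, 4], [1, 2]], [24, 6])
```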
Topic No 8:
Simplex maximizing example with similar limitations (all constraints of the same type)
Let's consider a simple example to illustrate the simplex method for
maximizing a linear objective function subject to linear constraints.
Problem Statement:
Maximize: Z=3x+2y
Subject to:
1. x+y ≤ 4
2. 2x+y ≤ 5
3. x,y ≥ 0
Solution:
1. Convert inequalities to equations (if necessary):
Both constraints are ≤ inequalities, so we convert them to equations by
adding slack variables s1 and s2.
2. Set up the initial simplex tableau:
Equation             x    y    s1   s2   RHS
1. x + y + s1 = 4    1    1    1    0    4
2. 2x + y + s2 = 5   2    1    0    1    5
Z. Z - 3x - 2y = 0   -3   -2   0    0    0
3. Repeat:
Repeat the pivot steps until there are no negative values in the objective
function row.
4. Interpret the solution:
At the final tableau, the values of the decision variables are:
• x = 1
• y = 3
The maximum value of the objective function Z is Z = 3(1) + 2(3) = 3 + 6 = 9
So, the optimal solution is x = 1, y = 3 with a maximum value of Z = 9
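Because the optimum of a bounded LP lies at a corner point, this two-variable example can be double-checked by evaluating Z at every feasible corner:

```python
# Corner points of the region x + y <= 4, 2x + y <= 5, x, y >= 0:
# the origin, the axis intercepts, and the intersection of the two lines.
corners = [(0, 0), (2.5, 0), (0, 4), (1, 3)]  # (1, 3) solves x+y=4, 2x+y=5

def feasible(x, y):
    return x + y <= 4 + 1e-9 and 2 * x + y <= 5 + 1e-9 and x >= 0 and y >= 0

# Pick the feasible corner with the largest Z = 3x + 2y.
best = max((3 * x + 2 * y, x, y) for x, y in corners if feasible(x, y))
z, x, y = best
```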
Topic No 9:
Mixed Limitations Examples containing mixed constraints:
Let's consider an example with mixed constraints, which includes both
equality and inequality constraints.
Problem Statement:
Maximize: Z = 5x + 3y
Subject to:
1. x + 2y ≤ 10
2. 2x + y ≥ 8
3. x - y = 2
4. x, y ≥ 0
Solution:
1. Convert inequalities to equations (if necessary):
We already have one equality constraint (the third one), so no conversion is
needed.
Topic No 10:
Minimizing example of similar limitations
Problem Statement:
Minimize: Z = 5x + 3y
Subject to:
1. x + 2y ≤ 10
2. 2x + y ≥ 8
3. x - y = 2
4. x, y ≥ 0
Solution:
1. Convert inequalities to equations (if necessary):
We already have one equality constraint (the third one), so no conversion is
needed.
2. Set up the initial simplex tableau:
Equation              x    y    s1   s2   a    RHS
1. x + 2y + s1 = 10   1    2    1    0    0    10
2. 2x + y - s2 = 8    2    1    0    -1   0    8
3. x - y + a = 2      1    -1   0    0    1    2
Z. Z - 5x - 3y = 0    -5   -3   0    0    0    0
Here, s1 is a slack variable for the first (≤) constraint, s2 is a surplus
variable for the second (≥) constraint, and a is an artificial variable
introduced for the equality constraint.
1. Select a pivot column:
The most negative coefficient in the bottom row (the objective function row)
is -5, corresponding to the variable x. So, the pivot column is x.
2. Select a pivot row:
Calculate the ratio of the RHS to the coefficient of x for each constraint:
• For the first constraint: 10/1 = 10
• For the second constraint: 8/2 = 4
• For the third constraint: 2/1 = 2
The minimum ratio is 2, corresponding to the third constraint. So, the pivot
row is the row associated with the third constraint.
3. Perform pivot operation:
Use the pivot element (the 1 in the x column of the third row) to create
zeros elsewhere in the pivot column:
Equation              x    y    s1   s2   a    RHS
1.                    0    3    1    0    -1   8
2.                    0    3    0    -1   -2   4
3.                    1    -1   0    0    1    2
Z.                    0    -8   0    0    5    10
4. Repeat:
Repeat the pivot steps until the optimality condition is satisfied and no
further improvement is possible.
5. Interpret the solution:
At the final tableau, the values of the decision variables are:
• x = 10/3
• y = 4/3
The minimum value of the objective function Z is Z = 5(10/3) + 3(4/3) = 50/3 + 12/3 = 62/3 ≈ 20.67.
So, the optimal solution is x = 10/3, y = 4/3 with a minimum value of Z = 62/3.
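Because the equality constraint ties x to y, this particular example can be checked independently of the tableau by substituting x = y + 2:

```python
# The equality constraint x - y = 2 lets us write x = y + 2 and reduce the
# problem to one variable. Substituting into the other constraints:
#   (y + 2) + 2y <= 10  ->  y <= 8/3
#   2(y + 2) + y >= 8   ->  y >= 4/3
# Z = 5(y + 2) + 3y = 8y + 10, which is increasing in y, so the minimum
# over the feasible interval [4/3, 8/3] is attained at y = 4/3.
y = 4 / 3
x = y + 2
z = 5 * x + 3 * y
```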
Topic No 11:
Sensitivity Analysis:
Sensitivity analysis is a technique used in linear programming to assess how
changes in the coefficients of the objective function or the constraints affect
the optimal solution. It helps decision-makers understand the robustness of
the solution and provides insights into the impact of parameter variations.
Here's how sensitivity analysis can be conducted in linear programming:
1. Objective Function Coefficients:
• If the coefficients of the objective function change, you can analyze
how the optimal solution and the optimal value of the objective
function are affected.
• If the objective coefficient of a decision variable that takes a positive
value in the optimal solution increases (in a maximization problem), the
optimal value of the objective function will increase, and vice versa.
• The shadow price (also known as dual price or marginal value)
associated with each constraint indicates how much the optimal value
of the objective function would increase if the right-hand side of the
constraint were increased by one unit. Shadow prices provide
information on the value of relaxing or tightening constraints.
2. Right-Hand Side (RHS) Values of Constraints:
• If the RHS values of the constraints change, you can analyze how the
optimal solution and the optimal value of the objective function are
affected.
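A shadow price can also be estimated numerically by re-solving the problem with one right-hand side increased by a unit and observing the change in the optimal value. A sketch using the small hypothetical LP "maximize 3x + 2y subject to x + y ≤ b1, 2x + y ≤ 5, x, y ≥ 0", solved by checking its corner points:

```python
def best_corner(b1):
    """Maximize 3x + 2y s.t. x + y <= b1, 2x + y <= 5, x, y >= 0,
    evaluated over the corner points of the feasible region."""
    # Candidate corners: origin, axis intercepts, and the line intersection.
    cands = [(0, 0), (min(b1, 2.5), 0), (0, min(b1, 5)),
             (5 - b1, 2 * b1 - 5)]  # intersection of x + y = b1, 2x + y = 5
    feas = [(x, y) for x, y in cands
            if x >= 0 and y >= 0
            and x + y <= b1 + 1e-9 and 2 * x + y <= 5 + 1e-9]
    return max(3 * x + 2 * y for x, y in feas)

z0 = best_corner(4)       # optimal value with the original RHS
z1 = best_corner(5)       # RHS of the first constraint raised by one unit
shadow_price = z1 - z0    # estimated shadow price of the first constraint
```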
Topic No 12:
Changes in Objective Function Coefficients:
Suppose we have the following linear programming problem:
Maximize: Z=c1x1+c2x2+…+cnxn
Subject to:
a11x1 + a12x2 + … + a1nxn ≤ b1
a21x1 + a22x2 + … + a2nxn ≤ b2
.
.
am1x1 + am2x2 + … + amnxn ≤ bm
x1, x2, …, xn ≥ 0
Topic No 13:
What do you mean by Transportation Model? Basic Assumptions.
Transportation Model:
A Transportation Model is a mathematical optimization technique used to
determine the most cost-effective way to transport goods from multiple
sources (e.g., factories or warehouses) to multiple destinations (e.g., retailers
or customers). It helps in optimizing transportation routes, minimizing
transportation costs, and allocating available resources efficiently. The
transportation model is widely used in logistics, supply chain management,
distribution planning, and inventory control.
Basic Assumptions of Transportation Model:
1. Fixed Supply and Demand:
The transportation model assumes that the total supply of goods from
sources and the total demand for goods at destinations are fixed and known
in advance.
2. Single Commodity:
The transportation model typically deals with the transportation of a single
homogeneous commodity. This simplifies the modeling process by focusing
on the transportation of one type of product.
3. Cost Proportionality:
The transportation cost per unit of goods transported between any pair of
source-destination locations is assumed to be constant and proportional to
the quantity of goods transported.
4. Linear Relationships:
The transportation model assumes linear relationships between the
quantities of goods transported, transportation costs, and other relevant
factors. This allows for the use of linear programming techniques to optimize
transportation routes and costs.
5. Non-Negative Supply and Demand:
The transportation model assumes that both the supply at sources and the
demand at destinations are non-negative. That is, there are no negative
supplies or demands.
Topic No 14:
Feasible Solution: the Northwest Corner Method and the Lowest Cost
Method.
The Northwest Method:
This is one of the basic methods for finding an initial feasible solution to
transportation problems. It starts with the cell in the northwest corner of the
transportation table and allocates shipments as much as possible in a step-
by-step manner, moving either horizontally or vertically. It's called the
"northwest" method because it starts at the northwest corner of the
transportation matrix.
1. Initialization:
Start at the northwest corner (top-left) of the transportation matrix.
2. Allocation:
Allocate as much as possible to the cell identified in the northwest corner.
This allocation is based on either the supply available at that row or the
demand required at that column, whichever is smaller.
3. Updating:
After allocation, update the supply and demand values accordingly. Subtract
the allocated quantity from the respective supply and demand values. If
either supply or demand becomes zero, eliminate the corresponding row or
column from further consideration.
4. Move to Next Cell:
Move to the adjacent cell either horizontally or vertically, depending on
whether the supply or demand has been exhausted in the current row or
column. Repeat the allocation and updating steps until all supply and
demand requirements are met.
5. Termination:
The process terminates when all supply and demand requirements are
fulfilled.
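The steps above can be sketched as a short routine. It assumes a balanced problem (total supply equals total demand), and the supply and demand figures in the example are hypothetical:

```python
def northwest_corner(supply, demand):
    """Initial feasible solution for a balanced transportation problem
    (total supply == total demand) via the Northwest Corner Method."""
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        qty = min(supply[i], demand[j])   # allocate as much as possible
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:                # row exhausted: move down
            i += 1
        else:                             # column exhausted: move right
            j += 1
    return alloc

# Hypothetical balanced problem: 3 sources, 3 destinations
plan = northwest_corner([20, 30, 25], [10, 25, 40])
```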
Topic No 15:
Optimal Solution: the Stepping Stone Method and the Modified Distribution (MODI) Method.
The Stepping Stone Method:
The Stepping Stone Method is an optimization technique used to find the
optimal solution to transportation problems. It is particularly useful for
improving upon an initial feasible solution obtained from methods like the
Northwest Corner Method or the Lowest Cost Method.
Here's how it works:
1. Initial Feasible Solution:
Start with an initial feasible solution where all supply and demand constraints
are satisfied. This solution can be obtained using methods like the Northwest
Corner Method or the Lowest Cost Method.
2. Identify Basic Variables:
In the initial solution, identify the basic variables, which are the allocated
cells (non-zero cells). These cells form the basis of the solution.
3. Find Closed Loops:
For each non-basic cell (empty cell), attempt to find a closed loop of basic
cells (allocated cells) that starts and ends at the non-basic cell. This loop is
formed by moving horizontally and vertically through the allocated cells,
never revisiting the same cell twice.
4. Calculate Improvement Potential:
Once a closed loop is found, calculate the improvement potential by
considering the change in cost associated with moving along the loop. This
change in cost is determined by the difference between the transportation
cost of entering the loop and leaving the loop.
5. Identify Improving Loops:
Identify the loop with the greatest improvement potential. This loop is referred
to as the improving loop.
6. Update Solution:
Increase the flow along the forward arcs (edges) of the improving loop and
decrease the flow along the reverse arcs by the minimum amount necessary
to maintain feasibility. This adjustment improves the solution and reduces the
total transportation cost.
7. Repeat:
Continue this process, identifying improving loops and updating the solution
until no further improvement can be made.
8. Optimality Test:
Once no further improvements are possible, perform an optimality test to
verify if the current solution is optimal. This test involves checking whether all
reduced costs (the difference between the actual cost and the shadow price
of a cell) are non-negative.
9. Termination:
If the optimality test confirms that the current solution is optimal, the process
terminates. Otherwise, return to step 3 and continue iterating until an optimal
solution is reached.
Modified Distribution (MODI) Method:
The Modified Distribution method, also known as the MODI or (u, v)
method, provides a minimum-cost solution to transportation problems. The
MODI method is an improvement over the Stepping Stone Method. This model
studies the minimization of the cost of transporting a commodity from a
number of sources to several destinations. The supply at each source and the
demand at each destination are known. The objectives are to develop and
review an integral transportation schedule that meets all demands from the
inventory at a minimum total transportation cost.
Here's a simplified overview of how the MODI method works:
1. Begin with an initial feasible solution, often obtained using the Northwest
Corner Rule, Least Cost Method, or Vogel's Approximation Method.
2. Calculate the opportunity costs for each non-basic variable (i.e., empty
cells in the transportation tableau). Opportunity cost represents the amount
by which the objective function value would change if one unit of the
corresponding variable is moved from its current route to another route with
a lower cost.
3. Determine the cell with the most negative opportunity cost (for a
minimization problem). This identifies the cell that should enter the basic
feasible solution to improve its optimality.
4. Trace a closed loop starting from the selected cell through the cells that are
currently in the basic feasible solution. This loop is formed by alternating
between cells that are currently in the solution and those that are not.
5. Once the loop is identified, determine the minimum value among the
quantities associated with the non-basic cells in the loop. This quantity
represents how much flow can be moved along the loop without violating the
capacity constraints.
6. Update the basic feasible solution by adjusting the flow along the loop
according to the minimum value found.
7. Recalculate the opportunity costs for all non-basic variables, and repeat
steps 3 to 6 until all opportunity costs are non-negative, indicating optimality.
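Step 2 can be made concrete. Given a basic feasible solution, the MODI potentials u_i, v_j and the opportunity costs of the empty cells can be computed as below, a sketch assuming the basic cells form a connected spanning tree, as they do for a non-degenerate solution (the cost matrix is hypothetical):

```python
def modi_opportunity_costs(cost, basic):
    """cost: m x n unit-cost matrix; basic: set of (i, j) allocated cells.
    Returns {(i, j): c_ij - u_i - v_j} for each non-basic (empty) cell."""
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0  # fix one potential; the rest follow from u_i + v_j = c_ij
    changed = True
    while changed:               # propagate potentials over the basic cells
        changed = False
        for (i, j) in basic:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
                changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
                changed = True
    return {(i, j): cost[i][j] - u[i] - v[j]
            for i in range(m) for j in range(n) if (i, j) not in basic}

# Hypothetical 2x3 problem with a basic solution from the Northwest Corner Rule
costs = [[2, 3, 1],
         [5, 4, 8]]
opp = modi_opportunity_costs(costs, {(0, 0), (0, 1), (1, 1), (1, 2)})
# A negative opportunity cost marks a cell that would reduce total cost.
```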
Topic No 16:
The Assignment Model:
The Assignment Model is a mathematical framework used to solve
optimization problems where a set of tasks needs to be assigned to a set of
agents or resources in a manner that minimizes or maximizes a certain
objective function, such as cost, time, or distance. This model is
characterized by its focus on one-to-one assignments, where each task is
assigned to exactly one agent and each agent is assigned to exactly one task.
Key components of the Assignment Model include:
1. Tasks and Agents:
There are two sets of entities involved in the assignment problem: tasks and
agents. Tasks represent the items or activities that need to be assigned, while
agents represent the individuals or resources capable of performing those
tasks.
2. Objective Function:
The objective of the assignment problem is to optimize a certain criterion,
typically represented by a cost matrix or a benefit matrix. This criterion could
be the cost of completing the tasks, the time required to complete them, or
some other measure associated with the assignment.
3. Cost or Benefit Matrix:
The costs (or benefits) associated with assigning each task to each agent are
provided in a matrix format. Each cell (i, j) in the matrix contains the cost (or
benefit) of assigning task i to agent j. The matrix is usually square, with rows
representing tasks and columns representing agents.
4. Constraints:
In the basic assignment problem, the primary constraint is that each task
must be assigned to exactly one agent, and each agent must be assigned to
exactly one task. This constraint ensures that all tasks are completed and all
agents are utilized.
5. Optimization Techniques:
Various optimization techniques can be employed to solve assignment
problems, including the Hungarian algorithm and the auction algorithm.
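For small instances, the one-to-one structure of the model can be seen by brute force: enumerate every possible assignment and keep the cheapest. The cost matrix below is a made-up example; in practice the Hungarian algorithm solves the same problem in O(n³) rather than O(n!).

```python
from itertools import permutations

def solve_assignment(cost):
    """cost[i][j] = cost of assigning task i to agent j (square matrix)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):       # perm[i] = agent for task i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
assignment, total = solve_assignment(cost)
print(assignment, total)   # task i is assigned to agent assignment[i]
```

Each permutation respects the constraint that every task gets exactly one agent and every agent exactly one task, which is why enumeration is valid here.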
Topic No 17:
Solution Methods: Different Combination Methods in
Operations Research
In operations research, combination methods are often used to solve
optimization problems where the goal is to find the best combination of
decision variables that optimize a certain objective function, subject to
constraints. Here are some common combination methods used in
operations research:
1. Linear Programming (LP):
LP is a method used to maximize or minimize a linear objective function
subject to linear equality and inequality constraints. It's widely used in
various industries for resource allocation, production planning,
transportation, and more.
2. Integer Programming (IP):
IP extends linear programming by adding the requirement that some or all of
the variables must be integers. This is useful when decision variables
represent quantities that must be whole numbers, such as the number of
units produced or the number of machines used.
3. Mixed Integer Programming (MIP):
MIP is a generalization of both LP and IP, where some variables are restricted
to be integers while others can take any real value. This is useful when a
problem involves both discrete and continuous decision variables.
4. Dynamic Programming (DP):
DP is a method for solving complex problems by breaking them down into
simpler subproblems and solving each subproblem only once. It's
particularly useful for problems with overlapping subproblems and optimal
substructure, such as the knapsack problem or the traveling salesman
problem.
5. Genetic Algorithms (GA):
GAs are optimization algorithms inspired by the process of natural selection
and genetics. They work by evolving a population of candidate solutions over
multiple generations, with selection, crossover, and mutation operators
mimicking the processes of selection, reproduction, and genetic variation.
6. Simulated Annealing (SA):
SA is a probabilistic optimization technique inspired by the annealing process
in metallurgy. It starts with an initial solution and iteratively explores the
solution space, occasionally accepting worse solutions with a probability
controlled by a "temperature" parameter that decreases over time, which
helps the search escape local optima.
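The annealing idea can be sketched as follows; the objective function, neighbourhood, and cooling schedule below are illustrative choices, not prescribed by the text.

```python
import math
import random

def anneal(f, x0, temp=10.0, cooling=0.95, steps=2000, seed=0):
    """Minimize f by simulated annealing from starting point x0."""
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)     # random neighbour
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temp), which shrinks as temp cools.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling                        # cool down
    return best

# Minimize the toy objective f(x) = (x - 3)^2, whose optimum is x = 3.
print(anneal(lambda x: (x - 3) ** 2, x0=-8.0))
```

The high early temperature lets the search jump out of local optima; as the temperature falls, the algorithm behaves more and more like pure hill climbing.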
Topic No 19:
Shortest-Route Algorithms:
Dijkstra's algorithm and Floyd's algorithm are two methods used to find the
shortest routes in a graph.
1. Dijkstra's Algorithm:
Dijkstra's algorithm is a greedy algorithm used to find the shortest path from
a source node to all other nodes in a weighted graph with non-negative edge
weights. It works as follows:
1.1 Initialize the distance of the source node to 0 and all other nodes to
infinity.
1.2 Create a priority queue to store nodes with their distances from the
source node.
1.3 Repeat until the priority queue is empty:
• Extract the node with the minimum distance from the priority queue.
• Update the distances of its neighbors by considering the edge weights
and the distance to the current node.
- If the new distance is shorter than the current distance, update it.
1.4 After processing all nodes, the distances stored for each node represent
the shortest paths from the source node.
• Dijkstra's algorithm is efficient for finding the shortest paths in graphs
with non-negative edge weights. However, it does not work correctly
when the graph contains negative edge weights.
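Steps 1.1–1.4 can be sketched with Python's `heapq` module as the priority queue; the small graph below is a made-up example.

```python
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbour, weight) pairs."""
    dist = {node: float("inf") for node in graph}   # step 1.1
    dist[source] = 0
    pq = [(0, source)]                              # step 1.2: priority queue
    while pq:                                       # step 1.3
        d, u = heapq.heappop(pq)                    # min-distance node
        if d > dist[u]:
            continue                                # skip stale queue entries
        for v, w in graph[u]:
            if d + w < dist[v]:                     # shorter path found
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist                                     # step 1.4

graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(graph, "A"))
```

Rather than updating priorities in place, this sketch pushes a fresh queue entry on every improvement and discards stale entries when popped, a common simplification over a decrease-key operation.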
2. Floyd's Algorithm:
Floyd's algorithm, also known as Floyd-Warshall algorithm, is a dynamic
programming algorithm used to find the shortest paths between all pairs of
nodes in a weighted graph, including graphs with negative edge weights (as
long as there are no negative cycles). It works as follows:
2.1 Initialize a distance matrix with the direct edge weights between each
pair of nodes (infinity where no direct edge exists, and 0 on the diagonal).
2.2 For each intermediate node k from 1 to n:
• Update the distance matrix by considering whether the path from node
i to node j through node k is shorter than the current distance from i to
j.
2.3 After processing all intermediate nodes, the distance matrix contains
the shortest paths between all pairs of nodes.
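Steps 2.1–2.3 translate directly into three nested loops; the 4-node weight matrix below is a made-up example.

```python
INF = float("inf")

def floyd_warshall(weights):
    """weights: n x n matrix of direct edge weights (INF if no edge)."""
    n = len(weights)
    dist = [row[:] for row in weights]      # step 2.1: copy direct weights
    for k in range(n):                      # step 2.2: intermediate node k
        for i in range(n):
            for j in range(n):
                # Is the path i -> k -> j shorter than the current i -> j?
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist                             # step 2.3: all-pairs distances

w = [[0, 3, INF, 7],
     [8, 0, 2, INF],
     [5, INF, 0, 1],
     [2, INF, INF, 0]]
print(floyd_warshall(w))
```

Note that the intermediate-node loop `k` must be the outermost loop: each pass extends the set of nodes allowed as intermediates, which is what makes the dynamic-programming recurrence valid.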
Topic No 20:
Difference Between Transportation Model and
Assignment Model: