In-Depth Critique

Vendor Risk Management in Global Supply Chains by A. Ravi Ravindran

The presentation by A. Ravi Ravindran on Vendor Risk Management in Global Supply Chains discusses the impact of supply chain risks on global sourcing and presents multicriteria models and methods to address these risks. Dr. A. Ravi Ravindran is an Emeritus Professor and former Department Head of Industrial and Manufacturing Engineering at Penn State. Earlier, he was a faculty member in the School of Industrial Engineering at Purdue University for 13 years (1969-82) and at the University of Oklahoma for 15 years (1982-97). His research interests are in multiple criteria decision-making, financial engineering, healthcare delivery systems, and supply chain optimization. The presentation includes an actual application to a global IT company and discusses strategies being considered by US companies to make their supply chains more resilient.

The presentation begins with the impact of supply chains and how consumers have come to realize their role in daily life. He also sheds some light on the policies introduced by government leaders as they realize the importance of supply chains to their economies. He then shifts his attention to the importance of sourcing decisions, discussing major sourcing decisions made by corporations such as Walmart and General Motors, as well as the practices that are affecting supply chains, such as global sourcing, outsourcing of core and non-core functions, supplier consolidation, and the lean approach. He gives examples of key supply chain disruptions: an 8-day strike at the Delphi Brake Plant in 1996 idled 26 GM plants, costing $900 million; a 1997 fire at Toyota's sole-source brake supplier disrupted 20 assembly plants, costing Toyota $1.8B in sales; and the 2002 US West Coast ports lockout lasted 11 days, causing $11-22B in lost sales, airfreight costs, and spoilage.

He then discusses the need for a good vendor management strategy and speaks of rare events, such as a global pandemic or natural disasters, which directly impact this strategy, while commonly occurring events also affect the vendor management strategy over a prolonged period. He describes supply risk management as the process of identifying, assessing, and mitigating risks to a company's supply chain; the steps involved include identifying and assessing supply risks, developing a supply risk map, mitigating supply risks, and monitoring and reviewing the supply risk management program. He further discusses risk mitigation strategies, which are classified by high or low impact and high or low likelihood of occurrence.

The objective of the study he presents was to demonstrate the use of multiple criteria optimization models that incorporate supplier risk when making sourcing decisions. For this purpose, the study developed two different risk models, Value-at-Risk (VaR) for rare events and Miss-the-Target (MtT) risk for others, along with a two-phase risk-adjusted supplier selection model: in Phase 1, suppliers are screened and shortlisted, and in Phase 2, suppliers are selected and their order quantities are determined. The solution methods were demonstrated using case scenarios, with company staff as the decision makers.

The work begins with the quantification of risk so that it can be divided between the two models. Supply chain risks can stem from either natural or man-made events. Value-at-Risk (VaR) captures rare but severe risks, such as a hurricane, strike, fire, or terrorist attack, while Miss-the-Target (MtT) risk captures more frequent but less severe risks, such as late delivery of raw materials or low-quality replenishment. By quantifying risks, companies can better understand the potential impact of these events and develop strategies to mitigate them.
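
To make the two kinds of risk concrete, the following is a minimal sketch, an illustration rather than anything from the case study, of how they might be quantified: an empirical Value-at-Risk for a simulated annual loss from rare disruptions, and a Miss-the-Target probability that a supplier's lead time exceeds its promised target. All probabilities, distributions, and numbers below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Value-at-Risk (VaR) for rare but severe events ---
# Hypothetical simulation: in any given year a disruption (fire, strike,
# hurricane) occurs with small probability; if it occurs, the loss is large.
n_years = 100_000
disruption = rng.random(n_years) < 0.02              # assumed 2% chance per year
loss = np.where(disruption, rng.lognormal(mean=15, sigma=0.5, size=n_years), 0.0)

confidence = 0.99
var_99 = np.quantile(loss, confidence)               # loss not exceeded in 99% of years
print(f"99% VaR of annual disruption loss: ${var_99:,.0f}")

# --- Miss-the-Target (MtT) risk for frequent, less severe events ---
# Hypothetical supplier lead times (days) versus a promised target.
lead_times = rng.normal(loc=12.0, scale=3.0, size=10_000)
target = 14.0
mtt_risk = np.mean(lead_times > target)              # probability of missing the target
print(f"MtT risk (P(lead time > {target} days)): {mtt_risk:.2%}")
```
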
Different suppliers were then selected to test multi-criteria optimization methods for ranking suppliers, using ratings of each attribute on a 1-10 scale, pairwise comparison of attributes, and strength-of-preference (1-9 scale) pairwise comparisons. He explains that Phase 1 includes the rating method, in which each criterion is rated on a scale of 1 to 10, with 10 being the most important; the weights associated with each criterion are then obtained through normalization. The second method is pairwise comparison using the Borda count: if there are P criteria, the most important criterion gets P points, the second most important gets (P-1) points, and so on, and the weights are again calculated via normalization. The third is the Analytic Hierarchy Process (AHP), which uses pairwise comparison of criteria with strength of preference reported on a 1-9 scale; AHP is more complex than the other methods, but it is also more accurate. He explains that the choice of MCDM method depends on the specific decision-making problem: if the problem is relatively simple, the rating method may be sufficient, while for a more complex problem the Borda count or the AHP method may be more appropriate.
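
To illustrate the first two weighting schemes, the short sketch below computes criterion weights from 1-10 ratings by normalization and from a Borda-count ranking of P criteria. The criteria names, ratings, and ranking are assumptions for illustration, not the case-study data.

```python
import numpy as np

criteria = ["cost", "quality", "delivery", "risk"]

# Rating method: each criterion rated on a 1-10 scale (10 = most important),
# weights obtained by normalizing the ratings so they sum to 1.
ratings = np.array([9, 8, 6, 7], dtype=float)         # illustrative ratings
rating_weights = ratings / ratings.sum()

# Borda count: with P criteria, the most important gets P points,
# the next gets P - 1, and so on; weights again follow by normalization.
P = len(criteria)
rank_order = ["quality", "cost", "risk", "delivery"]  # most to least important (assumed)
points = {c: P - i for i, c in enumerate(rank_order)}
borda_points = np.array([points[c] for c in criteria], dtype=float)
borda_weights = borda_points / borda_points.sum()

for c, rw, bw in zip(criteria, rating_weights, borda_weights):
    print(f"{c:10s} rating weight = {rw:.3f}   Borda weight = {bw:.3f}")
```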

Phase 2 was then conducted with the smaller set of suppliers from Phase 1. In Phase 2, goal programming was used, and the results show that including conflicting criteria in supplier selection improves decision-making by providing tradeoff information that can be used to optimize the supply network. Goal programming models provide multiple solutions that can be discussed, while the Value Path Approach can effectively visualize the tradeoff information.
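
The sketch below shows the flavor of such a Phase 2 model: a toy weighted goal program with two hypothetical suppliers, a demand constraint, and conflicting cost and quality goals handled through deviation variables. It is only an illustration of the technique under assumed data, not the model used in the study.

```python
from scipy.optimize import linprog

# Decision variables (in order):
#   x1, x2     order quantities from suppliers 1 and 2
#   dc_plus    amount by which total cost exceeds its goal
#   dc_minus   amount by which total cost falls below its goal
#   dq_plus    amount by which total quality exceeds its goal
#   dq_minus   amount by which total quality falls below its goal
# Illustrative data: supplier 1 is cheaper but lower quality than supplier 2.
demand = 100
cost = [10.0, 14.0]            # unit costs
quality = [0.80, 0.95]         # quality score per unit
cost_goal = 1100.0             # target total cost
quality_goal = 0.90 * demand   # target total quality "points"

# Minimize the weighted undesirable deviations: cost overrun and quality shortfall.
c = [0, 0, 1.0, 0, 0, 50.0]

A_eq = [
    [1, 1, 0, 0, 0, 0],                      # x1 + x2 = demand
    [cost[0], cost[1], -1, 1, 0, 0],         # cost - dc_plus + dc_minus = cost goal
    [quality[0], quality[1], 0, 0, -1, 1],   # quality - dq_plus + dq_minus = quality goal
]
b_eq = [demand, cost_goal, quality_goal]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
x1, x2, dc_plus, _, _, dq_minus = res.x
print(f"order from supplier 1: {x1:.1f}, supplier 2: {x2:.1f}")
print(f"cost overrun: {dc_plus:.1f}, quality shortfall: {dq_minus:.2f}")
```

Changing the weights on the two deviation variables produces different compromise solutions, which is exactly the tradeoff information the Value Path Approach is used to visualize.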

He concludes the presentation with key points that can be used to build resilient supply chains.
N-person Games and Linear Programming by Allen L. Soyster

The presentation by Allen L. Soyster concentrates on the question of whether there are n-person cooperative games hidden inside linear programs. He is an Emeritus Professor of Industrial Engineering at Northeastern University, from which he retired in 2014. The presentation begins by explaining what n-person games are. He compares zero-sum games and n-person cooperative games: zero-sum games are situations in which the gain of one player is directly offset by the loss of the other, so the total payoff is constant and any gain made by one player comes at the expense of the other; they are often used to model competitive situations such as sports, military conflict, or business negotiations. N-person cooperative games, by contrast, are situations in which players can work together to achieve a common goal: the payoff depends on the collective efforts of all the players, and the players can gain more by cooperating than by acting alone. He explains how such games can model situations where multiple parties must work together toward a common goal, such as a supply chain in which different companies cooperate to produce and deliver a product to the market, or international relations, where multiple nations may need to cooperate to address global issues such as climate change or nuclear disarmament.

He then explains the mathematical definition of these games: a set of n players, a value function defined on all 2^n possible subsets of players, and the requirement that the game be super-additive, meaning that the value of the union of two disjoint subsets is greater than or equal to the sum of their individual values. This condition ensures that the value of working together is at least the sum of the values of working separately.
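
In standard notation (not quoted from the slides), the definition can be written as:

```latex
\[
  N = \{1, 2, \dots, n\}, \qquad v : 2^{N} \to \mathbb{R}, \qquad v(\varnothing) = 0,
\]
\[
  \text{super-additivity:}\quad v(S \cup T) \;\ge\; v(S) + v(T)
  \quad \text{for all } S, T \subseteq N \text{ with } S \cap T = \varnothing .
\]
```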

Dr. Soyster then gives a demonstration in class. In this example he uses four political parties with varying percentages of the vote. There are 16 possible coalitions that can be formed among these parties, but to pass anything and gain control of $100 million, a coalition needs at least 51% of the votes. The value function is defined so that a coalition reaching this majority receives the $100 million payout. In this case, there are three coalitions that can pass something: A-B, A-C, and A-D; the other coalitions do not have the 51% majority required to pass anything. However, since A has the most votes and can form a winning coalition with any other party, it has the most power. A can therefore dictate the terms of the coalition and will receive the largest payout, while the other parties receive smaller payouts in proportion to their power in the coalition. To help the audience follow the example, he cast different professors as the different political parties, each with a certain power value, and showed how two parties can combine or merge to reach the threshold power value.
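
A small sketch of this kind of demonstration is shown below; the vote shares are illustrative numbers chosen so that only A paired with one other party reaches a majority, not the exact figures used in class.

```python
from itertools import combinations

# Hypothetical vote shares (percent) for four parties, chosen so that
# A plus any single other party clears 51%, but B, C, and D together do not.
votes = {"A": 50, "B": 20, "C": 17, "D": 13}
threshold = 51          # percent of votes needed to pass anything
prize = 100_000_000     # $100 million controlled by a winning coalition

parties = sorted(votes)
coalitions = [frozenset(c) for r in range(len(parties) + 1)
              for c in combinations(parties, r)]
print(f"number of possible coalitions: {len(coalitions)}")   # 2^4 = 16

def value(coalition):
    """Characteristic function: the prize if the coalition has a majority, else 0."""
    return prize if sum(votes[p] for p in coalition) >= threshold else 0

winning = [c for c in coalitions if value(c) > 0]
# Minimal winning coalitions: removing any member drops them below the threshold.
minimal = [c for c in winning if all(value(c - {p}) == 0 for p in c)]
print("minimal winning coalitions:",
      ["-".join(sorted(c)) for c in minimal])     # expect A-B, A-C, A-D
```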

After establishing this foundation of how n-person games work, he moves on to show real-life applications of these games, one of which is the US House of Representatives Speaker contest. That contest can be modeled as an n-person cooperative game with n = 435, the number of representatives in the House. The objective of the game is to form a coalition that reaches a majority of 218 representatives and thereby gains control of the House. Three important coalitions or subsets are considered: A (Democrats), B (moderate Republicans), and C (conservative Republicans). The value of a coalition reflects the power it can exert to make decisions and influence the outcome: it is zero if the coalition has fewer than 218 members and 4 trillion dollars, roughly the US annual budget, if it has 218 members or more, since such a coalition can control the budget. In this game, then, the objective is to form a coalition that can control the House and therefore the budget.

He then explains what the core of the game is. The core of an n-person cooperative game refers to the set of imputations that are stable and cannot be improved upon by any coalition of players, where an imputation is an allocation of payoffs to each player in the game. The core of the game is "all those imputations whose allocations equal or exceed their respective sub-coalitions." For an imputation to be considered part of the core, it must be individually rational, meaning each player receives at least the payoff they could obtain on their own, and collectively rational, meaning the payoffs to all the players together account for the value of the grand coalition. The problem is then converted into a linear program, in which the core is defined by this set of inequalities.
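
As a sketch of that linear program, using the same hypothetical four-party game as above, one can search for an imputation that distributes the grand coalition's value while giving every sub-coalition at least its own value; the core is non-empty exactly when this LP is feasible. Again, this is an illustration of the idea rather than Dr. Soyster's formulation.

```python
import numpy as np
from scipy.optimize import linprog
from itertools import combinations

votes = {"A": 50, "B": 20, "C": 17, "D": 13}     # same illustrative game as before
threshold, prize = 51, 100.0                     # prize in $ millions
parties = sorted(votes)
n = len(parties)

def value(coalition):
    return prize if sum(votes[p] for p in coalition) >= threshold else 0.0

# Core constraints:  sum_i x_i = v(N)   and   sum_{i in S} x_i >= v(S) for every S.
# Rewritten for linprog (which uses A_ub @ x <= b_ub):  -sum_{i in S} x_i <= -v(S).
A_ub, b_ub = [], []
for r in range(1, n):
    for S in combinations(parties, r):
        A_ub.append([-1.0 if p in S else 0.0 for p in parties])
        b_ub.append(-value(S))

A_eq = [[1.0] * n]
b_eq = [value(parties)]          # the grand coalition's value

# Pure feasibility problem: any feasible x is a core allocation.
res = linprog(c=[0.0] * n, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
if res.success:
    print("core allocation found:", dict(zip(parties, np.round(res.x, 2))))
else:
    print("infeasible: the core of this game is empty")
```

For this toy game the LP is feasible, and the allocation it returns gives the entire $100 million to A, which helps motivate the "fairer" division scheme discussed next.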

He further explains that if all the members' allocations lie in the core, the arrangement is stable, because every player is willing to accept the payout. If there is no core, there is no stable, agreed-upon strategy: no outcome that every coalition of players will accept. In such situations, other incentives can be used to reach a solution. Here he draws on the work of Lloyd Shapley, who proposed distributing the value of the grand coalition (all players together) based on each player's marginal contribution to each coalition. This concept is known as the Shapley value and has become an important tool for analyzing cooperative games. The Shapley value is a way of distributing the total payoff or resources of a coalition game among the players according to their individual contributions: it considers all possible ways of forming coalitions, calculates each player's contribution to each of them, and thereby provides a fair way of dividing the payoff. In terms of linear programming, this is expensive: it requires on the order of 2^n LPs (one to value each coalition) and marginal-contribution calculations over n! orderings of the players.
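
A brief sketch of the Shapley computation for the same hypothetical four-party game, averaging each player's marginal contribution over all n! orderings (again an illustration, not code from the talk):

```python
from itertools import permutations
from math import factorial

votes = {"A": 50, "B": 20, "C": 17, "D": 13}     # illustrative game from before
threshold, prize = 51, 100.0                     # prize in $ millions
parties = sorted(votes)

def value(coalition):
    return prize if sum(votes[p] for p in coalition) >= threshold else 0.0

# Shapley value: average marginal contribution of each player over all
# n! orderings in which the grand coalition could be assembled.
shapley = dict.fromkeys(parties, 0.0)
for order in permutations(parties):
    built = set()
    for p in order:
        shapley[p] += value(built | {p}) - value(built)
        built.add(p)

n_fact = factorial(len(parties))
shapley = {p: s / n_fact for p, s in shapley.items()}
print(shapley)   # the shares sum to the grand coalition's value of 100
```
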
A few of the next steps of this research are to formulate a special type of n-person cooperative game, the convex game, and to understand when an LP generates a convex game, to name a couple. A major application of this work is advising companies that are considering mergers.

Ensemble Data Assimilation: A Paradigm for Improving Predictions by Dr. Steven J. Greybush

The presentation by Dr. Steven J. Greybush concentrates on ensemble data assimilation. He is an Associate Professor in the Department of Meteorology and Atmospheric Science at Penn State and applies computational techniques such as computer simulations, data assimilation, and machine learning to study weather patterns on Earth and Mars. His presentation covers the ten steps that must be executed to implement data assimilation.

He starts by explaining data assimilation and discussing the importance of weather and its economic impact. He then explains the application of ensemble data assimilation in meteorology and how it plays a crucial role in generating accurate weather forecasts, which have significant implications for decision-making across various sectors. Each day, millions of new observations are ingested into numerical weather prediction systems and combined with physics-based models to produce probabilistic and deterministic weather forecasts. He discusses how weather contributes significantly to economic impacts, with 3-6% of the variability in US GDP attributed to weather, up to $1,344 billion annually. The economic value derived from weather is estimated at $13 billion across sectors, and the public values weather information at $280 per household per year, or $30 billion annually. Since 1980, there have been 258 weather and climate disasters each exceeding $1 billion, with a total cost exceeding $1.75 trillion. Weather also impacts related disciplines, including risk to life and property, insurance and reinsurance, disruption of commerce, supply chains and logistics, agriculture, water availability and hydrology, power systems, energy markets, renewable energy, transportation systems and infrastructure, and health and disease risk.

He explains that data assimilation is a powerful technique that combines information from predictive models and new observations to improve predictions and quantify uncertainty. The predictive model can be human-driven (e.g., a forecaster), physics-driven (e.g., numerical weather prediction), data-driven (e.g., statistics or machine learning), or hybrid (a combination of physics-driven and data-driven); in each case, inputs are supplied to the model and outputs are obtained. He also links this to the field of operations research, emphasizing that the connection between the two fields lies in the fact that data assimilation can provide valuable input data for operations research models. By incorporating data assimilation techniques into their models, operations researchers can create more accurate and reliable models that reflect real-world data, which can lead to more effective decision-making and improved system performance. He then states the basics of data assimilation, which involve estimating the state of a system by combining a prior field (the background) with observations through three major ideas: weighting the pieces of information according to their uncertainties in a Bayesian framework, spreading information across variables (and in space) using covariances, and using information that evolves dynamically in time through a forecast model.
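
As a minimal illustration of the Bayesian weighting idea, consider a single scalar state with Gaussian errors, a toy example rather than anything shown in the talk: the analysis is an uncertainty-weighted blend of the background and the observation.

```python
import numpy as np

# Toy scalar data assimilation update: blend a background (prior) estimate
# with an observation, weighting each by the inverse of its error variance.
x_b, var_b = 2.0, 1.0      # background estimate and its error variance
y, var_o = 3.0, 0.5        # observation and its error variance

# Kalman gain: how much to trust the observation relative to the background.
K = var_b / (var_b + var_o)

x_a = x_b + K * (y - x_b)              # analysis (posterior mean)
var_a = (1.0 - K) * var_b              # analysis error variance (reduced uncertainty)

print(f"background  {x_b:.2f} ± {np.sqrt(var_b):.2f}")
print(f"observation {y:.2f} ± {np.sqrt(var_o):.2f}")
print(f"analysis    {x_a:.2f} ± {np.sqrt(var_a):.2f}")
```

In the ensemble methods discussed here, the scalar variances are replaced by flow-dependent covariances estimated from the ensemble, which is what spreads the observational information across variables and in space.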

He further explains how data assimilation can answer three major questions: what are the best (ensemble or deterministic) initial conditions for a dynamical system given incomplete, noisy observations; what is the impact of individual observing systems on forecast skill; and what was the state of the system in the past or what will it be in the future (a forecast). He starts with the first step, which is to understand the goal of data assimilation. One goal may be to create a reanalysis, an estimate of the past state of the system that is consistent with both observations and a model; another may be to improve predictions of the future state of the system by incorporating current observations into a forecast model. The physical phenomena that data assimilation aims to capture also depend on the specific application. For example, in weather forecasting the goal is to capture atmospheric state variables such as temperature, humidity, and wind, which are critical for predicting future weather conditions, while in oceanography the goal may be to estimate ocean currents, sea level, or the distribution of heat and salt. The success of data assimilation depends on several factors, including the quality and quantity of the observations, the accuracy of the model, the appropriateness of the assimilation algorithm, and the ability to verify the results; it is most successful when the system being studied is well observed and there is a set of known equations that describe its time evolution. He uses the example of a snowstorm on Earth and on Mars to show how data assimilation can be used to predict and forecast weather.

The second step involves coming up with prognostic equations and then defining the model state variables and the dynamic system behavior. As a testbed he uses the Lorenz model, a mathematical model of atmospheric convection first proposed by meteorologist Edward Lorenz in 1963. The model is based on a set of three nonlinear ordinary differential equations and is a greatly simplified version of the Navier-Stokes equations that describe fluid flow. One of the key insights from the Lorenz model is that small differences in initial conditions can lead to vastly different outcomes over time, a behavior known as chaos, which helps in understanding predictability. The fifth step uses synthetic observations based on a known truth, an Observing System Simulation Experiment (OSSE). An OSSE allows more accurate comparisons between analyses and the true state and is a useful tool for evaluating the performance of novel data assimilation setups, since it provides an upper bound on performance; the applicability of OSSE conclusions may, however, be limited by imperfect knowledge of model and observation errors. The forecast skill improvement of assimilation analyses over free model simulations can also be assessed using an OSSE.
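
A short sketch of this kind of idealized testbed, using the standard Lorenz-63 parameters, is shown below: the three Lorenz equations are integrated as a "truth" run, noisy synthetic observations are generated from it in the spirit of an OSSE, and a tiny perturbation of the initial condition illustrates the chaotic divergence described above. The time window, observation interval, and error level are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz (1963) model: three coupled nonlinear ODEs with the classic parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz63(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z]

# "Truth" (nature) run used as the basis of the OSSE.
t_span = (0.0, 20.0)
t_obs = np.arange(0.0, 20.0, 0.25)                 # observation times
truth = solve_ivp(lorenz63, t_span, [1.0, 1.0, 1.0], t_eval=t_obs, rtol=1e-8)

# Synthetic observations: the true state plus Gaussian observation error.
rng = np.random.default_rng(42)
obs_error_std = 1.0
y_obs = truth.y + rng.normal(0.0, obs_error_std, size=truth.y.shape)
print("synthetic observations shape:", y_obs.shape)

# Chaos: a tiny perturbation of the initial condition diverges from the truth.
perturbed = solve_ivp(lorenz63, t_span, [1.0 + 1e-6, 1.0, 1.0],
                      t_eval=t_obs, rtol=1e-8)
divergence = np.linalg.norm(truth.y - perturbed.y, axis=0)
for t_idx in (0, 20, len(t_obs) - 1):
    print(f"t = {t_obs[t_idx]:5.2f}  |truth - perturbed| = {divergence[t_idx]:.4f}")
```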

From the models he extracts the observations, the performance and ensemble diagnostics, the correlation structure, and the forecasting skill. Finally, he concludes by discussing the last step, which is feature-based verification.
