Enterprise Risk Management in Finance

Desheng Dash Wu
and
David L. Olson
College of Business Administration, University of Nebraska, USA
© Desheng Dash Wu and David L. Olson 2015
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2015 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN: 978–1–137–46628–0
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Wu, Desheng Dash.
Enterprise risk management in finance / Desheng Dash Wu, David L. Olson.
pages cm
ISBN 978–1–137–46628–0 (hardback)
1. Risk management. 2. Financial risk management. I. Olson, David L. II. Title.
HD61.W796 2015
332.1068’1—dc23 2014049736
Contents
List of Figures x
Preface xv
Acknowledgements xvii
2 Enron 11
Risk management 11
California electricity 12
Accounting impact 13
SOX 13
Conclusions 14
Northern Rock 26
AIG 29
Risk management 30
Notes 221
References 238
Index 253
The importance of financial risk was revealed by the traumatic events of 2007
and 2008, when the global financial community experienced a real estate
bubble collapse from which (at the time of writing) most of the world’s econ-
omies are still recovering. Human investment activity seems determined to
create bubbles, despite our long history of suffering.1 Financial investment
seems to be a never-ending game of greedy players seeking to take advantage
of each other, which Adam Smith assured us would lead to an optimal eco-
nomic system. It is interesting that we pass through periods of trying one
system, usually persisting until we encounter failure, and then moving on to
another. The United States went through a long stretch where regulation of
financial institutions was considered paramount, beginning with the Great
Depression of the 1930s. When relative prosperity was experienced, the 1980s
saw a resurgence of deregulation, culminating in the Gramm–Leach–Bliley Act of 1999,
which dismantled much of the post-Depression regulation in favor of a free-wheeling
economic system. Some post-2008 theorists have found evidence that this
deregulation went too far. It is notable that Canada, with an economy highly
integrated with that of the United States (but with more consistent regulation),
experienced none of the traumatic real estate issues that plagued the US.
We do not pretend to offer solutions to financial economic problems. We do,
however, purport to offer a variety of analytic models that can be used to aid
financial decision-making. These models are presented in the spirit that they
are tools, which can be used for good or bad. But we do contend that inves-
tigating these tools is important in helping to better understand our global
inter-connected economy, with its financial opportunities and risks. The
responsibility for investment decisions remains with human investors.
This book presents a number of operations research model applications to
financial risk management. It is based on a framework of four perspectives, each
with appended current examples, with separate chapters based on published
models designed to support financial risk management. The four perspectives
used are accounting (explaining the COSO framework in Chapter 1), finance
(reviewing some basic conceptual tools in Chapter 3), economic (risk theory in
Chapter 11), and sustainability (Chapter 16). Current issues related to each of
these perspectives are appended. Chapter 2 supplements the overview intro-
ductory chapter by discussing the ethical risk issues highlighted by the Enron
case. Chapter 4 supplements the financial perspective chapter with a review of
the 2008 real estate crash, both in the US and in Europe. Chapter 12 supple-
ments the economic perspective chapter with a review of the risks associated
with the British Petroleum oil spill in the Gulf of Mexico. Chapter 17 supple-
ments the sustainability chapter with reviews of some natural disaster events.
Given this framework with current examples, the focus of the book is on
quantitative models presented to support risk management in finance. These
models include sentiment analysis, data envelopment analysis, catastrophe
bond modeling, chance constrained optimization, bank credit scoring, multi-
objective credit scoring, and advanced time series modeling. Chapters 5 and
6 present sentiment analysis models, one of investment analysis, the second
of stock price volatility. Chapter 7 presents a data envelopment analysis risk
scoring model of internet stocks. Chapter 8 uses statistical credit scorecard
modeling, while Chapter 9 applies a TOPSIS credit-scoring model supple-
mented with simulation. Chapter 10 utilizes principal components analysis
to make DEA more efficient in analyzing the efficiency of on-line banking.
To supplement the economic perspective, another data envelopment ana-
lysis model is used to assess bank branch efficiency in Chapter 13, whereas
Chapter 14 describes catastrophe bond loss modeling, and Chapter 15 bilevel
mathematical programming in bank merger analysis. The sustainability per-
spective is supported by use of GARCH-type forecasting models of carbon
emissions markets in Chapter 18, while similar tools are used to forecast crude
oil prices in Chapter 19. Chapter 20 provides a summary of how risk management
can be studied using a Confucian three-stage learning approach. Thus
the book provides a variety of tools for assessing different types of financial
risk situations.
Note
1. L. Laeven and F. Valencia (2008) ‘Systemic banking crises: a new database’,
International Monetary Fund, Working Paper WP/08/224.
Acknowledgements
Notes
1. N. Li, X. Liang, X. Li, C. Wang, Desheng D. Wu (2009) ‘Network environment and
financial risk using machine learning and sentiment analysis,’ Human and Ecological
Risk Assessment, 15(2): 227–252.
2. D. Wu, L. Zheng, D.L. Olson (2014) ‘A decision support approach for online stock
forum sentiment analysis,’ IEEE Transactions on Systems Man and Cybernetics, 44(8):
1077–1087.
3. Chien-Ta Bruce Ho, Desheng Dash Wu, David L. Olson (2009) ‘A risk scoring model
and application to measuring internet stock performance,’ International Journal of
Information Technology and Decision Making, 8(1): 133–149.
4. Desheng Dash Wu, David L. Olson (2010) ‘Enterprise risk management: coping
with model risk in a large bank,’ Journal of the Operational Research Society, 61(2):
774–787.
5. D. Wu, D.L. Olson (2006) ‘A TOPSIS data mining demonstration and application to
credit scoring,’ International Journal of Data Warehousing & Mining, 2(3): 1–10.
6. D. Wu, D.D. Wu (2010) ‘Performance evaluation and risk analysis of online banking
service,’ Kybernetes, 39(5): 723–734.
7. D. Wu, Z. Yang, L. Liang (2006) ‘Using DEA-neural network approach to evaluate
branch efficiency of a large Canadian bank,’ Expert Systems with Applications,
108–115.
8. D. Wu, Y. Zhou (2010) ‘Catastrophe bond and risk modeling: a review and cali-
bration using Chinese earthquake loss data,’ Human and Ecological Risk Assessment,
16(3): 510–523.
9. D. Wu, C. Luo, H. Wang, J.R. Birge (2014) ‘Bilevel programming merger evaluation
and application to banking operations,’ Production and Operations Management. DOI:
10.1111/poms.12205. Accepted and in press.
10. X. Chen, Z. Wang, D.D. Wu (2013) ‘Modeling the price mechanism of carbon
emission exchange in the European Union emission trading system,’ Human and
Ecological Risk Assessment, 19(5): 1309–1323.
11. C. Luo, L.A. Seco, H. Wang, D.D. Wu (2010) ‘Risk modeling in crude oil market: a
comparison of Markov switching and GARCH models,’ Kybernetes, 39(5): 750–769.
12. D.D. Wu (2014) ‘An approach for learning risk management: Confucianism system
and risk theory,’ International Journal of Financial Services Management. Accepted and
in press.
1
Enterprise Risk Management
Introduction
Living and working in today’s environment involves many risks. The processes
used to make decisions in this environment should consider the need both
to keep people gainfully employed (through increased economic activity) and
to protect humanity from threats arising from human activity. Terrorism led to
the gas attack on the Japanese subway system in 1995, to 9/11 in 2001, and to
the bombings of the Spanish and British transportation systems in 2004 and
2005 respectively. But nature has been far more deadly, with hurricanes in
Florida, tsunamis in Japan, earthquakes in China, and volcanoes in Iceland.
These locations only represent recent, well-publicized events. Nature can strike
at us anywhere. We need to consider the many risks that exist, and to come up
with strategies, controls, and regulations that accomplish a complex combin-
ation of goals.
Risks can be viewed as threats, but business exists to cope with risks. No one
should expect compensation or profit without taking on some risk. The key
to successful risk management is to select those risks that one is competent to
deal with, and to find some way to avoid, reduce, or insure against those risks
not in this category. Consideration of risk has always been part of business,
manifesting itself in the growth of coffee houses such as Lloyd’s of London
in the 17th century, spreading risk related to cargoes on the high seas. The
field of insurance developed to cover a wide variety of risks, related to external
and internal risks covering natural catastrophes, accidents, human error, and
even fraud. Enterprise risk management (ERM) is a systematic, integrated
approach to managing all risks facing an organization. It focuses on board
supervision, aiming to identify, evaluate, and manage all major corporate risks
in an integrated framework. The board is responsible for providing strategic
input, identifying performance objectives, making key personnel appoint-
ments, and providing management oversight. Enterprise risks are inherently
Definition
Accounting perspective
• Control environment
• Risk assessment
• Control activities
• Information and communication
• Monitoring.
COSO was found to be used to a large extent by only 11% of the organiza-
tions surveyed, and only 15% of the respondents believed that their internal
auditors used the COSO 1992 framework in full. Chief executive officers and
chief financial officers are required to certify effective internal controls. These
controls can be assessed against COSO. This benefits stakeholders. Risk man-
agement is now understood to be a strategic activity, and risk standards can
ensure uniform risk assessment across the organization. Resources are more
likely to be devoted to the most important risks, and better responsiveness to
change is obtained.
Categories
The strategic level involves overarching activities such as organizational gov-
ernance, strategic objectives, business models, consideration of external forces,
and other factors. The operations level is concerned with business processes,
value chains, financial flows, and related issues. Reporting includes information
systems as well as means to communicate organizational performance on mul-
tiple dimensions, to include finance, reputation, and intellectual property.
Compliance considers organizational reporting on legal, contractual, and other
regulatory requirements (including environmental).
Activities
The COSO internal control process consists of a series of actions.5
Risk appetite
Risks are necessary to do business. Every organization can be viewed as a spe-
cialist at dealing with at least one type of risk. Insurance companies specialize in
assessing the market value of risks, and offer policies that transfer special types
of risks to themselves from their clients at a fee. Banks specialize in the risk of
loan repayment, and survive when they are effective at managing these risks.
Construction companies specialize in the risks of making buildings or other
facilities. However, risks come at organizations from every direction. Those risks
that are outside of an organization’s specialty are outside that organization’s
risk appetite. Management needs to assess risks associated with the opportun-
ities presented to it, and accept those that fit its risk appetite (or organizational
expertise), and offload other risks in some way (see step 6 above).
Risk appetite is the amount of risk that an organization is willing to accept
in pursuit of value. Each organization pursues various objectives to add value,
and should broadly understand the risk it is willing to undertake in doing so.6
COSO recommends three steps to determine risk appetite:
while risk probability could have categories of highly probable, probable, likely,
unlikely, or remote.
The likely actions of internal auditing were identified. Those risks involving
high risk and strong controls would call for checking that inherent risks were
in fact mitigated by risk response strategies and controls. Risks involving high
risk and weak controls would call for checking for adequacy of management’s
action plan to improve controls. Those risks assessed as low call for internal
auditing to review accuracy of managerial impact evaluation and risk event
likelihood.
ERM process
COSO provides a great deal of help in describing a process for risk management
implementation.7 This process should be continuous, supporting the organiza-
tion’s strategy. One possible list of steps could be:
Implementation issues
Past risk management efforts have been characterized by bottom-up imple-
mentation. But effective implementation calls for top-down management,
as do most organizational efforts. Without top support, lack of funding will
starve most efforts. Related to that, top support is needed to coordinate efforts
so that silo mentalities do not take over. COSO requires a holistic approach. If
COSO is adopted within daily processes, it can effectively strengthen corporate
governance.
There have been many models applied to managing risks. There are many
published research papers proposing optimization modeling to ERM. We
feel that optimization of supply chain systems involving risk is dangerous,
in that we think there is a fundamental conflict between optimal planning
of complex systems (which seeks to eliminate all excess resources) and the
capability of a system to deal with risk. Think of Chicago’s O’Hare Airport.
Airlines schedule it to the optimum, seeking maximum revenue through
service to the most customers feasible. But anyone who has traveled through
O’Hare knows that the system is saturated – the slightest drop of rain results
in a cascade of cancelled flights throughout the US and Canada. The airlines
have dealt with their risk – they can pay inconvenienced customers the
minimum allowed by regulation. The customers on the other hand travel
through O’Hare at their own risk (at least with respect to on-time arrival at
their destination). Should the airlines back off their optimal schedule, they
would have greater slack capacity to absorb the unexpected, which occurs
whenever the weather in Chicago turns the least bit nasty, as it so often does.
Book outline
focus on the analysis of tradeoffs. Once models are used to examine expected
relationships between causes and effects, risk reduction and management
can be more effective. The usual forms of management of risks tend to be
based upon either financial models, or through frameworks. Risk management
modeling tools demonstrated include GARCH, data mining through neural
networks, bilevel programming, efficiency analysis through DEA, sentiment
analysis, and stochastic simulation. These tools offer advanced techniques
available to aid decision-making under risk.
2
Enron
One of the primary reasons for the current emphasis on ethical business
practice has been the history of the Enron Corporation. Enron was founded
in 1985 to manage a natural gas pipeline. It expanded operations to include
trading not only natural gas but also other energy commodities, including
electricity. This trading included derivatives on the price of gas, to hedge
risk. Enron participated in a joint venture creating an online trading operation
offering options and other derivatives to traders in the gas industry. By 2000 it
was well known as a trading pioneer in that field, and this led to its stock doing
well on Wall Street; in 2001 it was rated as 7th on the Fortune 500.
Risk management
Enron had a strong reputation for its use of sophisticated financial risk man-
agement tools. It was in a business with long-term fixed commitments that
needed hedging to survive the inevitable fluctuations in energy prices. The
sophisticated tools included derivatives and transfer of risk to special entities,
but the problem was that Enron owned these special entities. Thus it really
didn’t transfer risk, but hedged with itself. Enron accounting practices were
creative, hiding losses and shuffling debts through complex trades,1 but such
practice was actually quite common in the deregulated fervor of the 1990s.2
Enron also became involved in trading electricity in California, and was
caught manipulating that market.3 Late in 2001 the Securities and Exchange
Commission began investigating Enron, and Enron stock dropped. Enron filed
for Chapter 11 bankruptcy on December 2, 2001. During this fall, Enron exec-
utives off-loaded stock and gave themselves large compensation packages while
urging lower-level employees to retain their stock. Many high-profile court
trials ensued, with criminal convictions of CEO Kenneth Lay and President
Jeffrey Skilling in 2006. Lay was convicted in May 2006 on five counts of
securities fraud, wire fraud, and conspiracy to commit the same, but he did not
go to prison as he died of a heart attack in July, before sentencing. Skilling was
convicted in May 2006 of 28 counts of securities fraud, wire fraud, conspiracy
to commit same, false statements, and insider trading. He was sentenced to
24 years and 4 months of prison, and ordered to provide $45 million in res-
titution to victims. Skilling entered prison in 2006, with appeals pending; in
June 2013, his sentence was reduced to 14 years in prison.4
California electricity
In 1996, California modified its controls on its energy markets, seeking to
increase competition. A spot market for energy began operation in April 1998.
In May 2000, significant price increases for energy were experienced, due to
shortage of supply. This shortage has been attributed to a cap on retail elec-
tricity prices, along with market manipulation and illegal shutdowns of pipe-
lines by Enron. On June 14, 2000, California endured multiple blackouts on a
large scale. Drought, as well as delays in approval for new energy plants, also
played a role. The fiasco damaged Governor Gray Davis’s political popularity.
In August 2000, San Diego Gas & Electric Company filed a complaint
about electricity market manipulation. January 17 and 18, 2001, saw more
blackouts, and Governor Davis declared a state of emergency. More blackouts
occurred on March 19 and 20. Pacific Gas & Electric filed for bankruptcy in
April, and there were more blackouts on May 7 and 8. Energy prices stabi-
lized in September. In December, following Enron’s bankruptcy, allegations
were made that Enron had manipulated energy prices, and the Federal Energy
Regulatory Commission began investigation in February 2002.
The Wall Street Journal published a series of studies of the Enron case. Reasons
for the fall of Enron included:5
The worst of Enron’s egregious activities included its California energy market
manipulation, which gouged California electricity customers out of billions, as
well as the betrayal of its own employees, who were encouraged to invest their
retirement funds in Enron stock while the executive board sold them out.
Accounting impact
The first manifestation of change after the fall of Enron was the fall of its audit
firm, Arthur Andersen, in June 2002. Andersen was convicted of obstruction of
justice, for shredding documents related to Enron. This barred it from auditing
US and foreign-based public companies, leaving 1085 firms in need of a new
auditor. The Sarbanes–Oxley Act was passed in 2002, imposing additional
auditing requirements. The remaining Big Four audit firms were flooded with
business by these two events, leading them to drop 500 clients between 2002
and 2005 (and also enabling them to raise audit fees).6
SOX
The Sarbanes–Oxley Act of 2002 is a federal law setting standards for US public
company boards. It was enacted in reaction to corporate and accounting scan-
dals to include Enron, Tyco International, Adelphia, Peregrine Systems, and
WorldCom, each of which involved company collapses that led to heavy losses
by investors.
SOX provided reforms to include requirements for certification, criminal
penalties to chief officers of offending corporations, internal controls, inde-
pendent audit committees, and regulations of disclosure. The Sarbanes–Oxley
Act provides for a number of sections displayed in Table 2.1.
Sarbanes–Oxley (SOX) has been heavily studied. On the positive side, evi-
dence of increased transparency of firms under SOX has been reported. The
costs of compliance were found to be 0.043% of revenue in 2006, and 0.036%
of revenue in 2007, with costs much higher for decentralized companies with
multiple divisions.7 Surveys have found that there has been a positive impact
on investor confidence in the reliability of financial statements and in fraud
prevention. However, most survey respondents have seen the benefits to be
less than the cost. Costs incurred are for external auditor fees, insurance for
officers and directors, board compensation, lost productivity, and legal costs.
Reactions of firms seeking to avoid SOX have included going private, delisting from stock
exchanges, and staying small enough to avoid its requirements.8 Smaller inter-
national companies have been found to prefer listing in UK stock exchanges
rather than US stock exchanges, indicating a cost impact on smaller firms.9
Studies have found that those firms that use avoidance tend to be small, less
financially endowed, with weak governance and weak performance. Larger
firms in the US have been found to reduce risk-taking (in terms of capital
expenditure, R&D, and variance in cash flow and returns).10
Conclusions
SOX was not solely driven by the Enron case, but that was probably the most
salient representative of a general business climate in the US in the 1990s,
when the drive to greater profit overrode concern about risk.
Thus SOX provides some structure that probably aids systematic management
of risk within organizations. Primarily, it makes it harder for business directors
to mislead investors, stipulating reporting requirements that more accurately
document a firm’s financial performance, and stipulating greater account-
ability for the board of directors and especially for their audit committee.
Systems such as SOX and the International Organization for Standardization
(ISO) provide greater security and control, often implemented through enter-
prise resource planning (ERP) systems that provide standardized processes and
reduce human errors. There are some problems, because SOX and ISO are essen-
tially additional bureaucracy that imposes rigidity. Overall, however, SOX and
ISO achieve a great deal in the effort to regulate dishonest business behavior.
3
Financial Risk Management
Introduction
Concepts:
The Basel Accords have been instrumental in providing guidance for banking
risk management. The Basel Committee on Bank Supervision sets standards
with the purpose of prudent bank regulation. Basel I was created in 1988 by
regulatory representatives of G-10 countries, along with central bank input.
Basel I set minimum capital requirements, grouping bank assets into categories
of credit risk with differential liquid asset holdings specified for various levels
of risk. Basel II was published in 2004, aimed at providing international stand-
ards for banking regulations and mitigating the risk of cascading bank
failures arising from strong cross-relationships among banks. In addition to
minimum capital requirements, Basel II provided for supervisory review and
market discipline. After the banking crisis of 2008, Basel III was published in
2010–2011, increasing liquidity requirements and decreasing bank leverage.
The traditional financial risk management approach is based on the
mean-variance framework of portfolio theory, that is, selection and diversi-
fication.2 Over the past 20 years, the field of financial risk management has
grown rapidly. It is widely
recognized in finance that risk can be understood as two types: systematic
(non-diversifiable) risk, which is positively correlated with the rate of return,
and unsystematic (diversifiable) risk, which can be diversified by increasing
the number of securities invested. Building on portfolio theory, the Capital
Asset Pricing Model (CAPM) was developed to price risky assets on perfect
capital markets.3 Derivative markets grew tremendously with the recognition
of option pricing theory.4 Value-at-Risk models have been popular,5 partly in
response to Basel II banking guidelines. Other analytic tools include simu-
lation of internal risk rating systems using past data.
Economic risk management is grounded in expected utility theory, which
derives people’s attitudes toward different sizes of risk – small,
medium, large – from the utility-of-wealth function.6 People
are risk averse when the size of the risk is large, and risk neutral when the scale
of risk is small.7 Decision-making behaviour when faced with lotteries and
other gambles motivates most of the studies on risk issues. The complexity
presented by derivatives, to include CDOs and CDSs, is complicated even more
by the use of high-speed computer trading.
Risks are traditionally defined as the combination of probability and severity,
but are actually characterized by additional factors. Markowitz (1952) equated
risk with variance, which would be controlled by diversification, considering
correlation across investments available, and focused on efficient portfolios
non-dominated with respect to risk and return. This leads to the need for some
calculus of preferences, such as multi-attribute utility theory. Financial risk
management has developed additional tools such as value at risk. Risk character-
istics include uncertainties, dynamics, dependence, clusters, and complexities,
which motivate the utilization of various operational research tools. Risks can
Investment collars
The idea of a collar model is an option strategy using puts and calls simultan-
eously to manage investment risk. A put option is purchased by the investor
gaining the right to sell the underlying equity shares at a stated exercise price.
This provides downside protection. To offset the cost of the put, the investor
sells a call option, where underlying shares are sold at another exercise price.
The collar refers to the range set by the put and call options. Collars used in
this manner are intended to provide an alternative to diversification for risk
management, given the difficulties experienced in 2008 with correlation across
investments intended to be diversified. The benefit of the collar is to limit
downside risk. The cost is that upside benefit is capped (as well as the investor
being out the cost of obtaining put and call options). There is additional risk of
counterparty default on the put side. Collars have also been applied in the case
of adjustable rate mortgages with respect to interest rates.
VaR
Value at risk (VaR) is one of the most widely used models in risk management
(Gordon, 2009). It is based on probability and statistics (Jorion, 1997). VaR
can be characterized as a maximum expected loss, given a time horizon and
within a given confidence interval. Its utility is in providing a measure of risk
that illustrates the risk inherent in a portfolio with multiple risk factors, such
as portfolios held by large banks, which are diversified across many risk factors
and product types. VaR is used to estimate the boundaries of risk for a portfolio
over a given time period, for an assumed probability distribution of market
performance. The purpose is to diagnose risk exposure.
Definition
Value at risk describes the probability distribution for the value (earnings or
losses) of an investment (firm, portfolio, etc.). The mean is a point estimate of
a statistic showing a historical central tendency. Value at risk is also a point
estimate, but offset from the mean.

[Figure 3.1: probability density function of loss, marking the mean, VaR, CVaR, the 1−α confidence level, and the maximum loss]

It requires specification of a given prob-
ability level, and then provides the point estimate of the return or better
expected to occur at the prescribed probability. For instance, Figure 3.1 gives
the normal distribution for a statistic with a mean of 10 and a standard devi-
ation of 4 (Crystal Ball was used, with 10,000 replications).
However, value at risk has undesirable properties, especially for gain and loss
data with non-elliptical distributions. It satisfies the well accepted principle of
diversification under normal distribution; however, it violates the fairly well
accepted subadditivity rule; that is, the portfolio VaR may exceed the
sum of the component VaRs. The reason is that VaR only considers the extreme
percentile of a gain/loss distribution without considering the magnitude of
the loss. As a consequence, a variant of VaR, usually labeled Conditional-Value-
at-Risk (or CVaR), has been used. Computationally, optimization of CVaR
can be very simple, which is another reason for its adoption. This pioneering
work was initiated by Rockafellar and Uryasev (2002), where CVaR constraints
in optimization problems can be formulated as linear constraints. CVaR repre-
sents a weighted average between the value at risk and losses exceeding the
value at risk. CVaR is a risk assessment approach used to reduce the probability
that a portfolio will incur large losses assuming a specified confidence level.
It is possible to maximize portfolio return subject to constraints including
Conditional Value-at-Risk (CVaR) and other downside risk measures, both
absolute and relative to a benchmark (market and liability-based). Simulation
CVaR-based optimization models can be developed.
Value-at-risk (VaR) has become a popular risk management tool. It is often
used to measure the risk of loss on a specific portfolio of financial assets, in
terms of the probability of losing a specified percentage of the portfolio in
mark-to-market value (current market price) over a certain time. If a given
portfolio has a daily 1% VaR of $100,000, there is a 0.01 probability that the
portfolio will lose more than $100,000 in a given day; that is, a loss of at
least $100,000 is expected on 1 day
out of 100. Banks commonly report VaR by type of market factors, such as cur-
rency rates, commodity prices, interest rates, equity prices, etc. The financial
industry has grown to view VaR as a metric indicating relative change in risk
for a given investment. In fact, before 2008, investment banks were known to
have guided their people to seek investments with higher VaR in the expect-
ation that they would yield higher returns, relying on the portfolio aspects of
CDOs to manage the risk.8
There are two broad uses of VaR. The short-range use is for risk management.
If the organization’s portfolio measured in mark-to-market terms falls below the
VaR at the end of the time horizon, assets are sold off to gain enough cash to
cover the deficiency. But there is an alternative use in risk measurement, taking
a longer-term view. Here, the aim is to measure the fluctuation in VaR and use it
as an indicator of trends in relative risk for the firm or investment department.
There are three basic ways to compute value-at-risk. The statistical approach
assumes a distribution (normal commonly) which can use the variance-
covariance of the investments in a given portfolio. We will demonstrate this
calculation in the simple case of one investment (avoiding the more complex
formulation involving covariance). Historical simulation looks at past data and
simply sorts outcomes, selecting the probability level desired; if you want a
0.99 probability assurance of not exceeding a VaR, use the observation that
is the 0.01 lowest. The third method is Monte Carlo simulation, which is the
most flexible (you can model any assumption you want, to include select distri-
butions you prefer, and can complicate the model with external probabilities
such as the probability of catastrophe). However, Monte Carlo is also the most
involved, and provides an estimated solution rather than a precisely calculated
outcome.
Even the simplest of these methods, using variance calculations, involves
some complications in the details. First, the definition’s strict sense is the
measure of the probability of the worst case falling below VaR over a given
time horizon. This turns out to be fairly difficult to compute. Thus a redefin-
ition of the end-of-period value, to below VaR, is often used. Another compli-
cation is the time horizon. One day is the most common, in which case the
time horizon isn’t so important; however, there are cases for longer time hori-
zons. Yet another complication is that the distribution of outcomes for positive
events (favorable) might differ from that of negative events (losses). This in fact
would be expected if outcomes were lognormally distributed. If the normal
distribution were used with different distributions for positive and negative
outcomes, daily price changes could be separated into positive and negative
groups, and each group analyzed separately. Since VaR is worried about losses,
the negative group would be of interest.
Copulas
The copula approach assumed many things, including stable correlation across assets over time. But in the housing
mortgage industry around 2007–2008, the flaw in this logic was seriously
exposed. Furthermore, Salmon (2009)11 pointed out that mortgage pools have
greater volatility than most bonds, as there is no guaranteed interest rate, in
part because mortgage borrowers do things like being late on payments, quit-
ting making payments, or conversely, paying loans off early. The pooling idea
was expected to be safe, first because few living people remembered house
prices doing anything but going up, second because housing markets across a
country (or across the world) were expected to be independent enough that a
down period in one area would be offset by gains in other areas, third because
defaults were rare and spread out over time. However, 2007 and 2008 saw wide-
spread decline in highly populated areas of the US, generating high degrees of
correlation.
Tranches
Conclusions
The overall lesson seems to be that there is no free lunch. It doesn’t make sense
to speculate on things that appear too good to be true. If too many people
do that, the system will inevitably catch up with the herd involved in that
irrational behavior. An accompanying lesson, unfortunately, is that those agile
enough, like some initiating lenders and investment banks, can rip off many
investors before the herd sees the light.
4
The Real Estate Crash of 2008
Introduction
The current population of the United States has grown up with what seemed to
be a steady and reliable increase in value of homes. Owning one’s own home
is one of the signs of a prosperous and successful culture. In the 1930s, many
people lost title to their homes during one of the greatest failures of human
economic systems known. In response, banks and mortgage lending were
regulated, bank deposits insured up to a level covering what most people had,
and stock-trading practices ostensibly controlled. True, very old people could
remember a time when the price of housing dropped, but with the inevitable
passage of life with time, this group grew smaller and smaller, and older and
older, and less relevant. There also were anomalies in local areas where, for
whatever reason, home prices might negatively fluctuate. Reinhart and Rogoff
have described five such anomalies, all associated with banking crises (Spain
in 1977, Norway in 1987, Finland and Sweden in 1991, and Japan in 1992).1 But
house price decline occurred very rarely, and nobody really noticed.
There also was a strong feeling that less regulation was always better. Ronald
Reagan may have been a Democrat in his younger years, but he became the
champion of the conservative class. He rode this popularity to the presidency,
which was characterized by an emphasis on standing up to aircraft control-
lers and Sandinistas, along with a conservative preference for less government
interference. Even when a Democratic president appeared, such as Bill Clinton,
his economic regulation (whether due to preference or poll-counting) did not
vary that much from conservatives. Many of the regulations imposed during
the 1930s were overturned in an effort to let the market run free, which was
expected to lead to a golden age of prosperity.
Financial research developed many tools that became popular during this
period. The Efficient Market Hypothesis held that asset prices are always and
everywhere at the correct price, a view that fits well with the belief that markets need little regulation.
These are only four of many economic crashes, all preceded by excessive rapid
growth (bubbles).4 There are many studies of these bubbles. Some bubbles
consist of expansion, followed by rising prices, overtrading, and mass par-
ticipation, followed by an event triggering doubt, a subsequent selling flood,
and ultimate collapse. The 1990s saw the crash of LTCM, demonstrating that
theoretical models do NOT cover everything, that every model leaves some-
thing out, and that life is more complex than anyone understands. Trade in
technical stocks made the NASDAQ highly popular, but it also demonstrated
a bubble, crashing the confidence of the brave new world of computer techs.
Yet investors still held great confidence in certain sectors of the economy, such
as real estate, which was as safe as houses. Risk managers created new tools to
make investment safer. Derivatives5 are securities or contracts deriving value
from an underlying natural security, such as a stock, a bond, or a mortgage.
While derivatives provide some security through diversification, their primary
attraction is that they enable high degrees of leverage.
The crisis is generally dated from the collapse of the US subprime
residential mortgage market in 2007, which spread throughout the world due to
exposure to US real estate assets through financial derivatives.6 Causes could be
the growth in asset securitization, US government initiatives to expand home
ownership (thus encouraging fewer loan restrictions), expansionary monetary
policy, and weaker regulatory oversight.7 The real estate price boom was furthered
by financial institution exploitation of loopholes in capital regulation, allowing
them to significantly increase leverage while remaining within required capital-
ization. Mortgage derivatives allowed investment in riskier and non-liquid assets
funded in wholesale markets, without sufficient capital backing. This, along
with high dependence on a short-term view and lax regulatory oversight, has
been credited with inducing the collapse of the bubble in 2008.8 Distress first
appeared in 2007 with losses by US subprime loan originators and those holding
derivatives based upon such mortgages. In late 2007, losses by Northern Rock, a
UK mortgage lender, indicated that the bust was going global.
Wiseman (2013)9 outlined four stages in a real estate cycle, with corre-
sponding activities (see Table 4.1).
Mortgage system
With deregulation, the mortgage industry became more specialized, with a
number of organizations playing a role in the overall system. Lenders such
as Green Tree Finance led the charge to issue as many mortgages as possible
(mobile homes in the case of Green Tree; Ameriquest, Countrywide, Golden
West and others for conventional homes).10 These mortgages were sold to banks
and other investment agencies, so the mortgage initiators had little concern
other than generating lots of mortgages and making a living out of the fees.
In fact, many home owners saw the inevitable rise in home value to be an
opportunity to make a financial killing by leveraging as many home loans as
they could, making purchases for speculation rather than for residence. The
purchasers of these mortgages often combined various tranches of different
levels of perceived risk, with higher interest rate mortgages associated with
higher probability of loan failure. This evolved into the convoluted logic
that marginal borrowers, who had no choice but to accept the highest interest rates, were
preferred customers. These instruments were sold to investors. Meanwhile,
these banks often covered their risk in innovative ways. A credit default swap
(CDS) is a credit derivative similar in concept to insurance; should the under-
lying asset fail, the purchaser receives payment. A collateralized mortgage obli-
gation (CMO) is a certificate built from tranches of mortgage-backed securities.
Thus it is a tranched instrument of an instrument that is already tranched. A
collateralized debt obligation (CDO) is similar, but can be based on any kind of
debt, not just mortgages.11
Northern Rock
Table 4.4 Northern Rock holdings before and after run (millions of pounds)
AIG
Table 4.5 Events related to the 2008 real estate crisis

February 11, 2008: AIG announced a write-down of $4.88 billion in CDSs.
September 15, 2008: AIG was reported to be seeking $40 billion in capital to avoid downgrading by credit rating firms.
September 17, 2008: The Fed authorized a loan of $85 billion to AIG, giving the government 79.9% equity in AIG and veto power over dividends, to be repaid in 24 months.
October 9, 2008: The Fed authorized a bailout package of another $37.8 billion in securities in exchange for cash collateral.
The swaps were marked to market price nightly. The investment banks were buying insurance that the
CDS would not fall below a certain value, and a check every night made it more
likely that AIG would have to pay off the swap.17 Events related to the 2008 real
estate crisis are shown in Table 4.5.
While AIG made a lot of money issuing CDSs before 2008, investment banks
took the opportunity to purchase many CDSs that paid off in 2008. In fact,
they purchased more CDSs than the value of the underlying mortgage assets.
By September 16, 2008, AIG was in severe difficulty, its stock down to $3.75
(it had been $63.44 a year earlier).18 The failure of AIG has been attributed to
high-leverage trading, just as with large banks such as Lehman Brothers and
Bear Stearns.19 Other problems cited were lack of transparency with respect
to the risk of CDSs and CDOs; adverse selection, in that investment banks
knew more about the risks associated with the coverage they purchased from
AIG than AIG did; and the high magnitude of unhedged CDOs held by AIG
($562 billion). The issue with unhedged CDOs was complicated in that the con-
ventional expectation is that they are naturally diversified, but the mortgage
markets upon which they were based turned out to have a highly correlated
downward trend.
Risk management
The Northern Rock case demonstrates that there is more to risk management
than simply dealing with the hedging aspects of financial instruments. Linsley
and Slack focused on the ethical criticisms of Northern Rock.20 After becoming
a publicly traded bank, with the ability to access money market (wholesale)
financing, it found itself financing long-term mortgages with short-term
funding, which created a technical financial issue, as its cash flow was subject
to the variations in short-term financing. In the summer of 2007, the sub-prime
mortgage issues led to less liquidity on the wholesale money markets, placing
Northern Rock under great stress. Linsley and Slack looked at Northern
Rock’s press releases over the period 2005–2008, finding that it emphasized
robustness and strength in the early period, based on its claimed performance
and growth, the strength of the housing market and economy, a sound strategy
and business model, and its ability to manage lending risk. There was no ref-
erence to stakeholder relationships, and Linsley and Slack detected no evidence
of Northern Rock caring about customer welfare. Northern Rock did benefit
favorably from its previous mutual status, its local nature, and The Northern
Rock Foundation, which provided a charitable presence. The crisis led stake-
holders to adjust their view of Northern Rock. Northern Rock’s communica-
tions implied that the crisis was an external problem that would disappear with
time. The press release announcing that Her Majesty’s Treasury had guaranteed
a portion of deposits appears to have had little acknowledgement of depositor
concerns, and to have underestimated the negative impact. The passive nature
of Northern Rock’s response was blamed for the failure of the protection
announcement to reassure depositors, leading to the run on the bank.
Clearly, bank risk management involves more than monitoring
VaR, and even more than hedging. Banks by their very nature depend upon
customer confidence. The Northern Rock case demonstrates the negative
impact of not paying attention to developing and maintaining good relation-
ships with its depositors.
5
Financial Risk Forecast Using Machine
Learning and Sentiment Analysis
Introduction
Fluctuation in trading volume, just like that of stock price, vividly reflects
market behavior. We investigate associations between trading volume volatility
and online information volume, with the intent of forecasting the former.
Online financial information volume has been assumed to be an important
element affecting the financial trading volume volatility. We forecast vola-
tility, relying partly on online information, using both an ANN-based and
an SVM-based approach. We compare these forecasting models, to observe
their performances. The basic architecture of this approach is displayed in
Figure 5.1.
[Figure 5.1 here: financial news volumes from Google Finance and financial trading volumes from Yahoo Finance feed GARCH-ANN and GARCH-SVM approaches; each generates rules and a forecast of trading volume changing rates and corresponding volatilities, followed by a comparative study of the forecast results.]
Figure 5.1 Flow chart and functional parts of our approach associating information
volume and volatility
HowNet is used to calculate the sentiment of news pieces through its set of
keywords. Each news piece is decomposed and converted into a keyword array
in the same sequence as the words appear in the article, each word being
assigned a specific sentiment value based on the HowNet word corpus. The
overall sentiment for the whole article is acquired by combining the sentiment
values of all of its keywords. After the sentiment time series is obtained, it is fed
into the machine-learning system (SVM in particular) as one of the exogenous
inputs. By assigning sentiment as one element of the feature vector for a listed
company, non-linear correlation between online news sentiment and financial
volatility can be quantitatively analyzed, which might eventually lead to more
effective and efficient volatility forecasting. The basic architecture of this
approach can be seen in Figure 5.2.
Financial news used in this phase of the study was acquired from a variety
of online sources, and experiments were carried out on a huge body of com-
panies listed on US stock markets. Aggregated statistical results are the key tool
to substantiate nonlinear correlation between the two entities. Table 5.2 is a
snippet of the financial news entries with their calculated sentiment values; in
the time window, 2007–1 indicates that this is a news entry in the first week
of 2007.
[Figure 5.2 here: webpages crawled from the internet pass through data preprocessing (cleaning, parsing, segmenting, stemming) to produce a sentiment time series; together with the asset price volatility/trading volume volatility time series from historical quotes, this feeds the GARCH-SVM approach, yielding a predicted time series and a prediction error.]
Figure 5.2 Flow chart and functional parts of our approach to associate information
sentiment and volatility
$$y_t = \mu_t + \varepsilon_t, \qquad (5.1)$$

$$\varepsilon_t \mid \psi_{t-1} \sim N(0, \sigma_t^2), \qquad (5.2)$$

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \sigma_{t-i}^2 + \sum_{j=1}^{q} \beta_j \varepsilon_{t-j}^2, \qquad (\alpha_0 > 0;\ \alpha_i, \beta_j \ge 0) \qquad (5.3)$$
Table 5.2 A snippet of news entries for the companies ADCT, S and MRO

ID 2007010102057465 | Title: An Insecure Future for McAfee | Time window: 2007–1 | Company: ADCT | Sentiment: 45.4
“Perhaps its fatigue with the options scandal that has now spread to more than 100 technology companies. Perhaps its ...”

ID 2007010202063354 | Title: Stocks end 2006 with best gains in three years | Time window: 2007–1 | Company: S | Sentiment: 17.8
“NEW YORK (MarketWatch) – U.S. stocks finished the year with strong gains Friday, with all three major stock averages booking their best performance since 2003. The Dow Jones Industrial Average ($INDU: $INDUNews) ...”

ID 2007050203261590 | Title: Marathon Oil’s lower earnings top forecasts | Time window: 2007–18 | Company: MRO | Sentiment: −0.8
“SAN FRANCISCO (MarketWatch) – Marathon Oil Corp. reported Tuesday a drop in first-quarter earnings, clipped by lower oil and gas prices and a decline in production. For the three months ended March 31, Marathon (MRO: MRONews) ...”

ID 2007060203595695 | Title: Still looking good | Time window: 2007–22 | Company: ADCT | Sentiment: 11.8
“ANNANDALE, Va. (MarketWatch) – It’s been a little bit over two months since the triggering of a rare, and historically very bullish, technical signal. (Read my March 22 column.) Can we count on the bullish winds of that signal blowing into the ...”
In equations (5.1)–(5.3) above, the daily return yt is the sum of the deterministic mean return μt and a sto-
chastic term εt, also known as the shock, forecast error, residual, innovation,
etc.,3 ψt−1 represents the information set available at time t, and σt2 is the time-
varying variance of both yt and εt. In our approach, we substitute yt with the
daily changing rate of either the trading volume or the asset price.
The GARCH model uses εt as a function of those exogenous inputs, which
have some effect on financial volatility. The GARCH model bases its condi-
tional distribution on the information set available at time t. Freisleben and
Ripper 4 point out that the parameter βi in Equation (5.3) describes the stock
return’s immediate reaction to new events in the market, mostly in the form
of financial news. Meanwhile, the fast development of the internet enables us
to acquire the online financial information in a real-time, exhaustive fashion.
Considering these factors, designating financial information volume as one
variate of εt is justifiable.
Therefore we formalize εt using the following equations:

$$\varepsilon_t = y_t - \zeta, \qquad (5.4)$$

$$y_t = \mu_t + \varepsilon_t, \qquad (5.6)$$

$$\varepsilon_t \mid \psi_{t-1} \sim N(0, \sigma_t^2), \qquad (5.7)$$

$$\sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \chi_{t-i}(\sigma_{t-i}^2) + \sum_{j=1}^{q} \beta_j \varphi_{t-j}(y_{t-j}^2) + \sum_{k=1}^{r} \gamma_k \phi_{t-k}(W_{t-k}^2), \qquad (5.8)$$

where p, q, r represent the three time lags, and the three unknown functions
χ_{t−i}, φ_{t−j}, and ϕ_{t−k} represent the undetermined nonlinear correlations.
Financial time series exhibit specific features that make a GARCH model a
preferable alternative. We assume that the volatility of financial trading volume
shares similar characteristics to that of stock price in exhibiting GARCH effects.
Figure 5.3 demonstrates that the daily changing rates of the trading volumes of
NASDAQ index within the period October 11, 1984–October 16, 2006 exhibit
volatility clustering. A kurtosis value of 11.53 also implies that there is an
underlying fat tail effect. We have discovered obvious GARCH effects exhib-
ited by online financial information volume time series in tests conducted on
more than 100 stocks in the US stock markets.
[Figure 5.3 here: time series plot of daily changing rates, ranging roughly between −1 and 0.5, over about 5,500 trading days]
Figure 5.3 Daily changing rates of the trading volumes of NASDAQ index, October 11,
1984–October 16, 2006
$$y_t = \ln \frac{v_t}{v_{t-1}}. \qquad (5.9)$$

If D is defined as the width of the calculating window, the volatility σ_t^2 can be
calculated by computing the variance of the yt within the (t − D + 1)th and the
tth day as

$$\sigma_t^2 = \frac{\sum_{i=0}^{D-1} (y_{t-i} - \bar{y}_t)^2}{D - 1}, \qquad (5.10)$$

where

$$\bar{y}_t = \frac{\sum_{i=0}^{D-1} y_{t-i}}{D}. \qquad (5.11)$$
Table 5.3 The eight word sets we use in this chapter to calculate the keyword sentiment
POSITIVE A list of English words that have positive emotional polarity, which
includes a set of 4363 words.
NEGATIVE A list of English words that have negative emotional polarity, which
includes a set of 4574 words.
PRIVATIVE A list of privative English words, which includes 14 words.
{no, not, none, neither, never, hardly, seldom, barely, scarcely, ain’t,
aren’t, isn’t, hasn’t, haven’t}
The following five sets are modifiers, whose intensities decrease as i increases.
MODIFIER1 64 modifier words, with WEIGHT1 = 2.
MODIFIER2 25 modifier words, with WEIGHT2 = 1.8.
MODIFIER3 22 modifier words, with WEIGHT3 = 1.6.
MODIFIER4 15 modifier words, with WEIGHT4 = 1.4.
MODIFIER5 11 modifier words, with WEIGHT5 = 0.8.
[Flow chart here: if keyword w is neither a positive nor a negative word, its sentiment value v is output as 0; otherwise v takes the word’s sentiment value, is multiplied by WEIGHTi when a MODIFIERi word applies, and is then output.]
[Figure here: the sequence of time windows W1 … Wi−1, Wi, Wi+1 … WT, with one window used for training and the next for forecasting]
$$I = \begin{bmatrix} \sigma_{i-1}^{p2}(1) & \bar{y}_{i-1}^{p}(1) & S_{i-1}(1) \\ \sigma_{i-1}^{p2}(2) & \bar{y}_{i-1}^{p}(2) & S_{i-1}(2) \\ \vdots & \vdots & \vdots \\ \sigma_{i-1}^{p2}(M) & \bar{y}_{i-1}^{p}(M) & S_{i-1}(M) \end{bmatrix}$$

and

$$O = \begin{bmatrix} \sigma_i^{p2}(1) \\ \sigma_i^{p2}(2) \\ \vdots \\ \sigma_i^{p2}(M) \end{bmatrix},$$
where σ_{i−1}^{p2}(k) (k = 1, 2, 3, ... M) is the asset price volatility within time window
Wi−1 for company k, ȳ_{i−1}^{p}(k) represents the average daily changing rate
of the asset price of company k in Wi−1, and S_{i−1}(k) is the sum of the sen-
timent values for all the news entries relating to company k within the time
window Wi−1.
Accordingly, denote by <I′, O′> the forecasting input and output matrix tuple
for Wi, and we have
$$I' = \begin{bmatrix} \sigma_i^{p2}(1) & \bar{y}_i^{p}(1) & S_i(1) \\ \sigma_i^{p2}(2) & \bar{y}_i^{p}(2) & S_i(2) \\ \vdots & \vdots & \vdots \\ \sigma_i^{p2}(M) & \bar{y}_i^{p}(M) & S_i(M) \end{bmatrix}$$

and

$$O' = \begin{bmatrix} \sigma_{i+1}^{p2}(1) \\ \sigma_{i+1}^{p2}(2) \\ \vdots \\ \sigma_{i+1}^{p2}(M) \end{bmatrix}.$$
The empirical studies are roughly composed of two parts. In our first
experiment, we utilize both GARCH-based ANN and SVM to study the corre-
lations between financial information volume and trading volume volatility,
and conduct a comparative study of different machine-learning techniques in
financial volatility forecasting. The daily volatility model serves as the primary
approach to calculating volatility. The data sets used in this step are limited
to the trading quote and news data for two indices and two listed companies
in the US stock markets. The time horizon spans three months. In the second
experiment, we apply the GARCH-based SVM model to data from all of 2007
for 177 listed companies in the US markets, but in this experiment a time-
window-based volatility model is used instead. For both experiments, historical
financial quotation data is downloaded and formatted from Yahoo Finance
(http://www.finance.yahoo.com).
Table 5.4 Predicted values of the average forecast error and the volatility trend forecast accuracy ratio
Two major performance metrics are introduced in this experiment
to evaluate the aggregated forecast performance for all the 177 companies:
squared correlation coefficient (SCC) and volatility trend forecast accuracy
(VTFA). SCC and VTFA are computed based on the forecasting values for each
time window.
The squared correlation coefficient evaluates the correlation of all the explanatory variables with the response variable; the closer this value is to 1, the better the regression result.
The volatility trend forecast accuracy is the proportion of companies with an
accurately predicted volatility trend among all the companies. The definition
of volatility trend is consistent with the first experiment.
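The two metrics can be computed along the following lines; this is a minimal sketch of our reading of the definitions, and the authors' exact implementation may differ in detail.

import numpy as np

def scc(actual, predicted):
    """Squared correlation coefficient between actual and predicted values."""
    r = np.corrcoef(actual, predicted)[0, 1]
    return r ** 2

def vtfa(prev, actual, predicted):
    """Share of companies whose predicted volatility trend (up/down versus
    the previous window) matches the actual trend."""
    return np.mean(np.sign(actual - prev) == np.sign(predicted - prev))

prev = np.array([0.9, 1.1, 0.7, 1.3])
actual = np.array([1.0, 1.0, 0.8, 1.2])
pred = np.array([1.1, 1.2, 0.9, 1.1])
print(scc(actual, pred), vtfa(prev, actual, pred))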
Table 5.5 presents the forecast results of asset price volatility for the 177 companies using information sentiment during 2007. Note that the penalty parameter c and the RBF kernel parameter g are set as c = 64 and g = 1/3.
Table 5.5 Forecast results for 177 listed companies during the year 2007
Figure 5.6 Price volatility forecast result for company MDT over all the time windows (weekly time windows Week 3 to Week 51; vertical axis: price volatility)
Figure 5.7 Price volatility forecast result for company WAG over all the time windows (weekly time windows Week 3 to Week 51; vertical axis: price volatility)
Figure 5.8 Price volatility trend forecast accuracies for all the time windows (vertical axis: proportion of companies with an accurately predicted price volatility trend)
An SCC of 70% was achieved for both price volatility and trading volume volatility forecasting, giving convincing evidence of the correlations between these factors.
This experiment extends existing studies to a large set of companies. Empirical results are reflected in aggregated statistics, indicating the effects of information on entire stock markets. The results of this phase, although focused only on US markets, provide a vivid description of the macro influences of financial news on financial volatility. This is of critical value to strategic decision-making by financial practitioners who seek a panoramic picture of the general market.
Conclusions
6
Online Stock Forum Sentiment Analysis
Introduction
Over the past few decades, behavioral finance and risk management have attracted a great deal of attention from both researchers and practitioners seeking to explain investor sentiment behavior, risk and loss perceptions, factors affecting investment strategy, and investor behavior related to ongoing market trends.1
A great deal of unstructured data and information about public sentiment and opinion on market fluctuations is posted on the internet today. This includes, for example, discussion forums, blogs, and message boards, as well as Facebook and Twitter, major sources of big data. Investors' opinions and sentiments greatly impact market volatility.2
This chapter develops a sentiment ontology for conducting context-sen-
sitive sentiment analysis of online opinion posts in stock markets by inte-
grating popular sentiment analysis into machine-learning approaches based
on support vector machine (SVM) and generalized autoregressive conditional
heteroskedasticity (GARCH)3 modeling.
Figure 6.1 presents the conceptual flow chart for the methodology we use to
conduct context-sensitive sentiment analysis of online opinion posts based on
multiple sources of data. We first manually label sentiment polarity of a subset
of postings. Then using sentiment analysis and these labeled postings, we
identify features from written stock forum text to automatically predict sen-
timent polarity of other postings. We then aggregate postings for each stock on
a daily basis. Thereafter we use an aggregated sentiment index and SVM clas-
sifiers to build GARCH-SVM4 models to predict future stock price volatility. As
can be seen from Figure 6.1, two sources of data are collected and incorporated into the model: sentiment-related data and historical financial time series.
[Figure 6.1: conceptual flow chart. Multi-source data from the internet passes through data preprocessing and segmenting, then dynamic training and forecasting, which feed simulation, prediction, and trading statements.]
Sentiment analysis
In market prediction, sentiment analysis technology is employed to automat-
ically classify unstructured reviews as positive or negative, and then identify
investor sentiment as either ‘bullish’ or ‘bearish.’ We consider two approaches
for sentiment analysis: the machine-learning-based approach and the lexicon-
based approach.5
In the machine-learning-based approach, an n-gram model is necessary for sentiment classification. The n-gram model takes characters (letters, spaces, or symbols) as the basic units. The advantages of the n-gram method are: (1) it is language-independent, and can be applied to texts in English, traditional Chinese and simplified Chinese; (2) linguistic processing, word segmentation and part-of-speech tagging of the text are unnecessary; (3) it is tolerant of spelling mistakes, and requires little prior knowledge of the text; (4) dictionaries and rules are unnecessary.
If the n-gram model is selected, classification accuracy declines as the order increases, i.e. 1-grams > 2-grams > 3-grams. We therefore first selected 1-grams as potential features from the training set, and then manually adjusted the characters according to the following principle: when selecting 1-grams, there will be many punctuation marks (commas and periods) that contribute little to classification; to improve classification accuracy, those punctuation marks should be eliminated manually. Table 6.1 demonstrates a simple example.
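A small illustration of character-level 1-gram extraction with the punctuation filtering just described, using scikit-learn's CountVectorizer; the two sample reviews are invented.

from sklearn.feature_extraction.text import CountVectorizer
import string

reviews = ["Strong rally ahead!", "Sell now, market is weak."]
vec = CountVectorizer(analyzer="char", ngram_range=(1, 1), lowercase=True)
X = vec.fit_transform(reviews)

# Manually drop punctuation 1-grams that contribute little to classification.
keep = [i for i, g in enumerate(vec.get_feature_names_out())
        if g not in string.punctuation]
X_filtered = X[:, keep]
print(X_filtered.shape)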
Figure 6.2 demonstrates the basic process of the lexicon approach, which
includes pretreatment, word segmentation, POS (part of speech) tagging,
polarity (‘bullish’ or ‘bearish’) tagging, combining and results output.
[Table 6.1: example file and the 1-grams extracted from it.]
Data
Both stock forum data and financial time series data are used in this study.
The stock forum data utilized a corpus of stock reviews taken from a well-
known finance website, Sina Finance.
• Stocks were chosen to focus on the military sector, because their stock forum
showed a wide range of activity.
• For a period of four months, from July to November 2012, we downloaded
every message posted to these forums. We then employed our voting algo-
rithm with the larger training set to assess each review and determine its
sentiment.
Figure 6.3 shows that 96.82% of the total reviews occurred during the working
day, indicating high volume during trading days, and low activity during
weekends and holidays.
Figure 6.4 shows that most reviews occurred at 9 am, 10 am, and 4 pm, indi-
cating that investors prefer to communicate with each other at the opening and
closing stages of the stock market. At the same time, many reviews occurred at
7 pm, during non-working hours. This differs from the American stock market.
We analyzed the content of the reviews and found that forecast reviews would
always be updated after 3 pm. One possible reason is that the stock market
in China closes at 3 pm, and investors will then make predictions about the
future stock market.
We chose the reviews updated after 3 pm on the trading day to create a
corpus of online stock reviews on the whole stock market, from July 15, 2012
to November 15, 2012. This corpus was divided into two sets: a training corpus
and a forecast corpus. We first manually labeled sentiment polarity (‘bullish’ or
‘bearish’) in the training corpus. Then, using sentiment analysis and the labeled
training reviews, we identified features from the written text of the stock forum
to automatically predict the sentiment polarity of the other reviews. When cre-
ating the forecast corpus, the number and date of the reviews were recorded.
[Figure 6.3: number of reviews by day of the week (days 1 to 7).]
[Figure 6.4: number of reviews by hour of the day (hours 1 to 23).]
Methodology comparison
Classification performance for the machine-learning approach and the
lexicon approach is compared. The resulting computation shows that
the statistical machine learning approach has a classification accuracy of
81.82%, higher than that of the semantic approach, which has a classifi-
cation accuracy of 75.58%. Classification accuracy is the degree of closeness of the computed results to the actual (true) values; the difference between the two approaches is statistically significant at the 95% level (p value = 0.018). Table 6.2 shows that the classification accuracy of the statistical machine-learning approach is reasonably robust with respect to the size of the training set once the size exceeds 600.
Reviews were classified by our algorithms into one of three types: bullish
(optimistic), bearish (pessimistic), and neutral. Here the chi-square test shows
that there are significant differences in classification accuracy between the
semantic method and the statistical method. Because the statistical approach
possessed higher accuracy when compared with the semantic approach, we
opted for the statistical approach to assign labels for the reviews: bullish,
bearish and neutral.
When the size of training set was relatively small, classification accuracy
could be improved by expanding the training set. But when training reviews
were increased to a certain number, accuracy declined, and expanding the
training set caused an increase in training time. Therefore, when choosing
the size of training set, it is necessary to balance efficiency and accuracy. To
achieve this balance, we first manually labeled about 30,000 reviews to three
distinct sentiments, 1 for bullish, −1 for bearish, and 0 for neutral sentiment –
referred to as ‘manual labels.’
Conclusions
7
DEA Risk Scoring Model of Internet Stocks
Introduction
In financial markets, there are many kinds of investments, with stock the most
popular. When investors choose which stock to invest in, they may expect
high returns from investing in high performance companies. However, the
greatest concern for investors is whether their investment has the potential for
high returns, and whether the high performance companies will always yield
high returns.
Even after the dotcom collapse, US internet stocks have remained a popular investment. However, investors are still concerned about future internet bubbles. Thus, the US internet stock market is a useful research focus with respect to financial performance.
From an accounting perspective, the return on equity (ROE) ratio is an important indicator of the performance of a company, because the goal of a company is to maximize stockholders' equity. The DuPont model breaks ROE into three parts: profit margin, total asset turnover and financial leverage.1 It reveals that many indicators influence
the performance of a company. Hence, multiple indicators are considered. Data
Envelopment Analysis (DEA) is a performance evaluation method capable of
considering multiple inputs and multiple outputs. In this research, we aim to
formulate an evaluation process combining the DEA method with the concept
of ROE. Investors can use this as a stock selection method, and managers can
use it for performance evaluation.
related or independent of each other. In addition, the problems being faced are
extremely complex and unpredictable.
A number of techniques have been proposed. Objectivity, fairness and feasi-
bility are crucial for performance evaluation. This study reviews seven methods
applicable to the evaluation of performance. They are (1) Multivariate Statistical
Analysis,2 (2) Data Envelopment Analysis,3 (3) Analytic Hierarchy Process,4
(4) Fuzzy Set Theory,5 (5) Grey Relation Analysis,6 (6) Balanced Scorecard,7
and (7) Financial Statement Analysis.8 The fundamental theories of the seven
methods, and their advantages and disadvantages when applied to performance
evaluation, are described in detail below:
Strengths:
Weaknesses:
Strengths:
i. DEA can handle problems with multiple inputs and outputs.
ii. It is not influenced by different measurement scales.
iii. The DEA efficiency score is a composite indicator, and can be used to apply the concept of total factor productivity from economics.
iv. The weights in the DEA model are the product of mathematical calculation, and hence free from human subjectivity.
v. DEA can deal with interval data as well as ordinal data.
vi. The results of a DEA evaluation provide additional information on the data used, which can serve as a reference in the decision-making process.
Weaknesses:
Strengths:
i. Easy to apply.
ii. The results are subject to consistency checking.
iii. It has a solid theoretical foundation and is objective.
iv. It handles qualitative problems more easily.
Weaknesses:
i. When there are great differences across experts, diverse results yield little
value.
ii. Fails to discuss the relation between factors (indicators).
Strengths:
Weaknesses:
Strengths:
Weaknesses:
Balanced scorecard
A performance evaluation system containing four components: finance, customer, internal process, and learning and growth. It is also called a strategic management system, which can help firms translate strategy into action.
Strengths:
i. Can integrate information, and put various key factors for the success of
the organization into a single report.
ii. Avoids information overload, since the indicators used for performance
measurement are the key indicators.
Weaknesses:
i. The procedure for the application of BSC is complex and time consuming.
Strengths:
Weaknesses:
OE = AE × TE, where overall efficiency (OE) is the product of allocative efficiency (AE) and technical efficiency (TE).
This is the efficiency measuring model that Farrell proposed in 1957.10 There are
two major DEA models: One is the CCR model proposed by Charnes, Cooper
and Rhodes,11 and the other is the BCC model proposed by Banker, Charnes
and Cooper, allowing variable returns to scale.12 The CCR model is input-
oriented, and the BCC is output-oriented. In this research, the output-oriented
BCC model is adopted because the variable returns to scale assumption is more
realistic, and the goal of companies is to maximize their outputs.
$$\min \; h_s = \sum_{j=1}^{m} V_j X_{js} + D_s$$
subject to
$$\sum_{k=1}^{p} U_k Y_{ks} = 1,$$
$$\sum_{k=1}^{p} U_k Y_{ki} - \sum_{j=1}^{m} V_j X_{ji} - D_s \le 0, \quad i = 1, 2, \ldots, n,$$
$$V_j \ge 0, \; j = 1, 2, \ldots, m; \qquad U_k \ge 0, \; k = 1, 2, \ldots, p.$$
$D_s$ is a free constant, and its sign can be used as an index of the returns to scale of the DMU.
Using duality theory and slack variables to transform the equation, we get:
$$\max \; H_s + \varepsilon \left\{ \sum_{j=1}^{m} SV_{js}^{-} + \sum_{k=1}^{p} SV_{ks}^{+} \right\}$$
subject to
$$H_s Y_{ks} - \sum_{i=1}^{n} Y_{ki}\lambda_i + SV_{ks}^{+} = 0,$$
$$X_{js} - \sum_{i=1}^{n} X_{ji}\lambda_i - SV_{js}^{-} = 0,$$
$$\sum_{i=1}^{n} \lambda_i = 1,$$
$$\lambda_i \ge 0, \; i = 1, 2, \ldots, n; \quad SV_{js}^{-} \ge 0, \; j = 1, 2, \ldots, m; \quad SV_{ks}^{+} \ge 0, \; k = 1, 2, \ldots, p.$$
We can see that, compared with the CCR model, the BCC model adds the constraint $\sum_{i=1}^{n} \lambda_i = 1$, which imposes convexity so that the production frontier exhibits variable returns to scale.
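For concreteness, the output-oriented BCC envelopment model above can be solved with an off-the-shelf LP solver. The sketch below uses scipy's linprog, omits the slack terms for brevity, and runs on invented toy data; it returns H_s, with H_s = 1 indicating a BCC-efficient DMU.

import numpy as np
from scipy.optimize import linprog

def bcc_output_efficiency(X, Y, s):
    """X: inputs (n x m), Y: outputs (n x p), s: index of the DMU under study.
    Variables are [H_s, lambda_1, ..., lambda_n]."""
    n, m = X.shape
    p = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # maximize H_s
    A_ub, b_ub = [], []
    for k in range(p):                            # H_s*Y_ks - sum_i Y_ki*lam_i <= 0
        A_ub.append(np.concatenate(([Y[s, k]], -Y[:, k])))
        b_ub.append(0.0)
    for j in range(m):                            # sum_i X_ji*lam_i <= X_js
        A_ub.append(np.concatenate(([0.0], X[:, j])))
        b_ub.append(X[s, j])
    A_eq = [np.concatenate(([0.0], np.ones(n)))]  # sum_i lam_i = 1 (BCC convexity)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0], [4.0], [6.0], [3.0]])        # one input (toy data)
Y = np.array([[2.0], [5.0], [6.0], [2.0]])        # one output (toy data)
print([round(bcc_output_efficiency(X, Y, s), 3) for s in range(4)])  # -> [1.0, 1.0, 1.0, 1.75]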
DEA can deal with multiple inputs and outputs simultaneously, and DEA
models are broadly used in many fields. DEA is believed to be one of the most
commonly used approaches to measure company performance in the financial
industry. In this section, we propose a model to combine DEA models with a
financial analysis tool to evaluate efficiency of online companies.
Financial ratio analysis has been the standard technique used in economics
to examine business and managerial performances.13 Due to its simplicity and
ease of understanding, the analytical ratio measure has been widely applied
in many areas such as in financial investment and insurance industries.
Two of the most preferred analytical ratios are return on equity (ROE) and
return on assets (ROA), both providing insight into a financial institution
that allows management to make strategic decisions that can dramatically
affect its structure and profitability. ROA is defined as the ratio of net income
divided by total assets, and estimates how efficient we are at earning returns
per dollar of assets. ROA has been merged into DEA to evaluate efficiency and
effectiveness of an organization.14 ROE is calculated by dividing net income
by average equity, and identifies how efficiently we use our invested capital.
Companies that boast a high ROE with little or no debt are able to grow
without large capital expenditures, allowing the owners of the business to
withdraw cash and reinvest it elsewhere. ROE is just as comprehensive as ROA,
and could be a better indicator than ROA in terms of identifying a firm’s prof-
itability and potential growth, that is, the potential risk that a firm can take.
Moreover, from the accounting perspective, when we use the ROA ratio to
measure company performance, the ROE ratio has to be used simultaneously
to see whether a high ROE ratio arises from financial leverage, or whether the
company ROA is high. So in this research, we use the ROE ratio to make the
evaluation process more complete.
$$\mathrm{ROA} = \frac{\text{Net Income}}{\text{Total Assets}}$$
$$\mathrm{ROE} = \frac{\text{Net Income}}{\text{Equity}}$$
Many investors fail to realize, however, that two companies can have the same return on equity, yet one can be a much better business. The DuPont model addresses this by decomposing ROE into profit margin, asset turnover, and the equity multiplier.
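Written out, using sales as the linking term, the standard form of the decomposition is:

$$\mathrm{ROE} = \frac{\text{Net Income}}{\text{Sales}} \times \frac{\text{Sales}}{\text{Total Assets}} \times \frac{\text{Total Assets}}{\text{Equity}},$$

where the three factors are the profit margin, total asset turnover, and the equity multiplier (financial leverage).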
Asset turnover (efficiency) is used to measure the ability of the firm to use its
assets, and can be deemed as the operational efficiency of a company. Profit
margin (effectiveness) is used to diagnose the effectiveness of a company; it
measures not only the competitiveness of the product but also the expense control ability of a company.
The equity multiplier can be used to understand the capital structure of a company, and companies can use financial leverage to adjust their capital structure; investors must then take on higher risk to gain higher returns. Hence, we use the concept of ROE to test whether investing in companies with high financial performance yields high returns, because it is important to consider return and risk simultaneously when choosing an investment target.
Based on the concept of measuring firm performance by efficiency and
effectiveness, this research adopts the two-stage DEA model15 to evaluate the
performance of online companies. The two-stage DEA approach is shown in
Figure 7.1.
The risk-scoring model, which includes two sub-processes, is depicted in Figure 7.2.
In contrast to Figure 7.1, Figure 7.2 introduces another process to understand
whether investing in companies with high financial performance can get high
returns or not.
These two processes can be used to evaluate the company from both enter-
prise (company performance) and investor (the returns per unit of risk available)
perspectives; hence this evaluation process can give investors and managers
more accurate criteria to make decisions.
Variable selection
In order to measure DMU efficiency, the selection of the input variables and
output variables is very important. In the internet industry, existing literature
defines a good set of variables to measure online company performance,
including both financial data and non-financial data.
[Figure 7.1: Process I. Input variables feed Stage 1 (efficiency), producing intermediate variables that feed Stage 2 (effectiveness), which yields the output variables.]
[Figure 7.2: Process II. A second two-stage process of the same structure, evaluating the returns obtained per unit of risk.]
Some criteria are set up in order to facilitate the input and output selection
as follows:
1. The variables adopted by a paper measuring the internet industry using the
DEA approach can be considered.
2. There are few papers measuring internet industry performance using the DEA approach, so a two-stage DEA approach will be used for measuring operating efficiency and effectiveness.
3. For measuring the investing risk, there is only one paper using the DEA
approach to measure the relationship between return and risk. Hence, the
variables adopted by that paper have been considered.
4. All the variables must conform to the ROE concept.
In Evaluation Process 1, we choose revenue, gross profit, EPS and net income as the main variables to measure the performance of online companies, based on the literature review and suggestions by experts. In order to measure efficiency, we used total assets, total equity and operating expense as input variables, to measure how much money companies can earn (revenue) and how much profit (gross profit) they can generate.
Total assets was chosen as an input because it is the sum of intangible asset,
current asset, and fixed asset – and the intangible asset which is shown in
the balance sheet as a result of a merger or takeover is hard to measure. The
variable of operating expense was chosen as an input because many internet
companies do not report the number of marketing expense or R&D expense. In
order to measure effectiveness, we used revenue and gross profits as input vari-
ables, to see how well companies controlled their expenses to generate money
(net income) and how much income was shared with stockholders (EPS).
In Evaluation Process 2, we choose Beta, book value to market value (BV/
MV), and rate of return as the main variables to measure how much return
a company can generate given the same risk. Beta was chosen as an input
because investing risk can be divided into systematic risk and non-systematic risk. For investors, non-systematic risk can be reduced through diversification. So, if systematic risk is the only concern, the Beta coefficient is a better way to measure the risk. BV/MV was chosen as an input because it is also a commonly used ratio for measuring investing risk.
Empirical study
The sample for this study includes listed online companies in the United
States. There are 127 such companies in NASDAQ categories, and they are
grouped into three categories: internet service providers, internet information
providers, and internet software and services providers. Because DEA data cannot be negative, companies with negative values could not be included; and because accounting periods differed across some companies, we chose only 27 listed online companies. Data was collected from Yahoo! Finance
(http://finance.yahoo.com/) and EDGAR Online (http://edgar.brand.edgar-
online.com/default.aspx) for 2006.
[Table 7.1: company name, stock code, operating efficiency score, and effectiveness score for the 27 companies.]
Six of the 27 companies (ADAM, ORCC, JCOM, PCLN, GOOG, and YHOO) are BCC-efficient on effectiveness.
Moreover, of the ten companies that are BCC-efficient on operating efficiency, seven are not BCC-efficient on effectiveness: TZOO, AMZN, UNTD, DTAS, EGOV, EBAY, and RATE. These seven companies use their resources to generate profits very well; however, they cannot use their profits to generate income very well. This may be because, compared with the companies that are BCC-efficient on both dimensions, these seven companies do not control their expenses well, or they issue more stock so that EPS becomes diluted.
On the other hand, of the six companies that are BCC-efficient on effect-
iveness, three are not BCC-efficient on operating efficiency. They are ORCC,
PCLN, and YHOO. These companies can use their profits to generate income
very well; however, they cannot use their resources to generate profits very well. This may be because, compared with the companies that are BCC-efficient on both dimensions, these companies incur higher costs on their products, or their sales volume or price is below the market level, or their capital has mainly come from inside (stockholders' equity) rather than being borrowed from outside (liabilities).
The DEA efficiency scores are percentage values between 0% and 100%. To obtain the total efficiency of each company, we multiply the efficiency scores for operating efficiency and effectiveness; the BCC efficiency scores are also percentage values between 0% and 100%. It can be observed
that only three companies perform best in both dimensions, showing that
they are BCC-efficient in both operating efficiency and effectiveness, which
are ADAM, GOOG and JCOM. These three companies can use their resources
to generate profits, as well as using their profits to generate income.
The DEA result for Evaluation Process 2 is shown in Table 7.2.
Table 7.2 BCC-efficient scores on the level of returns per unit of risk
We can see there are 4 out of 27 listed online companies, namely JCOM,
PCLN, VSTY, and ADAM, which are BCC-efficient on investing risk. These four
companies have the highest level of returns per unit of risk, which means that
if investors choose them, they can expect the highest level of returns.
There is a clear relationship between total efficiency and investing risk. Table 7.3 shows the total efficiency and risk rankings for the top ten and bottom four companies. If investors choose companies with high scores in total efficiency, they can enjoy a higher level of returns. Conversely, if investors choose companies with low scores in total efficiency, they will get a lower level of returns.
Moreover, through the comparison in Table 7.4, we can see the companies
that are BCC-efficient on operating efficiency but not BCC-efficient on effect-
iveness will perform worse in total efficiency. For example, TZOO, AMZN,
UNTD, DTAS, EGOV, EBAY, and RATE are BCC-efficient on operating effi-
ciency, but their performance on effectiveness is bad, and so the total effi-
ciency of these companies will be lower.
On the other hand, the companies that are BCC-efficient on effectiveness
but not on operating efficiency will perform better in total efficiency than the
companies that are BCC-efficient on operating efficiency but not on effect-
iveness. For example, ORCC, PCLN, and YHOO are BCC-efficient on effect-
iveness, but their performance on operating efficiency is not good. However,
these companies have good scores in total efficiency.
So, we can see that the main dimension that influences the total efficiency
of a company is its effectiveness. In the internet industry, the effectiveness of a
company is more important than its operating efficiency.
Table 7.3 Rankings by total efficiency and by returns per unit of risk (top ten and bottom four companies)

Rank  Total efficiency  Returns per unit of risk
1     ADAM              ADAM
2     JCOM              JCOM
3     GOOG              PCLN
4     VSTY              VSTY
5     YHOO              ORCC
6     DRIV              YHOO
7     PCLN              GOOG
8     WBSN              DRIV
9     ORCC              SOHU
10    EBAY              WBSN
24    ECLG              ECLG
25    IPAS              CORI
26    SPRT              IPAS
27    CORI              SPRT
There are only 2 out of the 27 listed online companies, namely ADAM
and JCOM, that are BCC-efficient on operating efficiency, effectiveness and
investing risk. These companies operate well, and investors can get the highest
returns by choosing them.
GOOG is BCC-efficient on operating efficiency and effectiveness; however,
its efficiency score on investing risk is low. This means that GOOG can use its
resources to generate profits, as well as using its profits to generate income, but
investors cannot get the high returns they hope for by investing in GOOG.
On the other hand, there are 3 of the 27 listed online companies, namely
IPAS, CORI, and SPRT, which are BCC-inefficient on operating efficiency,
effectiveness and investing risk. These companies operate their company
inefficiently, and investors get the lowest returns by choosing them.
Conclusions
the stocks. For managers, this model can serve as a performance measurement model: based on the relative score of each DMU, managers can see where their company and its competitors stand, and in which dimensions their company performs well or needs improvement.
2. Based on the research results, the main dimension influencing a company's total efficiency is effectiveness. In the internet industry, a company's effectiveness is more important than its operating efficiency in influencing performance. Hence, investors may focus on net income and EPS when they want to assess the US internet stock market. For managers, most internet companies perform well on operating efficiency, meaning they use their resources to generate profit well. However, most internet companies have lower efficiency scores on effectiveness, so companies should control their expenses in order to raise net income.
3. If investors choose companies with high scores in operating performance,
they should gain a higher level of returns. If the investors choose companies
with low scores in operating performance, they are likely to gain a lower
level of returns.
As with any study, this research is not without limitations. Several limitations and extensions are noted:
1. The DEA model cannot use negative numbers.17 However, the listed online
companies can have negative net income or equity. In order to compare
these listed online companies on the same basis, companies with different
financial periods were excluded, leaving only 27 companies. The number of decision making units (DMUs) is greater than twice the sum of the numbers of input and output variables, so the sample size used in this study still complies with the requirements of the DEA approach.
2. Non-financial data are not included, which is also an important dimension
to measure online companies. This is because some data cannot be meas-
ured or may be confidential.
3. There are few previous research reports using DEA to evaluate the per-
formance of the internet industry, resulting in the lack of theoretical backup
in the input and output variable selection.
4. DEA can be combined with classical risk management such as value-at-risk18
to develop new methodologies for optimizing risk management.19
8
Bank Credit Scoring
Introduction
Risk modeling
The predictive scorecard currently used in a large Ontario bank is validated. This bank has a network of more than 8000 branches and 14,000 ATMs operating across Canada. It successfully completed a merger of two major financial institutions in 2000 and became Canada's leading retail banking organization. It has also become one of the top three online financial service providers, serving more than 2.5 million online customers. The scorecard system used in its retail banking strategy therefore needed to be validated immediately as a result of this merger. The scorecard system under evaluation predicts the likelihood that a 60–120-day delinquent account (mainly on personal secured and unsecured loans and lines of credit) will be cured within the subsequent three months.
By breaking up funded accounts into three samples based on their limit issue
date, the model’s ability to rank order accounts based on creditworthiness
was validated for individual samples and compared to the credit bureau score.
Tables 8.3, 8.4, and 8.5 give the sample size, mean, and standard deviation of
these three samples: Sample 1 involves accounts from January 1999 to June 1999,
Sample 2 from July 1999 to December 1999, and Sample 3 from January 2000
to June 2000.
[Tables 8.3 and 8.4: sample statistics comparing the Scorecard, Beacon, Beacon/Empirica, Scorecard (no Bureau score), Application alone, and Bureau alone models.]
Cases of 90 days' delinquency or worse, and accounts that were
closed with a ‘NA (non-accrual)’ status or that were written off were included as
bad performance. Good cases were defined as those that did not meet the def-
inition of ‘bad.’ The ‘bad’ definition is evaluated at 18 months. Three samples of
cohorts were created and compared. Specified time periods refer to month-end
dates. For the performance analyses, the limit issue dates were considered,
while the population analyses used the application dates.
[Table 8.5: sample statistics for the third sample, with the same model columns as Tables 8.3 and 8.4.]
In order to validate
the relative effectiveness of the scorecard, we conducted statistical analysis, and
reported results for the following statistical measures: divergence test, Lorenz
curve, Kolmogorov–Smirnov (K–S) test, and population stability index.
[Figure: Lorenz curves showing percent of bads captured versus percent into the score distribution for the Scorecard, Beacon, Beacon/Empirica, Scorecard (No Bureau score), Applicant Alone, Bureau Alone, Random, and Exact models.]
For example, if 15% of the accounts were bad, the ideal or exact model would capture all these bads within the 15th percentile of the score distribution (the x-axis). Similarly, the K–S and divergence statistics determine how well the models distinguished between 'good' and 'bad' accounts by assessing the properties of their respective distributions.
The results indicate that the scorecard is a good predictor of risk. Amongst the three sampling periods, January–June 99 and January–June 00 show slightly better predictive ability than July–December 99. The scorecard also performs better than the credit bureau score, though not by a significant margin.
[Figure: Lorenz curves (percent of bads captured versus percent into distribution) for the same set of models on a second sample.]
[Figure: Lorenz curves (percent of bads captured versus percent into distribution) for the same set of models on a third sample.]
[Figure: scorecard Lorenz curves compared across the Jan–Jun'99, Jul–Dec'99, and Jan–Jun'00 samples, with Random and Exact reference curves.]
The performance statistics for the three selected samples shown in Tables 8.3,
8.4 and 8.5 indicate the superiority of the scorecard as a predictive tool. The
scorecard was found to be a more effective assessor of risk for the earlier sample,
January–June 99, and the latest sample, January–June 00, but was slightly less
effective for the July–December 99 sample. There was a more distinct separation
between ‘goods’ and ‘bads’ for the above-mentioned first two samples than the
last: the maximum difference between the ‘good’ and ‘bad’ cumulative distri-
butions was 39% and 38% respectively, versus 33% for the remaining sample.
Similarly, the divergence values were 0.869 and 0.843, versus 0.528 for the less
effective sample.
It is possible that the scorecard was simply better able to separate 'good' accounts from 'bad' ones for the earlier sample. Alternatively, the process of cleaning up delinquent unsecured line of credit accounts, which started in mid-2001, may have resulted in more bad observations for the latest sample (accounts booked between January 00 and June 00 with an 18-month observation window would be caught by this clean-up process).
of 2.18% for the January–June 00 sample, compared to 1.45% for the July–
December 99 sample, and 1.17% for the January–June 99 sample. If most of
these bad accounts in the clean-up have a low initial score, the predictive
ability of the scorecard on this cohort will be increased.
Population stability: FICO development sample versus January–June 99 accounts. Columns: (1) FICO score range; (2) development (#); (3) January–June 99 (#); (4) development (%); (5) January–June 99 (%); (6) proportion change = (5)−(4); (7) ratio = (5)/(4); (8) weight of evidence = ln[(7)]; (9) contribution to index = (8)×(6); (10) ascending cumulative of FICO (%); (11) ascending cumulative of January–June 99 (%).
<170 37601 1430 7.13 1.96 −0.0517 0.2749 −1.2912 0.0668 7.13 1.96
170–179 25093 1209 4.76 1.66 −0.0310 0.3483 −1.0546 0.0327 11.89 3.62
180–189 30742 1888 5.83 2.59 −0.0324 0.4440 −0.8119 0.0263 17.72 6.21
190–199 37128 3284 7.04 4.50 −0.0254 0.6394 −0.4471 0.0114 24.77 10.71
200–209 42055 4885 7.98 6.70 −0.0128 0.8398 −0.1746 0.0022 32.74 17.41
210–219 46355 5735 8.79 7.86 −0.0093 0.8944 −0.1116 0.0010 41.53 25.27
220–229 49068 6716 9.31 9.21 −0.0010 0.9895 −0.0106 0.0000 50.84 34.48
230–239 48577 7543 9.21 10.34 0.0113 1.1226 0.1156 0.0013 60.06 44.83
240–249 48034 8762 9.11 12.02 0.0290 1.3187 0.2767 0.0080 69.17 56.84
250–259 46023 9121 8.73 12.51 0.0378 1.4328 0.3596 0.0136 77.90 69.35
260–269 40541 8826 7.69 12.10 0.0441 1.5739 0.4535 0.0200 85.59 81.45
270–279 37940 8310 7.20 11.40 0.0420 1.5835 0.4596 0.0193 92.78 92.85
>280 38050 5216 7.22 7.15 −0.0006 0.9910 −0.0090 0.0000 100.00 100.00
Total 527207 72925 100 100 .2027
Population stability: FICO development sample versus July–December 99 accounts; columns as in the previous table, with July–December 99 in place of January–June 99.
<170 37601 1447 7.13 2.12 −0.0502 0.2968 −1.2146 0.0609 7.13 2.12
170–179 25093 1352 4.76 1.98 −0.0278 0.4156 −0.8781 0.0244 11.89 4.10
180–189 30742 2106 5.83 3.08 −0.0275 0.5284 −0.6379 0.0175 17.72 7.18
190–199 37128 3609 7.04 5.28 −0.0176 0.7498 −0.2880 0.0051 24.77 12.46
200–209 42055 5452 7.98 7.98 0.0000 0.9999 −0.0001 0.0000 32.74 20.43
210–219 46355 6169 8.79 9.03 0.0023 1.0265 0.0261 0.0001 41.53 29.46
220–229 49068 7009 9.31 10.25 0.0095 1.1018 0.0969 0.0009 50.84 39.71
230–239 48577 7454 9.21 10.91 0.0169 1.1836 0.1685 0.0029 60.06 50.62
240–249 48034 7908 9.11 11.57 0.0246 1.2699 0.2389 0.0059 69.17 62.19
250–259 46023 7774 8.73 11.37 0.0264 1.3029 0.2646 0.0070 77.90 73.56
260–269 40541 7362 7.69 10.77 0.0308 1.4007 0.3370 0.0104 85.59 84.33
270–279 37940 6716 7.20 9.83 0.0263 1.3654 0.3114 0.0082 92.78 94.16
>280 38050 3993 7.22 5.84 −0.0138 0.8094 −0.2114 0.0029 100.00 100.00
Total 527207 68351 100 100 .1461
Population stability: FICO development sample versus January–June 00 accounts; columns as in the first table, with January–June 00 in place of January–June 99.
<170 37601 1928 7.13 2.46 −0.0467 0.3448 −1.0648 0.0498 7.13 2.46
170–179 25093 1838 4.76 2.34 −0.0242 0.4925 −0.7082 0.0171 11.89 4.80
180–189 30742 3136 5.83 4.00 −0.0183 0.6859 −0.3770 0.0069 17.72 8.80
190–199 37128 4784 7.04 6.10 −0.0094 0.8664 −0.1434 0.0013 24.77 14.91
200–209 42055 6505 7.98 8.30 0.0032 1.0401 0.0393 0.0001 32.74 23.20
210–219 46355 7212 8.79 9.20 0.0041 1.0462 0.0451 0.0002 41.53 32.40
220–229 49068 8250 9.31 10.52 0.0122 1.1306 0.1227 0.0015 50.84 42.92
230–239 48577 8762 9.21 11.18 0.0196 1.2129 0.1930 0.0038 60.06 54.10
240–249 48034 8769 9.11 11.18 0.0207 1.2276 0.2050 0.0043 69.17 65.28
250–259 46023 8451 8.73 10.78 0.0205 1.2348 0.2109 0.0043 77.90 76.06
260–269 40541 7850 7.69 10.01 0.0232 1.3020 0.2639 0.0061 85.59 86.07
270–279 37940 6736 7.20 8.59 0.0140 1.1939 0.1772 0.0025 92.78 94.67
>280 38050 4182 7.22 5.33 −0.0188 0.7391 −0.3024 0.0057 100.00 100.00
Total 527207 78403 100 100 .1036
Table 8.10 Monthly population stability index, January 2000 to January 2002

Between 0.10 and 0.25: January-00 0.1097; February-00 0.1313; March-00 0.1236.
At or below 0.10: April-00 0.0959; May-00 0.0962; June-00 0.0826; July-00 0.0999; August-00 0.0919; September-00 0.0940; October-00 0.0926; November-00 0.0693; December-00 0.0656; January-01 0.0940; February-01 0.0898; March-01 0.0787; April-01 0.0979; May-01 0.0829; June-01 0.0615; July-01 0.0696; August-01 0.0701; September-01 0.0907; October-01 0.0816; November-01 0.0817; December-01 0.0915; January-02 0.0771.
The population stability index thus declined from 0.2027 for the January–June 99 sample to 0.1461 for the July–December 99 sample, and to 0.1036 for the January–June 00 sample. We also computed the monthly population stability, showing the monthly index for total applications (funded or not funded) over the two years starting from January 2000. This result further confirms the declining trend, with the monthly indexes for the past 20 months all resting within 0.1 (see Table 8.10).
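Each band's contribution in the tables above is (actual% − expected%) × ln(actual%/expected%), and the index is their sum. A short sketch, checked against the first two bands of the January–June 99 table:

import numpy as np

def population_stability_index(expected_pct, actual_pct):
    """Sum of per-band contributions: (a - e) * ln(a / e)."""
    e = np.asarray(expected_pct) / 100.0
    a = np.asarray(actual_pct) / 100.0
    return np.sum((a - e) * np.log(a / e))

expected = [7.13, 4.76]   # FICO development (%) for the first two score bands
actual = [1.96, 1.66]     # January-June 99 (%) for the same bands
print(population_stability_index(expected, actual))  # ~0.0995 (= 0.0668 + 0.0327)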
As indicated in Figures 8.5 and 8.6, more of the latest sample's accounts had lower scores compared with the older samples, revealing a tendency for scores to drop over time. All three samples nonetheless had score distributions higher than the development sample.
The stability indices revealed that the greatest population shift occurred
when the scorecard was originally put in place, then the extent of shift reduced
gradually across time. The indexes stayed within 0.1 for the past 20 months.
Conclusions
Maintaining a certain level of risk has become a key strategy to make profits in
today’s economy.
Risk in an enterprise can be quantified and managed using various models, and models also provide support to organizations seeking to control enterprise risk. We have discussed risk modeling and reviewed some common risk measures. Using variations of these measures, we demonstrated support for risk management through the validation of predictive scorecards for a large bank. The scorecard model was validated and compared to credit bureau scores. A comparison of the K–S and divergence values between the scorecard and the bureau score across the three samples indicated the scorecard to be a better tool than the bureau score for distinguishing the 'bads' from the 'goods.'
[Figure 8.5: cumulative score distributions for the FICO development sample and the Jan–Jun 99, Jul–Dec 99, and Jan–Jun 00 samples.]
[Figure 8.6: interval population distribution by score range for the FICO development sample and the three samples.]
In practice, however, the vetting and validation of models may encounter many challenges. These become apparent, for example, when retail models under vetting are relatively new to the enterprise; when there are large numbers of variables and large amounts of data to manipulate, with limited access to these datasets due to privacy restrictions; and when validation tests are not standardized and there are demands to change the measure if results do not look favorable.
9
Credit Scoring using Multiobjective Data Mining
Introduction
The overall approach is to begin with a set of data, which in traditional data
mining practice is divided into training and test sets. The data can consist of
continuous or binary numeric data, with the outcome variable being binary.
The training set data is used to identify maximum and minimum measures
for each attribute. This training set is then standardized over the range of
0 to 1, with 0 reflecting the worst measure and 1 the best measure over each
attribute. Then relative weight importance is obtained by regression over the
standardized data to explain outcome performance in the training data set.
(An intermediate third data set could be created for generation of weights if
desired.)
$$y_{ij} = \frac{x_{ij} - x_i^-}{x_i^+ - x_i^-} \qquad (9.1)$$
$$0 \le w_i \le 1, \qquad \sum_{i=1}^{m} w_i = 1$$
The L∞ metric (the Tchebychev metric) formally involves the limit of ever-higher powers and roots, which converges to emphasizing the single largest distance; the weights become irrelevant. Thus the L∞ distance measure is simply the maximum attribute distance from the ideal point. This retains the standardized features of the test data, and the model is then applied by scoring the test data with the same rules.
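A minimal sketch of the whole scoring step, assuming (as the text implies) that distances are taken to the ideal point of all 1s; the weights and data here are invented:

import numpy as np

def standardize(X, x_min, x_max):
    """Standardize each attribute to [0, 1], as in eq. (9.1)."""
    return (X - x_min) / (x_max - x_min)

def distances_to_ideal(Y, w):
    d1 = np.sum(w * np.abs(1.0 - Y), axis=1)             # L1 distance
    d2 = np.sqrt(np.sum((w * (1.0 - Y)) ** 2, axis=1))   # L2 distance
    dinf = np.max(np.abs(1.0 - Y), axis=1)               # L-infinity (weights drop out)
    return d1, d2, dinf

rng = np.random.default_rng(2)
train = rng.uniform(size=(50, 3))
x_min, x_max = train.min(axis=0), train.max(axis=0)      # bounds from the training set
w = np.array([0.5, 0.3, 0.2])                            # regression-derived weights (assumed)
test = standardize(rng.uniform(size=(5, 3)), x_min, x_max)
print(distances_to_ideal(test, w))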
Dataset
(2002)12 and Bull (2005)13 provide procedures to deal with such problems of
unbalanced data if they detrimentally affect data mining models. The dataset
consisted of the outcome variable (categorical: default, good) and 12 continuous
numeric independent variables, as given in Table 9.1.
This dataset demonstrates many features encountered with real data. Most variables are to be maximized, but for three of the twelve variables the minimum is preferable. There are also negative values in the dataset.
The ideal solution is thus:
{1 1 1 1 1 1 1 1 1 1 1 1},
reflecting the best performance identified in the training set for each variable.
The nadir solution is conversely:
{0 0 0 0 0 0 0 0 0 0 0 0}.
Model comparisons
The original raw data was used with two commercial data mining software
tools (PolyAnalyst and See5) for decision tree models. The PolyAnalyst decision
tree model used only two variables, NI and WC. The decision tree is given in
Figure 9.1.
This model had a 0.937 (118/126) correct classification rate over the test set of 126 observations, as shown in Table 9.3.
IF NI < 1250
AND IF WC < 607 THEN 0 (Neg)
ELSE IF WC >= 607 THEN 1 (Pos)
ELSE IF NI >= 1250 THEN 1 (Pos)
[Figure 9.1: the PolyAnalyst decision tree, equivalent to the rules above: if NI < 1250 and WC < 607 the negative outcome is predicted; otherwise the positive outcome is predicted.]
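The rule set is small enough to express directly as a function; a sketch, interpreting NI and WC as net income and working capital (the text does not spell the abbreviations out):

def polyanalyst_predict(ni, wc):
    """PolyAnalyst rule set: 1 = positive (on-time) outcome, 0 = negative (default)."""
    if ni < 1250:
        return 0 if wc < 607 else 1
    return 1

print(polyanalyst_predict(ni=1000, wc=500))   # -> 0 (negative outcome predicted)
print(polyanalyst_predict(ni=2000, wc=100))   # -> 1 (positive outcome predicted)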
                 Predicted 0 (Neg)   Predicted 1 (Pos)   Total
Actual 0 (Neg)           9                   2             11
Actual 1 (Pos)           6                 109            115
Total                   15                 111            126
[Figure 9.2: the See5 decision tree, branching on NI (thresholds 1256 and −26958), CF (24812), CA (828), and IE (2326) to predict positive or negative outcomes.]
Proportionally more of this model's errors fell in the worse case of assigning actual default cases to the predicted on-time payment category. However, cost vectors were not used, so there was no reason to expect the model to reflect this.
See5 software yielded the following decision tree, using four independent
variables.
Table 9.4 shows the results for the decision tree model obtained from the See5 software, which had a correct classification rate of 0.873 (110/126), a little worse than the PolyAnalyst model (although this holds only for this specific data, and is in no way generalizable):
                 Predicted 0 (Neg)   Predicted 1 (Pos)   Total
Actual 0 (Neg)           8                   3             11
Actual 1 (Pos)          13                 102            115
Total                   21                 105            126
Finally, the TOPSIS models were run. Results for the L1 model are given in
Table 9.5, with a correct classification rate of 0.944.
                 Predicted 0 (Neg)   Predicted 1 (Pos)   Total
Actual 0 (Neg)           6                   5             11
Actual 1 (Pos)           2                 113            115
Total                    8                 118            126
The results for the L2 model are given in Table 9.6, with a correct classification rate of 0.921.
                 Predicted 0 (Neg)   Predicted 1 (Pos)   Total
Actual 0 (Neg)           6                   5             11
Actual 1 (Pos)           5                 110            115
Total                   11                 115            126
The results for the L∞ model are given in Table 9.7, with a correct classification rate of 0.944. Here all three metrics yield similar results (for this data, superior to the decision tree models; but that is not a generalizable conclusion).
                 Predicted 0 (Neg)   Predicted 1 (Pos)   Total
Actual 0 (Neg)           6                   5             11
Actual 1 (Pos)           2                 113            115
Total                    8                 118            126
The results for the different models are summarized in Table 9.8.
These models were applied to one data set, demonstrating how TOPSIS principles can be applied to data mining classification. In this one small (but real) data set for a common data mining application, the TOPSIS models gave a better fit to the test data than did two well-respected decision tree software tools. This does not imply that the TOPSIS models are better, but it provides another tool for classification. The TOPSIS models are easy to apply in spreadsheets, regardless of how much data the spreadsheet holds. Any number of independent variables could be used, limited only by database constraints.
Monte Carlo simulation provides a good tool for testing the effect of input uncertainty on output results.14 Simulation was applied to examine the sensitivity of the five models to perturbations in the test data. Each test data variable value was adjusted by adding noise equal to the perturbation level times a standard normal variate with mean 0 and the standard deviation found in the training dataset for that variable. The perturbation levels used were 0.25, 0.5, 1, and 2, reflecting increasing noise in the data. Simulation results are shown in Table 9.9.
Table 9.9 Simulation results: minimum and maximum correct classification rates under increasing perturbation

Perturbation  PADT Min  PADT Max  C5 Min  C5 Max  L1 Min  L1 Max  L2 Min  L2 Max  L∞ Min  L∞ Max
0 0.9365 0.9365 0.8730 0.8730 0.9444 0.9444 0.9206 0.9206 0.9444 0.9444
0.25 0.7381 0.9365 0.6746 0.8651 0.7619 0.9286 0.7619 0.9127 0.6825 0.8889
0.50 0.7063 0.9444 0.5952 0.8413 0.7143 0.8968 0.6190 0.8730 0.6349 0.8492
1.0 0.6905 0.8968 0.5317 0.7937 0.6349 0.8492 0.5873 0.8333 0.4683 0.7460
2.0 0.6587 0.8968 0.5238 0.7857 0.5714 0.8175 0.5476 0.7937 0.3810 0.6508
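A sketch of this perturbation experiment with a toy classifier; the model, data, and run count are stand-ins, while the noise construction follows the description above.

import numpy as np

def perturb_accuracy(model_fn, X_test, y_test, train_std, level, runs=100, seed=0):
    """Add level * N(0, train_std) noise to the test data, re-score, and
    return the min and max accuracy across runs."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(runs):
        noise = rng.normal(0.0, train_std, size=X_test.shape) * level
        preds = model_fn(X_test + noise)
        accs.append(np.mean(preds == y_test))
    return min(accs), max(accs)

# Toy model: classify positive when the first attribute exceeds 0.5.
model = lambda X: (X[:, 0] > 0.5).astype(int)
rng = np.random.default_rng(1)
X = rng.uniform(size=(126, 3))
y = model(X)                      # noiseless labels
std = X.std(axis=0)               # stands in for the training-set deviations
for level in (0.25, 0.5, 1.0, 2.0):
    print(level, perturb_accuracy(model, X, y, std, level))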
The first decision tree model was quite robust; in fact it retained the most predictive power of the five models as perturbations were increased. The second decision tree model included more variables and a more complex tree; however, not only was it less accurate without perturbation, it also degenerated much faster than the simple two-variable tree. While this is not claimed as generalizable, it is possible that simpler trees are more robust. (As a counterargument, models using more variables may rely less on specific variables subjected to noise, so this issue merits further exploration.)
The L1 and L2 TOPSIS metrics degenerated less than the four-variable decision tree, but a little more than the two-variable decision tree. The L1 TOPSIS model was less affected by perturbations than the L2 model, which in turn was considerably less affected than the L∞ model. This is to be expected, as the L1 model is less affected by outliers, which can be generated by noise. The L∞ model focuses on the worst case, which is a reason for it to be adversely impacted by noise in the data.
Conclusions
10
Performance Evaluation and Risk Analysis of Online Banking
Introduction
To analyze efficiency, we examined a set of banks from the UK and the US. Data was gathered from the 2007 annual reports of these banks (see the reference list for these reports) and divided into input variables and output variables; Table 10.1 lists them.
Table 10.2 presents online banking data for ten large banks, six from the
US and four from the UK: Bank of America,15 Citigroup,16 HSBC,17 Barclays,18
We first conducted a plain DEA analysis, and then used PCA scenario analysis and PCA-DEA modeling.
Table 10.3 presents the scores produced by ordinary DEA models based on all 45 combinations of variables in the data. A score of 1.000 (in bold) indicates efficiency. It can be seen in the ABCD12 column that seven of the ten banks are found efficient. The problem is that using more variables obviously finds more efficient solutions: the ABCD12 model generates too many efficient DMUs and ties because of its many input and output variables. Moreover, many variables and factors can affect each other in the real world; for example, operating cost is usually related to the number of employees in the corporation. Therefore PCA is applied to reduce these measures when applying DEA.
To conduct the PCA analysis, we use various combinations of variables to see if we can reduce the number of variables and/or detect structure in the relationships between the variables. Table 10.4 presents the maximum component loadings matrix for the different models and data sets.
Table 10.3 DEA combinations and their efficiencies
A1 B1 C1 D1 AB1 AC1 AD1 BC1 BD1 CD1 ABC1 ABD1 ACD1 BCD1 ABCD1
0.269 0.145 0.390 0.234 0.309 0.521 0.366 0.390 0.234 0.447 0.521 0.366 0.521 0.447 0.521
0.315 0.254 0.254 0.323 0.447 0.501 0.458 0.254 0.339 0.400 0.501 0.458 0.501 0.400 0.501
1.000 0.429 0.319 0.468 1.000 1.000 1.000 0.429 0.555 0.526 1.000 1.000 1.000 0.555 1.000
0.196 0.293 0.360 0.218 0.362 0.403 0.294 0.360 0.343 0.414 0.403 0.362 0.414 0.414 0.414
0.307 0.124 0.476 0.611 0.307 0.605 0.611 0.476 0.611 0.751 0.605 0.611 0.751 0.751 0.751
0.364 0.330 0.297 0.985 0.545 0.582 0.985 0.330 0.985 0.985 0.582 0.985 0.985 0.985 0.985
0.471 1.000 1.000 0.469 1.000 1.000 0.677 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.582 0.742 0.579 1.000 1.000 1.000 1.000 0.742 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.219 0.302 0.307 0.154 0.390 0.420 0.271 0.307 0.311 0.319 0.420 0.390 0.420 0.319 0.420
0.125 0.357 0.709 0.087 0.357 0.709 0.154 0.709 0.357 0.709 0.709 0.357 0.709 0.709 0.709
A2 B2 C2 D2 AB2 AC2 AD2 BC2 BD2 CD2 ABC2 ABD2 ACD2 BCD2 ABCD2
1.000 0.865 1.000 0.592 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.134 0.174 0.075 0.094 0.198 0.134 0.149 0.174 0.199 0.135 0.198 0.199 0.149 0.199 0.199
0.087 0.060 0.019 0.028 0.087 0.087 0.087 0.060 0.068 0.038 0.087 0.087 0.087 0.068 0.087
0.293 0.705 0.372 0.223 0.764 0.372 0.344 0.705 0.774 0.375 0.764 0.774 0.375 0.774 0.774
0.488 0.315 0.522 0.661 0.488 0.522 0.786 0.522 0.661 0.952 0.522 0.786 0.952 0.952 0.952
0.543 0.791 0.305 1.000 0.892 0.543 1.000 0.791 1.000 1.000 0.892 1.000 1.000 1.000 1.000
0.282 0.961 0.413 0.191 1.000 0.413 0.308 0.961 1.000 0.413 1.000 1.000 0.413 1.000 1.000
0.040 0.082 0.028 0.047 0.090 0.040 0.060 0.082 0.094 0.062 0.090 0.094 0.062 0.094 0.094
0.192 0.423 0.185 0.091 0.462 0.192 0.192 0.423 0.445 0.185 0.462 0.462 0.192 0.445 0.462
0.218 1.000 0.854 0.103 1.000 0.854 0.218 1.000 1.000 0.854 1.000 1.000 0.854 1.000 1.000
A12 B12 C12 D12 AB12 AC12 AD12 BC12 BD12 CD12 ABC12 ABD12 ACD12 BCD12 ABCD12
1.000 0.865 1.000 0.592 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.395 0.254 0.254 0.324 0.492 0.514 0.507 0.254 0.399 0.423 0.514 0.525 0.542 0.434 0.542
1.000 0.429 0.319 0.468 1.000 1.000 1.000 0.429 0.555 0.526 1.000 1.000 1.000 0.555 1.000
0.403 0.707 0.476 0.223 0.764 0.522 0.462 0.707 0.774 0.558 0.764 0.774 0.558 0.774 0.774
0.652 0.316 0.644 0.661 0.652 0.789 0.834 0.644 0.661 1.000 0.789 0.834 1.000 1.000 1.000
0.747 0.794 0.391 1.000 0.924 0.783 1.000 0.794 1.000 1.000 0.924 1.000 1.000 1.000 1.000
0.651 1.000 1.000 0.472 1.000 1.000 0.786 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.582 0.742 0.579 1.000 1.000 1.000 1.000 0.742 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.348 0.432 0.337 0.155 0.488 0.466 0.365 0.432 0.445 0.353 0.488 0.488 0.466 0.445 0.488
0.280 1.000 1.000 0.103 1.000 1.000 0.280 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
Notes: A: Deposit; B: Operation cost; C: The number of employees; D: Equipment; 1: Revenue; 2: Web reaches.
All models are weighted with a positive sign on the first component and the other variables; thus, the first component is named the 'overall measure of efficiency,' and is in general the higher-weighted value. In all models, the variables can be reduced so that three principal components explain more than 85% of the variation in the raw data.
In this research we chose the principal components with an accumulative contribution ratio of at least 90%, and applied them to re-evaluate the efficiency of online banking. For instance, in model ABCD12 only three principal components are used, because their accumulative contribution is 90.2%. The calculation is easy owing to the reduced data complexity, and the ranks are reasonable.
The data extraction process is shown in Figure 10.1, where every vector
represents each of the six variables. How each variable contributes to the three
principal components is indicated by the direction and length of the vector.
[Figure 10.1: three-dimensional plot of component loadings, showing a vector for each of the six variables (including Total deposits, Employees, Equipment, Total revenue, and Daily visits) against Components 1, 2, and 3.]
The three-dimensional plot shows that the first principal component, represented by the Component 1 axis, has positive coefficients for all six variables. The second principal component, represented by the Component 2 axis, has negative coefficients for Total deposits (A), Operating cost (B), Equipment (D) and Daily visits (2), and positive coefficients for the remaining two variables. The Component 3 axis has both positive and negative coefficients across the variables. The figure indicates that these components distinguish between online banking operations with high values on the three sets of variables and low values on the rest. This approach can effectively achieve dimensionality reduction without losing too much information.
PCA-DEA analysis
This section analyzes online banking using an integrated PCA-DEA model. The growth of internet usage has considerably changed the channel between banks and their clients. We used both financial and non-financial variables. The main objective is to construct a framework combining the DEA and PCA approaches, and to use it to measure online banking performance based on data collected from annual reports and web metrics.
The basic DEA model in Table 10.2 shows seven efficient banks. To reduce the number of variables and obtain more discriminating results, we ran a PCA-DEA analysis at a 75% cumulative-variance threshold. This left us with three efficient banks: Bank of America, Lloyds and the Royal Bank of Scotland. Table 10.5 presents the integrated PCA-DEA scores, and Table 10.6 gives the variance explained by the integrated PCA-DEA.
Only the Bank of America attains 100% efficiency in all models. Note that in model ABCD1, where all inputs and outputs except Daily Reach are taken into account, only one bank scores 100% and the average efficiency is only 37%. In the other models, where daily reach is taken into account, the average ranges from 70% to 75%.
To analyze all efficiency scores in Table 10.2, we ran PCA on all 45 sets of scores for all banks. Figure 10.2 gives a plot of principal component loadings in the different DEA models, allowing the scores to be understood from a different perspective. The plots of principal component loadings are converted from the matrix of component loadings and show a set of directional vectors. The computed component loadings result in meaningful naming for both the horizontal axis (the first component) and the vertical axis (the second component).
The horizontal axis in Figure 10.2 runs from west to east, representing the 'overall measure of efficiency'; the models that are more efficient overall are located further to the right. From the origin, north and south correspond to the 'cost oriented' and 'online oriented' models respectively. Interestingly, this finding is consistent with existing work based on data from other nations.
Figure 10.2 Plot of principal component loadings in different DEA models: the horizontal axis is the first component ('overall efficiency'), while the vertical axis separates cost-oriented models (north) from online-oriented models (south)
Risk factors
This section seeks to detect the key variables that contribute most to bank revenue. We ran both multivariate linear regression and correlation analysis, and present the computed values in Table 10.7. The Employees variable has the largest effect on revenue, with a regression coefficient of 0.749 and a correlation coefficient of 0.783. This means that, holding the other variables constant, the allocation of employees affects profit or loss the most. That is to say, misallocation of employees brings large potential risks, which bank managers should consider when they decide to enter the online banking market, because such risks have become important factors affecting the survival of enterprises. Meanwhile, the Basel Committee on Banking Supervision requires every bank to have an effective regulation and risk management system in which enterprise risk managers take an important role on the board of directors.25 Three other variables (total deposits, cost, and equipment) have less effect on revenue, and daily reach has no effect on revenue at all. This is easy to understand: the people who click on online banks' websites do not necessarily conduct transactions there. Future research should therefore examine whether the number of transactions significantly affects revenue.
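A hedged sketch of this kind of risk-factor regression follows; the column names are illustrative placeholders rather than the chapter's actual data fields.

```python
# Multivariate linear regression of revenue on candidate risk factors,
# reporting coefficients, t-values and significance as in Table 10.7.
import pandas as pd
import statsmodels.api as sm

def revenue_regression(df: pd.DataFrame):
    X = df[["deposits", "operation_cost", "employees", "equipment", "daily_reach"]]
    X = (X - X.mean()) / X.std()      # standardized coefficients are comparable
    X = sm.add_constant(X)
    return sm.OLS(df["revenue"], X).fit().summary()
```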
Conclusions
Various financial and non-financial variables have been used in this study
to analyze the online banking service of some giant banks. PCA is employed
to identify the variables contributing the most information content for DEA
models. The results enable identification of the most efficient banks in terms of
the particular variables selected through DEA. This information is then further
analyzed in terms of multivariate linear regression, which enables significance
and correlation to be seen across variables.
The combination of models applied to banking risk management issues (in
this case, online banking) can provide useful tools to benchmark banking oper-
ations and to identify opportunities for improvement in those operations.
11
Economic Perspective
An early economic view of risk was offered by Frank H. Knight,1 who focused on the difference between uncertainty (a domain evading accurate measurement: 'cases of the non-quantitative type') and risk ('a measurable uncertainty, or "risk" proper'). Risk applies to cases where knowledge is available about future outcomes and their probabilities, while uncertainty applies to cases where there is knowledge about future outcomes but not about their probabilities. Risk was viewed as important, drawing upon Courcelle-Seneuil's view2 that profit is due to the assumption of risk, compatible with von Thünen's view3 that profit was, in part, payment for certain risks. This was also expressed by F. B. Hawley,4 who argued that risk-taking was the essential function of the entrepreneur. Knight viewed risk as the objective form of the idea, and reserved subjective elements to uncertainty. Since risk was measurable, the things to which it applied had statistical distributions available.
Ganegoda and Evans (2012)5 proposed a framework for uncertainty assessment in financial markets. This framework is displayed in Table 11.1.
Ignorance covers cases where there is no reliable evidence. Ambiguity covers cases where subjectivity reigns, allowing different people to interpret the same term differently; such cases arise frequently in banking and investment.
Ganegoda and Evans use Li’s Gaussian copula model6 to price collateralized
debt obligations as a case in point. Although Li’s model (which sought to
identify the risk of a CDO by a correlation metric) seemed to work quite well
before the 2008 crisis, it failed under the conditions of stress that that crisis
entailed. Much financial forecasting is actually ambiguous in nature, because
while we gather statistics on past performance, we bet on future outcomes, and
if the underlying conditions of the future vary a great deal from those of the
past (and things are always somewhat different over time), statistical assumptions need to be considered in light of changed conditions. The difference
Table 11.1 A framework for uncertainty assessment in financial markets

Ignorance     Future events are unknown                              Value of real estate in a war zone
Ambiguity     Future events vaguely defined                          Credit rating of asset-backed securities
Uncertainty   Events known but probability distribution is not      Risk of natural disaster; high-degree unique events (fines)
Risk          Events and associated probability distribution known  Most insurance; market risk for banks
If there are many branches to logic trees, Monte Carlo simulation is widely used for analysis. An attractive feature of Monte Carlo simulation is that any distribution (and any systemic property) can be assumed. However, interpreting the output can easily become challenging.
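As a small illustration of the point, the following Monte Carlo sketch evaluates one branch of a logic tree (event occurrence followed by a lognormal severity); all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
p_event = 0.05                                         # branch probability
severity = rng.lognormal(mean=12.0, sigma=1.5, size=n_trials)
occurs = rng.random(n_trials) < p_event
losses = np.where(occurs, severity, 0.0)               # zero loss if no event
print(f"expected loss: {losses.mean():,.0f}")
print(f"99.5% loss quantile: {np.quantile(losses, 0.995):,.0f}")
```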
Buehler et al. (2008)9 provided a review of their view of the evolution of risk
management theory. Table 11.2 displays their timeline.
Risk has been a fundamental topic of economic theory for centuries, and for many years risk management consisted largely of insurance and ad hoc hedging. The evolution outlined in Table 11.2 shows the appearance of derivatives, which came to be used as a risk management tool. Financial
theory has evolved to a much greater level of sophistication. Markowitz gave
us the mean-variance approach, defining risk as variance, and providing a
tool to identify the efficient frontier where risk and return were traded off.
Markowitz’s model also allows correlations to be included, realizing that
investment returns are not independent of the returns of other investments.
Sharpe (1970)10 extended Markowitz’s work to the capital asset pricing model,
where investments are evaluated in terms of risk and return relative to the
market as a whole. In this view, the riskier the stock, the greater the expected return; this leads to the conclusion that risk is opportunity. Efficient market theory views the market price as incorporating perfect information; prices then vary randomly around their appropriate equilibrium values.
However, the complexities of life demonstrated in 2008 that every model
leaves some bit of reality out.
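Returning to Markowitz's mean-variance construction, a small worked sketch follows: minimum-variance portfolio weights for a target return, from toy expected returns and covariances (all numbers hypothetical).

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])            # expected returns (toy values)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])         # covariance matrix (toy values)
target = 0.10                                # target portfolio return

# Minimize w'Cov w subject to w'mu = target and w'1 = 1 (Lagrange, closed form).
inv = np.linalg.inv(cov)
ones = np.ones(len(mu))
A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
              [mu @ inv @ ones, ones @ inv @ ones]])
lam, gam = np.linalg.solve(A, np.array([target, 1.0]))
w = inv @ (lam * mu + gam * ones)
print("weights:", w.round(3), "variance:", round(float(w @ cov @ w), 4))
```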
Humans are known to have limited abilities to deal with extreme event probabilities.11 Sometimes, conventional statistical training does not aid this ability.
Humans have been found to have biases, some of which affect their decision-
making when they are trying to manage risks. A common mistake is over-
confidence in one’s ability and knowledge. This also applies to experts. The
problem of anchoring involves bias in estimations dependent upon initial
estimates. Availability involves reliance upon memory of past experience
as a guide in estimating event probability. Risk managers are called upon to
estimate the likelihood of possible scenarios in stress testing, but they may
be biased by recent events, especially those that make the news. Framing is
a cognitive bias in which the view of risk depends upon context. For instance, gamblers who are ahead are notably risk averse, while those who are behind are risk seeking (determined to break even). Risk managers may do the same, depending upon company profitability in the recent past.
tendency to mimic others, which often is found in investments. Essentially,
herding can lead to bubbles, or to bank runs.
Reality
Humans have always been susceptible to bubbles. Charles Mackay (1841)12 reviewed a number of them, including the Dutch tulip mania of the early 17th century, the South Sea Company bubble of 1711–1720, and the Mississippi Company bubble of 1719–1720. Patterson (2010)13 gave Isaac Newton's famous complaint on losing £20,000 in the South Sea Bubble in 1720: 'I can calculate the motion of heavenly bodies but not the madness of people.'
More recent problems include the London Market Exchange (LMX) spiral. In 1983, excess-of-loss reinsurance was popular, especially with Lloyd's of London. Syndicates unintentionally paid themselves to insure themselves against ruin.14 These risks were viewed as independent, but in fact they were not, because they involved a cycle of hedging back to the same pool of insurers. Hurricane Alicia was very damaging in 1983 and nearly brought down Lloyd's of London, with much of the damage blamed on the LMX cycle of reinsurance.
Black Monday, October 19, 1987, was another critical event, when the stock exchange nearly melted down. Some blamed portfolio insurance, based on efficient-market theory and implemented through computer trading designed to exploit what were viewed as temporary deviations from a fundamental price reflecting value.
Yet another problem arose with Long Term Capital Management (LTCM), a firm created to benefit from the Black–Scholes valuation of derivatives. Lowenstein (2000)15 reviewed the case in depth, beginning with the theoretical model of the value of the new derivative instruments, LTCM's spectacular success, and its ultimate collapse due to positions held in Russian banking in the latter part of 1998. LTCM was rescued through a bailout organized by the Federal Reserve, because it was viewed as too big to be allowed to collapse.
In 2002, highly popular information technology stocks plummeted. In the
1990s, venture capitalists were highly amenable to throwing money at any
proposal suggesting a way to implement computer technology. Stock prices
for many new startups skyrocketed, in hopes that each was the next IBM, Microsoft, SAP, or Oracle. But most were not viable, and this market sector proved to be yet another bubble.
Today, we are still suffering the after-effects of the subprime mortgage collapse. Banks seemingly prospered in a climate of deregulation and merger begun in 1981. Many invested heavily in financial instruments created out of pools of assets, including mortgages, many of which were generated by aggressive marketing to people offered homes beyond their ability to pay for them. This was not viewed as a problem, as the housing market in most of the United States had shown little evidence of decline in value, and in some regions (California, Florida, Nevada) was vastly outperforming other types of investment. However, in 2008 house values proved quite capable of declining, and many of the most aggressive investing banks were stuck with mammoth deficits. Some (not all) such banks were bailed out by the federal government, and most credit this action with saving the US economy (if not the world economy).
Difficulty in grasping extreme event probabilities has long been noted. Taleb (2007)16 notes that we are trained to consider fair coin flips to have a 0.5 probability of heads as well as tails. He proposes that if we observe 99 consecutive heads, statistical training steers us to assume that the next coin flip will still have a probability of 0.5 for both heads and tails, whereas a more pragmatic inference is that something is crooked. Taleb also discussed casino treatment of risk, one of casinos' definite core competencies. Casinos have mechanisms in place (risk management) to assure they don't go broke. But operating a casino is not completely immune to risk. Taleb related the four biggest losses casinos have experienced in recent times:
• A tiger bit a member of the Siegfried and Roy entertainment team, costing
the casino about $100 million by one estimation.
• A contractor was constructing a hotel annex, suffered losses in the project,
and sued. He lost the suit, and tried to dynamite the casino.
• Casinos (rightly) are required to file tax returns with the Internal Revenue Service. An employee charged with this duty failed to perform, not once but over a number of years. When the malfeasance was discovered, the casino was liable and had to pay a huge fine, as its license was at risk.
• A casino owner’s daughter was kidnapped. In violation of law, he used casino
money to raise the ransom demanded. While this was understandable, the
casino was liable.
These four examples represent (hopefully) very rare events, with little basis for accurate actuarial calculation. They fall at the ignorance end of the scale, in that they had outcomes that were probably deemed beyond the realm of likelihood. But firms must operate in environments that include the possibility of meteors hitting the earth and ending life as we know it, of wars, of terrorism, and of global warming threatening islands and port cities.
Taleb presented the Black Swan problem. Most humans try to be scientific, and learn from their observations and history. Because nobody in Europe had seen a black swan, Europeans assumed that black swans did not exist; when they settled Australia they found some, disproving the empirical hypothesis. Taleb also noted fallacies on the part of investors, who assume data is normally distributed. In practice, especially during bubble bursts, fat tails with higher extreme probabilities are often observed. Cognitive psychology can explain some of this. Kahneman and Tversky (2000)17 emphasized human biases from framing, with different attitudes toward risk found during winning and losing streaks. Humans have also been found to overestimate the probability of rare events, such as the odds of the next asteroid impacting the earth, or the risk of terrorists on airplanes. Akerlof and Shiller (2009)18 argued that standard economic theory makes too many assumptions; when human decisions are involved, historical data is not a good predictor of future performance.
Risk mitigation
ERM seeks to provide means to recognize and mitigate risks. The field of
insurance developed to cover a wide variety of risks, external and internal,
covering natural catastrophes, accidents, human error, and even fraud.
Financial risk has been controlled through hedging and other tools over the years, often by investment banks. With time, it was realized that many
risks could be prevented, or their impact reduced, through loss-prevention and
control systems, leading to a broader view of risk management.
Risk tolerance
A key concept of risk management is risk tolerance, which bounds the risk a firm is willing to assume. It can be calculated as the maximum amount of surplus a company is willing to lose over a given period for a specific event, or expressed as the most the company is willing to lose per year. Yet another definition expresses it as the probability that a capital adequacy ratio (the ratio of reserves to potential payouts) will fall below a given level.
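The capital-adequacy formulation is easy to estimate by simulation. The following is a hedged sketch with purely illustrative distributions and parameter values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
reserves = 120.0 + rng.normal(0.0, 10.0, n)    # year-end reserves (toy)
payouts = rng.lognormal(4.3, 0.5, n)           # potential payouts (toy)
ratio = reserves / payouts                     # capital adequacy ratio
floor = 1.5
print(f"P(ratio < {floor}): {(ratio < floor).mean():.3%}")
```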
Another key concept is that organizations are in business to cope with risks
in their area of expertise (core competence), but should shed risks that are not
in this core. Insurance is the most common means to shed these non-core
risks.
Recent events
Doherty et al. (2009)19 sought to explain the underlying causes of the 2008 global financial crisis, concluding that values, incentives, decision processes, and internal controls all played a role. Oversight and control were found lacking in the years prior to 2008, with heavy use of leverage on the part of investment firms amplifying risks that were in turn masked by strong measured profit performance. A negative view of risk management would cast it as mere loss avoidance; a positive view would emphasize risk management as part of value creation.
The insurance industry is by its nature focused on risk transfer. During 2008 and the subsequent period, while the insurance industry remained profitable for the most part, it experienced significant declines in return as well as losses of risk capital. Doherty et al. proposed approaches to managing risk in this market:
firm value. The issue of defining what belongs in recurring or nonrecurring categories involves a level of subjectivity, lending itself to abuse. The basic argument is that honesty pays in the long run.
Conclusions
Powers et al. extended their analysis to the continuous case, where it is not
necessary to establish boundaries between high and low expected frequencies
or severities.
Financial theory has developed a number of tools to supplement insurance
and hedging as means of implementing risk management. Derivatives are
intended as means to hedge in a more sophisticated manner. But the human
factor enters into the equation when customers do not understand what they
are purchasing (and possibly salespeople don't either). Humans have a tendency toward exuberance that has manifested itself over and over with time, creating the high variance in market prices that we know as bubbles.
Buehler et al. concluded that companies have their own appropriate ratio of debt to equity related to the probability of incurring loss. Greater equity capital than required will lead to inefficient capital use, as more profit will be needed to maintain average per-share profitability. Insufficient equity capital risks default or financial stress, as well as limiting the firm's ability to take advantage of new growth opportunities. The optimal debt level is a function of a firm's key market, financial, and operating risks, and the firm's ability to mitigate those risks varies. Risks within the firm's competence should be retained, while risks it is less capable of dealing with should be transferred.
12
British Petroleum Deepwater Horizon
Introduction
The Macondo well, operated by BP, aided by driller Transocean Ltd. and
receiving cement support from Halliburton Co., blew out on April 20, 2010,
leading to 11 deaths. The subsequent 87-day flow of oil into the Gulf of Mexico
dominated news in the US for an extensive period of time, polluting fisheries in
the Gulf as well as the coastal areas of Louisiana, Mississippi, Alabama, Florida,
and Texas. The cause was attributed to defective cement in the well.
Subsequent studies by regulatory agencies could detect no formal risk assessment by BP.1 Risk management in this context is not traditional insurance risk management, nor the financial risk management that made hedging famous. Rather, risk management in this broader sense means the responsibility to recognize hazards and take action to prevent them to the extent possible. The Deepwater oil spill demonstrates such a context, and the disastrous consequences of failing at such risk management.
Our economy involves many extraction activities, most of which carry high degrees of risk. In the 19th century, the prevailing attitude was that miners were paid above the market level of wages in compensation for the risks to their lives involved in their work. We still have mining, but the 20th century saw more activity in petroleum extraction, again with high levels of life-threatening events. Government has reflected society's changing attitudes. This evolution is still going on, but we feel that the trend is for some group to take the attitude that for every catastrophe, 'we will not rest until that never happens again.' The Occupational Safety and Health Act of 1970 went far to make workplaces safer. It has taken many years for our society to work out this transition to safer work practices as a requirement, even at the expense of operating cost. (In fact, we remember when one of the initial motivators for firms to accept OSHA was that 'it pays' through reduced legal suits and fines.) There have been many acts passed in the US to regulate all industry, including all phases of petroleum production.
Deepwater Horizon
The Macondo well, in Mississippi Canyon Block 252, lay under 5,000 feet of water, 40 miles southeast of the Louisiana shore. The rig was owned by Transocean Ltd. and leased to British Petroleum. On April 20, 2010, the well erupted in a blowout, resulting in explosion and fire. Eleven workers were killed and another 17 injured. Gas and mud from the well had triggered the explosion, which sank the entire rig two days later and cut the well pipe at the sea floor. A containment cap was in place, but failed. The compromised wellhead discharged crude into the Gulf of Mexico at a rate peaking at 9,000 barrels per day. The oil spill shut down the Louisiana seafood industry.2
The oil spill persisted from April until July, gushing nearly 5 million barrels
of oil before it was finally stopped. The response effort was massive, with some
2600 vessels deployed. Shoreline protection efforts included 4.4 million feet of
sorbent boom, and there was heavy use of oil dispersants. Sylves and Comfort provided a listing of conditions describing BP risk management with respect to Deepwater Horizon (Table 12.1).
[Table 12.1 (excerpt) – Oil properties: light and gaseous oil, evaporating quickly in a warm climate]
Two perspectives commonly advanced on the disaster were that:
1. the disaster was a black swan, which could not have been foreseen; and
2. BP and its regulators were incompetent black sheep.
Borison and Hamm considered a third perspective, seeking insight into what better risk management could have done.
The uncertainty (ambiguity) of the case makes traditional tools such as VaR analysis inappropriate; while there is a statistical database of oil spills, they are (hopefully) rare events, so tools such as stress testing were proposed as more appropriate. The Minerals Management Service (MMS) was responsible for regulation, and had a system in place to deal with oil spills that included tools to estimate likelihood and impact based on historical data. However, MMS was not required to quantify the risk of a large spill from the Macondo well, and appears not to have done so. MMS relied on qualitative analysis, in the form of developers specifying a worst-case scenario and giving detailed plans of response. Thus BP was aware of the possibility of a large oil spill, although no probability estimates were recorded: BP did not underestimate the probability of the spill; rather, BP didn't estimate it at all. When risk analysis is solely qualitative, there is a great deal of room for personal interpretation. For instance, a 'worst case' could imply a 1 in 100 event or a 1 in 1,000,000 event.
Borison and Hamm suggested a Bayesian analysis instead. Eckle and Burgherr
(2013)5 applied Bayesian analysis to oil chain fatalities, using a Poisson dis-
tribution for accident frequency and a gamma distribution for accident
severity. Monte Carlo simulation modeling was used to combine these factors.
Abbasinejad et al. (2012)6 studied Iranian energy consumption since 1983,
finding it unbalanced, with excessive use of oil and gas damaging the environment and encouraging a shift to natural gas and electricity. High growth in
energy demand was expected, and a Bayesian vector autoregressive forecasting
model of energy consumption was applied. Bayesian analysis bases probabilities on judgment rather than on empirical observations.
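A minimal sketch of the frequency/severity combination just described (Poisson accident counts and gamma severities combined by Monte Carlo) is given below; the parameter values are illustrative, not Eckle and Burgherr's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 50_000
counts = rng.poisson(lam=0.8, size=n_years)          # accidents per simulated year
annual_loss = np.array([rng.gamma(2.0, 5.0, size=c).sum() for c in counts])
print("mean annual loss:", round(float(annual_loss.mean()), 2))
print("99th percentile:", round(float(np.quantile(annual_loss, 0.99)), 2))
```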
Conclusions
The BP spill is only the most recent salient event demonstrating the risk to the environment from energy policy. One alternative is nuclear energy, with far less probability of a spill, though a nuclear accident could of course be far more dangerous than an oil spill. Our global economy requires a great deal of transportation, and we have developed a complex system relying on petroleum-based energy, with known relative risks of spills throughout the system. Drilling in the ocean taps vast reserves and occurs at a distance from population centers, but as the BP case demonstrates, spills can be dramatic. Conversely, land-based drilling creates a need to transport oil. Pipelines are safer than trains, but political issues make it difficult to create new pipelines, so shipment over existing (old) railroad track becomes the major alternative. We could switch to electric cars and trucks to vastly reduce the use of petroleum for transportation, but the added strain on the electrical generation system would open new opportunities for disaster. The problem is essentially that our economies are complex and interrelated.
Borison and Hamm inferred three major lessons from the BP oil spill:
3. Borison and Hamm call for systematic learning to ensure that risk man-
agement improves over time. We think that they are absolutely correct.
Understanding the interactions of complex systems is critical in enabling
effective response.
The BP oil spill demonstrates a common risk context, involving massive engineering undertakings such as are seen in construction projects of many kinds. The tools suggested for risk planning were stress testing (known variously as war games, what-if scenario analysis, and many other names) and simulation using subjective inputs.
13
Bank Efficiency Analysis
Introduction
• Neither DEA nor ANNs make assumptions about the functional form that
links its inputs to outputs.
• DEA seeks a set of weights to maximize technical efficiency, whereas ANNs
seek a set of weights to derive the best possible fit through observations of
the training dataset.
But an improvement in efficiency from 0.8 to 0.9 could be due to many causes, for example scale economies or an increase in several outputs. ANNs have been viewed as a good tool for approximating numerous nonparametric and nonlinear problems; thus the banking industry provides good opportunities for their application. To the best of our knowledge, there are no studies using ANNs to deal with bank branch efficiency, so this chapter presents a DEA-NN approach to evaluate the branch performance of a big Canadian bank. The results are also compared with the corresponding efficiency ratings obtained from DEA. The similarity between DEA's unit invariance and the scale preprocessing required by NNs validates the rationale for comparing pure DEA results with DEA-NN results. Based on this analysis, similarities and differences between ANNs and DEA models are further investigated.
DEA is used to establish a best-practice group amongst a set of observed units and to identify the units that are inefficient when compared to that group. DEA also indicates the magnitude of the inefficiencies and the improvements possible for the inefficient units. Consider n DMUs to be evaluated, DMUj (j = 1, 2, ..., n), consuming amounts Xj = {xij} of m different inputs (i = 1, 2, ..., m) and producing amounts Yj = {yrj} of s outputs (r = 1, ..., s). The input-oriented efficiency of a particular DMU0 under the assumption of variable returns to scale (VRS) can be obtained from the following linear program (the input-oriented BCC model):9
$$
\begin{aligned}
\min_{\theta,\,\lambda,\,s^{+},\,s^{-}}\quad & z_0 = \theta - \varepsilon \cdot \mathbf{1} s^{+} - \varepsilon \cdot \mathbf{1} s^{-} \\
\text{s.t.}\quad & Y\lambda - s^{+} = Y_0 \\
& \theta X_0 - X\lambda - s^{-} = 0 \\
& \mathbf{1}\lambda = 1 \\
& \lambda,\, s^{+},\, s^{-} \ge 0
\end{aligned}
\qquad (13.1)
$$

The convexity constraint $\mathbf{1}\lambda = 1$ allows variable returns to scale. The dual program of the above formulation is:

$$
\begin{aligned}
\max_{\mu,\,\nu}\quad & w_0 = \mu^{T} Y_0 + u_0 \\
\text{s.t.}\quad & \nu^{T} X_0 = 1 \\
& \mu^{T} Y - \nu^{T} X + u_0 \mathbf{1} \le 0 \\
& -\mu^{T} \le -\varepsilon \cdot \mathbf{1} \\
& -\nu^{T} \le -\varepsilon \cdot \mathbf{1} \\
& u_0 \ \text{free}
\end{aligned}
\qquad (13.2)
$$
If the convexity constraint ($\mathbf{1}\lambda = 1$) in (13.1) and the variable $u_0$ in (13.2) are removed, the feasible region is enlarged, which results in a reduction in the number of efficient DMUs, with all DMUs treated as operating at constant returns to scale (CRS). The resulting model is referred to as the CCR model.10 DEA has a
rich literature base of over 3000 papers and several books for those who require
detailed information on this technology.
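A compact sketch of solving the input-oriented envelopment model as a linear program follows; for brevity it omits the non-Archimedean ε and slack terms of (13.1), and it is not the chapter's own code.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0, vrs=True):
    """X: m x n input matrix, Y: s x n output matrix, j0: evaluated DMU."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    A_ub = np.vstack([
        np.hstack([np.zeros((s, 1)), -Y]),         # Y lambda >= Y0
        np.hstack([-X[:, [j0]], X]),               # X lambda <= theta * X0
    ])
    b_ub = np.r_[-Y[:, j0], np.zeros(m)]
    A_eq = b_eq = None
    if vrs:                                        # convexity row: sum(lambda) = 1
        A_eq, b_eq = np.r_[0.0, np.ones(n)].reshape(1, -1), [1.0]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.fun                                 # efficiency score theta*
```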
In summary, each DEA model seeks to determine which of the n DMUs
define an envelopment surface that represents best practice, referred to as the
empirical production function or the efficient frontier. Units that lie on the
surface are deemed efficient in DEA, while those units that do not are termed
inefficient. DEA provides a comprehensive analysis of relative efficiencies
for multiple input-multiple output situations by evaluating each DMU and
measuring its performance relative to an envelopment surface composed of
other DMUs. Those DMUs form the peer group for the inefficient units, known as the efficient reference set. As the inefficient units are projected onto the
envelopment surface, the efficient units closest to the projection and whose
linear combination comprises this virtual unit form the peer group for that
particular DMU. The targets defined by the efficient projections give an indi-
cation of how this DMU can improve to become efficient.
Neural networks
Neural networks provide a new way for feature extraction (using hidden layers)
and classification (e.g., multilayer perceptrons). In addition, existing feature
extraction and classification algorithms can also be mapped into neural
network architectures for efficient (hardware) implementation.
Backpropagation neural network (BPNN) is the most widely used neural
network technique for classification or prediction.11 Figure 13.1 provides the
structure of the backpropagation neural network.
Figure 13.1 Structure of the backpropagation neural network: inputs X1, ..., Xn feed hidden units Z1, ..., Zm through weights Wij; the hidden units feed the output ŷ through weights Vj; the error y − ŷ is propagated back to adjust the weights
With backpropagation, the related input data are repeatedly presented to the neural network. The output of the network is compared to the desired output and an error is calculated on each iteration. This error is then backpropagated to the network and used to adjust the weights, so that the error decreases with each iteration and the model gets closer and closer to producing the desired output. This process is known as training.
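A stripped-down sketch of one such training loop follows (a single tanh hidden layer on toy data, using plain gradient descent rather than the Levenberg–Marquardt algorithm adopted later in the chapter).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 5))                        # 50 branches, 5 inputs (toy)
y = X.mean(axis=1, keepdims=True)              # stand-in efficiency target
W = rng.uniform(-1, 1, (5, 4))                 # input -> hidden weights
V = rng.uniform(-1, 1, (4, 1))                 # hidden -> output weights
lr = 0.1
for _ in range(2000):
    Z = np.tanh(X @ W)                         # hidden layer activations
    err = Z @ V - y                            # output error
    V -= lr * Z.T @ err / len(X)               # backpropagate to output weights
    W -= lr * X.T @ ((err @ V.T) * (1 - Z**2)) / len(X)  # and to hidden weights
print("training MSE:", float((err ** 2).mean()))
```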
When neural networks are trained, three problems should be taken into consideration. First, it is very challenging to select the learning rate for a nonlinear network: if the learning rate is too large, learning becomes unstable, while if it is too small, training takes an extremely long time. Second, settling in a local minimum may be good or bad, depending on how close the local minimum is to the global minimum and how accurate an error is required; in either case, backpropagation may not always find the weights for the optimum solution, so we may reinitialize the network several times to improve the chance of a good solution. Finally, the network is sensitive to the number of neurons in its hidden layers: too few neurons can lead to under-fitting, while too many can cause over-fitting, in which all training points are well fitted but the fitting curve oscillates wildly between them. To address these problems, we preprocess the data before training. The scale of data values is bounded between 10 and 100 by dividing by a constant value, such as 10 or 100. The weights are initialized with random decimal fractions ranging from −1 to 1. Moreover, there are about 12 training algorithms for BPNN; after preliminary analyses and trials, the Levenberg–Marquardt algorithm was selected.
One hundred and forty-two branches of a big Canadian bank in the Toronto
area were involved in the analysis. The data covered the period October to
December, 2001. Summary statistics for the inputs and outputs are reported
in Table 13.1.
From the table, no consistent trend in the data was found over the time
horizon of analysis. There are no significant variations in terms of deposits
and loans.
[Table 13.1 Summary statistics for the inputs and outputs: personnel expenses, other general expenses, deposits, loans, and revenues]
The number of training data for nonparametric models should be at least ten times the number of input variables. Since we have five inputs in the ANNs, a minimum of 50 training examples was necessary to have reasonable learning of connection weights.
Since we had less than ten efficient branches (efficiency score is 1) for each
training, and a minimum of 50 branches for training was desirable, we used a
grouping technique by roughly pre-specified cutoff efficiency threshold values
of 0.98, 0.8 and 0.5. Thus, we obtained sufficient branches in our ‘efficient’
set. Note that the word efficient in the DEA context means DMUs with an efficiency score of 1. Since in our case the 'efficient' set does not only include DMUs with an efficiency score of 1, we have used quotation marks to indicate that the word 'efficient' has a slightly different meaning from the DEA context. The same logic applies for the word inefficient (efficiency score < 1). The performance of 'efficient' and 'inefficient' branches was then tested on the entire
dataset so that industry efficiency could be predicted and analyzed.
The neural network was trained for each month using different combinations of subsets. A computer program was written in Matlab, using Matlab's neural network toolbox.
Table 13.2 presents the parameters of the estimated neural networks in the
algorithm. The estimated neural network incorporates four tanh hidden units,
and the Levenberg–Marquardt algorithm is employed for training.
After training, we obtained three estimated neural networks for each month,
the best two of which were used for the branch efficiency prediction for each
month. The best two estimated neural networks are denoted as DEA-NN1 and
DEA-NN2, using training data subsets S1 ∪ S2 ∪ S3 and S1 ∪ S2 ∪ S4, respectively.
The results are consistent with Pendharkar et al.'s (2003) finding that it is better to train an ANN on an efficient training data subset in order to improve its predictive performance.
Based upon the best two estimated neural networks, the branch efficiencies are calculated. The results are shown in Tables 13.3 and 13.4. Table 13.3 gives the number of branches in each efficiency interval for the three months. We grouped dataset SS2 into four categories depending on the efficiency value intervals (0.98, 1], (0.8, 0.98], (0.5, 0.8] and (0, 0.5], and then performed a statistical analysis of the DEA-NN efficiencies.
Table 13.4 shows that the efficiency scores for some DMUs are larger than 1
in the DEA-NN model, which is not allowable in the DEA context. This occurs
in the DEA-NN model since NNs actually generate a stochastic frontier based
upon the ‘efficient’ DMUs due to the statistical and probabilistic (and thus
varying) properties embedded in NNs. Neural network models have the ability
to approximate complex nonlinear functions in a semi-parametric fashion. We observe that bank branch performance is very similar across the three-month period, since there was no significant change in the bank's policy or in economic conditions during the examined period.
[Table 13.4 Summary statistics (max, min, mean, median, standard deviation) of the DEA-NN efficiency scores]
If the four least efficient branches could improve their efficiency ratings to the 0.5–0.8 range while all the other branches remained the same, the branches involved could save 18% of their costs on average (from Table 13.6). The DEA-NN approach can provide guidance for inefficient branches to improve their performance to any efficiency rating that management thinks necessary.
To verify the rationality of our proposed DEA-NN approach, a regression analysis between the efficiency results of the DEA-NN approach and those of the normal DEA model is conducted. In this regression, a slope of one, an intercept of zero, and an R² of one would indicate that the DEA-NN results perfectly reproduce the DEA results. The regression results based on three months' data are shown in Tables 13.5, 13.6 and 13.7, where the slope, intercept and R² coefficient are reported. The predicted efficiency has a strong correlation with that calculated by DEA, which indicates that the predicted efficiency is a good proxy for classical DEA results.
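This validation regression is simple to reproduce; a hedged sketch with toy score vectors follows.

```python
import numpy as np
from scipy.stats import linregress

dea = np.array([1.00, 0.93, 0.88, 0.75, 0.62])      # classical DEA scores (toy)
dea_nn = np.array([1.02, 0.91, 0.86, 0.77, 0.60])   # DEA-NN predictions (toy)
fit = linregress(dea, dea_nn)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, "
      f"R2={fit.rvalue ** 2:.3f}")                  # near 1, 0, 1 is good
```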
[Table (excerpt): transit no., current score, improved score, current expenses (p.a.), difference (p.a.)]
From Table 13.7, the DEA-NN results are highly correlated with those obtained using the DEA CRS technique.
[Table 13.7 Regression analysis for branch efficiency prediction using October data]
[Table (excerpt): efficient branches identified in each month; branches such as #3, #49, #64, #81 and #91 recur across October, November and December]
Conclusions
This chapter presents a DEA-NN study applied to the branches of a big Canadian bank. The results are comparable to the normal DEA results. However, the DEA-NN approach produces a more robust frontier and identifies more efficient units, since more good performance patterns are explored. Furthermore, the DEA-NN approach identifies less-than-optimal performers and suggests areas in which their performance can be improved to attain better efficiency ratings. We conclude this section with a comparison of the two methodological approaches in Table 13.12. In summary, the neural network approach requires no assumptions about the production function (the major drawback of the parametric approach) and is highly flexible.
Table 13.12 Comparison between DEA and the neural network

Similarities
• Both are nonparametric.
• Neither makes assumptions about the functional form that links inputs to outputs.
• DEA seeks optimal weights to maximize efficiency; the neural network seeks optimal weights to derive the best possible fit.
• DEA is unit and scale invariant, corresponding to the scale preprocessing required by the neural network.

Differences
• DEA makes medium assumptions about functional form and data; the neural network makes low assumptions.
• DEA offers medium flexibility; the neural network offers high flexibility.
• DEA has many theoretical studies and applications on efficiency; the neural network has few theoretical studies.
• DEA has low cost of software and estimation time; the neural network has high cost of software and estimation time.
14
Catastrophe Bond and Risk Modeling
Introduction
Catastrophe bonds, or cat bonds, are the most common type of catastrophe risk-linked security. Cat bonds have complicated structures; they are financial instruments devised to transfer insurance risk from insurance and reinsurance companies to the capital market. The payoff from cat bonds depends on qualifying trigger events: natural disasters such as earthquakes, floods and hurricanes, or manmade events such as fires, explosions and terrorism. We review the modeling approaches for cat bonds below.
Figure 14.1 Insurance payoffs (million RMB) for the Wenchuan earthquake over time in days: total, property insurance, and life insurance (from www.circ.gov.cn)
Loss model
Demonstration of computation
Earthquake loss data obtained from the China Statistics Yearbook are presented in Table 14.3. From Table 14.2, we can see that the biggest earthquake occurred in 2008 in Sichuan province. In this catastrophe, although the loss claims came mainly from the government and public endowment, insurers did face high claims. For example, according to some government surveys (www.circ.gov.cn), most insurers waived the earthquake-risk exclusion clauses of some life insurance policies. For modeling catastrophe loss, the log-normal distribution clearly performs better than the normal distribution, consistent with the majority of published research.13
Next, we analyze the statistical characteristics of the logarithm of catastrophe loss: mean reversion and fat tails. We use the presence of an autoregressive (AR) feature to test mean reversion in the data. In linear processes with normal shocks, this amounts to checking stationarity. If stationarity holds, it can be a symptom of mean reversion, and a mean-reverting process can be adopted as a first assumption for the data.
Mean reversion can be defined as the property of always reverting to a certain constant as time passes. This property holds for an AR(1) process if the absolute value of the autoregression coefficient is less than one, that is, |α| < 1. Since for the AR(1) process |α| < 1 is also a necessary and sufficient condition for stationarity, testing for mean reversion is equivalent to testing for
stationarity. Note that in the special case where |α| = 1, the process behaves like a pure random walk with constant drift. A growing number of stationarity tests are available in many econometric packages; the most popular are the Dickey–Fuller (DF) test, the Augmented DF (ADF) test, the Phillips–Perron (PP) test, the Variance Ratio (VR) test, and the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test. We choose the ADF test for its robustness. The computation, shown in Table 14.3, gives a t-statistic of −4.123 with a probability value of 0.0021, so the earthquake loss data reject the unit-root null, indicating stationarity and hence mean reversion.
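A minimal sketch of the ADF check follows, run on an illustrative log-loss series rather than the chapter's data.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
loss_log = np.log(rng.lognormal(10.0, 1.0, 60))   # placeholder log-loss series
stat, pvalue, *_ = adfuller(loss_log)
print(f"ADF t-statistic: {stat:.3f}, p-value: {pvalue:.4f}")
# A small p-value rejects the unit-root null, consistent with mean reversion.
```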
To test for fat tails we first use a graphical tool called the QQ-plot, which compares the tails in the data with those of the Gaussian distribution, giving immediate graphical evidence of the possible presence of fat tails. Further evidence can be obtained by considering the skewness and excess kurtosis values in order to see how the data differ from the Gaussian distribution. Computations indicate skewness and excess kurtosis values of 4.401 and 23 respectively, suggesting that the earthquake loss data do have fat tails.
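Both diagnostics are easy to reproduce; the following hedged sketch uses placeholder data.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
data = np.log(rng.lognormal(10.0, 1.0, 60))      # placeholder log-loss data
stats.probplot(data, dist="norm", plot=plt)      # QQ-plot against the Gaussian
plt.show()
print("skewness:", stats.skew(data))
print("excess kurtosis:", stats.kurtosis(data))  # Fisher definition: normal = 0
```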
Parameter estimation
Our parameter estimation is based on Markov chain Monte Carlo (MCMC) approaches, a class of algorithms for sampling from probability distributions by constructing a Markov chain that has the desired distribution as its equilibrium distribution. MCMC methods are particularly well suited to financial pricing applications involving stochastic processes, for several reasons. First, state variables solve stochastic differential equations built from Brownian motions, Poisson processes, or other i.i.d. shocks, so standard tools of Bayesian inference can be used directly. Second, MCMC is a unified estimation procedure that simultaneously estimates parameters and latent variables: it directly computes the distributions of the latent variables and parameters given the observed data, a strong alternative to the usual approach of applying approximate filters or latent variable proxies. Finally, MCMC is based on conditional simulation without any optimization. MCMC provides a strategy for generating samples x0:t while exploring the state space using a Markov chain mechanism, constructed so that the chain spends more time in the most
important regions. The MCMC approach in this chapter uses a random-walk MH (Metropolis–Hastings) algorithm with a uniform proposal distribution on [−0.5, 0.5]. The parameter vector θ is updated by θ′ = θ + τε, where ε is the proposal error term and τ is a tuning parameter; the variance of the proposal is adjusted by changing τ. For details of approximating parameters, readers can refer to Andrieu and Freitas.14 Code was written and implemented in Matlab. Table 14.4 presents the parameter estimates for all three sets of models.
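A minimal sketch of the random-walk Metropolis–Hastings update just described follows; log_post is a placeholder log-posterior, not the chapter's model.

```python
import numpy as np

def rw_metropolis(log_post, theta0, tau=1.0, n_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for t in range(n_iter):
        prop = theta + tau * rng.uniform(-0.5, 0.5, theta.size)  # proposal step
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with MH probability
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain
```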
Figures 14.2 and 14.3 depict the fitted curves of the compound Poisson model and of different jump-diffusion processes, using historical and simulated data.
Figure 14.2 Frequency histogram of the logarithm of earthquake loss with normal fit
Figure 14.3 Historical vs. simulated data distribution of the compound Poisson model, with normal, lognormal, gamma and log-logistic density fits
[Figure: historical vs. simulated data distributions of the jump-diffusion models, with normal, lognormal, gamma and log-logistic severities and the double exponential jump diffusion]
Error analysis
This section provides an error analysis to validate the fitted models by simulation; goodness of fit can also be tested following the error analysis. We first simulate 10,000 paths of earthquake loss data using Monte Carlo simulation, then average the 10,000 paths to estimate the loss data. Finally, we compute the mean square deviation, mean absolute error and maximal absolute error. The computational results are shown in Table 14.6.
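A hedged sketch of this error analysis follows; simulate_path stands in for whichever fitted model is being validated.

```python
import numpy as np

def error_metrics(historical, simulate_path, n_paths=10_000, seed=0):
    rng = np.random.default_rng(seed)
    paths = np.array([simulate_path(rng) for _ in range(n_paths)])
    estimate = paths.mean(axis=0)               # average of simulated paths
    err = estimate - np.asarray(historical)
    return (float(np.sqrt((err ** 2).mean())),  # mean square deviation (RMSE)
            float(np.abs(err).mean()),          # mean absolute error
            float(np.abs(err).max()))           # maximal absolute error
```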
As can be seen from Table 14.6, the mean square deviation and mean absolute error of the double exponential jump-diffusion model are 0.13571 and 0.1024 respectively, less than the corresponding values for the other models. The maximal absolute error of the double exponential jump-diffusion model is 1.2963, comparable to the 0.8991 of the log-logistic jump-diffusion model. These values suggest that the double exponential jump-diffusion model is the best fit for Chinese earthquake loss data.
Conclusions
15
Bilevel Programming Merger Analysis in Banking
Introduction
Figure 15.1 Supply chain model of the banking process with constrained resources (e.g., a finite IT budget)
• The bank puts more emphasis on the primary-market business than on the secondary-market business. The primary-market business unit seizes the constrained resources to reach its goal, and the secondary-market business unit, defined and solved as the follower, attempts to maximize its profits. We denote this as an upper-leader (UL) game.
• The bank places greater emphasis on the secondary-market business than on the primary-market business. The secondary-market business unit serves as the leader, and the primary-market business unit serves as the follower in the Stackelberg game. We denote this as a lower-leader (LL) game. The problem for the leader at the lower level is to find an optimal resource allocation and weighting scheme that maximizes its total profit subject to the follower's optimal strategy. The follower at the upper level attempts to find an optimal weighting scheme to maximize its total profit given the resource allocation scheme determined by the leader.
Mathematical model
$$
\begin{aligned}
\max_{x}\quad & F(x, y) = p_1^{T}x + q_1^{T}y \\
\text{s.t.}\quad & A_1 x + B_1 y \le b_1 \\
& \min_{y}\ f(x, y) = p_2^{T}x + q_2^{T}y \\
& \qquad \text{s.t.}\ A_2 x + B_2 y \le b_2
\end{aligned}
\qquad (\text{I})
$$
$$
\begin{aligned}
\max\quad & \sum_{r=1}^{q} d_r \tilde{y}_{jr} - \sum_{s=1}^{p} c_s \tilde{x}_{js} \\
\text{s.t.}\quad & \sum_{i=1}^{n} \lambda_i x_{is} \le \tilde{x}_{js} \quad (s = 1, \ldots, p), \\
& \sum_{i=1}^{n} \lambda_i y_{ir} \ge \tilde{y}_{jr} \quad (r = 1, \ldots, q), \\
& \lambda_i \ge 0 \quad (i = 1, \ldots, n),
\end{aligned}
\qquad (\text{II})
$$
where $(\tilde{x}_{j1}, \ldots, \tilde{x}_{jp}, \tilde{y}_{j1}, \ldots, \tilde{y}_{jq})$ are decision variables, $c = (c_1, \ldots, c_p)$ and $d = (d_1, \ldots, d_q)$ are the unit price vectors attached to the input vector $\tilde{x}_j = (\tilde{x}_{j1}, \ldots, \tilde{x}_{jp})$ and the output vector $\tilde{y}_j = (\tilde{y}_{j1}, \ldots, \tilde{y}_{jq})$ respectively, and $\lambda = (\lambda_1, \ldots, \lambda_n)$ is a nonnegative multiplier vector used to aggregate existing production activities. Based on an optimal solution $(x^{*}_{j1}, \ldots, x^{*}_{jp}, y^{*}_{j1}, \ldots, y^{*}_{jq})$ of the above model, the profit efficiency of DMU j ($PE_j$) is computed as:

$$
PE_j = \frac{\sum_{r=1}^{q} d_r y_{jr} - \sum_{s=1}^{p} c_s x_{js}}{\sum_{r=1}^{q} d_r y^{*}_{jr} - \sum_{s=1}^{p} c_s x^{*}_{js}}
\qquad (\text{III})
$$

where $y_j = (y_{j1}, \ldots, y_{jq})$ and $x_j = (x_{j1}, \ldots, x_{jp})$ are the vectors of observed values for DMU j. Under the positive profit assumption, i.e., $\sum_{r=1}^{q} d_r y_{jr} - \sum_{s=1}^{p} c_s x_{js} > 0$, we have $0 < PE_j \le 1$.
Merger evaluation
Step 1: Solve the bilevel programming DEA problem for each DMU, using the integrated bilevel programming-DEA model to obtain optimal solutions. Using nonnegative multipliers, the solutions trace out the best-practice frontier as a convex combination of existing production activities.
Step 2: For each variable, compute the average slack-adjusted input-output bundle, and compute the profits of leader and follower at this average bundle.
Step 3: Solve the bilevel programming DEA problem with the average input-output bundle to generate efficiency values, and record the corresponding optimal input-output bundles for the leader, follower and system.
Step 4: Compute the total (slack-adjusted) input and output bundles of the n systems, and compute the profits of leader, follower and system at the total input-output bundle.
Step 5: Solve the bilevel programming DEA problem with the total input-output bundle. With the solution, we can compute the merger efficiency, harmony and scale efficiency of the whole bilevel system as well as of the subsystems.
Table 15.1 Input and output data for the 8 branches in the numerical example
Branch XD1 X1 Z1 Y XD2 X2 Z2 PL PF
the profit efficiency values for both leaders and followers, and then change the value of the profit-sharing parameter α from 0 to 0.1. A numerical analysis suggests that both leaders and followers are better off when α ∈ (0, α̂), where α̂ ≈ 0.01.
This section conducts a banking chain merger efficiency analysis using our proposed approach. The Canadian banking industry experienced an increasingly dynamic market environment after a change in the legislative regime of the Canadian government in the early 1990s. Benefiting from new and cost-effective technology, Canadian banks have in many ways improved performance and reduced operating costs. They have maintained or even increased the quality of their services while expanding to a broader customer base in order to be more competitive in the global banking market. For example, GIS-based technologies have been employed by Canadian banks for merger evaluation, particularly for the derivation of market boundaries and the estimation of market share.18 Both negative and positive effects of mergers need to be taken into consideration, along with uncertainties and risks stemming from multiple sources. Gauging the potential gains from mergers, and decomposing these gains into harmony and scale effects, provides support for banks' decisions on whether to green-light a merger under its underlying conditions.
The hierarchical structure of the banking chain is similar to Figure 15.1. For clarity of exposition, in the bilevel banking chain model we extract two direct inputs (personnel costs and other expenses), one intermediate output (loans), and two final outputs (profit and loan recovery). We consider the annual IT budget as a constrained input resource supporting computers, required software and systems, data network rentals, and maintenance and repair. The large Canadian bank we consider has the greatest market share in the Canadian e-banking business, which relies heavily on IT support. Personnel costs include salaries and benefit payments for full-time, part-time, and contract employees. Other expenses are costs other than personnel and the IT budget, such as marketing and advertising expenditures, training and education costs, and communication expenses. Loans are composed of credit notes issued to individual customers.
Thirty branches of this bank in Ontario are selected to consider mergers of the
branches as a form of intra-firm reorganization. The potential for performance
improvement in these branches provides the incentive to consider gains from
merging independently operated branches with other banking processes. We
chose branches from the same bank for two primary reasons. First, we expect
high relative efficiency levels because of the similarity of the technologies and
locations and the cooperation among the branches. Second, existing work from
Branch XD1 X1 Z1 Y XD2 X2 Z2 PL PF
DMU1 71.3 1.5 0.133 1447.8 2.5 523.2 1427.7 1374.9 500.6
DMU2 107.1 1.7 0.169 1950.2 2.3 534 1923.3 1841.2 504.8
DMU3 122.4 2.35 0.24 2095.2 1.65 536.3 2066 1970.2 505.4
DMU4 41 1.1 0.159 1364.4 2.9 495.4 1324.8 1322.1 452.9
DMU5 36.3 2.11 0.156 1390.2 1.89 521.1 1365.2 1351.6 494.2
DMU6 40.9 1.33 0.18485 1520.6 2.67 523.7 1496.3 1478.2 496.7
DMU7 91.8 0.6 0.5642 8118.6 3.4 610.3 8005.2 8025.6 493.5
DMU8 123.5 0.71 0.12 1144.1 3.29 519.9 1126.9 1019.8 499.4
DMU9 182.1 1.2 0.198 1742.5 2.8 527.4 1712.9 1559 495
DMU10 191.5 1.2 0.198 1742.5 2.8 527.4 1712.9 1549.6 495
DMU11 302.8 2 0.137 3153.7 2 442.9 2980.6 2848.8 267.8
DMU12 544 3.8 0.297 4517.7 0.2 386 4300.9 3969.6 169
DMU13 87.4 0.5 0.131 1434.2 3.5 517.7 1412.7 1346.2 492.7
DMU14 691.8 3.7 0.125 3249.1 0.3 564.8 3070.4 2553.5 385.8
DMU15 458 4 0.138 2622 0 402 2283.8 2159.9 63.84
DMU16 124.1 1.1 0.144 1749.3 2.9 524.3 1728.3 1624 500.4
DMU17 45 0.53 0.076 951.2 3.47 506.7 932.18 905.59 484.2
DMU18 589.2 3.45 0.155 4246.9 0.55 600.2 4026.1 3654.1 378.8
DMU19 713.8 3.82 0.14 3915.8 0.18 372.5 3559.5 3198 15.98
DMU20 97.3 1.28 0.126 1898.7 2.72 524.3 1870.4 1800 493.3
DMU21 229.4 1.36 0.12843 1876.5 2.64 487.1 1805.2 1645.6 413.2
DMU22 44.4 0.55 0.059 754.6 3.45 515.3 744.79 709.59 502
DMU23 50.8 0.57 0.057 759.5 3.43 512.3 749.78 708.07 499.1
DMU24 37 0.98 0.141 1690.6 3.02 523.3 1658.5 1652.5 488.2
DMU25 39.5 1.04 0.146 1726.4 2.96 526.3 1697.1 1685.7 494
DMU26 268 2.06 0.196 3643 1.94 560.1 3577.4 3372.7 492.6
DMU27 78.1 0.67 0.105 1158.1 3.33 512 1143 1079.2 493.6
DMU28 87.2 1 0.121 2220.7 3 524.8 2158.5 2132.4 459.6
DMU29 175.7 0.106 0.127 2067 3.894 525.3 2042.2 1891.1 496.6
DMU30 193.9 1.72 0.165 2132.5 2.28 493.5 2030.1 1936.7 388.9
the theory of transfer effects suggests that merger performance is higher when business units operate in similar industries, due to a positive transfer effect; merger performance could suffer across different industries, where a negative transfer effect may occur.19 In general, similar organizational cultures may enhance the chances of gains from operational mergers.
Considering this situation, we expect potential gains from mergers, in particular from the harmony effect due to new market conditions, and environmental regulations may create new potential demand for the extension
quite different from those of the follower. This seems to suggest that the system
profit efficiency is mainly affected by the leader.
To examine how resource allocation impacts the firm’s performance, it is
necessary to align resource allocation to a company’s bottom-line business
goals. For this assessment, banking operations consist of two main value-added activities in two markets. Under the UL game structure, the upstream-
level sub-division has stronger market power and decides on the amount of
resources it needs to maximize efficiency. Hence, we identify stage 1 as a
resource allocation-related value-added activity under the UL game scenario.
In contrast, stage 2 is related to resource allocation and value-added activity
under the LL game scenario. This assumption, although possibly more
restrictive than in reality, may be reasonable and necessary because of the
lack of information about how management can allocate and spend annual
IT budgets.
Under the UL game scenario and the VRS assumption of the 30 branches,
three achieved overall efficiency, eight were rated efficient in stage 1 (resource
allocation-related activity), and three were rated efficient in stage 2. The results show that five branches were rated efficient in the resource allocation-related activity (stage 1) without achieving overall efficiency. The following classification of
branches aims to provide a means for management to better evaluate their
resource allocation-related operations.21
Different types of branches can now be studied in more detail to identify the
reasons for performance differences. To explain the systematic differences
between different types of branches, management may refer to factors such as
the type of system used, management practices, and implementation proce-
dures. Similar to Wang et al. (1997),22 an interesting observation from this classification is that inefficiency in resource allocation always leads to overall inefficiency; conversely, overall efficiency seems to imply efficiency in the resource allocation-related activity.
Table: correlation statistics among the efficiency measures PE-CRS, PE-VRS, CCR, BCC, SFA linear and SFA loglinear
Notes: ** Pearson correlation is significant at the 0.01 level (two-tailed); NA cannot be computed because at least one of the variables is constant.
It is then important to open the black box and develop good underlying production models of the technology, accommodating inner intermediate inputs/outputs and the game structure.
Post-merger
To respect the fact that most branches favor merging two adjacent branches,
we examine potential profit gains by merging two branches at a time. This
leads to a total of 435 possible mergers involving two branches. Therefore, the
relative profit efficiency of these 435 possible mergers is computed with ref-
erence to the original DMU by our bilevel programming-DEA model. We tested
the merger gains from all of these combinations using both the CRS and VRS
bilevel DEA chain merger models.
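The total of 435 is simply the number of unordered pairs among the 30 branches:

$$\binom{30}{2} = \frac{30 \times 29}{2} = 435.$$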
We examined two kinds of merger activities: a merger of individual sub-
chain (either leader or follower) members and a chain merger. Table 15.6 gives
the computational statistics under both the CRS and VRS assumptions: the
number of the efficient and coordinated (using profit-sharing strategy) mergers,
and the average merger efficiency scores Em. Tables 15.7 and 15.8 present merger
efficiencies of the top ten most promising mergers under both CRS and VRS
assumptions respectively.
If we believe the estimated CRS technology, then under the first situation, with the UL game structure, the average potential profit gain across the (435–12) bilevel system mergers at the overall scale is 2.2% (= 1.022 − 1), while this figure decreases to 0.7% (= 1.007 − 1) under the second situation, with the LL game structure. Indeed, in the detailed results, 100 of the 225 effective system pairs have an improvement potential of more than 5% under the first situation (UL game structure), and 128 of the 335 effective system pairs have an improvement potential of more than 2% under the second situation (LL game structure).

Table 15.6 Merger statistics under the CRS and VRS assumptions

                                                        CRS                 VRS
Game          Measure (merger efficiency > 100%)      Number    Em        Number    Em
The UL game   Efficient mergers for the leader          218     1.023       213     1.779
              Efficient mergers for the follower        191     1.02         13     1
              Efficient mergers for the whole system    225     1.022       213     1.61
              Coordinated efficient mergers             163     1.02          7     1.45
The LL game   Efficient mergers for the new leader      299     1            70     1
              Efficient mergers for the new follower    249     1.035       113     1.263
              Efficient mergers for the whole system    335     1.007       112     1.194
              Coordinated efficient mergers             193     1.006         3     1.2
Managerial insights
Conclusions
We also defined merger efficiency concepts for both single units and multiple units under this structure, and developed an approach to solve the NP-hard
bilevel programming-DEA model. In addition, we discussed the decomposition
of merger efficiency into a harmony effect and a scale effect at both the chain
and sub-chain levels. In this framework, we have shown that the supply chain
with constrained resources and a leader–follower relationship is efficient if and
only if both leader and follower are efficient. We proposed a profit-sharing
strategy to address incentive incompatibility problems that might be present in
merging firms with such a leader–follower structure. Both leaders and followers
benefit under the proposed incentive-compatible strategy.
A case study of potential intra-firm banking-chain mergers with limited
input resources was also presented to illustrate the proposed approach. Using
435 potential mergers involving branches merging in pairs, the results show
significant potential gains from these mergers in banking chains with a leader–
follower structure and constrained resources. The case study demonstrates
that bank branches achieve potential gains by conducting intra-firm mergers,
which can be incentive-compatible. The findings of the case study also provide
insights into the consequences of different pairings of firm entities and the
results of different types of M&A deals. This allows a deeper understanding of mergers in the financial sector and their implications for the acquiring banking entities with chain operations.
16
Sustainability and Risk in Globalization
We all are aware that we live in a world of change, and that there are many
things going on that appear to be problematic. There are islands in the Pacific
that are going underwater (well, they aren’t sinking, but the water level is
rising). There are glaciers that are disappearing. Europeans spent a good part
of the second millennium AD looking for a Northwest Passage to Asia – but
nature appears to be providing one for us in the third. Climate appears to be
changing, although weather being what it is, long-range trends are elusive at
best. But North Dakota might someday be a winter haven, and Omaha might
be a seaport. We need to worry about the environment. We will, of course,
argue about how to cope. Some want to ban coal and petroleum use NOW.
Others prefer to consider moving uphill.
The use of high technology in loss-prevention and control systems for natural disasters, fires and accidents, and of quantitative models in derivatives for insurance and finance, has expanded substantially in the past decade. Spurred by traumatic recent events such as 9/11/2001 and business scandals including Enron and WorldCom,1 risk management has not only developed a control focus; more importantly, it remains a tool to enhance the value of
systems, both commercial and communal. Integrated approaches to manage
risks facing organizations have been developed along with new business phil-
osophies of enterprise risk management.
Enterprise sustainability
management’s aim to identify present and future threats and to devise plans
to mitigate or eliminate these risks.
All human endeavors involve uncertainty and risk. Mitroff and Alpaslan (2003)3 classified emergencies and crises into three categories: natural disasters, malicious activities, and systemic failures of human systems. Nature does many things to us, disrupting our best-laid plans and undoing much of what humans have constructed. Events such as earthquakes, floods, fires and hurricanes are manifestations of the majesty of nature. Recent events, including the tsunami in the Indian Ocean in 2004 and Hurricane Katrina in New Orleans in 2005, demonstrate how powerless humans can be in the face of nature's wrath. Global economic crisis risks have also been profound and widespread over the last decade.
Businesses in fact exist to cope with risk in their area of specialization. But
chief executive officers are responsible for dealing with any risk fate throws at
their organization.
Malicious acts are intentional on the part of fellow humans who are either
excessively competitive or who suffer from character flaws. Examples include
the Tylenol poisonings of 1982, syringes being placed in Pepsi cans in 1993,
the bombing of the World Trade Center in 1993, Sarin gas attacks in Tokyo
in 1995, terrorist destruction of the World Trade Center in New York in 2001,
and corporate scandals within Enron, Andersen, and WorldCom in 2001.
More recent malicious acts include terrorist activities in Spain and London,
and in the financial realm, the Ponzi scheme of Bernard Madoff uncovered
in 2009. Wars fall within this category, although our perceptions of what is
sanctioned or malicious are colored by our biases. Criminal activities such
as product tampering or the blend of kidnapping and murder are clearly not
condoned. Acts of terrorism are less easily classified, as what is terrorism to
some of us is the expression of political behavior to others. Similar gray cat-
egories exist in the business world. Marketing is highly competitive, and
positive spinning of your product often tips over to malicious slander of com-
petitor products. Malicious activity has even arisen within the area of infor-
mation technology, in the form of identity theft or tampering with company
records.
Probably the most common source of crises is unexpected consequences
arising from overly complex systems.4 Examples of such crises include Three
Mile Island in Pennsylvania in 1979 and Chernobyl in 1986 within the nuclear
power field, the chemical disaster in Bhopal, India, in 1984, the Exxon Valdez
oil spill in 1989, the Ford–Firestone tire crisis in 2000, and the Columbia space shuttle disaster in 2003. The financial world is not immune to systemic
failure, as demonstrated by the Barings Bank collapse in 1995, the failure of
Long-Term Capital Management in 1998, and the subprime mortgage bubble
implosion leading to near-failure in 2008. The use of electric cars, which are
Types of risk
What we eat
One of the major issues facing human culture is the need for quality food. Two
factors that need to be considered are: human population growth, and threats
to the environment. We have understood since Malthus that the human popu-
lation cannot continue to grow exponentially. Some countries, such as China,
have been proactive in seeking to control their population growth. Other areas,
such as Europe, are documenting decreases in population growth, probably
due to societal consensus. But other areas, which include India and Africa,
continue to experience rapid increases in population. This may change as these
areas become more affluent (see China and Europe). But there is no universally
acceptable way to control human population growth. Thus, we expect to see a
continued increase in demand for food.
Agricultural science has been relatively effective in developing better strains
of crops through a number of methods, including bioengineering and genetic
science; this led to what was expected to be a ‘green revolution’ a generation
ago. As with so many human schemes, however, the best-laid plans involve many complexities and unexpected consequences. North America has devel-
oped the means to increase production of food that is free from many of the
problems that existed a century ago. However, people in Europe, Australia, Asia, and Africa are concerned about new threats arising from genetically
modified agricultural crops. Many US citizens are concerned about the risks of
genetically modified food. This is another example of human efforts to reach
improvements that lead to new dangers, or unintended consequences, with
great disagreement about what is seen to be the truth.
A third factor complicating the food issue is distribution. North America and
Ukraine have long been fertile producing centers, generating surpluses of food.
This connects to supply chain issues. But the fundamental problem is that the interconnected global human system has surpluses in some locations and food scarcities in others. Technically this is a supply chain issue; more importantly, it is an economic issue of sharing food, which in turn raises a series of political issues.
The heavy reliance of contemporary businesses on international collaborative supply chains leads to many risks arising from shipping, as well as from other factors such as political stability, physical security from natural disasters, piracy
on the high seas, and changing regulations. Sustainable supply chain man-
agement has become an area of pressure potentially applied by governing agen-
cies, customers, and the various corporate entities involved within a supply
chain. These interests can include industry cartels such as OPEC, regulatory environments such as the Eurozone, industry lobbies such as the sugar lobby, and other groups that complicate international business on the scale induced by global supply chains.
Water is one of the most abundant resources on Earth, probably second to oxygen, to which it is chemically related. Rainwater used to be considered pure; the industrial revolution caused the unintended production of acid precipitation, with numerous unanticipated consequences locally, regionally and globally. Water used to be free in many places; only 30 years ago, paying for water in those places would have been considered the height of idiocy.
Managing water is recognized as a major issue in that less than 3% of the
world’s water is fresh. Lambooy (2011)6 called for attention to waste water man-
agement, management of freshwater consumption, and groundwater control
management. Water management is increasingly an economic issue, leading to
the political arena. Wherever there is a scarcity of water, this induces political
efforts to gain a greater share for each political entity, involving allocations not
only of drinking water for cities, but also for irrigation of agricultural lands and
even for sustainable levels to enable river navigation.
Globalization
Living and working in today’s environment involves many risks. The processes
used to make decisions in regard to these risks must consider both the need
to keep people gainfully and safely employed through increased economic
activity and to protect the earth from threats arising from human activities.
We need to consider that there are many risks, and we have challenges in
developing strategies, controls, and regulations designed to reduce the risks
while seeking to achieve other goals.
Globalization has played a major role in expanding the opportunities for
many manufacturers, retailers, and other business organizations to be more
efficient. The tradeoff has always been the cost of transportation, as well as the
added risk of globalizing.
In 2010 the Eyjafjallajökull volcano in Iceland shut down air transportation
across most of Europe. Some Europeans spent a full week waiting for some
means to travel across Europe. Supply chains were also disrupted, as transpor-
tation (logistics) is key to linking production facilities in supply chains; many
in Europe found their supermarkets short of fresh fruit and flowers. Supply
chains often depend on lean manufacturing, requiring just-in-time delivery
of components. These systems are optimized, which means elimination of
the slack that would cover contingencies such as volcanic disruption of air
transport. Multiple sourcing, flexible manufacturing strategies, and logistics
networks capable of alternative routing are clearly needed.
On March 11, 2011, an earthquake off the Pacific coast of Japan led to a cata-
strophic tsunami that destroyed most of a rich area of advanced technology
manufacturing. It also severely damaged a nuclear power plant, which at the
time of writing was still undergoing damage-control efforts. While the worst impact was
in terms of Japanese lives, there also was major impact on many of the world’s
supply chains. Organizations such as Samsung, Ford Motor Company, and
Boeing found production disrupted due to lack of key components from Japan.
Japanese plants produced about 20% of the semiconductors used worldwide,
and double that for electronic components; for example, Toshiba produced
one-quarter of the NAND flash chips used, and on March 14, 2011, it had to halt
operations due to power outages.
Modern supply chains need to develop ways to work around any kind of
disruption. Wars of course lead to major disruption in supply chains, but tariff
regulations can have an impact as well. In 2002, Honda Motors had to spend
$3,000 per ton in airlifting carbon sheet steel to the US after tariff-related
supply disruptions. In January 2011, the Volkswagen, Porsche and BMW supply
chains in Germany were taxed by surging demand; Volkswagen actually had
to halt production due to engine and other part shortages. This was not due
to natural disaster or war, or any other negative factor, but rather to booming
demand in China and the United States. Lean manufacturing and modern
consumer retailing operations require maintenance of supply.
Supply chains can offer great value to us as consumers: competition has led to
better products at lower costs enabled by shipping (by land and air as well as sea)
through supply chains; outsourcing allows producers to access the best materials
and process them at the lowest cost. Lean manufacturing enables cost efficiency
as well. But both of these valuable trends lead to greater supply chain exposure.
There is a need to gain flexibility, which can be obtained in a number of ways.
Economically efficient supply chains push the tradeoff between cost and risk.
The lowest cost alternative usually is vulnerable to some kind of disruption.
Some of the economic benefit from low cost has to be invested in means to
enable flexible coping with disruption.
providers; but even providers have suffered extreme stress. Nigeria’s export
revenues were dramatically impacted by production shutdown from December
2005 to April 2007.14 Risk in the petroleum industry is managed through hedging, using futures, forwards, options, or swaps. Modeling support in the form of Monte Carlo simulation is often applied. Optimization is sometimes applied as well, but encounters obstacles in the form of fitting distributions (specifically, the fat-tail problem).
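To make this concrete, the following Python sketch shows the kind of Monte Carlo support described here, simulating fat-tailed oil price paths and comparing an unhedged spot sale with a forward hedge. Every parameter (drift, volatility, degrees of freedom, spot and forward prices) is an illustrative assumption, not a calibrated value.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (assumed) daily parameters: drift, volatility, t degrees of freedom
mu, sigma, nu = 0.0002, 0.025, 4
s0 = 80.0                      # assumed spot price, USD per barrel
horizon, n_paths = 60, 100_000

# Student's t shocks, rescaled to unit variance, capture the fat tails
shocks = rng.standard_t(nu, size=(n_paths, horizon)) * np.sqrt((nu - 2) / nu)
prices = s0 * np.exp((mu + sigma * shocks).cumsum(axis=1))

# P&L per barrel for a producer selling at the horizon: unhedged spot sale
# versus a (hypothetical) forward contract locked in at 82 USD
unhedged = prices[:, -1] - s0
hedged = np.full(n_paths, 82.0 - s0)

print(f"unhedged: mean={unhedged.mean():.2f}, 95% VaR={-np.percentile(unhedged, 5):.2f}")
print(f"forward hedge: certain P&L={hedged[0]:.2f}")
```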
There is an increasing tendency toward an integrated or holistic view of risks.
Enterprise Risk Management (ERM) is an integrated approach to achieving the
enterprise’s strategic, programmatic, and financial objectives with acceptable
risk. The philosophy of ERM generalizes these concepts beyond financial risks
to include all kinds of risks beyond disciplinary silos.
Contingency management has been widely systematized in the military.
Systematic organizational planning recently has been observed to include
scenario analysis, giving executives a means of understanding what might go
wrong, thus giving them some opportunity to prepare reaction plans. A compli-
cating factor is that organization leadership is rarely a unified whole, but rather
consists of a variety of stakeholders with potentially differing objectives.
1. Risk context and drivers: Risk drivers arising from the external environment
will affect all organizations, and can include elements such as the potential
collapse of the global financial system, or wars. Industry-specific supply chains may have different degrees of exposure to risks. A regional grocery
will be less impacted by recalls of Chinese products involving lead paint
than will those supply chains carrying such items. Supply chain configuration can be a source of risk. Specific organizations can reduce industry risk by the way they make decisions with respect to vendor selection. Partner-specific risks include consideration of financial solvency, product quality capabilities, and the compatibility and capabilities of vendor information systems. The last level of risk drivers relates to internal organizational processes in risk assessment and response, and can be improved by better equipping and training of staff and by improved managerial control through better information systems.
2. Risk management influencers: This level involves actions taken by the organization to improve its risk position. The organization's attitude toward risk will affect its reward system, and mold how individuals within the organization will react to events. This attitude can be dynamic over time, responding to organizational success or decline.
3. Decision makers: Individuals within the organization have risk profiles. Some
humans are more risk averse, others more risk seeking. Different organiza-
tions have different degrees of group decision making. More hierarchical
organizations may isolate specific decisions to particular individuals or
offices, while flatter organizations may stress greater levels of participation.
Individual or group attitudes toward risk can be shaped by their recent
experiences, as well as by the reward and penalty structure used by the
organization.
4. Risk management responses: Each organization must respond to risks, but there are many alternative ways to apply the response process. Risks must first be identified; monitoring and review then require measurement of organizational performance. Once risks are identified, responses must be selected. Risks can be mitigated by an implicit tradeoff between insurance and cost reduction. Most of the available choices come down to knowing which risks the organization can cope with through its own expertise and capabilities, and which risks it should outsource to others at some cost. Some risks can be dealt with; others should be avoided.
5. Performance outcomes: Organizational performance measures can vary widely.
Private for-profit organizations are generally measured in terms of profit-
ability, short-run and long-run. Public organizations are held accountable in
terms of effectiveness in delivering services as well as the cost of providing
these services.
In normal times, there is more of a focus on high returns for private organizations, and on lower taxes for public institutions. But risk events can make preparations for risk exposure far more important, shifting the focus to survival.
Conclusions
Introduction
Natural disasters by definition are surprises, causing a great deal of damage and
inconvenience. Earthquakes are among the most terrifying and destructive
natural disasters threatening humans. Emergency management has been
described as the process of coordinating an emergency or its aftermath through
communication and organization for deployment and the use of emergency
resources. This chapter surveys state-of-the-art studies of risk and emergency management related to the Wenchuan earthquake that struck China in May 2008.
Natural disasters are the biggest challenge that risk managers face, due to the threats that go with them.1 Some things we do to ourselves,
such as revolutions, terrorist attacks and wars; terrorism led to the gassing of
the Japanese subway system, to 9/11/2001, and to the bombings of the Spanish
and British transportation systems. Some things nature does to us – volcanic
eruptions, tsunamis, hurricanes and tornados. The SARS virus disrupted public
and business activities, particularly in Asia.2 More recently, the H1N1 virus has
sharpened the awareness of the response system world-wide. Some disasters
combine human and natural causes – we dam up rivers to control floods, to
irrigate, to generate power – and even for recreation, as at Johnstown, PA, at
the turn of the 20th century. We have developed low-pollution, low-cost elec-
tricity through nuclear energy, as at Three-Mile Island in Pennsylvania and
Chernobyl. We have built massive chemical plants to produce low cost chemi-
cals, as at Bhopal, India.
Lee and Preston provide a review of high-impact, low-probability events,
focusing on analysis of the Eyjafjallajökull volcano.3 This event was represen-
tative of “black swans”,4 that is, impossible-to-predict events with a very low
likelihood but high costs of mitigation. Other examples include Hurricane
Katrina in New Orleans, the Japanese earthquake and tsunami of 2011, and
the 2003 SARS outbreak. What we don’t do to ourselves in the form of wars and
economic catastrophes, nature trumps with overwhelming natural disasters.
Such disruptions pose major challenges for supply chain systems, which have become key components of today's global economy. The ability to cope with unexpected events has proven critical for global supply chain success, as demonstrated by Nokia in the past few years, as well as by the production halts experienced by Toyota and Sony due to the 2011 earthquake and tsunami in Japan.
In a natural disaster, there will inevitably be many who feel that whatever
the authorities did was overkill and unnecessary, just as there will be many
who feel that the authorities didn’t do enough to (1) prevent the problem, and
(2) mitigate the problem after it occurred. It is the nature of emergency man-
agement to be unthanked. Transparency, especially during and after a crisis,
helps to assure the public that decisions are made on the best available evi-
dence in order to gain public confidence and to manage vested interests.
Globalized supply chains, particularly those based on just-in-time methods,
are vulnerable. Famous historical examples include Nokia’s response to a
March 2000 lightning strike in Albuquerque, NM, leading to a fire in a Royal
Philips Electronics fabrication line that supplied radio-frequency chips. Both Nokia and
its competitor Ericsson were served by this key supply chain link,5 and Philips
estimated that at least a week would be required to return to full production.
Ericsson passively waited – but Nokia proactively arranged for alternative
supplies, as well as redesigning products to avoid the need for those chips.
Nokia gained significant advantage in this market, turning in a profit while
Ericsson suffered an operating loss.6 Lee and Preston state that the maximum
tolerance for supply chain disruption in a just-in-time global economy is one
week.
Lee and Preston draw several recommendations from the Eyjafjallajökull experience.
Be prepared
While natural disasters come as surprises, we can nevertheless be prepared. In
some cases, such as Hurricane Katrina or Mount Saint Helens, we get warning
signs, but we never completely know the extent of what is going to happen.7
Emergency management has been described as the process of coordinating
an emergency or its aftermath through communication and organization for
deployment and the use of emergency resources.8 Emergency management is
a dynamic process conducted under stressful conditions, requiring flexible
and rigorous planning, cooperation, and vigilance. During emergencies a variety of organizations are often involved, and commercial rivalry can breed competition and mutual distrust. At the governmental level,
one would expect cooperation in attaining a common goal, but often so many
diverse agencies get involved that attention to the overriding shared goal is
dimmed by specific agency goals. Cooperation is also hampered by differences
in technology.
Risks exist in every aspect of our lives. In the food production area, science
has made great strides in genetic management. But there are concerns about
some of the manipulations involved, with different views prevailing across the
globe. In the United States, genetic management is generally viewed as a way
to obtain better and more productive sources of food more reliably. However,
there are strong objections to bioengineered food in Europe and Asia. Some
natural diseases, such as mad cow disease, have appeared that are very dif-
ficult to control. The degree of control accomplished is sometimes disputed.
Europe has strong controls on bioengineering, but even there a pig breeding
scandal involving hazardous feed stock and prohibited medications has arisen.9
Bioengineering risks are important considerations in the food chain.10 Genetic mapping offers tremendous breakthroughs in the world of science, but involves political risks when applied to human resources management.11 Even applying information technology to better manage healthcare delivery involves risks.12 Reliance on computer control has been applied to flying aircraft, but hasn't always worked.13
Risks can be viewed as threats, but businesses exist to cope with risks.
Different disciplines have different ways of classifying risks. We propose the following two classifications: field-based and property-based.
● Field-based classification: Financial risks basically include all sorts of risks related to the financial sector and to financial aspects of other sectors; these include, but are not restricted to, market risk, credit risk, operational risk, and liquidity risk. Nonfinancial risks include risks from sources that are not related to finance; these include, but are not restricted to, political risks, reputational risks, bioengineering risks, and disaster risks.
● Property-based classification: We think risks have three properties: probability, dynamics, and dependence. The first two properties have been widely recognized in inter-temporal models from behavioral decision theory and behavioral economics.14 The last property is well studied in the finance discipline.
Modeling the probability of occurrence and the severity of risks mainly involves probability theory and various distributions. This can be dated back to the 1700s; the Bernoulli, Poisson, and Gaussian distributions are used to model normal events, while the generalized Pareto distribution and the generalized extreme value distribution are used to model extreme events. The dynamics of risks mainly involves stochastic process theory in risk management, and can be dated back to the 1930s, when Markov processes, Brownian motion and Lévy processes were developed. The dependence of risks mainly concerns correlation among risk factors; various copula functions are built, and Fourier transformations are also used here.
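To illustrate the extreme-value side of this toolkit, the sketch below fits a generalized extreme value distribution to simulated block maxima using scipy; the loss data are synthetic and all parameter choices are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic daily losses (lognormal, assumed), grouped into 50 "years" of 250 days
daily_losses = rng.lognormal(mean=0.0, sigma=1.0, size=(50, 250))
annual_maxima = daily_losses.max(axis=1)

# Fit a generalized extreme value (GEV) distribution to the block maxima
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# 1-in-100-year loss level implied by the fitted GEV
loss_100yr = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"GEV shape={shape:.3f}, loc={loc:.2f}, scale={scale:.2f}, 100-yr loss={loss_100yr:.2f}")
```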
Technical tools
Emergency management
Local, state and federal agencies in the United States are responsible for
responding to natural and man-made disasters. This is coordinated at the federal
level through the Federal Emergency Management Agency (FEMA). While
FEMA has done much good, it is almost inevitable that more is expected of it
than it delivers in some cases, such as hurricane recovery in Florida in various
years and the Gulf Coast from Hurricane Katrina in 2005. National security
is the responsibility of other agencies, military and civilian (Department of
Homeland Security – DHS). They are supported by non-governmental agen-
cies such as the American Red Cross. Again, these systems seem to be effective
for the greater part, but are not failsafe, as demonstrated by Pearl Harbor and
9/11/2001.
Disasters are abrupt, calamitous events that cause great damage, loss of lives,
and destruction. Emergency management is accomplished in every country
to some degree. Disasters occur throughout the world, in every form: natural, man-made, and combinations of the two. Disasters by definition are unexpected, and tax the ability of governments and other agencies to cope. A number of intelligence cycles have been promulgated, all based on the same underlying idea.
Conclusions
Introduction
commitment targets assigned by the Kyoto Protocol at the minimum cost. The
EU ETS comprises three trading periods or phases, the first from January 2005
to December 2007, the second from January 2008 to December 2012, and the
third from January 2013 to December 2020.
Under the EU ETS, each member state is assigned a national emission quota, precisely stated in a National Allocation Plan (NAP), which has to be approved by the EU Commission. National allowances are then distributed among industrial operators with permits, and actual emission amounts are supervised in line with the NAP. Allowances expire at the end of each year and can be traded or privately reassigned on the over-the-counter market or on one of the European climate exchanges, such as the European Climate Exchange, BlueNext, PowerNext, Nord Pool and others.
As with other bulk commodities, the supply of and demand for allowances in the EU ETS market determine the price of carbon: a greater supply of allowances leads to a lower carbon price, while excess demand results in a higher one.
During Phase I, the participating countries received most of their allowances free of charge. However, the share of auctioned allowances increased greatly during Phase II, which was more market-driven. In Phase III, a substantial number of permits are allocated centrally, with a large share auctioned, a different method from that used under the National Allocation Plan.
Within a trading phase, banking and borrowing are allowed. For example, a 2009 EUA could be used in 2010 (banking) or in 2008 (borrowing). However, cross-phase borrowing and banking is not allowed: member states do not have the discretion to bank EUAs from Phase I to Phase II.
The price mechanism of a carbon emission exchange is one of the crucial factors in its success, given the world's intensifying attention to carbon emission reduction. The advanced price mechanism of the EU ETS rests on five years of experience, and a few scholars have conducted initial theoretical and empirical research studying it. Cities in China, such as Beijing, Tianjin, Shanghai, Wuhan, Changsha, Shenzhen and Kunming, have gradually set up carbon emission exchanges, but these are only at a preliminary stage and trading volumes are expected to be low. Therefore, studying the EU ETS experience seems useful in establishing a Chinese carbon emission exchange system.
Literature review
issues, market fundamentals and the demand and supply of carbon emission
allowances. Burniaux (2000)12 alternatively assumed that the price of fuels,
the average carbon content in energy usage, energy substitution possibilities,
and the price and availability of clean substitute fuels are the driving factors
for CO2 pricing. Considine (2000)13 studied emission movements considering
the impact of weather. The hotter and colder seasons have a great impact on
energy consumption, and as a result the demand for carbon emission allow-
ances becomes much larger. Therefore, temperature variation is one of the
driving factors. On the other hand, the prices of oil and gas have a positive
effect on carbon emission allowances prices. Sijm et al. (2005)14 come to a
similar conclusion, and find that energy prices, characteristics of the energy
sector, demand and supply of allowances, and economic structure play key
roles in the formation of the allowance price. Ciorba et al. (2001)15 pointed out that the price of energy, the level of emissions, the geographical features of a country, and its climatic conditions and temperature are the most influential factors in setting allowance prices. Springer (2003)16 indicated that the determinant of prices lies mainly in the level of emissions. Springer and Varilek
(2004)17 explore other factors that could affect the cost of compliance with
the Kyoto Protocol, such as the number of sectors in the economy that do
not participate in either the trading of allowances or the inter-period transfer
of allowances. From an econometric perspective, Mansanet-Bataller, Pardo and Valor (2007)18 attempted to estimate the effects of some determinants of the EU ETS daily forward prices in 2005. Explanatory variables were oil, natural
gas and coal prices. That study also included weather variables, which are gen-
erally important determinants of allowances prices. In summary, most studies
concluded that the level of emission, energy price and weather variables are the
main driving factors in the formation of the carbon emission price.
In general, very few studies have used data from carbon emission exchanges to test quantitative models for allowance pricing. Furthermore, there is no literature exploring variations in price dynamics across the different phases of the EU ETS. Therefore, in this study we investigate the EUA price dynamics and select an appropriate model in order to examine the distinctions and improvements between the two phases.
Price movements
In this section, the price movements in the EU ETS over five years are described
and explained. Data and news are taken from news and weekly trading reports of
Point Carbon and Climate Group, two key websites in carbon emission research.
Figure 18.1 EUA price movement in Phase I

Figure 18.1 shows the price movements of carbon emissions in Phase I, a trial phase with large price volatility. These movements consisted of five stages. The first stage ran from the start of the phase to the middle of July 2005; the price was driven up to €28.9 by the European Commission's
announcement further cutting the National Allocation Plan (NAP) for Italy,
Poland and the Czech Republic, and the United Kingdom's refusal to extend the amount of certificates. The second stage was from the middle of
July to December 2005; early participants in the EU ETS sold their allowances
at high prices before the price dropped when new members from Eastern
Europe entered the market. The third stage covered January to April 2006; most research reports announced that discharged CO2 exceeded the EUA supply, and the carbon emission price went up. The
fourth stage was the period from May to December 2006; auditing and super-
vision by the European Union was minimal, and every country excessively
distributed free allowances to their industry operators. Then the 2005 CO2 data
indicated that the current allowances in the market were greatly oversupplied
and the price dropped dramatically over a few trading days. The fifth stage
was between January and December 2007; European countries released their
Phase II NAPs and the rule of prohibition on banking and borrowing between
Phases I and II was clarified. The EUA price fell to zero.
Figure 18.2 EUA price movement in Phase II

The price movement in Phase II was more stable (Figure 18.2), reflecting the reduction commitments under the Kyoto Protocol. The period from January 2008 to 2010 can be divided into three stages. The first stage was between January and June 2008; with weak economic expectations and rising coal prices, the EUA price fell to €18.84, but rose back to €28.59 under the impact of oil and gas prices. The second stage was from July 2008 to February 2009; as the financial crisis broke out across the world, demand for oil and gas dropped significantly, which led to reduced demand for EUA. At the same time, the ratio of free allotments to tradable allowances rose. The EUA
price declined, with low trading volumes, in this stage. The third stage was from March 2009 to 2010; oil, gas and power prices rose with the economic recovery, and the EUA price increased from €7.9 to €15, fluctuating between €12 and €15 for over a year.
These data show that price volatility in Phase I was much larger than in Phase II, influenced primarily by political issues and trading rules. First, dispensed allowances exceeded discharged CO2, which distorted the EUA price. The price mechanism was especially disturbed by political issues; for example, the European Union announcement regarding the oversupply of allowances caused the EUA price to drop to €10 within several trading days. Second, micro-level statistical data were absent, which is one of the reasons for the oversupply of EUA. Consequently, the high proportion of free allowances distributed to enterprises pushed the EUA price down. Power enterprises received much larger free allowances than other enterprises, and sold their allowances for profit. This made existing EUAs superfluous, hindering the formation of a rational price in Phase I. In Phase II, the EU ETS improved the rules on NAP supervision, distribution of allowances and allotment methods. As a result, the price mechanism returned to fundamentals: energy prices, power prices, and weather variables. An advanced trading and price mechanism has been preliminarily formed.
Generally, the right to emit carbon is a public good with externalities, and its impact is not directly reflected in market cost and price. However, political conventions, such as the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol, led to market scarcity and
Figures: EUA logreturns in Phase I and in Phase II
Tables 18.1 and 18.2: descriptive statistics (N, mean, median, maximum, minimum, standard deviation, skewness, kurtosis) for EUA logreturns in the two phases
In the second phase, volatility is much smaller than in the first, fluctuating between 0.15 and −0.15.
To estimate and test the price mechanism of EUA, each phase is divided into two periods, one for estimation and the other for forecasting. The estimation period of Phase I runs from April 2005 to December 2006, with the remainder used for forecasting. For the second phase, the estimation period runs from January 2008 to December 2009, with the remainder used for forecasting. Tables 18.1 and 18.2 give descriptive statistics for the two phases.
The skewness of the estimation and forecasting periods in Phase I is −0.489 and −2.589 respectively, while the kurtosis measures are 32.792 and 41.592. Similarly, the skewness values for the estimation and forecasting periods in Phase II are −0.204 and 0.261, with kurtosis of 4.770 and 2.552. These data show large skewness and kurtosis. Both periods in Phase I are left-skewed, whereas Phase II is left-skewed in the estimation period and right-skewed in the forecasting period. Given the asymmetry, excess kurtosis and heavy tails, the normal distribution does not fit the data well. Given the large volatility of logreturns, the chosen model must capture and explain the data characteristics described above.
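A small Python sketch of how such descriptive statistics can be computed from a price series; the series here is synthetic, standing in for the EUA data.

```python
import numpy as np
from scipy import stats

def describe_logreturns(prices: np.ndarray) -> dict:
    """Descriptive statistics for logreturns, in the style of Tables 18.1 and 18.2."""
    r = np.diff(np.log(prices))
    return {
        "N": r.size,
        "mean": r.mean(),
        "median": np.median(r),
        "max": r.max(),
        "min": r.min(),
        "std": r.std(ddof=1),
        "skew": stats.skew(r),
        "kurt": stats.kurtosis(r, fisher=False),  # raw (non-excess) kurtosis
    }

# Example with synthetic prices standing in for the EUA series
prices = np.exp(np.cumsum(np.random.default_rng(1).normal(0, 0.03, 500)))
print(describe_logreturns(prices))
```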
Tables: t-statistics and probabilities for EUA logreturns in the estimation and forecasting periods of Phases I and II
Figure 18.5 Autocorrelation and partial correlation (AC, PAC, Q-statistics) of EUA logreturns in Phases I and II
Time series correlation tests were also conducted. Figure 18.5 shows that the logreturns in both Phase I and Phase II are not autocorrelated. The Durbin–Watson statistics are 1.9996 and 1.9852 respectively; both are close to 2, indicating no autocorrelation.
Table 18.7 The ARCH LM test for EUA logreturns in Phases I and II
Table 18.8 The Akaike info criterion (AIC) and Schwarz criterion (SC) for the estimated
model in Phase I
Table 18.9 The Akaike info criterion (AIC) and Schwarz criterion (SC) for the estimated
model in Phase II
Method selection
The above evidence shows that GARCH cluster models fit the price mechanism
of carbon emission exchange better. Three typical models, EGARCH (1, 1)-t,
GARCH (1, 1)-normal and GARCH (1, 1)-GED, are chosen for comparison to
select the best model in the GARCH models cluster. To evaluate the models, the
Akaike info criterion (AIC) and Schwarz criterion (SC) are chosen to examine
the estimated models. Comparing these criteria across the candidates, the EGARCH(1,1)-t model is found to be the best. In Phase I, the AIC values are −4.290, −4.121, and −4.272 respectively, while the SC values are −4.243, −4.092 and −4.235 (see Table 18.8). Although the differences between EGARCH(1,1)-t and GARCH(1,1)-GED are relatively small, EGARCH(1,1)-t remains the best model on both criteria.
Similarly, in Phase II, the AIC values are −4.407, −4.362 and −4.390, and the SC values are −4.362, −4.335 and −4.354, respectively (see Table 18.9). The differences between EGARCH(1,1)-t and GARCH(1,1)-GED are again very small, and EGARCH(1,1)-t is still preferred.
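A hedged sketch of this model comparison using the arch package in Python; the return series is synthetic, so the reported AIC/SC values will differ from Tables 18.8 and 18.9, which are based on the actual EUA data.

```python
import numpy as np
from arch import arch_model  # pip install arch

def compare_models(returns: np.ndarray) -> None:
    """Fit the three candidate models and report AIC, SC (BIC) and log likelihood."""
    candidates = {
        "EGARCH(1,1)-t":     dict(vol="EGARCH", p=1, o=1, q=1, dist="t"),
        "GARCH(1,1)-normal": dict(vol="GARCH", p=1, q=1, dist="normal"),
        "GARCH(1,1)-GED":    dict(vol="GARCH", p=1, q=1, dist="ged"),
    }
    for name, spec in candidates.items():
        res = arch_model(returns, mean="Constant", **spec).fit(disp="off")
        print(f"{name}: AIC={res.aic:.3f}  SC={res.bic:.3f}  logL={res.loglikelihood:.1f}")

# Synthetic percent logreturns standing in for the EUA series
returns = 100 * np.random.default_rng(0).standard_t(5, 700) * 0.02
compare_models(returns)
```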
Variance equation of the EGARCH(1,1)-t model in Phase I (coefficient, standard error, z-statistic, probability):
C(1) −1.040 0.234 −4.436 0.000
C(2) 0.530 0.094 5.640 0.000
C(3) −0.121 0.057 −2.141 0.032
C(4) 0.903 0.029 31.259 0.000
t-dist. d.o.f. 3.569 0.709 5.033 0.000
Akaike info criterion −4.290; Schwarz criterion −4.243; Durbin–Watson stat 1.696
Figure: innovations, conditional standard deviations, and returns from the fitted EGARCH(1,1)-t model in Phase I
Variance equation of the EGARCH(1,1)-t model in Phase II (coefficient, standard error, z-statistic, probability):
C(1) −0.312 0.110 −2.840 0.005
C(2) 0.126 0.053 2.363 0.018
C(3) −0.120 0.031 −3.881 0.000
C(4) 0.971 0.012 81.385 0.000
t-dist. d.o.f. 8.423 4.023 2.093 0.036
Akaike info criterion −4.407; Schwarz criterion −4.362; Durbin–Watson stat 1.722
Figure: innovations, conditional standard deviations, and returns from the fitted EGARCH(1,1)-t model in Phase II
Conclusions
Introduction
Risk analysis of the crude oil market has always been a core research problem
important to both practitioners and academia. Risks arise primarily from
changes in oil prices. During the 1970s and 1980s there were a number of
steep increases in oil prices; these price fluctuations reached new peaks in 2007
when the price of crude oil doubled during the financial crisis, and double-digit fluctuations continued for short periods between 2007 and 2008. These fluctuations would not be worrisome if oil were not such an important commodity in the world's economy. But when oil prices become too high and their volatility increases, they have a direct impact on the economy in general, and affect government decisions regarding market regulation, thus impacting firm and individual consumer incomes.1
Price volatility analysis has been a hot research area for many years.
Commodity markets are characterized by extremely high levels of price vola-
tility. Understanding the volatility dynamic process of oil price is a very
important and crucial way for producers and countries to hedge various risks
and to avoid excess exposures to risks (Hung et al., 2008).
To deal with different phases of volatility behavior and the dependence of the variability of time series on their cycles, models allowing for autocorrelation as well as heteroskedasticity, such as ARCH, GARCH or regime-switching models, have been suggested. The former two are very useful in modeling a unique stochastic process with conditional variance; the latter has the advantage of
stochastic process with conditional variance; the latter has the advantage of
dividing the observed stochastic behavior of a time series into several separate
phases with different underlying stochastic processes. Both types of models are
widely used in practice.
Hung et al. (2008)2 employed three GARCH models (GARCH-N, GARCH-t
and GARCH-HT) to investigate the influence of fat-tailed innovation processes
Volatility models
Historical volatility
We assume εt to be the mean innovation for energy log price changes or price
returns. To estimate the volatility at time t over the last N days we have:
$$V_{H,t} = \left[\frac{1}{N}\sum_{i=0}^{N-1}\varepsilon_{t-i}^{2}\right]^{1/2}.$$
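A minimal numpy implementation of this rolling N-day estimator; the window length and the stand-in innovation series are assumptions for illustration.

```python
import numpy as np

def historical_volatility(eps: np.ndarray, n: int) -> np.ndarray:
    """Rolling N-day volatility: sqrt of the mean of the last N squared innovations."""
    eps2 = eps ** 2
    # cumulative-sum trick gives an O(T) rolling mean of squared innovations
    csum = np.concatenate(([0.0], np.cumsum(eps2)))
    return np.sqrt((csum[n:] - csum[:-n]) / n)

rng = np.random.default_rng(7)
eps = rng.normal(0, 0.02, 500)      # stand-in innovations for log price changes
vol = historical_volatility(eps, n=30)
print(vol[:3])
```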
ARMA(R,M)
Given a time series Xt, the autoregressive moving average (ARMA) model is
very useful for predicting future values in time series where both an autore-
gressive (AR) term and a moving average (MA) term are present. The model is
usually then referred to as the ARMA(R,M) model, where R is the order of the
first term and M is the order of the second term. The following ARMA(R,M)
model contains the AR(R) and MA(M) models:
$$X_t = c + \varepsilon_t + \sum_{i=1}^{R} \varphi_i X_{t-i} + \sum_{j=1}^{M} \theta_j \varepsilon_{t-j}.$$
ARMAX(R,M, b)
To include the AR(R) and MA(M) models and a linear combination of the last b
terms of a known and external time series dt, one can use an ARMAX(R,M, b)
model with R autoregressive terms, M moving average terms and b exogenous
inputs terms:
$$X_t = c + \varepsilon_t + \sum_{i=1}^{R} \varphi_i X_{t-i} + \sum_{j=1}^{M} \theta_j \varepsilon_{t-j} + \sum_{k=1}^{b} \eta_k d_{t-k},$$

where $\eta_1, \dots, \eta_b$ are the parameters of the exogenous input $d_t$.
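Such a specification can be estimated, for instance, with statsmodels, where an ARIMA(R, 0, M) model with exogenous regressors plays the role of ARMAX(R, M, b). The simulated data and coefficients below are purely illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
T = 400
d = rng.normal(size=T)                        # assumed exogenous input series d_t
eps = rng.normal(scale=0.5, size=T)
x = np.zeros(T)
for t in range(1, T):
    # simulate an ARMAX(1,1,1)-style process with one exogenous lag
    x[t] = 0.1 + 0.6 * x[t - 1] + 0.3 * d[t - 1] + eps[t] + 0.2 * eps[t - 1]

exog = np.concatenate(([0.0], d[:-1]))        # d_{t-1}, zero-padded for the first obs
model = ARIMA(x, exog=exog, order=(1, 0, 1))  # AR(1) + MA(1) mean with exogenous input
res = model.fit()
print(res.params)                             # constant, exog beta, AR, MA, sigma2
```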
ARCH(q)
Autoregressive Conditional Heteroskedasticity (ARCH) modeling is the
predominant statistical technique employed in the analysis of time-
varying volatility. In ARCH models, volatility is a deterministic function of
historical returns. The original ARCH(q) formulation proposed by Engle15
models conditional variance as a linear function of the first q past squared
innovations:
$$\sigma_t^2 = c + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2.$$
GARCH(p,q)
Bollerslev's Generalized Autoregressive Conditional Heteroskedasticity [GARCH(p,q)] specification16 generalizes the model by allowing the current
conditional variance to depend on the first p past conditional variances as well
as the q past squared innovations. That is:
$$\sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2 + \sum_{j=1}^{q} \alpha_j \varepsilon_{t-j}^2,$$

which for GARCH(1,1) reduces to

$$\sigma_t^2 = L + \beta_1 \sigma_{t-1}^2 + \alpha_1 \varepsilon_{t-1}^2.$$
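A minimal sketch of the GARCH(1,1) variance recursion in numpy, with illustrative (assumed) parameter values:

```python
import numpy as np

def garch11_variance(eps: np.ndarray, L: float, alpha1: float, beta1: float) -> np.ndarray:
    """sigma_t^2 = L + beta1 * sigma_{t-1}^2 + alpha1 * eps_{t-1}^2."""
    sigma2 = np.empty_like(eps)
    sigma2[0] = L / (1.0 - alpha1 - beta1)   # start at the unconditional variance
    for t in range(1, eps.size):
        sigma2[t] = L + beta1 * sigma2[t - 1] + alpha1 * eps[t - 1] ** 2
    return sigma2

eps = np.random.default_rng(0).normal(0, 0.02, 250)   # stand-in innovations
sig2 = garch11_variance(eps, L=2e-6, alpha1=0.08, beta1=0.9)
print(np.sqrt(sig2[-1]))                              # latest conditional volatility
```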
EGARCH
The Exponential Generalized Autoregressive Conditional Heteroskedasticity
(EGARCH) model introduced by Nelson (1991)17 builds in a directional effect
of price moves on conditional variance. Large price declines, for instance, may have a larger impact on volatility than large price increases. The general
EGARCH(p,q) model for the conditional variance of the innovations, with
leverage terms and an explicit probability distribution assumption, is:
$$\log \sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \log \sigma_{t-i}^2 + \sum_{j=1}^{q} \alpha_j \left[\,|z_{t-j}| - E\{|z_{t-j}|\}\,\right] + \sum_{j=1}^{q} L_j\, z_{t-j}, \qquad z_{t-j} = \frac{\varepsilon_{t-j}}{\sigma_{t-j}},$$

where

$$E\{|z_{t-j}|\} = E\left\{\frac{|\varepsilon_{t-j}|}{\sigma_{t-j}}\right\} = \sqrt{\frac{2}{\pi}}$$

for the normal distribution, and

$$E\{|z_{t-j}|\} = E\left\{\frac{|\varepsilon_{t-j}|}{\sigma_{t-j}}\right\} = \sqrt{\frac{\nu-2}{\pi}}\;\frac{\Gamma\!\left(\frac{\nu-1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}$$

for the Student's t distribution with degrees of freedom $\nu > 2$.
GJR(p,q)
The GJR(p,q) model is an extension of the GARCH(p,q) model that adds leverage terms; a GARCH(p,q) model is simply a GJR(p,q) model with all leverage terms equal to zero, so initial parameter estimates for GJR models can be taken to be identical to those of GARCH models:
$$\sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^2 + \sum_{j=1}^{q} \alpha_j \varepsilon_{t-j}^2 + \sum_{j=1}^{q} L_j S_{t-j} \varepsilon_{t-j}^2,$$

where $S_{t-j} = 1$ if $\varepsilon_{t-j} < 0$ and $S_{t-j} = 0$ otherwise, subject to the constraints

$$\sum_{i=1}^{p} \beta_i + \sum_{j=1}^{q} \alpha_j + \frac{1}{2}\sum_{j=1}^{q} L_j < 1, \qquad L \ge 0, \quad \beta_i \ge 0, \quad \alpha_j \ge 0, \quad \alpha_j + L_j \ge 0.$$
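The leverage indicator is the only change relative to the GARCH recursion above, as the following sketch of the GJR(1,1) case shows; parameter values are again assumed for illustration.

```python
import numpy as np

def gjr11_variance(eps: np.ndarray, L: float, alpha1: float, gamma1: float, beta1: float) -> np.ndarray:
    """GJR(1,1): sigma_t^2 = L + beta1*sigma_{t-1}^2 + (alpha1 + gamma1*S_{t-1})*eps_{t-1}^2,
    with leverage indicator S_{t-1} = 1 when eps_{t-1} < 0."""
    sigma2 = np.empty_like(eps)
    sigma2[0] = L / (1.0 - alpha1 - beta1 - 0.5 * gamma1)  # unconditional variance
    for t in range(1, eps.size):
        s = 1.0 if eps[t - 1] < 0 else 0.0
        sigma2[t] = L + beta1 * sigma2[t - 1] + (alpha1 + gamma1 * s) * eps[t - 1] ** 2
    return sigma2

eps = np.random.default_rng(2).normal(0, 0.02, 250)       # stand-in innovations
sig2 = gjr11_variance(eps, L=2e-6, alpha1=0.05, gamma1=0.08, beta1=0.88)
print(np.sqrt(sig2[-1]))
```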
Regime-switching models
Markov regime-switching models have been applied in various fields, such as analysis of oil and the macroeconomy,18 analysis of business cycles (Hamilton, 1989)19 and modeling of stock market and asset returns (Vo, 2009).
We now consider a dynamic volatility model with regime-switching. Suppose a time series $y_t$ follows an AR(p) model whose AR coefficients, together with the mean and variance, depend on the regime indicator $s_t$:
$$y_t = \mu_{s_t} + \sum_{j=1}^{p} \varphi_{j,s_t}\, y_{t-j} + \varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d. } N(0, \sigma_{s_t}^2),$$

with conditional density

$$f(y_t \mid s_t, Y_{t-1}) = \frac{1}{\sqrt{2\pi\sigma_{s_t}^2}} \exp\left[-\frac{\omega_t^2}{2\sigma_{s_t}^2}\right] = f(y_t \mid s_t, y_{t-1}, \dots, y_{t-p}),$$

where $\omega_t = y_t - \mu_{s_t} - \sum_{j=1}^{p} \varphi_{j,s_t}\, y_{t-j}$, and $S_{t-1} = \{s_{t-1}, s_{t-2}, \dots\}$ is the set of all past information on $s_t$.
Data
The data spans a continuous sequence of 866 days from February 2006 to July
2009, showing the closing prices of the NYMEX Crude Oil index during this time
period on a day-to-day basis. Weekends and holidays are not included in our data,
and those days are treated as having no price movement. Using the logarithm of price changes means that our continuously compounded return is symmetric, and avoids nonstationary oil price levels that would otherwise affect our return volatility estimates. Table 19.1 presents the descriptive statistics of daily crude oil
price changes. In Figure 19.1 we show a plot of crude oil daily price movement.
To get a preliminary view of volatility change, Table 19.2 shows descriptive
statistics for the logreturn of the Daily Crude Oil Index ranging over the period
February 2006 to July 2009. The corresponding plot is given in Figure 19.2.
Table 19.1 Descriptive statistics of daily crude oil prices
Figure 19.1 Daily crude oil price movement
Table 19.2 Descriptive statistics of the logreturn of the daily crude oil index
Figure 19.2 Logreturn of the daily crude oil index

Distribution analysis

Figure 19.3 displays a distribution analysis of our data, the logreturns of daily crude oil price movements over the period from February 2006 to July 2009, comparing the empirical density with fitted normal and t location-scale distributions. We can see that the
best distribution for our data is a t-Distribution, shown by the blue line in
Figure 19.3. The red line represents the normal distribution of our data. So a
conditional t-distribution is preferred to the normal distribution in our research.
An augmented Dickey–Fuller univariate unit root test yields p-values of 1.0×10⁻³, 1.1×10⁻³ and 1.1×10⁻³ for lags of 0, 1 and 2 respectively. All p-values are smaller than 0.05, rejecting the unit root hypothesis and indicating that the time series is trend-stationary.
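The same style of test can be reproduced with the adfuller function from statsmodels; the series below is synthetic, so the statistics will not match those reported in the text.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(11)
logreturns = rng.standard_t(5, 865) * 0.02      # stand-in for crude oil logreturns

for lag in (0, 1, 2):
    stat, pvalue, *_ = adfuller(logreturns, maxlag=lag, autolag=None)
    print(f"lag={lag}: ADF stat={stat:.3f}, p-value={pvalue:.4f}")
```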
Results
GARCH modeling
We first estimated the parameters of the GARCH(1,1) model using 865 observations in Matlab, and then tried various GARCH models using different probability distributions with the maximum likelihood estimation technique.
In many financial time series, the standardized residuals $z_t = \varepsilon_t / \sigma_t$ display high levels of kurtosis, which suggests departure from conditional normality. In such cases, the fat-tailed distribution of the innovations driving a
dynamic volatility process can be better modeled using the Student's t or the
Generalized Error Distribution (GED). Taking the square root of the conditional
variance and expressing it as an annualized percentage yields a time-varying
volatility estimate. A single estimated model can be used to construct forecasts
of volatility over any time horizon. Table 19.3 presents the GARCH(1,1) esti-
mation using the t-distribution. The conditional mean process is modeled by
use of ARMAX(0,0,0).
Substituting these estimated values into the model yields the explicit form:

$$y_t = 6.819 \times 10^{-4} + \varepsilon_t,$$
$$\sigma_t^2 = 2.216 \times 10^{-6} + 0.9146\, \sigma_{t-1}^2 + 0.0815\, \varepsilon_{t-1}^2.$$
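Plugging these fitted coefficients into the GARCH(1,1) recursion gives a one-step-ahead variance forecast; the lagged variance and innovation below are assumed values for illustration.

```python
# Fitted GARCH(1,1) coefficients from the text
omega, alpha1, beta1 = 2.216e-6, 0.0815, 0.9146

# Assumed (illustrative) current state: yesterday's conditional variance and innovation
sigma2_prev, eps_prev = 4.0e-4, -0.03

sigma2_next = omega + beta1 * sigma2_prev + alpha1 * eps_prev ** 2
print(f"one-step-ahead volatility: {sigma2_next ** 0.5:.4f}")   # daily units

# Long-run (unconditional) daily volatility implied by the fit
print(f"unconditional volatility: {(omega / (1 - alpha1 - beta1)) ** 0.5:.4f}")
```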
Figure 19.4 Innovations, conditional standard deviations, and returns from the fitted ARMAX(0,0,0) GARCH(1,1) model
Figure 19.4 depicts the dynamics of the innovation, standard deviation, and
return, using the above estimated GARCH model, that is, the ARMAX(0,0,0)
GARCH(1,1), with a log likelihood value of 2284.97. We then seek a higher log likelihood from other GARCH specifications, fitting different models to the same data to increase the robustness of our model choice.
We now try different combinations of ARMAX and GARCH, EGARCH and
GJR models. Computational results are presented in Table 19.4.
A general rule for model selection is that we should specify the smallest, sim-
plest models that adequately describe data, because simple models are easier
to estimate, easier to forecast, and easier to analyze. Model selection criteria
such as AIC and BIC penalize models for their complexity when considering
best distributions that fit the data. Therefore, we can use log likelihood (LLC),
Akaike (AIC) and Bayesian (BIC) information criteria to compare alternative
models. Usually, differences in LLC across distributions cannot be compared directly, since distribution functions have different capabilities for fitting random data, but we can use minimum AIC and BIC, and maximum LLC, as model selection criteria.20
As can be seen from Table 19.4, the ARMAX(1,1,0) GJR(2,1) specification yields the highest log likelihood value, 2292.32, and the lowest AIC value, −4566.6, among all the modeling techniques. Thus we select the GJR model. The ARMAX(1,1,0) GJR(2,1) model was used in a simulation and a forecast of the standard deviation over a 30-day period using 20,000 realizations.
The forecasting horizon was defined to be 30 days (one month). The simu-
lation generated 20,000 outcomes over this horizon based on our fitted
ARMAX(1,1,0) GJR(2,1) model. Figure 19.5
compares forecasts from 'Forecasting' with those derived from 'Simulation.'
The first four panels of Figure 19.5 directly compare each of the forecasted
outputs with the corresponding statistical result obtained from simulation.
The last two panels of Figure 19.5 illustrate histograms from which we could
compute the approximate probability density functions and empirical confi-
dence bounds.
[Figure 19.5: forecast results versus simulation results over the 30-day forecast period: conditional standard deviations, standard error of forecast of returns, and forecast of returns (first four panels), with histograms of the cumulative and single-period returns (last two panels)]
When comparing forecasts with their counterparts derived from
the Monte Carlo simulation, we show computations for four quantities in the
first four panels of Figure 19.5: the conditional standard deviations of future
innovations, the MMSE forecasts of the conditional mean of the crude oil
return series, cumulative holding-period returns, and the root mean square
errors (RMSE) of the forecasted returns. The fifth panel of Figure 19.5 uses a
histogram to illustrate the distribution of the cumulative holding-period
return obtained if an asset was held for the full 30-day forecast horizon. In
other words, we plot the log return obtained by investing in the NYMEX
Crude Oil Index today and selling after 30 days. The last panel of Figure 19.5
uses a histogram to illustrate the distribution of the single-period return at
the forecast horizon, that is, the return of the same index on the 30th day
from now.
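A sketch of how such a comparison can be produced in Matlab (a reconstruction assuming the Econometrics Toolbox, not the original code; r again denotes the return series):

  % Selected model: ARMA(1,1) mean (ARMAX(1,1,0)) with a GJR(2,1) variance.
  Mdl    = arima('ARLags', 1, 'MALags', 1, ...
                 'Variance', gjr(2,1), 'Distribution', 't');
  EstMdl = estimate(Mdl, r);
  [e, v] = infer(EstMdl, r);                     % presample innovations/variances

  % 'Forecasting': 30-day-ahead MMSE forecasts of the mean and variance.
  [yF, yMSE, vF] = forecast(EstMdl, 30, 'Y0', r);

  % 'Simulation': 20,000 Monte Carlo paths over the same horizon.
  [ySim, eSim, vSim] = simulate(EstMdl, 30, 'NumPaths', 20000, ...
                                'Y0', r, 'E0', e, 'V0', v);
  simVol = sqrt(mean(vSim, 2));                  % compare with sqrt(vF)
  cumRet = sum(ySim, 1);                         % cumulative 30-day returns
  histogram(cumRet)                              % cf. fifth panel of Figure 19.5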
[Tables: Markov regime-switching estimation results, reporting for each distributional assumption the log likelihood, the non-switching parameters, the switching parameters in states 1 and 2, and the transition probability matrix]
[Figure: estimated probabilities of State 1 and State 2 over the sample period]
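The state probabilities shown above come from filtering the data through the estimated switching model. A compact sketch of a two-state Hamilton filter (assuming the Statistics Toolbox for normpdf; the parameter values below are purely illustrative, not the chapter's estimates):

  % Hamilton filter for a two-state Gaussian regime-switching model.
  mu    = [0.001, -0.002];      % state means (illustrative values)
  sigma = [0.010,  0.040];      % state volatilities: calm vs. turmoil (illustrative)
  P     = [0.98 0.02;           % transition matrix: P(i,j) = Pr(S_t = j | S_{t-1} = i)
           0.05 0.95];
  T     = numel(r);
  prob  = zeros(T, 2);          % filtered state probabilities
  xi    = [0.5, 0.5];           % initial state distribution
  for t = 1:T
      pred      = xi * P;                      % predicted state probabilities
      lik       = normpdf(r(t), mu, sigma);    % state-conditional densities
      post      = pred .* lik;
      xi        = post / sum(post);            % Bayesian filtering update
      prob(t,:) = xi;
  end
  plot(prob(:,1))               % probability of the normal (low-volatility) state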
Conclusions
We examined crude oil price volatility dynamics using daily data for the
period February 13, 2006 up to July 21, 2009. We employed the GARCH,
EGARCH and GJR models and various Markov regime-switching models using
the maximum likelihood estimation technique to model volatility. Code was
written in Matlab. We compared several parameter settings in all
models. Among the GARCH-family models, ARMAX(1,1,0)/GJR(2,1) yielded
the best fit, with a maximum log likelihood value of 2292.32 under the
assumption that our data followed a t-distribution. Markov regime-switching
models produced a similar fit, with a slightly lower log likelihood value, but
gave interesting results in classifying the historical data into two states: a
normal period and a turmoil period. This helps explain some of the market
behavior observed during the financial crisis.
20
Confucius Three-stage Learning of Risk
Management
Introduction
Wishing to order well their States, they first regulated their families.
Wishing to regulate their families, they first cultivated their persons.
Their persons being cultivated, their families were regulated.
Their families being regulated, their States were rightly governed.
Self-cultivation
the first property of risks. This was developed in the 1600s and popularized
in the 1700s by representative scholars such as Bernoulli, de Moivre, Laplace,
Poisson, Gauss, and Pareto. The second property of risks, dynamics, is treated
with various stochastic modeling tools developed intensively since the 1930s
by well-known scholars such as Lévy, Khintchine, Kolmogorov, and Doob. The
third property of risks, dependence, poses research problems in finance that
have been studied since the 1950s by well-known scholars such as Fréchet and
Sklar, who developed fundamental theories that are useful for derivative
pricing. The complexity of risks can be described by complexity science theory
developed since the 1960s.5
Family regulation
I am going to tell a story extracted from a cartoon in the book The
Cartoon Introduction to Economics.6 I call it a story of family
risk management. This is a family of three people: the child, the mommy and
the daddy. On Monday morning, the family members all have questions about
what they are going to do.
The child is going to school and raises a question: is it going to rain today?
The mom is planning to buy a second-hand BMW car and raises a question: is
this used BMW a lemon or a peach? The daddy is reading news on the stock
market and raises the question: am I going to buy stock in Facebook or the
mutual fund Comfort?
The child's question reflects his risk-averse attitude toward today's weather.
This is the core theme of expected utility theory, which originated in classical
economics in Adam Smith's time. The mom's lemon-or-peach question points
to adverse selection under asymmetric information, a theme taken up in
contract theory and industrial organization. The daddy's question indicates
his diversification strategy for investment in risky assets: a portfolio
optimization strategy might be preferred. There are milestone works in
financial risk theory: Harry Markowitz presented the mean-variance framework
in a 1952 paper and a 1959 book7 on how to find the best possible
diversification strategy, and William Sharpe's capital asset pricing model
(CAPM) of 19648 provided tools to determine a theoretically appropriate
required rate of return for an asset, if that asset is to be added to an already
well-diversified portfolio, given that asset's non-diversifiable risk. James Tobin
expanded on Markowitz's work by adding a risk-free asset to the analysis in
1958,9 leading to the development of the super-efficient portfolio and the
capital market line. All these works won Nobel Prizes, although they rest on
strong assumptions that have been contested, such as perfect capital markets
or log-normally distributed market data. The New York Times ran a story when
James Tobin died in 2002:
After he won the Nobel Prize, reporters asked him to explain the portfolio
theory. When he tried to do so, one journalist interrupted, ‘Oh, no, please
explain it in lay language.’ So he described the theory of diversification by
saying: ‘You know, don’t put all your eggs in one basket.’ Headline writers
around the world the next day created some version of ‘Economist Wins
Nobel for Saying, “Don’t Put Eggs in One Basket.”’
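Tobin's egg-basket intuition is easy to verify numerically. A toy Matlab calculation with hypothetical numbers (two assets with equal 20% volatility and correlation 0.3; all values are assumptions for illustration):

  sigma = 0.20;  rho = 0.30;            % hypothetical asset volatility and correlation
  Sigma = sigma^2 * [1 rho; rho 1];     % covariance matrix of the two assets
  w     = [0.5; 0.5];                   % 'eggs in two baskets': equal weights
  portVol = sqrt(w' * Sigma * w);       % portfolio volatility
  fprintf('one basket: %.1f%%, two baskets: %.1f%%\n', 100*sigma, 100*portVol)
  % prints 20.0% vs 16.1%: risk falls with no change in expected return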
State harmonization
Conclusions
The self-cultivation perspective is the root and the first stage of learning risk
management. Fundamental mathematical theories that are useful in risk
management are summarized. A basic classification of risk properties is given
by the uncertainty, dynamics, clustering/dependence/interconnection, and
complexity of risks. For each property, I gave a related mathematical theory
that may be useful in treating risk problems.
A story of family risk management was presented to show the second stage
of learning risk management. This perspective demonstrates that risk theory
is embedded in many other theories, including expected utility theory in
decision sciences and microeconomics, portfolio optimization theory and
CAPM theory in modern finance, adverse selection, asymmetric information,
and contract theory in economics, especially in industry organization theory.
Armed with the self-cultivation stage tools and family-regulation stage know-
ledge, one might be ready to handle real-world risk management problems
at the state-harmonization level. I gave systemic risk as a typical example of a
problem for treating an economic crisis originating in the secondary mortgage
market.
Notes
2 Enron
1. G. Ailon (2012) ‘The discursive management of financial risk scandals: the case of
Wall Street Journal commentaries on LTCM and Enron,’ Qualitative Sociology, 35:
251–270.
2. J.E. Stiglitz (2003) The Roaring Nineties: A New History of the World’s Most Prosperous
Decade. W.W. Norton & Co.
3. L. Fox (2003) Enron: The Rise and Fall. Wiley.
4. C. Hurt (2014) ‘The duty to manage risk,’ The Journal of Corporate Law, 39(2):
153–267.
5. Ailon (2012), op cit.
6. C. Hollingsworth (2012) ‘Risk management in the post-SOX era,’ International Journal
of Auditing, 16: 35–53.
7. H.N. Butler, L.E. Ribstein (2006) The Sarbanes–Oxley Debacle: What We’ve Learned;
How to Fix It, AEI.
8. A. Dey (2010) ‘The chilling effect of Sarbanes–Oxley: a discussion of Sarbanes–Oxley
and corporate risk-taking,’ Journal of Accounting and Economics, 49(1–2), 53–57.
9. J.D. Piotroski, S. Srinivasan (2008) ‘Regulation and bonding: the Sarbanes–Oxley Act
and the flow of international listings,’ Journal of Accounting Research, 46(2): 383–425.
10. L. Bargeron, K. Lehn, C. Zutter (2009) ‘Sarbanes–Oxley and corporate risk-taking,’
Journal of Accounting and Economics, 49(1–2): 34–52.
9. F.B. Wiseman (2013) Some Financial History Worth Reading: A Look at Credit, Real Estate,
Investment Bubbles & Scams, and Global Economic Superpowers. Abcor Publishers.
10. R. Boyd (2011) Fatal Risk: A Cautionary Tale of AIG’s Corporate Suicide. Wiley.
11. S. Patterson (2010) The Quants: How a New Breed of Math Whizzes Conquered Wall
Street and Nearly Destroyed It. Crown Business.
12. V. Sampath (2009) ‘The need for greater focus on nontraditional risks: the case of
Northern Rock,’ Journal of Risk Management in Financial Institutions, 2(3): 301–305.
13. H.S. Shin (2009) ‘Reflections on Northern Rock: the bank run that heralded the
global financial crisis,’ Journal of Economic Perspectives, 23(1): 101–119.
14. P. Goldsmith-Pinkham, T. Yorulmazer (2010) ‘Liquidity, bank runs, and bailouts:
Spillover effects during the Northern Rock episode,’ Journal of Financial Service
Research, 37(2/3): 83–98.
15. R. Shelp, A. Ehrbar (2009) Fallen Giant: The Amazing Story of Hank Greenberg and the
History of AIG. Wiley.
16. Ibid.
17. Ibid.
18. J.F. Egginton, J.I. Hilliard, A.P. Liebenberg, I.A. Liebenberg (2010) ‘What effect did
AIG’s bailout, and the preceding events, have on its competitors?’ Risk Management
and Insurance Review, 13(2): 225–249.
19. J. Hobbs (2011) ‘Financial derivatives, the mismanagement of risk and the case of
AIG,’ CPCU eJournal, 64(7): 1–8.
20. P.M. Linsley, R.E. Slack (2013) ‘Crisis management and an ethic of care: The case of
Northern Rock Bank,’ Journal of Business Ethics, 113(2): 285–295.
2. W. Antweiler, M. Frank (2004) ‘Is all that talk just noise? The information content
of internet stock message boards,’ Journal of Finance, 59(3): 1259–1295.
3. R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50: 987–1008.
4. C. Burges (1998) ‘A tutorial on support vector machines for pattern recognition,’
Data Mining and Knowledge Discovery, 2(2): 121–167.
5. W. Chan (2003) ‘Stock price reaction to news and no-news: drift and reversal after
headlines,’ Journal of Financial Economics, 70: 223–260.
6. M. Baker, J. Wurgler (2006) ‘Investor sentiment and the cross-section of stock
returns,’ Journal of Finance, 61(4): 1645–1680.
7. J. Siegel (2002) Stocks for the Long Run. 3rd ed. McGraw-Hill.
8. D. Bathia, D. Bredin (2012) ‘An examination of investor sentiment effect on G7 stock
market returns,’ European Journal of Finance, DOI:10.1080/1351847X.2011.636834.
13. P.S. Sudarsanam and R.J. Taffler (1995) ‘Financial ratio proportionality and inter-
temporal stability: an empirical analysis,’ Journal of Banking & Finance, 19(1):
45–60.
14. C.-T. Ho, D.S. Zhu (2004) ‘Performance measurement of Taiwan’s commercial banks,’
International Journal of Productivity and Performance Management, 53(5): 425–434.
15. S.N. Huang, T.L. Kao (2006) ‘Measuring managerial efficiency in non-life insurance
companies: an application of two-stage data envelopment analysis,’ International
Journal of Management, 23(3): 699–720.
16. L.M. Seiford, J. Zhu (1999) ‘Profitability and marketability of the top 55 U.S. com-
mercial banks,’ Management Science, 45(9): 1270–1288; M. Gulser, M. Ilhan (2001)
‘Risk and return in the world’s major stock markets,’ Journal of Investing, (Spring):
62–67; X. Luo (2003) ‘Evaluating the profitability and marketability efficiency
of large banks: an application of data envelopment analysis,’ Journal of Business
Research, 56: 627–635; A. Barua, P.L. Brockett, W.W. Cooper, H. Deng, B.R. Parker,
T.W. Ruefli, A. Whinston (2004) ‘Multi-factor performance measure model with
an application to fortune 500 companies,’ Socio-Economic Planning Sciences, 38:
233–253; C.S. Carlos, F.C. Yolanda, M.M. Cecilio (2005) ‘Measuring DEA efficiency
in internet companies,’ Decision Support Systems, 38: 557–573; D. Wu, Z. Yang,
L. Liang (2006) ‘Using DEA-neural network approach to evaluate branch effi-
ciency of a large Canadian bank,’ Expert Systems with Applications, 31(1): 108–115;
S.F. Lo, W.M. Lu (2006) ‘Does size matter? finding the profitability and market-
ability benchmark of financial holding companies,’ Journal of Operational Research,
23(2): 229–246; H.C. Tsai, C.M. Chen, G.H. Tzeng (2006) ‘The comparative prod-
uctivity efficiency for global telecoms,’ International Journal of Production Economics,
103: 509–526.
17. D. Wu (2006) ‘A note on DEA efficiency assessment using ideal point: an improvement
of Wang and Luo’s model,’ Applied Mathematics and Computation, 2: 819–830; N.
Ahmad, D. Berg, G.R. Simons (2006) ‘The integration of analytic hierarchy process
and data envelopment analysis in a multi-criteria decision-making problem,’
International Journal of Information Technology and Decision Making, 5: 263–276.
18. J. Yao, Z. Li, K.W. Ng (2006) ‘Model risk in VaR estimation: an empirical study,’
International Journal of Information Technology and Decision Making, 5: 503–512.
19. Y. Shi, Y. Peng, G. Kou, Z. Chen (2006) ‘Classifying credit card accounts for business
intelligence and decision making: a multiple-criteria quadratic programming
approach,’ International Journal of Information Technology and Decision Making, 4:
1–19; S. Deng, Z. Xia (2006) ‘A real options approach for pricing electricity tolling
agreements,’ International Journal of Information Technology and Decision Making,
5, 421–436.
4. G.J. Alexander, A.M. Baptista (2004) ‘A comparison of VaR and CVaR constraints
on portfolio selection with the mean-variance model,’ Management Science, 50(9):
1261–1273; V. Chavez-Demoulin, P. Embrechts, J. Nešlehová (2006) ‘Quantitative
models for operational risk: extremes, dependence and aggregation,’ Journal of
Banking & Finance, 30: 2635–2658; R. Garcia, É. Renault, G. Tsafack (2007) ‘Proper
conditioning for coherent VaR in portfolio management,’ Management Science, 53(3):
483–494; N. Taylor (2007) ‘A note on the importance of overnight information in
risk management models,’ Journal of Banking & Finance, 31: 161–180.
5. T. Jacobson, J. Lindé, K. Roszbach (2006) ‘Internal ratings systems, implied credit
risk and the consistency of banks’ risk classification policies,’ Journal of Banking &
Finance, 30, 1899–1926.
6. H. Elsinger, A. Lehar, M. Summer (2006) ‘Risk assessment for banking systems,’
Management Science, 52(9): 1301–1314.
7. R.S. Kaplan, D.P. Norton (1992) ‘The balanced scorecard – measures that drive per-
formance,’ Harvard Business Review, 70(1): 71–79; and R.S. Kaplan, D.P. Norton (2006)
Alignment: Using the Balanced Scorecard to Create Corporate Synergies. Harvard Business
School Press Books.
8. D. Bigio, R.L. Edgeman, T. Ferleman (2004) ‘Six sigma availability management of
information technology in the office of the chief technology officer of Washington,
DC.,’ Total Quality Management, 15(5–6): 679–687; S. Scandizzo (2005) ‘Risk mapping
and key risk indicators in operational risk management,’ Economic Notes by Banca
Monte dei Paschi di Siena SpA, 34(2): 231–256.
9. A. Papalexandris, G. Ioannou, G. Prastacos, K.E. Soderquist (2005) ‘An integrated
methodology for putting the balanced scorecard into action,’ European Management
Journal, 23(2): 214–227.
10. J. Calandro, Jr., S. Lane (2006) ‘An introduction to the enterprise risk scorecard,’
Measuring Business Excellence, 10(3): 31–40.
11. U. Anders, M. Sandstedt (2003) ‘An operational risk scorecard approach,’ Risk, 16(1):
47–50; H. Wagner (2004) ‘The use of credit scoring in the mortgage industry,’ Journal
of Financial Services Marketing, 9(2): 179–183.
12. S. Caudle (2005) ‘Homeland security,’ Public Performance & Management Review,
28(3): 352–375.
13. H.S.B. Herath, W.G. Bremser (2005) ‘Real-option valuation of research and devel-
opment investments: implications for performance measurement,’ Managerial
Auditing Journal, 20(1): 55–72.
14. F. Lhabitant (2000) ‘Coping with model risk,’ in The Professional Handbook of
Financial Risk Management, M. Lore, L. Borodovsky (eds), Butterworth-Heinemann.
15. J. Sobehart, S. Keenan (2001) ‘Measuring default accurately,’ Credit Risk Special Report,
Risk, 14: 31–33.
X. Li, S. Wang, Z.Y. Dong eds, Lecture Notes in Artificial Intelligence. Keynote paper.
Springer, 1–9.
4. D.L. Olson (2005) ‘Comparison of weights in TOPSIS models,’ Mathematical and
Computer Modelling, 40: 721–727.
5. M. Freimer, P.L. Yu (1976) ‘Some new results on compromise solutions for group
decision problems,’ Management Science, 22(6): 688–693; T.E. Dielman (2005) ‘Least
absolute value regression: recent contributions,’ Journal of Statistical Computation &
Simulation, 75(4): 263–286.
6. S.C. Caples, M.E. Hanna (1997) ‘Least squares versus least absolute value in real
estate appraisals,’ Appraisal Journal, 65(1): 18–24.
7. G.W. Bassett, Jr. (1997) ‘Robust sports rating based on least absolute errors,’ American
Statistician, 51(2): 99–105.
8. Olson (2005), ibid.
9. S.M. Lee, D.L. Olson (2004) ‘Goal programming formulations for a comparative ana-
lysis of scalar norms and ordinal vs. ratio data,’ Information Systems and Operational
Research, 42(3): 163–174.
10. A. Barnes (1987) ‘The analysis and use of financial ratios: a review article’, Journal
of Business and Finance Accounting, 14: 449–461; H. Deng, C.-H. Yeh, R.J. Willis
(2000) ‘Inter-company comparison using modified TOPSIS with objective weight’,
Computers & Operations Research, 27: 963–973.
11. D.L. Olson (2004) ‘Data set balancing’, Lecture Notes in Computer Science: Data Mining
and Knowledge Management, Y. Shi, W. Xu, Z. Chen, eds. Springer, 71–80.
12. J. Laurikkala (2002) ‘Instance-based data reduction for improved identification of
difficult small classes’, Intelligent Data Analysis, 6(4): 311–322.
13. B. Bull (2005) ‘Exemplar sampling: Nonrandom methods of selecting a sample
which characterizes a finite multivariate population,’ American Statistician, 59(2):
166–172.
14. D.L. Olson, D. Wu (2006) ‘Simulation of fuzzy multiattribute models for grey rela-
tionships,’ European Journal of Operational Research, 175(1): 111–120.
8. A. Soteriou, S.A. Zenios, (1999) ‘Operations, quality, and profitability in the pro-
vision of banking services,’ Management Science, 45(9): 1221–1238.
9. H. Tulkens (1993) ‘On FDH efficiency analysis: some methodological issues and
applications to retail banking, courts and urban transit,’ Journal of Productivity
Analysis, 4(1–2): 183–210.
10. A.N. Berger, D.B. Humphrey (1992) ‘Measurement and efficiency issues in com-
mercial banking,’ in Z. Griliches, ed., Output Measurement in the Service Sectors, NBER
Studies in Income and Wealth, 245–300. The University of Chicago Press.
11. A.N. Berger, D.B. Humphrey (1997) ‘Efficiency of financial institutions: inter-
national survey and direction for future research,’ European Journal of Operational
Research, 98:175–212.
12. A.N. Berger, D. Hancock, D.B. Humphrey (1993) ‘Bank efficiency derived from the
profit function,’ Journal of Banking and Finance, 17(2–3): 317–348.
13. D.D. Wu (2009) ‘Performance evaluation: an integrated method using data
envelopment analysis and fuzzy preference relations,’ European Journal of Operational
Research, 194(1): 227–235.
14. K. Eriksson, K. Kerem, D. Nilsson (2008) ‘The adoption of commercial innovations
in the former Central and Eastern European markets: the case of internet banking
in Estonia,’ International Journal of Bank Marketing, 26(3): 154–169.
15. Bank of America (2007) Annual Report 2007, available at www.rbs.com/microsites/
gra2007/downloads/RBS_GRA_2007.pdf.
16. Citibank (2007) Annual Report 2007, available at www.citi.com/citi/fin/data/k07c.
pdf.
17. HSBC (2007) Annual Report 2007, available at www.investis.com/reports/hsbc_
ar_2007_En/report.php?type=1.
18. Barclays (2007) Annual Report 2007, available at www.barclaysannualreport.com/
index.html.
19. Chase (2007) Annual Report 2007, available at: http://investor.shareholder.com/
common/.
20. Wells Fargo (2007) Annual Report 2007, available at www.wellsfargo.com/downloads/
pdf/invest_relations/wf2007annualreport.pdf.
21. Lloyds (2007) Annual Report 2007, available at www.investorrelations.lloydstsb.com/
media/pdf_irmc/ir/2007/2007_LTSB_Group_R&A.pdf.
22. Royal Bank of Scotland (2007) Annual Report 2007, available at www.rbs.com/microsites/gra2007/downloads/RBS_GRA_2007.pdf.
23. SunTrust (2007) Annual Report 2007, available at www.suntrustenespanol.com/
suntrust.
24. Wachovia (2007) Annual Report 2007, available at www.wachovia.com/file/2007_
Wachovia_Annual_Report.pdf.
25. Basel (2005) ‘Amendment to the Capital Accord to Incorporate Market Risks,’ Basel
Committee on Banking Supervision, Basel.
11 Economic Perspective
1. F.H. Knight (1921) Risk, Uncertainty and Profit. Hart, Schaffner & Marx.
2. J.G. Courcelle-Seneuil (1852) ‘Profit,’ in Coquelin and Guillaumin, eds, Dictionnaire
de l’ėconomie politique, 2nd ed.
3. J.H. Von Thünen (1826) The Isolated State.
11. P. Hansen, B. Jaumard, G. Savard (1992) ‘New branch and bound rules for linear
bilevel programming,’ SIAM Journal on Scientific and Statistical Computing, 13(5):
1194–1217.
12. Bard (1998), op cit.
13. W.W. Cooper, L.M. Seiford, K. Tone (2000) Data Envelopment Analysis. Kluwer.
14. S.C. Ray (2004) Data Envelopment Analysis: Theory and Techniques for Economics and
Operations Research. Cambridge University Press, 189–208.
15. P. Bogetoft, D. Wang (2005) ‘Estimating the potential gains from mergers,’ Journal of
Productivity Analysis, 23: 145–171.
16. Wu and Birge (2012), op cit.
17. R. Maddigan, J. Zaima (1985) ‘The profitability of vertical integration,’ Managerial
and Decision Economics, 6(3): 178–179.
18. E.H. MacDonald (2001) ‘GIS in banking: evaluation of Canadian Bank mergers,’
Canadian Journal of Regional Science, 24(3): 419–442.
19. S. Finkelstein, H. Jerayr (2002) ‘Understanding acquisition performance: the role of
transfer effects,’ Organization Science, 13(1): 36–47.
20. Cooper et al. (2000), op cit.
21. C.H. Wang, R. Gopal, S. Zionts (1997) ‘Use of data envelopment analysis in assessing
information technology impact on firm performance,’ Annals of Operations Research,
73: 191–213.
22. Ibid.
23. J.D. Cummins, X. Xie (2008) ‘Mergers and acquisitions in the US property-liability
insurance industry: productivity and efficiency effects,’ Journal of Banking & Finance,
2(1): 30–55.
11. T.M. Mata, R.L. Smith, D.M. Young, C.A.V. Costa (2005) ‘Environmental analysis of
gasoline blending components through their life cycle,’ Journal of Cleaner Production,
13(5): 517–523.
12. H. Von Blottnitz, M.A. Curran (2007) ‘A review of assessments conducted on bio-
ethanol as a transportation fuel from a net energy, greenhouse gas, and environ-
mental life cycle perspective,’ Journal of Cleaner Production, 15(7): 607–619.
13. A. Akcil (2006) ‘Managing cyanide: health, safety and risk management prac-
tices at Turkey’s Ovacik gold–silver mine,’ Journal of Cleaner Production, 14(8):
727–735.
14. N. Gülpinar, E. Canakoglu, D. Pachamanova (2014) ‘Robust investment decisions
under supply disruption in petroleum markets,’ Computers & Operations Research, 44:
75–91.
15. F. Cucchiella, M. Gastaldi (2006) ‘Risk management in supply chains: a real option
approach,’ Journal of Manufacturing Technology Management, 17(6): 700–720.
16. B. Ritchie, C. Brindley (2007) ‘An emergent framework for supply chain risk man-
agement and performance measurement,’ Journal of the Operational Research Society,
58: 1398–1411.
17. F.B. Hawley (1907) Enterprise and the Productive Process.
18. F.H. Knight (1921) Risk, Uncertainty, and Profit. Hart, Schaffner & Marx.
19. D. Kahneman, A. Tversky (2000) Choices, Values, and Frames. Cambridge University
Press.
11. K.S. Markel, L.A. Barclay (2007) ‘The intersection of risk management and human
resources: an illustration using genetic mapping,’ International Journal of Risk
Assessment and Management, 7(3): 326–340.
12. D.H. Smaltz, R. Carpenter, J. Saltz (2007) ‘Effective IT governance in healthcare
organizations: a tale of two organizations,’ International Journal of Healthcare
Technology and Management, 8(1/2): 20–41.
13. D. Dalcher (2007) ‘Why the pilot cannot be blamed: a cautionary note about excessive
reliance on technology,’ International Journal of Risk Assessment and Management,
7(3): 350–366.
14. M. Baucells, F.H. Heukamp (2009) ‘Probability and time tradeoff,’ Working Paper,
http://ssrn.com/abstract=970570.
15. J. Pan, M. Wang, D. Li, J. Le (2009) ‘Automatic generation of seamline network
using area Voronoi diagrams with overlap,’ IEEE Transactions on Geoscience and
Remote Sensing, 47(6): 1737–1744.
16. D. Engel (2009) ‘Hi-tech solutions for crisis management,’ African Business,
352: 50.
17. J. Wei, D. Zhao, L. Liang (2009) ‘Estimating the growth models of news stories on
disasters,’ Journal of the American Society for Information Science and Technology, 60(9):
1741–1755.
18. M. Saadatseresht, A. Mansourian, M. Taleai (2009) ‘Evacuation planning using
multiobjective evolutionary optimization approach,’ European Journal of Operational
Research, 198(1): 305–314.
19. R. Morelli, A. Tucker, N. Danner, T.R. de Lanerolle, H.J.C. Ellis, O. Izmirli, D. Krizanc,
G. Parker (2009) ‘Revitalizing computing education through free and open source
software for humanity,’ Communications of the ACM, 52(8): 67–75.
20. N. Santella, L.J. Steinberg, K. Parks (2009) ‘Decision making for extreme events:
Modeling critical infrastructure interdependencies to aid mitigation and response
planning,’ Review of Policy Research, 26(4): 409–422.
21. F. Aleskerov, A.L. Say, A. Toker, H.L. Akin, G. Altay (2005) ‘A cluster-based decision
support system for estimating earthquake damage and casualties,’ Disasters, (3):
255–276.
6. C. Aloui, R. Jammazi (2009) ‘The effects of crude oil shocks on stock market shifts
behaviour: a regime switching approach,’ Energy Economics, 31(5): 789–799.
7. F. Klaassen (2002) ‘Improving GARCH volatility forecasts with regime-switching
GARCH,’ Empirical Economics, 27: 363–394.
8. A. Cologni, M. Manera (2009) ‘The asymmetric effects of oil shocks on output
growth: a Markov–Switching analysis for the G-7 countries,’ Economic Modelling,
26(1): 1–29.
9. Y. Fan, Y.J. Zhang, H.T. Tsai, Y.M. Wei (2008) ‘Estimating ‘Value at Risk’ of crude oil
price and its spillover effect using the GED-GARCH approach,’ Technological Change
and the Environment, 30(6): 3156–3171.
10. C. Aloui, S. Mabrouk (2009) ‘Value-at-risk estimations of energy commodities via
long-memory, asymmetry and fat-tailed GARCH models,’ Energy Policy, 38(5):
2326–2339.
11. P. Agnolucci (2009) ‘Volatility in crude oil futures: a comparison of the pre-
dictive ability of GARCH and implied volatility models,’ Energy Economics, 31(2):
316–321.
12. C. Engel (1994) ‘Can the Markov switching model forecast exchange rates?’ Journal
of International Economics, 36(1): 151–165.
13. M.T. Vo (2009) ‘Regime-switching stochastic volatility: evidence from the crude oil
market,’ Energy Economics, 31(5): 779–788.
14. E. Fama (1970) ‘Efficient capital markets: a review of theory and empirical work,’
Journal of Finance, 25: 383–417.
15. R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50: 987–1008.
16. T. Bollerslev (1986) ‘Generalized autoregressive conditional heteroskedasticity,’
Journal of Econometrics, 31: 307–327.
17. D.B. Nelson (1991) ‘Conditional heteroskedasticity in asset returns: a new approach,’
Econometrica, 59: 347–370.
18. J.E. Raymond, R.W. Rich (1997) ‘Oil and the macroeconomy: a Markov state-
switching approach,’ Journal of Money, Credit and Banking, 29(2): 193–213.
19. J.D. Hamilton (1989) ‘A new approach to the economic analysis of nonstationary
time series and the business cycle,’ Econometrica, 57(2): 357–384.
20. D. Cousineau, S. Brown, A. Heathcote (2004) ‘Fitting distributions using maximum
likelihood: methods and packages,’ Behavior Research Methods, Instruments, &
Computers, 36: 742–756.
References
S. Borak, W. Härdle, S. Trück, R. Weron (2006) ‘Convenience yields for CO2 emission
allowance future contracts,’ SFB 649 discussion paper 2006–076, SFB Economic Risk
Berlin.
A. Borison, G. Hamm (2011) ‘Black swan or black sheep?’ Risk Management, 58(3):
48–53.
B.M. Bowling, L. Rieger (2005) ‘Success factors for implementing enterprise risk man-
agement,’ Bank Accounting and Finance, 18(3): 21–26.
R. Boyd (2011) Fatal Risk: A Cautionary Tale of AIG’s Corporate Suicide. Wiley.
M. Brunnermeier (2009) ‘Deciphering the liquidity and credit crunch 2007–2008,’
Journal of Economic Perspectives, 23, 77–100.
K. Buehler, A. Freeman, R. Hulme (2008) ‘The new arsenal of risk management,’ Harvard
Business Review, 86(9): 93–100.
B. Bull (2005) ‘Exemplar sampling: nonrandom methods of selecting a sample which
characterizes a finite multivariate population,’ American Statistician, 59(2): 166–172.
C. Burges (1998) ‘A tutorial on support vector machines for pattern recognition,’ Data
Mining and Knowledge Discovery, 2(2): 121–167.
D. Burtraw (1996) ‘Cost savings sans allowance trades? Evaluating the SO2 emission
trading program to date,’ Discussion Paper 95–30-REV.
J.M. Burniaux, J.O. Martins (2000) ‘Carbon emission leakages: a general equilibrium
view,’ OECD Economics Department Working Papers No. 242.
H.N. Butler, L.E. Ribstein (2006) The Sarbanes-Oxley Debacle: What We’ve Learned; How
to Fix It, AEI.
J. Calandro, Jr., S. Lane (2006) ‘An introduction to the enterprise risk scorecard,’ Measuring
Business Excellence, 10(3), 31–40.
S.C. Caples, M.E. Hanna (1997) ‘Least squares versus least absolute value in real estate
appraisals,’ Appraisal Journal, 65(1): 18–24.
P. Capros, L. Mantzos (2000) ‘The economic effects of industry-level emission trading
to reduce greenhouse gases,’ Report to DG environment, E3M-Laboratory 21 at ICCS/
NTUA.
C.S. Carlos, F.C. Yolanda, M.M. Cecilio (2005) ‘Measuring DEA efficiency in internet
companies,’ Decision Support Systems, 38: 557–573.
F. Caron, J. Vanthienen, B. Baesens (2013) ‘A comprehensive investigation of the applic-
ability of process mining techniques for enterprise risk management,’ Computers in
Industry, 64: 464–475.
S. Caudle (2005) ‘Homeland security,’ Public Performance & Management Review, 28(3):
352–375.
K. Cengiz, C. Ufuk, U. Ziya (2003) ‘Multi-criteria supplier selection using fuzzy AHP,’
Logistics Information Management, 16(6): 382–394.
W. Chan (2003) ‘Stock price reaction to news and no-news: drift and reversal after head-
lines,’ Journal of Financial Economics, 70: 223–260.
L.F. Chang, M.W. Hung, (2009) ‘Analytical valuation of catastrophe equity options with
negative exponential jumps,’ Insurance: Mathematics and Economics, 44: 59–69.
A. Charnes, W.W. Cooper, E. Rhodes (1978) ‘Measuring the efficiency of decision making
units,’ European Journal of Operational Research, 2(6): 429–444.
V. Chavez-Demoulin, P. Embrechts, J. Nešlehová (2006) ‘Quantitative models for oper-
ational risk: extremes, dependence and aggregation,’ Journal of Banking & Finance, 30:
2635–2658.
X. Chen, Z. Wang, D.D. Wu (2013) ‘Modeling the price mechanism of carbon emission
exchange in the European Union Emission Trading System,’ Human and Ecological Risk
Assessment, 19(5): 1309–1323.
Chien-Ta Ho, D.D. Wu, D.L. Olson (2009) ‘A risk scoring model and application to meas-
uring internet stock performance,’ International Journal of Information Technology and
Decision Making, 8(1): 133–149.
T.-C. Chu (2002) ‘Facility location selection using fuzzy TOPSIS under group deci-
sions,’ International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, 10(6):
687–701.
U. Ciorba, A. Lanza, F. Pauli (2001) ‘Kyoto protocol and emission trading: does the US
make a difference?’ FEEM working paper 90.2001, Milan.
J.A. Clark (1996) ‘Economic cost, scale efficiency, and competitive viability in banking,’
Journal of Money, Credit and Banking, 28(3): 342–364.
B. Cohen (1997) The Edge of Chaos: Financial Booms, Bubbles, Crashes and Chaos. John
Wiley & Sons Ltd.
A. Cologni, M. Manera (2009) ‘The asymmetric effects of oil shocks on output growth: a
Markov–Switching analysis for the G-7 countries,’ Economic Modelling, 26(1): 1–29.
T.J. Considine (2000) ‘The impacts of weather variations on energy demand and carbon
emissions,’ Resource and Energy Economics, 22: 295–314.
G. Cooper (2008) The Origin of Financial Crises: Central Banks, Credit Bubbles and the
Efficient Market Fallacy. Vintage Books.
W.W. Cooper, L.M. Seiford, K. Tone (2000) Data Envelopment Analysis. Kluwer.
COSO (2004) Enterprise Risk Management – Integrated Framework: Executive Summary.
September.
A. Costa, R.N. Markellos (1997) ‘Evaluating public transport efficiency with neural
network models,’ Transportation Research C, 5(5): 301–312.
Counterparty Risk Management Policy Group III (2008) Containing Systemic Risk,
August 6.
J.G. Courcelle-Seneuil (1852) ‘Profit,’ in Coquelin and Guillaumin, eds, Dictionnaire de
l’ėconomie politique, 2nd ed.
D. Cousineau, S. Brown, A. Heathcote (2004) ‘Fitting distributions using maximum like-
lihood: methods and packages,’ Behavior Research Methods, Instruments, & Computers,
36: 742–756.
S. Cox, H. Pedersen (2000) ‘Catastrophe risk bonds,’ North American Actuarial Journal,
48: 56–82.
P. Criqui, A. Kitous (2003) ‘Impacts of linking JI and CDM credits to the European emis-
sion allowance trading scheme,’ KPI technical report.
M. Crouhy, D. Galai, R. Mark (1998) ‘Model Risk,’ Journal of Financial Engineering, 7(3/4),
267–288; reprinted in Model Risk: Concepts, Calibration and Pricing, (ed. R. Gibson),
Risk Book, 2000, 17–31.
M. Crouhy, D. Galai, R. Mark (2000) ‘A comparative analysis of current credit risk
models,’ Journal of Banking & Finance, 24, 59–117.
F. Cucchiella, M. Gastaldi (2006) ‘Risk management in supply chains: a real option
approach,’ Journal of Manufacturing Technology Management, 17(6): 700–720.
J.D. Cummins, X. Xie (2008), ‘Mergers and acquisitions in the US property-liability
insurance industry: productivity and efficiency effects,’ Journal of Banking & Finance,
2(1): 30–55.
D. Dalcher (2007) ‘Why the pilot cannot be blamed: a cautionary note about excessive
reliance on technology,’ International Journal of Risk Assessment and Management, 7(3):
350–366.
G. Daskalakis, D. Psychoyios, R.N. Markellos (2009) ‘Modeling CO2 emission allow-
ance prices and derivatives: evidence from the European trading,’ Journal of Banking &
Finance, 33(7): 1230–1241.
A. Dassios, J. Jang (2003) ‘Pricing of catastrophe reinsurance and derivatives using the
cox process with shot noise intensity,’ Finance and Stochastics, 7: 73–95.
G. Dell’Arriccia, L. Laeven, D. Igan (2008) ‘Credit booms and lending standards: evi-
dence from the subprime mortgage market,’ IMF Working Paper 08/106.
H. Deng, C.-H., Yeh, R.J. Willis (2000) ‘Inter-company comparison using modified
TOPSIS with objective weight,’ Computers & Operations Research, 27: 963–973.
S. Deng, Z. Xia (2006) ‘A real options approach for pricing electricity tolling agreements,’
International Journal of Information Technology and Decision Making, 5: 421–436.
A. Dey (2010) ‘The chilling effect of Sarbanes-Oxley: a discussion of Sarbanes-Oxley and
corporate risk-taking,’ Journal of Accounting and Economics, 49(1–2): 53–57.
R. Deyoung (1997) ‘A diagnostic test for the distribution-free efficiency estimator: an
example using US commercial bank data,’ European Journal of Operational Research,
98(2): 243–249.
G. Dickinson (2001) ‘Enterprise risk management: its origins and conceptual foun-
dation,’ The Geneva Papers on Risk and Insurance, 26(3): 360–366.
T.E. Dielman (2005) ‘Least absolute value regression: recent contributions,’ Journal of
Statistical Computation & Simulation, 75(4): 263–286.
N. Doherty, J. Lamm-Tennant, L.T. Starks (2009) ‘Lessons from the financial crisis on
risk and capital management: the case of insurance companies,’ Journal of Applied
Corporate Finance, 21(4): 52–59.
D. Dong, Q. Dong (2003) ‘HowNet – a hybrid language and knowledge resource,’
Proceedings of 2003 International Conference on Natural Language Processing and Knowledge
Engineering, 820–824, October 26–29.
M. Drew (2007) ‘Information risk management and compliance – expect the unex-
pected,’ BT Technology Journal, 25(1): 19–29.
N. Dunbar (1999) Investing Money: The Story of Long-Term Capital Management and the
Legends Behind It. Wiley.
P. Eckle, P. Burgherr (2013) ‘Bayesian data analysis of severe fatal accident risk in the oil
chain,’ Risk Analysis, 33(1): 146–160.
J.F. Egginton, J.I. Hilliard, A.P. Liebenberg, I.A. Liebenberg (2010) ‘What effect did AIG’s
bailout, and the preceding events, have on its competitors?’ Risk Management and
Insurance Review, 13(2): 225–249.
H. Elsinger, A. Lehar, M. Summer (2006) ‘Risk assessment for banking systems,’
Management Science, 52(9), 1301–1314.
C. Engel (1994) ‘Can the Markov switching model forecast exchange rates?’ Journal of
International Economics, 36(1): 151–165.
D. Engel (2009) ‘Hi-tech solutions for crisis management,’ African Business, 352, 50.
R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50, 987–1008.
K. Eriksson, K. Kerem, D. Nilsson (2008) ‘The adoption of commercial innovations in
the former Central and Eastern European markets: the case of internet banking in
Estonia,’ International Journal of Bank Marketing, 26(3): 154–169.
P. Espahbodi (1991) ‘Identification of problem banks and binary choice models,’ Journal
of Banking and Finance, 15, 53–71.
E.F. Fama (1965) ‘Random walks in stock market prices,’ Financial Analysts Journal, 51(1):
404–419.
E. Fama (1970) ‘Efficient capital markets: a review of theory and empirical work,’ Journal
of Finance, 25: 383–417.
Y. Fan, Y.J. Zhang, H.T. Tsaic, Y.M. Wei (2008) ‘Estimating ‘value at risk’ of crude oil price
and its spillover effect using the GED-GARCH approach,’ Technological Change and the
Environment, 30(6): 3156–3171.
M.J. Farrell (1957) ‘The measurement of productive efficiency,’ Journal of the Royal
Statistical Society 120: 253–281.
T.S. Felix, H.J. Chan (2003) ‘An innovative performance measurement method for
supply chain management,’ Supply Chain Management: An International Journal, 8(3):
209–223.
G.J. Fielding, T.T. Babitsky, M.E. Brenner (1985) ‘Performance evaluation for bus transit,’
Transportation Research, 19A(1): 73–82.
S. Finkelstein, H. Jerayr (2002) ‘Understanding acquisition performance: the role of
transfer effects,’ Organization Science, 13(1): 36–47.
A.R. Fleissig, T. Kastens, D. Terrell (2000) ‘Evaluating the semi-nonparametric fourier,
aim, and neural networks cost functions,’ Economics Letters, 68(3): 235–244.
L. Fox (2003) Enron: The Rise and Fall. Wiley.
M. Freimer, P.L. Yu (1976) ‘Some new results on compromise solutions for group decision
problems,’ Management Science, 22(6): 688–693.
B. Freisleben, K. Ripper (1997) ‘Volatility estimation with a neural network,’ Proceedings
of the IEEE/IAFE on Computational Intelligence for Financial Engineering, 177–181, March
24–25.
M. Friedman, L.J. Savage (1948) ‘The utility analysis of choices involving risk,’ The
Journal of Political Economy, 56(4): 279–304.
K. Furst, W.W. Lang, D. Nolle (2000) ‘Internet banking: developments and prospects,’
Economic and Policy Analysis, Working Paper 2000–9.
A. Ganegoda, J. Evans (2012) ‘A framework to manage the measurable, immeasurable
and the unidentifiable financial risk,’ Australian Journal of Management, 39(5): 5–34.
R. Garcia, É. Renault, G. Tsafack (2007) ‘Proper conditioning for coherent VaR in port-
folio management,’ Management Science, 53(3), 483–494.
D. Gardner (2007) The Four Books. The Teachings of the Later Confucian Tradition. Hackett
Publishing.
H. Geman, M. Yor (1997) ‘Stochastic time changes in catastrophe option pricing,’
Insurance: Mathematics and Economics, 21: 185–193.
P. Goldsmith-Pinkham, T. Yorulmazer (2010) ‘Liquidity, bank runs, and bailouts:
spillover effects during the Northern Rock episode,’ Journal of Financial Service Research,
37(2/3): 83–98.
G. Gorton (2008) ‘The panic of 2007,’ NBER Working Paper No. 14358.
N. Gülpinar, E. Canakoglu, D. Pachamanova (2014) ‘Robust investment decisions under
supply disruption in petroleum markets,’ Computers & Operations Research, 44: 75–91.
M. Gulser, M. Ilhan (2001) ‘Risk and return in the world’s major stock markets,’ Journal
of Investing, (Spring): 62–67.
J.D. Hamilton (1989) ‘A new approach to the economic analysis of nonstationary time
series and the business cycle,’ Econometrica, 57(2): 357–384.
P. Hansen, B. Jaumard, G. Savard (1992) ‘New branch and bound rules for linear bilevel
programming,’ SIAM Journal on Scientific and Statistical Computing, 13(5): 1194–1217.
F.B. Hawley (1907) Enterprise and the Productive Process.
R. Hecht-Nielsen (1990) Neurocomputing. Addison-Wesley, 124–133.
H.S.B. Herath, W.G. Bremser (2005) ‘Real-option valuation of research and development
investments: implications for performance measurement,’ Managerial Auditing Journal,
20(1): 55–72.
H.N. Higgins (2012) ‘Learning internal controls from a fraud case at Bank of China,’
Issues in Accounting Education, 27(4): 1171–1192.
C-T. Ho (2006) ‘Measuring bank operations performance: an approach based on grey
relation analysis,’ Journal of the Operational Research Society, 57: 227–349.
C-T. Ho, D.S. Zhu (2004) ‘Performance measurement of Taiwan’s commercial banks,’
International Journal of Productivity and Performance Management, 53(5): 425–434.
C-T. Ho, D. Wu (2009) ‘Online banking performance evaluation using data envelopment
analysis and principal component analysis,’ Computers & Operations Research, 36(6):
1835–1842.
J. Hobbs (2011) ‘Financial derivatives, the mismanagement of risk and the case of AIG,’
CPCU eJournal, 64(7): 1–8.
C. Hollingsworth (2012) ‘Risk management in the post-SOX era,’ International Journal of
Auditing, 16: 35–53.
K. Hopkins (2003) ‘Value opportunity three: improving the ability to fulfill demand,’
Business Week, January 13.
S.N. Huang, T.L. Kao (2006) ‘Measuring managerial efficiency in non-life insurance com-
panies: an application of two-stage data envelopment analysis,’ International Journal of
Management, 23(3): 699–720.
D.W. Hubbard (2009) The Failure of Risk Management: Why It’s Broken and How to Fix It.
John Wiley & Sons.
J.C. Hung, M.C. Lee, H.C. Liu (2008) ‘Estimation of value-at-risk for energy commodities
via fat-tailed GARCH models,’ Energy Economics, 30(3): 1173–1191.
C. Hurt (2014) ‘The duty to manage risk,’ The Journal of Corporate Law, 39(2): 153–267.
C.L. Hwang, K. Yoon (1981) Multiple Attribute Decision Making: Methods and Applications.
Springer-Verlag.
T. Jacobson, J. Lindé, K. Roszbach (2006) ‘Internal ratings systems, implied credit risk
and the consistency of banks’ risk classification policies,’ Journal of Banking & Finance,
30: 1899–1926.
S. Jaimungal, T. Wang (2005) ‘Catastrophe options with stochastic interest rates and
compound Poisson losses,’ Insurance: Mathematics and Economics, 38: 469–483.
D.B. Jemison, S.B. Sitkin (1986) ‘Corporate acquisitions: a process perspective,’
Academy of Management Review, 11(1): 145–163.
D. Kahneman, A. Tversky (1972) ‘Subjective probability: a judgment of representa-
tiveness,’ Cognitive Psychology, 3, 430–454.
D. Kahneman, A. Tversky (2000) Choices, Values, and Frames. Cambridge University
Press.
M. Kainuma, Y. Matsuoka, T. Morita (1999) ‘Development of AIM (Asian-Pacific Integrated
Model) for coping with global warming,’ Proceedings of the IEEE International Conference
on System Man and Cybernetics, 6: 569–574.
R.S. Kaplan, D.P. Norton (1992) ‘The balanced scorecard – measures that drive per-
formance,’ Harvard Business Review, 70(1): 71–79.
R.S. Kaplan, D.P. Norton (2006) Alignment: Using the Balanced Scorecard to Create Corporate
Synergies. Harvard Business School Press Books.
N. Kapucu, M. Van Wart (2008) ‘Making matters worse: an anatomy of leadership fail-
ures in managing catastrophic events,’ Administration & Society, 40(7): 711–740.
B. Keys, T. Mukherjee, A. Seru, V. Vig (2010) ‘Did securitization lead to lax screening?
Evidence from subprime loans,’ Quarterly Journal of Economics, 125: 307–362.
F. Klaassen (2002) ‘Improving GARCH volatility forecasts with regime-switching
GARCH,’ Empirical Economics, 27: 363–394.
G. Klepper, S. Peterson (2004) ‘The EU emissions trading scheme: allowance prices, trade
flows, competitiveness effects,’ European Environment, 14(4): 201–218.
G. Klepper, S. Peterson (2006) ‘Emissions trading, CDM, JI and more – the climate
strategy of the EU,’ Energy Journal, 27(2): 1–26.
F.H. Knight (1921) Risk, Uncertainty and Profit. Hart, Schaffner & Marx.
K.S. Markel, L.A. Barclay (2007) ‘The intersection of risk management and human
resources: an illustration using genetic mapping,’ International Journal of Risk Assessment
and Management, 7(3): 326–340.
H.M. Markowitz (1952) ‘Portfolio selection,’ The Journal of Finance, 17(1): 77–91.
H.M. Markowitz (1959) Portfolio Selection: Efficient Diversification of Investments. John
Wiley & Sons (reprinted by Yale University Press, 1970).
T.M. Mata, R.L. Smith, D.M. Young, C.A.V. Costa (2005) ‘Environmental analysis of
gasoline blending components through their life cycle,’ Journal of Cleaner Production,
13(5): 517–523.
C. McDonald (2009) ‘New PRIMA president sees public RMs as masters of disaster,’
National Underwriter/Property & Casualty Risk & Benefits Management, 113(21): 17–31.
D.B. McDonald (2011) ‘When risk management collides with enterprise sustainability,’
Journal of Leadership, Accountability and Ethics, 8(3): 56–66.
MEI Computer Technology Group Inc. (2011) ‘2011 Trade Promotion Management
Trends.’
R.C. Merton (1974) ‘On the pricing of corporate debt: the risk structure of interest rates,’
The Journal of Finance, 29(2): 449–470.
D. Meyler, J.P. Stimpson, M.P. Cutchin (2007) ‘Landscapes of risk,’ Organization &
Environment, 20(2): 204–212.
I.I. Mitroff, M.C. Alpaslan (2003) ‘Preparing for evil,’ Harvard Business Review, 81(4): 109–115.
R. Morelli, A. Tucker, N. Danner, T.R. de Lanerolle, H.J.C. Ellis, O. Izmirli, D. Krizanc,
G. Parker (2009) ‘Revitalizing computing education through free and open source
software for humanity,’ Communications of the ACM, 52(8): 67–75.
A.S. Mukherjee (2008) The Spider’s Strategy: Creating Networks to Avert Crisis, Create
Change, and Really Get Ahead. FT Press.
P.K. Narayan, S. Narayan, A. Prasad (2008) ‘Understanding the oil price-exchange rate
nexus for the Fiji islands,’ Energy Economics, 30(5): 2686–2696.
D.B. Nelson (1991) ‘Conditional heteroskedasticity in asset returns: a new approach,’
Econometrica, 59: 347–370.
D. Ng, P.D. Goldsmith (2010) ‘Bio energy entry timing from a resource based view and
organizational ecology perspective,’ International Food & Agribusiness Management
Review, 13(2): 69–100.
W.D. Nordhaus (2001) ‘Climate change: global warming economics,’ Science, 294(5545):
1283–1284.
W.D. Nordhaus, J.G. Boyer (1999) ‘Requiem for Kyoto: an economic analysis of the Kyoto
protocol,’ The Energy Journal, 20: 93–130.
D.L. Olson (2004) ‘Data set balancing,’ Lecture Notes in Computer Science: Data Mining and
Knowledge Management, Y. Shi, W. Xu, & Z. Chen, eds. Springer, 71–80.
D.L. Olson (2005) ‘Comparison of weights in TOPSIS models,’ Mathematical and Computer
Modelling, 40: 721–727.
D.L. Olson, D. Wu (2005) ‘Decision making with uncertainty and data mining,’ Advanced
Data Mining and Applications: First International Conference, ADMA, X. Li, S. Wang,
Z.Y. Dong eds, Lecture Notes in Artificial Intelligence. Keynote paper. Springer, 1–9.
D.L. Olson, D. Wu (2006) ‘Simulation of fuzzy multiattribute models for grey relation-
ships,’ European Journal of Operational Research, 175(1): 111–120.
D.L. Olson, D. Wu (2008) Enterprise Risk Management. World Scientific.
J. Pan, M. Wang, D. Li, J. Le (2009) ‘Automatic generation of seamline network using area
Voronoi diagrams with overlap,’ IEEE Transactions on Geoscience and Remote Sensing,
47(6): 1737–1744.
M.S. Paolella, L. Taschini (2006) ‘An econometric analysis of emission trading allow-
ances,’ Research Paper Series 06–26, FINRISK: National Center of Competence in
Research Financial Valuation and Risk Management.
A. Papalexandris, G. Ioannou, G. Prastacos, K.E. Soderquist (2005) ‘An integrated meth-
odology for putting the balanced scorecard into action,’ European Management Journal,
23(2): 214–227.
J. Paradi, S. Vela, H. Zhu (2010) ‘Adjusting for cultural differences, a new DEA model
applied to a merged bank,’ Journal of Productivity Analysis, 33: 109–123.
S. Patterson (2010) The Quants: How a New Breed of Math Whizzes Conquered Wall Street
and Nearly Destroyed It. Crown Business.
P.C. Pendharkar, J.A. Rodger (2003) ‘Technical efficiency-based selection of learning
cases to improve forecasting accuracy of neural networks under monotonicity assump-
tion,’ Decision Support Systems, 36(1): 117–136.
Y. Peng (2000) Management Decision Analysis. Science Publication.
C. Perrow (1984) Normal Accidents: Living with High-risk Technologies. Basic Books.
C. Perrow (1999) Normal Accidents: Living with High-Risk Technologies. Princeton University
Press.
J.D. Piotroski, S. Srinivasan (2008) ‘Regulation and bonding: the Sarbanes-Oxley Act
and the flow of international listings,’ Journal of Accounting Research, 46(2): 383–425.
M.R. Powers, T.Y. Powers, S. Gao (2012) ‘Risk finance for catastrophe losses with Pareto-
calibrated Lévy-stable severities,’ Risk Analysis, 32(11): 1967–1977.
S.C. Ray (2004) Data Envelopment Analysis: Theory and Techniques for Economics and
Operations Research. Cambridge University Press, 189–208.
J.E. Raymond, R.W. Rich (1997) ‘Oil and the macroeconomy: a Markov state-switching
approach,’ Journal of Money, Credit and Banking, 29(2): 193–213.
L.A. Reilly, O. Courtenay (2007) ‘Husbandry practices, badger sett density and habitat
composition as risk factors for transient and persistent bovine tuberculosis on UK
cattle farms,’ Preventive Veterinary Medicine, 80(2–3): 129–142.
C.M. Reinhart, K.S. Rogoff (2008) ‘Is the 2007 subprime crisis so different? An inter-
national historical comparison,’ American Economic Review, 98(2): 339–344.
B. Ritchie, C. Brindley (2007) ‘An emergent framework for supply chain risk man-
agement and performance measurement,’ Journal of the Operational Research Society,
58: 1398–1411.
L. Rittenberg, F. Martens (2012) Enterprise Risk Management: Understanding and
Communicating Risk Appetite. COSO.
S.A. Ross, R.W. Westerfield, B.D. Jordan (2007) Corporate Finance Essentials. McGraw-
Hill/Irwin.
M. Saadatseresht, A. Mansourian, M. Taleai (2009) ‘Evacuation planning using multiob-
jective evolutionary optimization approach,’ European Journal of Operational Research,
198(1): 305–314.
T.L. Saaty (2008) ‘Decision making with the analytic hierarchy process,’ International
Journal of Services Sciences, 1(1): 83–98.
F. Salmon (2009) ‘Recipe for disaster: the formula that killed Wall Street,’ Wired, 17(3).
V. Sampath (2009) ‘The need for greater focus on nontraditional risks: the case of
Northern Rock,’ Journal of Risk Management in Financial Institutions, 2(3): 301–305.
N. Santella, L.J. Steinberg, K. Parks (2009) ‘Decision making for extreme events: mod-
eling critical infrastructure interdependencies to aid mitigation and response plan-
ning,’ Review of Policy Research, 26(4): 409–422.
M. Santiago (2011) ‘The Huasteca rain forest,’ Latin American Research Review, 46: 32–54.
J.N. Stanard, M.G. Wacek (1991) ‘The spiral in the catastrophe retrocessional market,’
Casualty Actuarial Society Discussion Paper, May, Arlington, VA.
J.E. Stiglitz (2003) The Roaring Nineties: A New History of the World’s Most Prosperous
Decade. W.W. Norton & Co.
P.S. Sudarsanam, R.J. Taffler (1995) ‘Financial ratio proportionality and inter-temporal
stability: an empirical analysis,’ Journal of Banking & Finance, 19(1): 45–60.
G. Suder, D.W. Gillingham (2007) ‘Paradigms and paradoxes of agricultural risk gov-
ernance,’ International Journal of Risk Assessment and Management, 7(3): 444–457.
J.A.K. Suykens, T. Van Gestel, J. De Brabanter (2002) Least Squares Support Vector Machines.
World Scientific Press.
N. Taleb (2012) Antifragile: Things That Gain from Disorder. Random House.
N.N. Taleb (2007) The Black Swan: The Impact of the Highly Improbable. Penguin Books.
N.N. Taleb, D.G. Goldstein, M.W. Spitznagel (2009) ‘The six mistakes executives make in
risk management,’ Harvard Business Review, 87(10): 78–81.
W.-J. Tan, P. Enderwick (2006) ‘Managing threats in the global era: the impact and
response to SARS,’ Thunderbird International Business Review, 48(4): 515–536.
J. Taylor (2009) Getting Off Track: How Government Actions and Interventions Caused,
Prolonged, and Worsened the Financial Crisis. Hoover Press.
N. Taylor (2007) ‘A note on the importance of overnight information in risk management
models,’ Journal of Banking & Finance, 31: 161–180.
J. Tobin (1958) ‘Liquidity preference as behavior towards risk,’ The Review of Economic
Studies, 25: 65–86.
M.D. Troutt, A. Rai, A. Zhang (1995) ‘The potential use of DEA for credit applicant
acceptance systems,’ Computers and Operations Research, 4: 405–408.
H.C. Tsai, C.M. Chen, G.H. Tzeng (2006) ‘The comparative productivity efficiency for
global telecoms,’ International Journal of Production Economics, 103: 509–526.
H. Tulkens (1993) ‘On FDH efficiency analysis: some methodological issues and appli-
cations to retail banking, courts and urban transit,’ Journal of Productivity Analysis,
4(1–2): 183–210.
M. Uhrig-Homburg, M. Wagner (2006) ‘Success chances and optimal design of deriva-
tives on CO2 emission certificates,’ Working Paper, University of Karlsruhe.
L.V. Utkin (2007) ‘Risk analysis under partial prior information and nonmonotone
utility functions,’ International Journal of Information Technology and Decision Making,
6, 625–647.
V.E. Vaugirard (2003) ‘Pricing catastrophe bonds by an arbitrage approach,’ The Quarterly
Review of Economics and Finance, 43: 119–132.
M.T. Vo (2009) ‘Regime-switching stochastic volatility: evidence from the crude oil
market,’ Energy Economics, 31(5): 779–788.
H. Von Blottnitz, M.A. Curran (2007) ‘A review of assessments conducted on bio-ethanol
as a transportation fuel from a net energy, greenhouse gas, and environmental life
cycle perspective,’ Journal of Cleaner Production, 15(7): 607–619.
J. Von Neumann, O. Morgenstern (1944) Theory of Games and Economic Behaviour, 2nd
ed. Princeton University Press.
J.H. Von Thünen (1826) The Isolated State.
H. Wagner (2004) ‘The use of credit scoring in the mortgage industry,’ Journal of Financial
Services Marketing, 9(2): 179–183.
C.H. Wang, R. Gopal, S. Zionts (1997) ‘Use of data envelopment analysis in assessing
information technology impact on firm performance,’ Annals of Operations Research,
73: 191–213.
H.F. Wang, D.J. Hu (2005) ‘Comparison of SVM and LS-SVM for regression,’ International
Conference on Neural Networks and Brain 2005, 1: 279–283, October 13–15, 2005.
S.H. Wang (2003) ‘Adaptive non-parametric efficiency frontier analysis: a neural-net-
work-based model,’ Computers & Operations Research, 30: 279–295.
B. Watkins (2003) ‘Riding the wave of sentiment: an analysis of return consistency as a
predictor of future returns,’ Journal of Behavioral Finance, 4(4): 191–200.
J. Wei, D. Zhao, L. Liang (2009) ‘Estimating the growth models of news stories on
disasters,’ Journal of the American Society for Information Science and Technology, 60(9):
1741–1755.
D. Williamson (2007) ‘The COSO ERM framework: a critique from systems theory of
management control,’ International Journal of Risk Assessment and Management, 7(8):
1089–1119.
F.B. Wiseman (2013) Some Financial History Worth Reading: A Look at Credit, Real Estate,
Investment Bubbles & Scams, and Global Economic Superpowers. Abcor Publishers.
D. Wu (2006) ‘A note on DEA efficiency assessment using ideal point: an improvement
of Wang and Luo’s model,’ Applied Mathematics and Computation, 2: 819–830.
D.D. Wu (2009) ‘Performance evaluation: an integrated method using data envelopment
analysis and fuzzy preference relations,’ European Journal of Operational Research,
194(1): 227–235.
D.D. Wu (2014) ‘An approach for learning risk management: Confucianism system and risk theory,’ International Journal of Financial Services Management. Accepted and in press.
D.D. Wu, J.R. Birge (2012) ‘Serial chain merger evaluation model and application to
mortgage banking,’ Decision Sciences, 43(1): 5–36.
D. Wu, C. Luo, H. Wang, J.R. Birge (2014) ‘Bilevel programming merger evaluation
and application to banking operations,’ Production and Operations Management. DOI:
10.1111/poms.12205. Accepted and in press.
D. Wu, D.L. Olson (2006) ‘A TOPSIS data mining demonstration and application to
credit scoring,’ International Journal of Data Warehousing & Mining, 2(3): 1–10.
D. Wu, D.L. Olson (2009) ‘Introduction to the special section on optimizing risk management: methods and tools,’ Human and Ecological Risk Assessment, 15(2): 220–226.
D. Wu, D.L. Olson (2010) ‘Enterprise risk management: coping with model risk in a large
bank,’ Journal of the Operational Research Society, 61(2): 179–190.
D. Wu, D.D. Wu (2010) ‘Performance evaluation and risk analysis of online banking service,’ Kybernetes, 39(5): 723–734.
D. Wu, Z. Yang, L. Liang (2006) ‘Using DEA-neural network approach to evaluate branch
efficiency of a large Canadian bank,’ Expert Systems with Applications, 31(1): 108–115.
D. Wu, L. Zheng, D.L. Olson (2014) ‘A decision support approach for online stock forum sentiment analysis,’ IEEE Transactions on Systems, Man, and Cybernetics. Accepted and in press. DOI: 10.1109/TSMC.2013.2295353.
D. Wu, Y. Zhou (2010) ‘Catastrophe bond and risk modeling: a review and calibration
using Chinese earthquake loss data,’ Human and Ecological Risk Assessment, 16(3):
510–523.
J. Yao, Z. Li, K.W. Ng (2006) ‘Model risk in VaR estimation: an empirical study,’
International Journal of Information Technology and Decision Making, 5: 503–512.
X. Yao, H. Yao (2000) An Introduction to Confucianism. Cambridge University Press.
T.K. Zhelev (2005) ‘On the integrated management of industrial resources incorporating
finances,’ Journal of Cleaner Production, 13(5): 469–474.
L. Zhu (2008) ‘Double exponential jump diffusion model for catastrophe bonds pricing,’
Journal of Fujian University of Technology, 6: 336–338 (in Chinese).
R. Zolkos, M. Bradford (2011) ‘Risk management faulted in probe of BP disaster,’ Business
Insurance, 45(36): 4–25.
Company websites
Bank of America (2007) Annual Report 2007.
Barclays (2007) Annual Report 2007, available at www.barclaysannualreport.com/index.
html.
Chase (2007) Annual Report 2007, available at http://investor.shareholder.com/
common/.
Citibank (2007) Annual Report 2007, available at www.citi.com/citi/fin/data/k07c.pdf.
Dominion (2001) ‘Internet banking struggles for profits,’ available at www.stuff.co.nz/
inl/index/0,1008,779016a28,FF.html.
HSBC (2007) Annual Report 2007, available at www.investis.com/reports/hsbc_ar_2007_
En/report.php?type=1.
Jupiter Research (2004) ‘FIND research, Institute for Information Industry,’ available at http://www.find.org.tw.
Lloyds (2007) Annual Report 2007, available at www.investorrelations.lloydstsb.com/
media/pdf_irmc/ir/2007/2007_LTSB_Group_R&A.pdf.
Royal Bank of Scotland (2007) Annual Report 2007, available at www.rbs.com/microsites/
gra2007/downloads/RBS_GRA_2007.pdf.
SunTrust (2007) Annual Report 2007, available at www.suntrustenespanol.com/suntrust.
Wachovia (2007) Annual Report 2007, available at www.wachovia.com/file/2007_
Wachovia_Annual_Report.pdf.
Wells Fargo (2007) Annual Report 2007, available at www.wellsfargo.com/downloads/pdf/
invest_relations/wf2007annualreport.pdf.
Index
daily volatility model, 38–39
Data Envelopment Analysis (DEA), 57–71, 100, 124, 126–127, 149–162
data mining, 87–90
decision making unit (DMU), 61, 66, 70, 126, 131, 149, 150, 151
decision support system (DSS), 180–181
decision tree, 93–96
Deep Water Horizon, 118–123
derivatives, 24, 73
Distribution Free Approach, 100, 124
double marginalization, 151
DuPont model, 57
economic perspective, 108–117
Efficient Market Hypothesis, 23
efficient market theory, 109
emergency management, 179–180
emergency management support systems (EMSS), 180–181
energy risk, 167
Enron, 11–14, 15, 72, 163, 164
enterprise resource planning (ERP) systems, 14
Ericsson, 176
ERM process, 7–8
European Climate Exchange (ECX), 184, 189
Expected Utility Theory, 16
Exponential Generalized Autoregressive Conditional Heteroscedasticity (EGARCH) model, 193–194, 197, 202
Eyjafjallajokull, 168, 175
family regulation, 217–218
FEMA, 179
financial risk forecasting, 32–48
financial risk management, 15–22
financial statement analysis, 58, 60–61
Florida hurricanes, 1
food risk, 166
Fourier transformations, 17
framing, 111
free-disposal hull, 100, 124
Fuzzy set theory, 58, 59–60
Gaussian copula, 20, 108
Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model, 33, 34–42, 48, 49, 55–56, 192–197, 199–200, 202, 206–210
globalization, 168–169, 171–173
Golden West, 26
Green Tree, 26
grey relation analysis, 58, 60
H1N1 virus, 2
hedging, 15, 27
herding, 111
HSBC, 100
Icelandic volcano, 1
ICTCLAS System, 51
implementation issues, 7–8
incentive incompatibility, 151
indifference theory, 110
information sentiment, 33–34
Innovative Support To Emergencies, Diseases And Disasters (InSTEDD), 179
International Organization for Standardization (ISO), 14
investment collars, 17
Kolmogorov-Smirnov statistic, 75, 77, 142
Kyoto protocol, 183, 184, 187
Lehman Brothers, 15, 30
Levy process, 17
Lexicon approach, 50
Li, David X., 20, 108
Lloyd’s of London, 1, 72, 101, 104, 174
London Market Exchange (LMX), 112
Long Term Capital Management (LTCM), 15, 24, 112, 164
Lorenz curve, 75, 78
machine learning, 32–48, 50
Macondo, 118, 119, 120
malicious activities, 164
marginal abatement cost curve, 185
Markov chain Monte Carlo, 144
Markov process, 17
Markov regime switching model, 210–214
Markowitz, 16, 109
mean variance, 110
merger evaluation, 150–152