
Enterprise Risk Management in Finance

Desheng Dash Wu
Stockholm Business School, Stockholm University, Sweden; RiskLab,
University of Toronto, Canada

and

David L. Olson
College of Business Administration, University of Nebraska, USA
© Desheng Dash Wu and David L. Olson 2015
All rights reserved. No reproduction, copy or transmission of this
publication may be made without written permission.
No portion of this publication may be reproduced, copied or transmitted
save with written permission or in accordance with the provisions of the
Copyright, Designs and Patents Act 1988, or under the terms of any licence
permitting limited copying issued by the Copyright Licensing Agency,
Saffron House, 6–10 Kirby Street, London EC1N 8TS.
Any person who does any unauthorized act in relation to this publication
may be liable to criminal prosecution and civil claims for damages.
The authors have asserted their rights to be identified as the authors of this work
in accordance with the Copyright, Designs and Patents Act 1988.
First published 2015 by
PALGRAVE MACMILLAN
Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited,
registered in England, company number 785998, of Houndmills, Basingstoke,
Hampshire RG21 6XS.
Palgrave Macmillan in the US is a division of St Martin’s Press LLC,
175 Fifth Avenue, New York, NY 10010.
Palgrave Macmillan is the global academic imprint of the above companies
and has companies and representatives throughout the world.
Palgrave® and Macmillan® are registered trademarks in the United States,
the United Kingdom, Europe and other countries.
ISBN: 978–1–137–46628–0
This book is printed on paper suitable for recycling and made from fully
managed and sustained forest sources. Logging, pulping and manufacturing
processes are expected to conform to the environmental regulations of the
country of origin.
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Wu, Desheng Dash.
Enterprise risk management in finance / Desheng Dash Wu, David L. Olson.
pages cm
ISBN 978–1–137–46628–0 (hardback)
1. Risk management. 2. Financial risk management. I. Olson, David L. II. Title.
HD61.W796 2015
332.1068’1—dc23 2014049736
Contents

List of Figures x

List of Tables xii

Preface xv

Acknowledgements xvii

1 Enterprise Risk Management 1


Introduction 1
Definition 2
Accounting perspective 3
The COSO framework 3
Categories 4
Activities 4
Risk appetite 6
ERM process 7
Implementation issues 7
Risk management modeling 9
Book outline 9

2 Enron 11
Risk management 11
California electricity 12
Accounting impact 13
SOX 13
Conclusions 14

3 Financial Risk Management 15


Introduction 15
Investment collars 17
VaR 17
Copulas 20
Tranches 21
Conclusions 22

4 The Real Estate Crash of 2008 23


Introduction 23
Real estate in 2008 24
Mortgage system 26


Northern Rock 26
AIG 29
Risk management 30

5 Financial Risk Forecast Using Machine Learning and Sentiment Analysis 32
Introduction 32
Information volume and volatility 32
Information sentiment and volatility 33
Volatility model and modified non-linear GARCH model 34
Daily volatility model 38
Using GARCH-based SVM to associate information sentiment
with asset price volatility 39
Sentiment analysis of financial news 39
Using GARCH-based SVM to associate information sentiment
and volatility 41
Empirical results and analysis 42
Trading volume volatility forecasting 43
Volatility forecasting with sentiment analysis 44
Conclusions 48

6 Online Stock Forum Sentiment Analysis 49


Introduction 49
Architectural design of GARCH-SVM based on sentiment index 49
Sentiment analysis 50
Data 51
Methodology comparison 53
Sentiment and stock price volatility 54
Conclusions 55

7 DEA Risk Scoring Model of Internet Stocks 57


Introduction 57
Different methods of performance evaluation 57
Multivariate statistical analysis 58
Data envelopment analysis 58
Analytic hierarchy process 59
Fuzzy set theory 59
Grey relation analysis 60
Balanced scorecard 60
Financial statement analysis 60
Basics of data envelopment analysis 61
The proposed approach 63
Variable selection 64
Empirical study 66

The DEA result 66


Conclusions 70

8 Bank Credit Scoring 72


Introduction 72
Risk modeling 72
Performance validation in credit rating 74
Case study: credit scorecard validation 75
Statistical results and discussion 77
Population distributions and stability 80
Conclusions 85
Appendix A8.1: Informal Definitions 86

9 Credit Scoring using Multiobjective Data Mining 87


Introduction 87
TOPSIS for data mining 87
Steps of the TOPSIS data mining method 88
Dataset 90
TOPSIS model over training data 91
Model comparisons 93
Simulation of model results 96
Conclusions 97

10 Online Banking Efficiency and Risk Evaluation with Principal Component Analysis 99
Introduction 99
Data and variables 100
Results and analysis 101
PCA-DEA analysis 104
Risk factors 106
Conclusions 106

11 Economic Perspective 108


The traditional economic view 108
The human factor 111
Reality 111
Risk mitigation 114
Risk tolerance 114
Recent events 114
Conclusions 116

12 British Petroleum Deepwater Horizon 118


Introduction 118
Deepwater horizon 119
The Oil Pollution Act of 1990 120

Recovery factors in Macondo 120


Risk management factors 121
Conclusions 122

13 Bank Efficiency Analysis 124


Introduction 124
Data envelopment analysis (DEA) 126
Neural networks 127
The energy we use 129
Bank branch efficiency analysis 129
Short-term efficiency prediction 133
Conclusions 134

14 Catastrophe Bond and Risk Modeling 136


Introduction 136
Catastrophe risk instruments 136
Loss model 138
Demonstration of computation 138
Parameter estimation 140
Error analysis 143
Conclusions 144

15 Bilevel Programming Merger Analysis in Banking 145


Introduction 145
A conceptual banking chain with constrained resources 146
Mathematical model 148
Merger evaluation 150
A numerical example for incentive incompatibility 151
Case study: banking chain illustration 152
Post merger 157
Managerial insights 160
Conclusions 161

16 Sustainability and Risk in Globalization 163


Enterprise sustainability 163
Types of risk 165
Contexts of sustainable risk 166
Globalization 168
Supply chain risk management 170
Global business risks 171
Conclusions 173

17 Risk from Natural Disasters 175


Introduction 175
Preparing for high-impact, low-probability events 176
Be prepared 177
Risks and emergencies 177

Technical tools 178


Emergency management 179
Emergency management support systems 180
Conclusions 181

18 Pricing of Carbon Emission Exchange in the EU ETS 183


Introduction 183
Literature review 184
Price movements 186
Model, data and sample 188
Analysis of EUA logreturns 189
Time series test 191
GARCH effect test 192
Method selection 193
Estimation and forecasting 194
Conclusions 197

19 Volatility Forecasting of the Crude Oil Market 199


Introduction 199
Volatility models 200
Historical volatility 200
ARMA(R,M) 201
ARMAX(R,M, b) 201
ARCH(q) 201
GARCH(p,q) 202
EGARCH 202
GJR(p,q) 203
Regime-switching models 203
Data 204
Distribution analysis 204
Results 206
GARCH modeling 206
Markov regime-switching modeling 210
Conclusions 214

20 Confucius Three-stage Learning of Risk Management 215


Introduction 215
Self-cultivation 216
Family regulation 217
State harmonization 218
Conclusions 219

Notes 221
References 238
Index 253
List of Figures

3.1 CVaR and VaR 18


5.1 Flow chart and functional parts of our approach associating
information volume and volatility 33
5.2 Flow chart and functional parts of our approach to associate
information sentiment and volatility 35
5.3 Daily changing rates of the trading volumes of NASDAQ index 38
5.4 Sentiment calculation process for the current keyword w 40
5.5 Sliding time window learning and forecasting 41
5.6 Price volatility forecast result for company MDT over all the
time windows 47
5.7 Price volatility forecast result for company WAG over all the
time windows 47
5.8 Price volatility trend forecast accuracies for all the
time windows 47
6.1 Conceptual modeling of sentiment for volatility forecast 50
6.2 The Lexicon approach for sentiment classification 51
6.3 Volume of reviews distributed by time of week 52
6.4 Volume of reviews distributed by time of day 53
7.1 Two-stage DEA model 65
7.2 Proposed evaluation process 65
8.1 Lorenz curve, January–June 1999 sample 78
8.2 Lorenz curve, July–December 1999 sample 79
8.3 Lorenz curve, January–June 2000 sample 79
8.4 Performance comparison of three samples 79
8.5 Cumulative population distribution on all applicants 85
8.6 Interval population distribution on all applications 86
9.1 PolyAnalyst decision tree 93
9.2 See5 decision tree 94
10.1 3D plot of PCA analysis 103
10.2 Plots of principal component loadings in different DEA models 105
13.1 Backpropagation neural networks 128
14.1 Insurance payoffs (million RMB) for the Wenchuan earthquake 137
14.2 Frequency histogram of logarithm of earthquake loss with
normal fit 141
14.3 Historical vs. simulated data distribution of compound
Poisson model 142
14.4 Historical vs. simulated data of stochastic process 143


15.1 Supply chain model of the banking process with constrained resources 147
18.1 EUA price movement in Phase I 187
18.2 EUA price movement in Phase II 188
18.3 Logreturns of EUA in Phase I 189
18.4 Logreturns of EUA in Phase II 190
18.5 Series correlation of EUA logreturns in Phases I and II 192
18.6 Residual, standard deviation and logreturns series in Phase I 195
18.7 Residual, standard deviation and logreturns series in Phase II 196
19.1 NYMEX crude oil daily price movements 204
19.2 NYMEX crude oil daily logreturn 205
19.3 Normal distribution vs. t-distribution 205
19.4 Innovation, standard deviation, return 207
19.5 Simulation and forecasting 209
19.6 Transitional probabilities in Markov regime-switching with GED 213
19.7 Returns of two regimes in historical time series 213
19.8 Price of two regimes in historical time series 214
List of Tables

1.1 COSO ERM cube 4


1.2 Risk management responsibilities 8
2.1 Sarbanes–Oxley act elements 14
4.1 Real estate cycle 25
4.2 Northern Rock events 28
4.3 Northern Rock retail deposits 29
4.4 Northern Rock holdings before and after run 29
4.5 Key events for AIG 30
5.1 The information volumes calculated from Google Finance 34
5.2 A snippet of news entries for the companies ADCT, S and MRO 36
5.3 The eight word sets we use in this chapter to calculate the
keyword sentiment 39
5.4 Predicted values of the average forecast error and the volatility
trend forecast accuracy ratio 44
5.5 Forecast results for 177 listed companies during the year 2007 45
6.1 Selecting 1-grams as features 51
6.2 Relative accuracies by sentiment 54
7.1 BCC-efficient scores on performance 67
7.2 BCC-efficient scores on the level of returns per unit of risk 68
7.3 Ranking of the BCC-efficient scores of total efficiency and
investing risk 69
7.4 Ranking of the BCC-efficient scores of whole model 70
8.1 Balanced scorecard perspectives, goals, and measures 74
8.2 Model risk events in banking 75
8.3 Scorecard performance validation, January–June 1999 76
8.4 Scorecard performance validation, July–December 1999 76
8.5 Scorecard performance validation, January–June 2000 77
8.6 Summary of performance samples 78
8.7 Population stability, January–June 1999 81
8.8 Population stability, July–December 1999 82
8.9 Population stability, January–June 2000 83
8.10 Total population stability index 84
9.1 Independent variables for Canadian banking data set 91
9.2 Standardized data regression 92
9.3 Coincidence matrix – PolyAnalyst decision tree 94
9.4 Coincidence matrix – See5 decision tree 95
9.5 Coincidence matrix – TOPSIS L1 model 95


9.6 Coincidence matrix – TOPSIS L2 model 95


9.7 Coincidence matrix – TOPSIS L∞ model 95
9.8 Comparison of model results 96
9.9 Simulation results 96
10.1 Online banking DEA variables 101
10.2 Online banking data 101
10.3 DEA combinations and their efficiencies 102
10.4 Maximum component loadings matrix in different models 103
10.5 Integrated PCA-DEA score 104
10.6 Variance explained in integrated PCA-DEA 105
10.7 Multivariate linear regression analysis – DV bank revenue 106
11.1 Realms of uncertainty 109
11.2 Evolution of risk management 110
12.1 BP risk factors 119
12.2 Factors in recovery 120
13.1 Summary statistics of data 129
13.2 Estimated neural network parameters 130
13.3 Number of branches corresponding to each efficiency interval 131
13.4 Statistical results corresponding to each efficiency interval 131
13.5 Efficiency score distribution 132
13.6 Implication of slight efficiency improvement on branch costs 132
13.7 Regression analysis for branch efficiency prediction using
October data 133
13.8 Number of branches in each efficiency interval 133
13.9 DEA-NN3 results 133
13.10 Regression analysis for short-term efficiency prediction 134
13.11 Comparison of best-practice branches 134
13.12 Comparison of DEA and DEA-NN to efficiency measurement 135
14.1 Catastrophe loss models 138
14.2 Chinese earthquake loss data, 1966–2008 139
14.3 Results of ADF testing 140
14.4 Results of parameter estimation 141
14.5 Results of K–S test 143
14.6 Error analysis 144
15.1 Input and output data for the 8 branches in the
numerical example 151
15.2 Raw data for 30 branches 153
15.3 Profit efficiency values 154
15.4 A comparison with existing DEA and SFA 156
15.5 Correlations 157
15.6 Statistics under both CRS and VRS assumptions 158
15.7 Top ten promising mergers under UL game structure 158

15.8 Top ten promising mergers under LL game structure 159


18.1 Descriptive statistics of EUA logreturns in Phase I 190
18.2 Descriptive statistics of EUA logreturns in Phase II 190
18.3 The ADF test of EUA logreturns in Phase I 191
18.4 The ADF test of EUA first-order difference in Phase I 191
18.5 The ADF test of EUA logreturns in Phase II 191
18.6 The ADF test of EUA first-order difference in Phase II 192
18.7 The ARCH LM test for EUA logreturns in Phases I and II 193
18.8 The Akaike info criterion (AIC) and Schwarz criterion (SC)
for the estimated model in Phase I 193
18.9 The Akaike info criterion (AIC) and Schwarz criterion (SC)
for the estimated model in Phase II 193
18.10 Estimation of EUA logreturns in Phase I 194
18.11 EUA Logreturns forecasting in Phase I 195
18.12 Estimation of EUA logreturns in Phase II 196
18.13 EUA Logreturn forecasting in Phase II 197
19.1 Statistics on the daily crude oil index changes,
February 2006–July 2009 204
19.2 Daily crude oil index logreturn statistics,
February 2006–July 2009 205
19.3 GARCH(1,1) estimation using the t-distribution 206
19.4 Various GARCH modeling characteristics 208
19.5 Markov regime-switching computation example 211
19.6 Markov regime-switching using Hamilton’s (1989) model 211
19.7 Markov regime-switching using t-distribution 212
19.8 Markov regime-switching using GED 212
20.1 Risk management links 219
Preface

The importance of financial risk was revealed by the traumatic events of 2007
and 2008, when the global financial community experienced a real estate
bubble collapse from which (at the time of writing) most of the world's
economies are still recovering. Human investment activity seems determined to
create bubbles, despite our long history of suffering.1 Financial investment
seems to be a never-ending game of greedy players seeking to take advantage
of each other, which Adam Smith assured us would lead to an optimal economic
system. It is interesting that we pass through periods of trying one
system, usually persisting until we encounter failure, and then moving on to
another. The United States went through a long stretch where regulation of
financial institutions was considered paramount, beginning with the Great
Depression of the 1930s. When relative prosperity was experienced, the 1980s
saw a resurgence of deregulation, culminating in the Gramm–Leach–Bliley Act that
dismantled much of the post-Depression regulation in favor of a free-wheeling
economic system. Some post-2008 theorists have found evidence that this
deregulation went too far. It is notable that Canada, with an economy highly
integrated with that of the United States (but with more consistent regulation),
experienced none of the traumatic real estate issues that plagued the US.
We do not pretend to offer solutions to financial economic problems. We do,
however, purport to offer a variety of analytic models that can be used to aid
financial decision-making. These models are presented in the spirit that they
are tools, which can be used for good or bad. But we do contend that investigating
these tools is important in helping to better understand our global
inter-connected economy, with its financial opportunities and risks. The
responsibility for investment decisions remains with human investors.
This book presents a number of operations research model applications to
financial risk management. It is based on a framework of four perspectives, each
with appended current examples, with separate chapters based on published
models designed to support financial risk management. The four perspectives
used are accounting (explaining the COSO framework in Chapter 1), finance
(reviewing some basic conceptual tools in Chapter 3), economic (risk theory in
Chapter 11), and sustainability (Chapter 16). Current issues related to each of
these perspectives are appended. Chapter 2 supplements the overview
introductory chapter by discussing the ethical risk issues highlighted by the Enron
case. Chapter 4 supplements the financial perspective chapter with a review of
the 2008 real estate crash, both in the US and in Europe. Chapter 12 supplements
the economic perspective chapter with a review of the risks associated
with the British Petroleum oil spill in the Gulf of Mexico. Chapter 17 supplements
the sustainability chapter with reviews of some natural disaster events.
Given this framework with current examples, the focus of the book is on
quantitative models presented to support risk management in finance. These
models include sentiment analysis, data envelopment analysis, catastrophe
bond modeling, chance constrained optimization, bank credit scoring,
multiobjective credit scoring, and advanced time series modeling. Chapters 5 and
6 present sentiment analysis models, one of investment analysis, the second
of stock price volatility. Chapter 7 presents a data envelopment analysis risk
scoring model of internet stocks. Chapter 8 uses statistical credit scorecard
modeling, while Chapter 9 applies a TOPSIS credit-scoring model supplemented
with simulation. Chapter 10 utilizes principal components analysis
to make DEA more efficient in analyzing the efficiency of online banking.
To supplement the economic perspective, another data envelopment analysis
model is used to assess bank branch efficiency in Chapter 13, whereas
Chapter 14 describes catastrophe bond loss modeling, and Chapter 15 bilevel
mathematical programming in bank merger analysis. The sustainability
perspective is supported by the use of GARCH-type forecasting models of carbon
emissions markets in Chapter 18, while similar tools are used to forecast crude
oil prices in Chapter 19. Chapter 20 provides a summary of how risk management
can be studied using a Confucius three-stage learning approach. Thus
the book provides a variety of tools for assessing different types of financial
risk situations.

Note
1. L. Laeven and F. Valencia (2008) ‘Systemic banking crises: a new database’,
International Monetary Fund, Working Paper WP/08/224.
Acknowledgements

Chapter 5 follows the work of Li et al. (2009)1 on financial risk analysis. We
acknowledge Taylor & Francis for granting us rights to revise and publish this
work in a book format. In this chapter we consider a simplified version of
Li et al. (2009). Readers may refer to Li et al. (2009) for further theoretical and
modeling issues.
Chapter 6 follows the work of Wu et al. (2014)2 on financial risk analysis. In
this chapter we consider a simplified version of Wu et al. (2014). Readers may
refer to Wu et al. (2014) for theoretical and modeling issues.
Chapter 7 follows the work of Ho, Wu and Olson (2009).3 We acknowledge
World Scientific for granting us rights to revise and publish this work in a book
format. Readers may refer to Ho, Wu and Olson (2009) for further theoretical
and modeling issues.
Chapter 8 follows the work of Wu and Olson (2010).4 We acknowledge
Palgrave Macmillan for granting us rights to revise and publish this work
in a book format. Readers may refer to Wu and Olson (2010) for further
theoretical and modeling issues.
Chapter 9 follows the work of Wu and Olson.5 We acknowledge Idea Group
Inc. for granting us rights to revise and publish this work in a book format.
Readers may refer to Wu and Olson (2006) for further theoretical and modeling
issues.
Chapter 10 follows the work of Wu and Wu (2010)6 on banking operations.
We acknowledge Emerald for granting us rights to revise and publish this work
in a book format.
Chapter 13 follows the work of Wu et al. (2006)7 on financial risk analysis.
We acknowledge Elsevier for granting us rights to revise and publish this work
in a book format. Readers may refer to Wu et al. (2006) for further theoretical
and modeling issues.
Chapter 14 follows the work of Wu and Zhou (2010)8 on Cat bond and
financial risk analysis. We acknowledge Taylor & Francis for granting us rights
to revise and publish this work in a book format. Readers may refer to Wu and
Zhou (2010) for further theoretical and modeling issues.
Chapter 15 follows the work of Wu et al. (2014)9 on financial merger analysis.
We acknowledge Wiley and the Production and Operations Management
Society for granting us rights to revise and publish this work in a book format.
Readers may refer to Wu et al. (2014) for further theoretical and modeling
issues.


Chapter 18 follows the work of Chen et al. (2010)10 on carbon emission
pricing. We acknowledge Taylor & Francis for granting us rights to revise and
publish this work in a book format. In this chapter we consider a simplified
and demonstration version of Chen et al. (2010).
Chapter 19 follows the work of Luo et al.11 on risk modeling in the crude oil
market. We acknowledge Emerald for granting us rights to revise and publish
this work in a book format.
Chapter 20 follows the work of Wu.12 We acknowledge Inderscience for
granting us rights to revise and publish this work in a book format.

Notes
1. N. Li, X. Liang, X. Li, C. Wang, Desheng D. Wu (2009) ‘Network environment and
financial risk using machine learning and sentiment analysis,’ Human and Ecological
Risk Assessment, 15(2): 227–252.
2. D. Wu, L. Zheng, D.L. Olson (2014) ‘A decision support approach for online stock
forum sentiment analysis,’ IEEE Transactions on Systems Man and Cybernetics, 44(8):
1077–1087.
3. Chien-Ta Bruce Ho, Desheng Dash Wu, David L. Olson (2009) ‘A risk scoring model
and application to measuring internet stock performance,’ International Journal of
Information Technology and Decision Making, 8(1): 133–149.
4. Desheng Dash Wu, David L. Olson (2010) ‘Enterprise risk management: coping
with model risk in a large bank,’ Journal of the Operational Research Society, 61(2):
774–787.
5. D. Wu, D.L. Olson (2006) ‘A TOPSIS data mining demonstration and application to
credit scoring,’ International Journal of Data Warehousing & Mining, 2(3): 1–10.
6. D. Wu, D.D. Wu (2010) ‘Performance evaluation and risk analysis of online banking
service,’ Kybernetes, 39(5): 723–734.
7. D. Wu, Z. Yang, L. Liang (2006) ‘Using DEA-neural network approach to evaluate
branch efficiency of a large Canadian bank,’ Expert Systems with Applications,
108–115.
8. D. Wu, Y. Zhou (2010) ‘Catastrophe bond and risk modeling: a review and calibration
using Chinese earthquake loss data,’ Human and Ecological Risk Assessment,
16(3): 510–523.
9. D. Wu, C. Luo, H. Wang, J.R. Birge (2014) ‘Bilevel programming merger evaluation
and application to banking operations,’ Production and Operations Management. DOI:
10.1111/poms.12205. Accepted and in press.
10. X. Chen, Z. Wang, D.D. Wu (2013) ‘Modeling the price mechanism of carbon
emission exchange in the European Union emission trading system,’ Human and
Ecological Risk Assessment, 19(5): 1309–1323.
11. C. Luo, L.A. Seco, H. Wang, D.D. Wu (2010) ‘Risk modeling in crude oil market: a
comparison of Markov switching and GARCH models,’ Kybernetes, 39(5): 750–769.
12. D.D. Wu (2014) ‘An approach for learning risk management: confucianism system
and risk theory,’ International Journal of Financial Services Management. Accepted and
in press.
1
Enterprise Risk Management

Introduction

Living and working in today’s environment involves many risks. The processes
used to make decisions in this environment should consider the need both
to keep people gainfully employed (through increased economic activity) and
to protect humanity from threats arising from human activity. Terrorism led to
the gas attack on the Japanese subway system in 1995, to 9/11 in 2001, and to
the bombings of the Spanish and British transportation systems in 2004 and
2005 respectively. But nature has been far more deadly, with hurricanes in
Florida, tsunamis in Japan, earthquakes in China, and volcanoes in Iceland.
These locations only represent recent, well-publicized events. Nature can strike
at us anywhere. We need to consider the many risks that exist, and to come up
with strategies, controls, and regulations that accomplish a complex combination
of goals.
Risks can be viewed as threats, but business exists to cope with risks. No one
should expect compensation or profit without taking on some risk. The key
to successful risk management is to select those risks that one is competent to
deal with, and to find some way to avoid, reduce, or insure against those risks
not in this category. Consideration of risk has always been part of business,
manifesting itself in the growth of coffee houses such as Lloyd’s of London
in the 17th century, spreading risk related to cargoes on the high seas. The
field of insurance developed to cover a wide variety of risks, related to external
and internal risks covering natural catastrophes, accidents, human error, and
even fraud. Enterprise risk management (ERM) is a systematic, integrated
approach to managing all risks facing an organization. It focuses on board
supervision, aiming to identify, evaluate, and manage all major corporate risks
in an integrated framework. The board is responsible for providing strategic
input, identifying performance objectives, making key personnel appointments,
and providing management oversight. Enterprise risks are inherently
part of corporate strategy. Thus consideration of risks in strategy selection can
be one way to control them. ERM can be viewed as top-down by necessity for
this reason.

Definition

Risk management can be defined as the process of identification, analysis and
either acceptance or mitigation of uncertainty in investment decision-making.
Once risk has been processed in this manner, risk management seeks coordinated
and economical application of resources to control the probability and/
or impact of adverse events, and to monitor the effectiveness of actions taken.1
Risk management is about managing uncertainty related to a threat. ERM has
been recognized as being one of the most important issues in business management
in the last decade. There are systematic variations in ERM practices in
the financial services industry. There is a need to monitor and address all risks
inherent in organizational operations as necessary to avoid economic catastrophe.
There is a need to consider all corporate risks within a single ERM
framework in order to gain long-run competitive advantage.
In the US, recent crises include the 2007 subprime crisis of the banking
industry, the Fannie Mae and Freddie Mac crisis in secondary US mortgage
markets, the failure of Lehman Brothers, Merrill Lynch’s takeover by Bank of
America and insurance industry giant AIG applying for emergency financial
support from the Federal Reserve. More recently, the H1N1 virus sharpened
awareness of risk response systems worldwide. Risks can arise in many facets
of business. Global economic crisis risks have been profound and widespread over
the last decade. Businesses in fact exist to cope with risk in their area of
specialization. But chief executive officers are responsible for dealing with any risk
that fate throws at their organization.
Risk management began in the financial disciplines. Financial risk management
has focused on banking, accounting, and finance. There are many good
organizations that have done excellent work to aid organizations dealing with
those specific forms of risk, applying many types of models. Risk management
can also be applied in other areas, to include accounting. Risk management can
be defined as the process of identification, analysis and either acceptance or mitigation
of uncertainty in investment decision-making. Risk management is about
managing uncertainty related to a threat. Traditional risk management focuses
on risks stemming from physical or legal causes such as natural disasters or fires,
accidents, death and lawsuits. Financial risk management deals with risks that
can be managed using traded financial instruments. The most recent concept,
enterprise risk management, provides a tool to enhance the value of systems,
both commercial and communal, from a systematic point of view. Operations
research (OR) is always useful for optimizing risk management.

Accounting perspective

Accounting responsibilities involve auditing organizational operations to
provide stakeholders with accurate, transparent information about finances. This
includes assuring that a sound process is in place to detect, deal with, and
monitor risk. The accounting approach to risk management is centered to a
large degree on the standards promulgated by the Committee of Sponsoring
Organizations of the Treadway Commission (COSO), generated by the Treadway
Commission beginning in 1992. The Sarbanes–Oxley Act of 2002 outlines
regulatory requirements for publicly traded firms to establish, evaluate, and
assess the effectiveness of internal accounting controls. SOX has had a synergistic
impact with COSO. While many companies have not used it, COSO
offers a framework for organizations to manage risk.2 COSO objectives are:

1. Effectiveness and efficiency of operations
2. Reliability of financial reporting
3. Compliance with applicable laws and regulations.

To attain these objectives, COSO identifies the components of internal
control:

• Control environment
• Risk assessment
• Control activities
• Information and communication
• Monitoring.

COSO was found to be used to a large extent by only 11% of the organizations
surveyed, and only 15% of the respondents believed that their internal
auditors used the COSO 1992 framework in full. Chief executive officers and
chief financial officers are required to certify effective internal controls. These
controls can be assessed against COSO. This benefits stakeholders. Risk management
is now understood to be a strategic activity, and risk standards can
ensure uniform risk assessment across the organization. Resources are more
likely to be devoted to the most important risks, and better responsiveness to
change is obtained.

The COSO framework


In 2004, COSO published an Enterprise Risk Management – Integrated Framework.3
COSO provides a framework to manage enterprise uncertainty, expressed in
their ERM Cube. The cube considers dimensions of objective categories, activities,
and organizational levels, as shown in Table 1.1.

Table 1.1 COSO ERM cube1

Categories: Strategic; Operations; Reporting; Compliance
Activities: Internal environment; Objective setting; Event identification; Risk
assessment; Risk response; Control activities; Information & communication;
Monitoring
Levels: Entity level; Division; Business unit; Subsidiary

Note: 1COSO (2004). Enterprise Risk Management – Integrated Framework: Executive
Summary. September.

This framework provides key principles and concepts, a common language,
and clear direction and guidance.4

Categories
The strategic level involves overarching activities such as organizational governance,
strategic objectives, business models, consideration of external forces,
and other factors. The operations level is concerned with business processes,
value chains, financial flows, and related issues. Reporting includes information
systems as well as means to communicate organizational performance on multiple
dimensions, to include finance, reputation, and intellectual property.
Compliance considers organizational reporting on legal, contractual, and other
regulatory requirements (including environmental).

Activities
The COSO internal control process consists of a series of actions.5

1. Internal Environment: The process starts with identification of the organizational
units, with entity level representing the overall organization. The tone
is set by the top of the organization. This includes actions to develop a risk
management philosophy, create a risk management culture, and design a
risk management organizational structure.
2. Objective Setting: Each participating division, business unit, and subsidiary
would then identify business objectives and strategic alternatives, reflecting
vision for enterprise success. These objectives would be categorized as stra-
tegic, operations, reporting, and compliance. These objectives need to be
integrated with enterprise objectives at the entity level. Objectives should
be clear and strategic, and should reflect the entity-wide risk appetite.
3. Event Identification: Management needs to identify events that could
influence organizational performance, either positively or negatively.
Risk events are identified, along with event interdependencies. (Some
events are isolated, while others are correlated.) Measurement issues
associated with methodologies or risk assessment techniques need to be
considered.
4. Risk Assessment: Each of the risks identified in Step 3 is assessed in terms
of probability of occurrence, as well as the impact each risk will have on
the organization. Thus both impact and likelihood are considered. Their
product provides a metric for ranking risks (a small worked sketch follows
this list). Assessment techniques can include point estimates, ranges, or
best/worst-case scenarios.
5. Risk Response: Strategies available to manage risks are developed. These
can include risk acceptance, risk avoidance, risk sharing, or risk reduction.
Options have been summarized into the four Ts:

a. Treat a risk: Take direct action to reduce impact or likelihood.


b. Terminate a risk: Discontinue activity exposing the organization to the
risk.
c. Transfer a risk: Insurance or contracts.
d. Take (or tolerate) a risk: For areas of organizational expertise, they
may decide to accept risk with the idea that they are expert at dealing
with it.

Avoidance is akin to terminating, reduction to treating, sharing to transferring,
and acceptance to taking (tolerating) a risk. Risks are necessary to lead to
situations likely to offer profit, but risks should be taken only after informed
business analysis.
The effects of risk response on other risks should be considered.

6. Control Activities: Controls needed to mitigate identified risks are selected.
Implicit in this step is assessment of the costs of each risk response available,
and consideration of activities to reduce risks.
7. Information and Communication: Control and other risk response activities are
put in place to ensure appropriate action is taken within the organization.
Organizations need to ensure that information systems can measure and
report risk accurately. ERM effectiveness and cost should be communicated
to stakeholders.
8. Monitoring: As part of an ongoing process, the effectiveness of plan implementation
is monitored, feeding back to the control step if problems are
encountered. Monitoring includes risk evaluations comparing actual event
occurrences with prior estimates of probability, frequency, and cost.
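
As one concrete illustration of the likelihood–impact ranking described in the Risk Assessment step, the short Python sketch below scores a few risk events and sorts them. The risk events, the 1–5 scales, and the resulting scores are hypothetical, chosen only to show the arithmetic; COSO does not prescribe any particular scale.

```python
# Minimal sketch of the impact x likelihood metric used to rank risks.
# The events and the 1-5 scales below are hypothetical illustrations.

risks = [
    # (risk event, likelihood 1-5, impact 1-5)
    ("Loan portfolio credit losses", 4, 5),
    ("Data center outage",           2, 4),
    ("Regulatory non-compliance",    3, 3),
    ("Key staff turnover",           4, 2),
]

# Score each risk as the product of likelihood and impact.
scored = [(name, likelihood * impact) for name, likelihood, impact in risks]

# Rank from highest to lowest so the most serious risks surface first.
for name, score in sorted(scored, key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score}")
```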

Risk appetite
Risks are necessary to do business. Every organization can be viewed as a specialist
at dealing with at least one type of risk. Insurance companies specialize in
assessing the market value of risks, and offer policies that transfer special types
of risks to themselves from their clients at a fee. Banks specialize in the risk of
loan repayment, and survive when they are effective at managing these risks.
Construction companies specialize in the risks of making buildings or other
facilities. However, risks come at organizations from every direction. Those risks
that are outside of an organization’s specialty are outside that organization’s
risk appetite. Management needs to assess risks associated with the opportunities
presented to it, and accept those that fit its risk appetite (or organizational
expertise), and offload other risks in some way (see step 5, Risk Response, above).
Risk appetite is the amount of risk that an organization is willing to accept
in pursuit of value. Each organization pursues various objectives to add value,
and should broadly understand the risk it is willing to undertake in doing so.6
COSO recommends three steps to determine risk appetite:

1. Developing risk appetite requires consideration of the current level and
distribution of risks the organization faces, assessment of the level of
risk that the organization can handle, the acceptable level of risk that the
organization is willing to accept, and attitude concerning growth, risk, and
return.
2. Once the risk appetite appropriate to the organization is agreed upon, it
must be communicated throughout the organization.
3. Risks affecting the organization need to be monitored in terms of quantitative
and qualitative measures.

Examples of specific measures for a business might be:

• How customer requirements are being met
• Shareholder expectations
• Strategic initiatives and growth
• Financial reporting
• Operational performance
• Regulatory compliance
• Employee health and safety
• Environmental responsibility.

Matyjewicz and D'Arcangelo gave simple examples of how risk assessment
could be applied. First, a matrix of risk level (high or low) and control strength
(weak or strong) could be generated for each identified risk. Risk impact could
be further categorized as critical, significant, moderate, low, or insignificant,
while risk probability could have categories of highly probable, probable, likely,
unlikely, or remote.
The likely actions of internal auditing were identified. Those risks involving
high risk and strong controls would call for checking that inherent risks were
in fact mitigated by risk response strategies and controls. Risks involving high
risk and weak controls would call for checking for adequacy of management’s
action plan to improve controls. Those risks assessed as low call for internal
auditing to review accuracy of managerial impact evaluation and risk event
likelihood.
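
The matrix logic just described can be written down as a simple lookup from (risk level, control strength) to the internal audit action suggested above. The sketch below is one possible encoding; the function name and the lower-case category labels are illustrative rather than taken from the source.

```python
# Sketch of the risk level / control strength matrix described above,
# mapping each cell to the suggested internal audit action.

AUDIT_ACTIONS = {
    ("high", "strong"): "Check that inherent risks are mitigated by risk "
                        "response strategies and controls.",
    ("high", "weak"):   "Check adequacy of management's action plan to "
                        "improve controls.",
    ("low", "strong"):  "Review accuracy of managerial impact evaluation "
                        "and risk event likelihood.",
    ("low", "weak"):    "Review accuracy of managerial impact evaluation "
                        "and risk event likelihood.",
}

def audit_action(risk_level: str, control_strength: str) -> str:
    """Return the suggested internal audit action for one identified risk."""
    return AUDIT_ACTIONS[(risk_level.lower(), control_strength.lower())]

print(audit_action("high", "weak"))
```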

ERM process

COSO provides a great deal of help in describing a process for risk management
implementation.7 This process should be continuous, supporting the organization's
strategy. One possible list of steps could be:

1. Risk Identification: identification of what could go wrong without controls,
considering key organizational areas such as mission, customers, people,
physical assets, and financial assets.
2. Risk Analysis: consideration of risk likelihood and impact, and ranking them
accordingly.
3. Response to Significant Risks: toleration, treatment, transfer, or termination.
4. Identification of Controls: where risks can be effectively dealt with; and the
controls needed to effectively respond.
5. Reporting and Monitoring: documentation of risk evaluation, events, impacts,
and effectiveness of strategies.

It is paramount that a climate of dealing with risks be spread throughout the
organization. This can be aided by specific identification of roles, responsibilities,
and communications channels, along with clearly stated risk strategy
and appetite. Protocols in the form of risk guidelines should include rules and
procedures along with specific risk management methodologies, tools, and
techniques. It is also important to clearly identify responsibilities. COSO
provides typical risk responsibilities, as shown in Table 1.2.

Implementation issues
Past risk management efforts have been characterized by bottom-up implementation.
But effective implementation calls for top-down management,
as do most organizational efforts. Without top support, lack of funding will
starve most efforts. Related to that, top support is needed to coordinate efforts
so that silo mentalities do not take over. COSO requires a holistic approach. If
COSO is adopted within daily processes, it can effectively strengthen corporate
governance. Another important issue is the application of sufficient resources
to effectively implement ERM.

Table 1.2 Risk management responsibilities1

CEO/Board: Set strategic approach and risk appetite; establish risk management
structure; identify most significant risk; crisis management.
Unit managers: Establish risk awareness culture; set performance targets; ensure
implementation; identify and report changes.
Individuals: Understand RM processes; report insufficient controls; report loss
events and close calls; cooperate with incident investigations.
Risk manager: Develop current risk management policy; document policies and
structures; coordinate internal controls; compile data and reports.
Internal audit manager: Develop risk-based audit program; audit risk processes;
provide assurance of risk management; report internal control efficiency/effectiveness.

Note: 1The Association of Risk Managers (2010). A Structured Approach to Enterprise
Risk Management (ERM) and the Requirements of ISO 31000. COSO.

One view of ERM, parallel to the capability maturity model (CMM) used in software
engineering, is as follows.8

Level 1: Compliance – review of policy and procedure with a checklist orientation,
providing low value to the organization in terms of ERM.
Level 2: Control – implementation of control frameworks, still using a checklist
orientation, also providing low value to organizations.
Level 3: Process – taking a process view across departments, focusing on effectiveness
as well as efficiency, to include process mapping.
Level 4: Risk Management – use of shared risk language, with the ability to prioritize
efforts based on process mapping.
Level 5: Enterprise Risk Management – the Nirvana of holistic risk reviews tied
to entity strategy based on common risk language, viewing risk management
as a process, providing high value to organizational risk management.

Risk management modeling

There have been many models applied to managing risks. There are many
published research papers proposing optimization modeling for ERM. We
feel that optimization of supply chain systems involving risk is dangerous,
in that we think there is a fundamental conflict between optimal planning
of complex systems (which seeks to eliminate all excess resources) and the
capability of a system to deal with risk. Think of Chicago’s O’Hare Airport.
Airlines schedule it to the optimum, seeking maximum revenue through
service to the most customers feasible. But anyone who has traveled through
O’Hare knows that the system is saturated – the slightest drop of rain results
in a cascade of cancelled flights throughout the US and Canada. The airlines
have dealt with their risk – they can pay inconvenienced customers the
minimum allowed by regulation. The customers on the other hand travel
through O’Hare at their own risk (at least with respect to on-time arrival at
their destination). Should the airlines back off their optimal schedule, they
would have greater slack capacity to absorb the unexpected, which occurs
whenever the weather in Chicago turns the least bit nasty, which is
almost always.

Book outline

Chapter 2 provides a review of the Enron event, demonstrating why SOX
happened. Chapter 3 will focus on financial risks, not restricted to market risk,
credit risk, operational risk, and liquidity risk. Financial risk has
been controlled through hedge funds and other tools over the years, often by
investment banks. Chapter 4 discusses the 2007–2008 real estate financing
crisis. Chapter 11 discusses the economic perspective, realizing that many risks
can be prevented, or their impact reduced, through loss-prevention and control
systems, leading to a broader view of risk management. Chapter 12 demonstrates
its importance by reviewing the BP oil spill. Chapter 16 looks at the wider
view of sustainability and risk in globalization. Chapter 17 reviews some of the
risks to business operations arising from natural disaster, calling for insurance.
Chapter 20 presents a Confucian perspective on risk management.
Section II of the book presents a variety of quantitative modeling techniques
proposed for ERM. Models provide a means to quantify risks, and to
aid decision-making concerning issues with complex interactions. Effective
management of risks inherently involves tradeoffs. Optimization models may
identify solutions with the greatest expected short-term profits, but these
solutions also tend to have high levels of risk, especially in the longer term.
Simulation models enable consideration of uncertainties, as long as they are
expressed in the form of probability distributions. Multiple criteria models
focus on the analysis of tradeoffs. Once models are used to examine expected
relationships between causes and effects, risk reduction and management
can be more effective. The usual forms of management of risks tend to be
based upon either financial models or frameworks. Risk management
modeling tools demonstrated include GARCH, data mining through neural
networks, bilevel programming, efficiency analysis through DEA, sentiment
analysis, and stochastic simulation. These tools offer advanced techniques
available to aid decision-making under risk.
2
Enron

One of the primary reasons for the current emphasis on ethical business
practice has been the history of the Enron Corporation. Enron was founded
in 1985 to manage a natural gas pipeline. It expanded operations to include
trading not only natural gas but also other energy commodities, including
electricity. This trading included derivatives on the price of gas, to hedge
risk. Enron participated in a joint venture creating an online trading operation
offering options and other derivatives to traders in the gas industry. By 2000 it
was well known as a trading pioneer in that field, and this led to its stock doing
well on Wall Street; in 2001 it was rated as 7th on the Fortune 500.

Risk management

Enron had a strong reputation for its use of sophisticated financial risk management
tools. It was in a business with long-term fixed commitments that
needed hedging to survive the inevitable fluctuations in energy prices. The
sophisticated tools included derivatives and transfer of risk to special entities,
but the problem was that Enron owned these special entities. Thus it really
didn’t transfer risk, but hedged with itself. Enron accounting practices were
creative, hiding losses and shuffling debts through complex trades,1 but such
practice was actually quite common in the deregulated fervor of the 1990s.2
Enron also became involved in trading electricity in California, and was
caught manipulating that market.3 Late in 2001 the Securities and Exchange
Commission began investigating Enron, and Enron stock dropped. Enron filed
for Chapter 11 bankruptcy on December 2, 2001. During this fall, Enron executives
off-loaded stock and gave themselves large compensation packages while
urging lower-level employees to retain their stock. Many high-profile court
trials ensued, with criminal convictions of CEO Kenneth Lay and President
Jeffrey Skilling in 2006. Lay was convicted in May 2006 on five counts of
securities fraud, wire fraud, and conspiracy to commit the same, but he did not
go to prison as he died of a heart attack in July, before sentencing. Skilling was
convicted in May 2006 of 28 counts of securities fraud, wire fraud, conspiracy
to commit same, false statements, and insider trading. He was sentenced to
24 years and 4 months of prison, and ordered to provide $45 million in restitution
to victims. Skilling entered prison in 2006, with appeals pending; in
June 2013, his sentence was reduced to 14 years in prison.4

California electricity
In 1996, California modified its controls on its energy markets, seeking to
increase competition. A spot market for energy began operation in April 1998.
In May 2000, significant price increases for energy were experienced, due to
shortage of supply. This shortage has been attributed to a cap on retail electricity
prices, along with market manipulation and illegal shutdowns of pipelines
by Enron. On June 14, 2000, California endured multiple blackouts on a
large scale. Drought, as well as delays in approval for new energy plants, also
played a role. The fiasco contributed to the collapse of Governor Gray Davis's political
popularity. In August 2000, San Diego Gas & Electric Company filed a complaint
about electricity market manipulation. January 17 and 18, 2001, saw more
blackouts, and Governor Davis declared a state of emergency. More blackouts
occurred on March 19 and 20. Pacific Gas & Electric filed for bankruptcy in
April, and there were more blackouts on May 7 and 8. Energy prices stabilized
in September. In December, following Enron's bankruptcy, allegations
were made that Enron had manipulated energy prices, and the Federal Energy
Regulatory Commission began investigation in February 2002.
The Wall Street Journal published a series of studies of the Enron case. Reasons
for the fall of Enron included:5

1. Bad investments (12/4/2001)
2. Loss of trust in Enron Online traders (12/6/2001)
3. Bad hedging, bad trading, bad assets (12/26/2001)
4. Efforts to expand into market with no experience (12/31/2001)
5. Creation of a complex structure of subsidiaries and financial instruments
evading clear explanation to investors (1/11/2002)
6. Accounting statements not providing investors with a complete picture nor
a fair assessment of risks (1/16/2002)
7. Adoption of self-dealing partnerships with accountants and lawyers covering
up illegal actions (1/21/2002).

The worst of Enron’s egregious activities included its California energy market
manipulation, which gouged California electricity customers of billions, as
well as betrayal of its own employees, who were encouraged to invest their
retirement funds in Enron stock while the executive board sold them out.

Accounting impact
The first manifestation of change after the fall of Enron was the fall of its audit
firm, Arthur Andersen, in June 2002. Andersen was convicted of obstruction of
justice, for shredding documents related to Enron. This barred it from auditing
US and foreign-based public companies, leaving 1085 firms in need of a new
auditor. The Sarbanes–Oxley Act was passed in 2002, requiring additional
auditing requirements. The remaining Big Four audit firms were flooded with
business by these two events, leading them to drop 500 clients between 2002
and 2005 (and also enabling them to raise audit fees).6

SOX

The Sarbanes–Oxley Act of 2002 is a federal law setting standards for US public
company boards. It was enacted in reaction to corporate and accounting scandals
including Enron, Tyco International, Adelphia, Peregrine Systems, and
WorldCom, each of which involved company collapses that led to heavy losses
by investors.
SOX provided reforms to include requirements for certification, criminal
penalties to chief officers of offending corporations, internal controls, independent
audit committees, and regulations of disclosure. The Sarbanes–Oxley
Act provides for a number of sections displayed in Table 2.1.
Sarbanes–Oxley (SOX) has been heavily studied. On the positive side, evidence
of increased transparency of firms under SOX has been reported. The
costs of compliance were found to be 0.043% of revenue in 2006, and 0.036%
of revenue in 2007, with costs much higher for decentralized companies with
multiple divisions.7 Surveys have found that there has been a positive impact
on investor confidence in the reliability of financial statements and in fraud
prevention. However, most survey respondents have seen the benefits to be
less than the cost. Costs incurred are for external auditor fees, insurance for
officers and directors, board compensation, lost productivity, and legal costs.
Reactions of firms seeking to avoid SOX have included going private, delisting from stock
exchanges, and staying small enough to avoid its requirements.8 Smaller international
companies have been found to prefer listing on UK stock exchanges
rather than US stock exchanges, indicating a cost impact on smaller firms.9
Studies have found that those firms that use avoidance tend to be small, less
financially endowed, with weak governance and weak performance. Larger
firms in the US have been found to reduce risk-taking (in terms of capital
expenditure, R&D, and variance in cash flow and returns).10

Table 2.1 Sarbanes–Oxley act elements

Accounting oversight board: Board created to provide independent oversight of
public accounting firms offering audit services.
Auditor independence: Standards for external auditor independence to limit
conflicts of interest.
Corporate responsibility: Senior executives are individually responsible for
financial report accuracy and completeness.
Financial disclosure: Reporting requirements for financial transactions.
Analyst conflicts of interest: Definition of codes of conduct for securities analysts,
requiring conflict of interest disclosure.
Commission resources and authority: Practices to restore investor confidence.
Studies and reports: Comptroller General and SEC required to perform studies and
report findings on the effect of consolidation of public accounting firms, the role
of credit rating agencies, securities violations and enforcement, and investment
bank participation in earnings manipulation and financial condition obfuscation.
Corporate and criminal fraud accountability: Criminal penalties for manipulation,
destruction, or alteration of financial records described; whistle-blower protection
provided.
White collar crime penalty enhancement: Increased criminal penalties for
white-collar crimes and conspiracies.
Corporate tax returns: CEO required to sign company tax return.
Corporate fraud accountability: Identifies corporate fraud, revises sentencing
guidelines.

Conclusions

SOX was not solely driven by the Enron case, but that was probably the most
salient representative of a general business climate in the US in the 1990s,
when the drive to greater profit overrode concern about risk.
Thus SOX provides some structure that probably aids systematic management
of risk within organizations. Primarily, it makes it harder for business directors
to mislead investors, stipulating reporting requirements that more accurately
document a firm's financial performance, and stipulating greater accountability
for the board of directors and especially for their audit committee.
Systems such as SOX and the International Organization for Standardization
(ISO) provide greater security and control, often implemented through enterprise
resource planning (ERP) systems that provide standardized processes and
reduce human errors. There are some problems, because SOX and ISO are essentially
additional bureaucracy that imposes rigidity. Overall, however, SOX and
ISO achieve a great deal in the effort to regulate dishonest business behavior.
3
Financial Risk Management

Introduction

Traditional risk management focuses on risks stemming from physical or legal
causes such as natural disasters or fires, accidents, death and lawsuits. Financial
risk management deals with risks that can be managed using traded financial
instruments. The events of the 21st century have made it even more critical.
Top business management came under suspicion after the scandals at ENRON,
WorldCom, and other business entities. In recent times, many investors have
experienced difficulties from bubbles. The most spectacular failure in the late
20th century was probably that of Long-Term Capital Management,1 but that was
only a precursor to the more comprehensive failure of technology firms during
the dotcom bubble around 2001. The global financial community suffered the
2007 subprime crisis of the banking industry, the Fannie Mae and Freddie Mac
crisis in secondary US mortgage markets, Lehman Brothers’ failure, Merrill
Lynch’s takeover by Bank of America, and industry-giant AIG applying for emer-
gency financial support from the Federal Reserve. The financial world’s failures
include the Barings Bank collapse in 1995, as well as the Long-Term Capital
Management and subprime mortgage bubble implosion already mentioned.
Financial management needs to obtain a return on capital, while simultan-
eously maintaining positive cash flow. Banks, the petroleum industry, and com-
modity trading of all types need to consider these fundamental requirements.
This chapter provides an overview of risks in human investment activity, and
describes a few basic tools to aid in financial enterprise risk management.

Concepts:

• Hedging as a means to trade off return for assurance
• Copulas as a means of hedging
• Value-at-risk as a tool for daily operations of cash management
• Tranching of investment products


The Basel Accords have been instrumental in providing guidance for banking
risk management. The Basel Committee on Bank Supervision sets standards
with the purpose of prudent bank regulation. Basel I was created in 1988 by
regulatory representatives of G-10 countries, along with central bank input.
Basel I set minimum capital requirements, grouping bank assets into categories
of credit risk with differential liquid asset holdings specified for various levels
of risk. Basel II was published in 2004, aimed at providing international stand-
ards for banking regulations and mitigating against a sequence of related bank
failures arising from strong cross-relationships among banks. In addition to
minimum capital requirements, Basel II provided for supervisory review and
market discipline. After the banking crisis of 2008, Basel III was published in
2010–2011, increasing liquidity requirements and decreasing bank leverage.
The traditional financial risk management approach is based on the
mean-variance framework of portfolio theory, that is, selection and diversi-
fication.2 Over the past 20 years, the field of financial risk management has
grown at an incredible speed. It is widely
recognized in finance that risk can be understood as two types: systematic
(non-diversifiable) risk, which is positively correlated with the rate of return,
and unsystematic (diversifiable) risk, which can be diversified away by increasing
the number of securities held. Building on portfolio theory, the Capital
Asset Pricing Model (CAPM) was developed to price risky assets on perfect
capital markets.3 The derivatives market grew tremendously with the development
of option pricing theory.4 Value-at-Risk models have been popular,5 partly in
response to Basel II banking guidelines. Other analytic tools include simu-
lation of internal risk rating systems using past data.
Economic risk management was originally based on Expected Utility
Theory, which recognizes people's attitudes toward risks of different sizes – small,
medium, large – derived from the utility-of-wealth function.6 People
are risk averse when the size of the risk is large, and risk neutral when the scale
of risk is small.7 Decision-making behavior when faced with lotteries and
other gambles motivates most of the studies on risk issues. The complexity
presented by derivatives, including CDOs and CDSs, is compounded even more
by the use of high-speed computer trading.
Risks are traditionally defined as the combination of probability and severity,
but are actually characterized by additional factors. Markowitz (1952) equated
risk with variance, which would be controlled by diversification, considering
correlation across investments available, and focused on efficient portfolios
non-dominated with respect to risk and return. This leads to the need for some
calculus of preferences, such as multi-attribute utility theory. Financial risk
management has developed additional tools such as value at risk. Risk character-
istics include uncertainties, dynamics, dependence, clusters, and complexities,
which motivate the utilization of various operational research tools. Risks can

be viewed as having three properties: Probability, dynamics, and dependence.


Probability in risk management deals with distribution models. This approach
can be dated back to the 1700s, leading to the Bernoulli, Poisson, and Gaussian
models of events, the generalized Pareto distributions, and the generalized
extreme value distributions to model extreme events. Risk dynamics uses sto-
chastic process theory in risk management. This can be dated back to the 1930s
when Markov processes, Brownian motion and Levy processes were developed.
Dependence of risks deals with correlation among risk factors. Various copula
functions are built, and Fourier transformations are also used. The dependence
of returns across investments leads to additional complexities.

Investment collars

A collar is an option strategy that uses puts and calls simultan-
eously to manage investment risk. The investor purchases a put option,
gaining the right to sell the underlying equity shares at a stated exercise price.
This provides downside protection. To offset the cost of the put, the investor
sells a call option, where underlying shares are sold at another exercise price.
The collar refers to the range set by the put and call options. Collars used in
this manner are intended to provide an alternative to diversification for risk
management, given the difficulties experienced in 2008 with correlation across
investments intended to be diversified. The benefit of the collar is to limit
downside risk. The cost is that upside benefit is capped (as well as the investor
being out the cost of obtaining put and call options). There is additional risk of
counterparty default on the put side. Collars have also been applied in the case
of adjustable rate mortgages with respect to interest rates.
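
To make the trade-off concrete, the following minimal sketch (with purely hypothetical strike prices and a hypothetical net premium) computes the value of a collared stock position at option expiry; the floor and cap emerge from the put and call strikes.

```python
# Minimal sketch of a collar payoff at expiry (illustrative numbers only).
# The investor holds the stock, buys a put at strike k_put and sells a call
# at strike k_call; net_premium is the put cost less the call premium received.

def collar_value_at_expiry(stock_price, k_put, k_call, net_premium):
    """Value of (stock + long put - short call) at expiry, net of premiums."""
    put_payoff = max(k_put - stock_price, 0.0)       # downside protection
    call_payoff = -max(stock_price - k_call, 0.0)    # upside given away
    return stock_price + put_payoff + call_payoff - net_premium

# Hypothetical example: stock bought at 100, put struck at 90, call at 110,
# net premium of 1. Outcomes are floored near 89 and capped near 109.
for s in (70, 90, 100, 110, 130):
    print(s, collar_value_at_expiry(s, 90, 110, 1.0))
```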

VaR
Value at risk (VaR) is one of the most widely used models in risk management
(Gordon, 2009). It is based on probability and statistics (Jorion, 1997). VaR
can be characterized as a maximum expected loss, given a time horizon and
within a given confidence interval. Its utility is in providing a measure of risk
that illustrates the risk inherent in a portfolio with multiple risk factors, such
as portfolios held by large banks, which are diversified across many risk factors
and product types. VaR is used to estimate the boundaries of risk for a portfolio
over a given time period, for an assumed probability distribution of market
performance. The purpose is to diagnose risk exposure.

Definition
Value at risk describes the probability distribution for the value (earnings or
losses) of an investment (firm, portfolio, etc.). The mean is a point estimate of
a statistic showing a historical central tendency. Value at risk is also a point

Figure 3.1 CVaR and VaR (probability density function of loss, showing the mean, VaR,
CVaR, and maximum loss, with 1 − α the probability mass of losses below VaR)

estimate, but offset from the mean. It requires specification of a given prob-
ability level, and then provides the point estimate of the return or better
expected to occur at the prescribed probability. For instance, Figure 3.1 gives
the normal distribution for a statistic with a mean of 10 and a standard devi-
ation of 4 (Crystal Ball was used, with 10,000 replications).
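
As a small illustration of reading a VaR point estimate off an assumed distribution, the following sketch uses SciPy to find the 5th percentile of the normal distribution with mean 10 and standard deviation 4 mentioned above; the 0.95 probability level is our illustrative choice, not a figure taken from the text.

```python
# Sketch: the VaR point estimate for the N(mean=10, sd=4) example above.
# With probability 0.95 the outcome is expected to be at or above this point.
from scipy.stats import norm

mean, sd, confidence = 10.0, 4.0, 0.95
var_point = norm.ppf(1 - confidence, loc=mean, scale=sd)  # 5th percentile
print(round(var_point, 2))  # roughly 3.42
```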
However, value at risk has undesirable properties, especially for gain and loss
data with non-elliptical distributions. It satisfies the well-accepted principle of
diversification under the normal distribution; however, it can violate the widely
accepted subadditivity rule; that is, the VaR of a portfolio can exceed the
sum of the VaRs of its components. The reason is that VaR only considers the extreme
percentile of a gain/loss distribution without considering the magnitude of
losses beyond it. As a consequence, a variant of VaR, usually labeled Conditional
Value-at-Risk (or CVaR), has been used. Computationally, optimization of CVaR
can be very simple, which is another reason for its adoption. This pioneering
work was initiated by Rockafellar and Uryasev (2002), where CVaR constraints
in optimization problems can be formulated as linear constraints. CVaR repre-
sents a weighted average between the value at risk and losses exceeding the
value at risk. CVaR is a risk assessment approach used to reduce the probability
that a portfolio will incur large losses assuming a specified confidence level.
It is possible to maximize portfolio return subject to constraints including
Conditional Value-at-Risk (CVaR) and other downside risk measures, both
absolute and relative to a benchmark (market and liability-based). Simulation
CVaR-based optimization models can be developed.
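
A minimal sketch of the relationship between VaR and CVaR, estimated from a simulated sample of losses; the standard normal loss distribution and the 95% level are illustrative assumptions.

```python
# Sketch: estimating VaR and CVaR from a sample of losses at a 95% level.
# CVaR is the average of the losses that exceed the VaR cut-off.
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=100_000)  # hypothetical loss sample

alpha = 0.95
var_95 = np.quantile(losses, alpha)          # loss exceeded 5% of the time
cvar_95 = losses[losses >= var_95].mean()    # mean loss in the worst 5% tail
print(round(var_95, 3), round(cvar_95, 3))   # about 1.645 and 2.06 for N(0,1)
```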
Value-at-risk (VaR) has become a popular risk management tool. It is often
used to measure the risk of loss on a specific portfolio of financial assets, in
terms of the probability of losing a specified percentage of the portfolio in
mark-to-market value (current market price) over a certain time. For example,
if a given portfolio has a daily 1% VaR of $100,000, there is a 0.01 probability
that the portfolio will lose $100,000 or more on a given day; that is, such a loss
is expected on roughly 1 day out of 100. Banks commonly report VaR by type
of market factor, such as currency rates, commodity prices, interest rates,
equity prices, etc. The financial
industry has grown to view VaR as a metric indicating relative change in risk
for a given investment. In fact, before 2008, investment banks were known to
have guided their people to seek investments with higher VaR in the expect-
ation that they would yield higher returns, relying on the portfolio aspects of
CDOs to manage the risk.8
There are two broad uses of VaR. The short-range use is for risk management.
If the organization’s portfolio measured in mark-to-market terms falls below the
VaR at the end of the time horizon, assets are sold off to gain enough cash to
cover the deficiency. But there is an alternative use in risk measurement, taking
a longer-term view. Here, the aim is to measure the fluctuation in VaR and use it
as an indicator of trends in relative risk for the firm or investment department.
There are three basic ways to compute value-at-risk. The statistical approach
assumes a distribution (normal commonly) which can use the variance-
covariance of the investments in a given portfolio. We will demonstrate this
calculation in the simple case of one investment (avoiding the more complex
formulation involving covariance). Historical simulation looks at past data and
simply sorts outcomes, selecting the probability level desired; if you want a
0.99 probability assurance of not exceeding a VaR, use the observation that
is the 0.01 lowest. The third method is Monte Carlo simulation, which is the
most flexible (you can model any assumption you want, to include select distri-
butions you prefer, and can complicate the model with external probabilities
such as the probability of catastrophe). However, Monte Carlo is also the most
involved, and provides an estimated solution rather than a precisely calculated
outcome.
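
The following sketch contrasts the three approaches for a single hypothetical position; the simulated return series, the $1 million position size, and the 99% confidence level are all illustrative assumptions rather than figures from the text.

```python
# Sketch comparing the three VaR estimates discussed above for one position,
# using hypothetical daily returns and a 99% confidence level.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=1000)   # hypothetical daily returns
position = 1_000_000                            # hypothetical portfolio value

# 1. Variance-covariance (parametric, normal assumption)
mu, sigma = returns.mean(), returns.std(ddof=1)
var_parametric = -(mu + norm.ppf(0.01) * sigma) * position

# 2. Historical simulation: 1st percentile of the observed returns
var_historical = -np.quantile(returns, 0.01) * position

# 3. Monte Carlo: resimulate returns from a fitted (here, normal) model
simulated = rng.normal(mu, sigma, size=100_000)
var_monte_carlo = -np.quantile(simulated, 0.01) * position

print(var_parametric, var_historical, var_monte_carlo)
```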
Even the simplest of these methods, using variance calculations, involves
some complications in the details. First, in its strict sense the definition measures
the probability of the worst value over the entire time horizon falling below the
VaR level, which turns out to be fairly difficult to compute. Thus a redefinition
based on the end-of-period value falling below the VaR level is often used. Another compli-
cation is the time horizon. One day is the most common, in which case the
time horizon isn’t so important; however, there are cases for longer time hori-
zons. Yet another complication is that the distribution of outcomes for positive
events (favorable) might differ from that of negative events (losses). This in fact
would be expected if outcomes were lognormally distributed. If the normal
distribution were used with different distributions for positive and negative
outcomes, daily price changes could be separated into positive and negative
groups, and each group analyzed separately. Since VaR is worried about losses,
the negative group would be of interest.

The distribution of financial returns underlying VaR calculations involves


fat tails.9 Taleb (2012) argued that we tend to see the world in its stable form,
and don’t expect rare events we haven’t seen lately (black swans). We tend to
expect a normal distribution, but if in reality the distribution has a higher
probability of rare events, the tails also have a higher probability, or become
fatter.
Another caveat about VaR is that it doesn’t represent the worst that could
happen – it represents a point on the distribution of possible outcomes. For
a 0.99 probability level, if outcomes do happen to follow the assumed dis-
tribution, outcomes worse than the VaR would be expected once in every 100
events. For a 250-day trading year, that amounts to an expected 2.5 days per
year on which losses would exceed the VaR.

Copulas

The application of copulas to credit risk was popularized by David X. Li in 2000.10 The idea is that bundles of


investments with risks would be safer than each investment individually, and
the copula was a means to identify the relative risk of the pool of risky assets.
This was very germane to the financial derivatives based on mortgages that
were generated by financial institutions in the 1990s. Using a copula calcu-
lation based on the normal distribution (Gaussian copula), banks could bundle
high-risk investments together in collateralized debt obligations (CDOs) that
had much lower joint risk. CDOs are asset-backed securities that allow their
issuer to use cash flows from different debt assets to back bonds which the issuer
could then sell to investors. Furthermore, the issuer could market portions of
these bonds associated with different levels of risks (tranches), which inves-
tors in turn would assume reflected different returns (the assumption being
that higher risk yielded higher returns, and the pooled nature of the under-
lying assets reduced risks). Thus the issuing banks could sell CDO tranches to a
variety of buyers interested in different levels of risk. Credit rating institutions
saw the logic, and were so comforted by the reduced risk that they gave very
favorable credit ratings to CDOs. Furthermore, while the number of mortgages
was finite, CDOs could be sold multiple times, vastly increasing the amount of
money at risk.
As to copulas, the fundamental measure was the probability of joint default.
If component investment survival times were independent of each other,
the risk for the set would be easy to calculate. However, positive correlation
between survival times has been widely observed. Li’s Gaussian copula for-
mulation derived the joint distribution from the marginal distributions of the
component investments.
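
A simulation sketch of the joint-default idea: two loans each default with marginal probability p, and a Gaussian copula correlation rho between their latent variables drives how often they default together. The probability and correlation values are hypothetical, and this is a Monte Carlo illustration rather than Li's closed-form formulation.

```python
# Sketch: joint default probability of two loans under a Gaussian copula.
# Each loan defaults with marginal probability p; the copula correlation rho
# drives how often they default together.
import numpy as np
from scipy.stats import norm

def joint_default_prob(p, rho, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    threshold = norm.ppf(p)                 # default if latent variable < threshold
    defaults = z < threshold
    return (defaults[:, 0] & defaults[:, 1]).mean()

p = 0.05
print(joint_default_prob(p, 0.0))   # roughly p*p = 0.0025 when independent
print(joint_default_prob(p, 0.6))   # noticeably higher when correlated
```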
Issuers of CDOs came to rely on the Gaussian copula (which assumes a
normal distribution of outcomes). That formulation assumed a number of

things, including stable correlation across assets over time. But in the housing
mortgage industry around 2007–2008, the flaw in this logic was seriously
exposed. Furthermore, Salmon (2009)11 pointed out that mortgage pools have
greater volatility than most bonds, as there is no guaranteed interest rate, in
part because mortgage borrowers fall behind on payments, stop making pay-
ments altogether, or, conversely, pay their loans off early. The pooling idea
was expected to be safe, first because few living people remembered house
prices doing anything but going up, second because housing markets across a
country (or across the world) were expected to be independent enough that a
down period in one area would be offset by gains in other areas, third because
defaults were rare and spread out over time. However, 2007 and 2008 saw wide-
spread decline in highly populated areas of the US, generating high degrees of
correlation.

Tranches

The financial industry developed some highly innovative investment vehicles


after the deregulation of banks. These included collections of mortgages into
credit default swaps (CDSs) or collateralized debt obligations (CDOs). Credit
default swaps are agreements where the seller of the CDS will pay the buyer
should some credit arrangement default. Anybody can purchase a CDS, even
those without any interest in the loan upon which the CDS is based. CDSs
are marketable. A CDO is an asset-backed security, often used in the mortgage
market. The underlying investments involve cash flows, which investors are
after. CDOs usually involve collections of loans (such as mortgages) of varying
degrees and risk, and possibly different areas and types of housing to diversify
risk. The basic idea is that this diversification provides an assurance of col-
lective security while each of the component parts of the CDO can be sold
to investors, reflecting different returns due to being sorted into tranches of
relative risk.
This scheme worked extraordinarily well from deregulation in the late
1990s until 2007, when the unforeseen circumstance of house price declines
occurred. Before 2007, loans were pushed on any housing investor in sight,
without much regard to their ability to repay, because the initiating banks were
not going to hold the mortgages, but were going to sell them as commodities
to investment banks. The investment banks in turn were not too concerned
about repayment, as they sold their CDOs on to investors. When the under-
lying house prices started to fall, however, the system collapsed, with ripple
effects suffered around the world. What happened was that the probabilities of
default jumped up, because borrowers were finding that their house value was
dropping to below their mortgage level, and they were “underwater” on their
loans. Some borrowers took that as a reason to default. More often, however,

the borrowers were overextended by the underlying economic decline occur-


ring at the same time, and were thus unable to pay their mortgages, so the
banks foreclosed.

Conclusions

The overall lesson seems to be that there is no free lunch. It doesn’t make sense
to speculate on things that appear too good to be true. If too many people
do that, the system will inevitably catch up with the herd involved in that
irrational behavior. An accompanying lesson, unfortunately, is that those agile
enough, like some initiating lenders and investment banks, can rip off many
investors before the herd sees the light.
4
The Real Estate Crash of 2008

Introduction

The current population of the United States has grown up with what seemed to
be a steady and reliable increase in value of homes. Owning one’s own home
is one of the signs of a prosperous and successful culture. In the 1930s, many
people lost title to their homes during one of the greatest failures of human
economic systems known. In response, banks and mortgage lending were
regulated, bank deposits insured up to a level covering what most people had,
and stock-trading practices ostensibly controlled. True, very old people could
remember a time when the price of housing dropped, but with the inevitable
passage of life with time, this group grew smaller and smaller, and older and
older, and less relevant. There also were anomalies in local areas where, for
whatever reason, home prices might negatively fluctuate. Reinhart and Rogoff
have described five such anomalies, all associated with banking crises (Spain
in 1977, Norway in 1987, Finland and Sweden in 1991, and Japan in 1992).1 But
house price decline occurred very rarely, and nobody really noticed.
There also was a strong feeling that less regulation was always better. Ronald
Reagan may have been a Democrat in his younger years, but he became the
champion of the conservative class. He rode this popularity to the presidency,
which was characterized by an emphasis on standing up to aircraft control-
lers and Sandinistas, along with a conservative preference for less government
interference. Even when a Democratic president appeared, such as Bill Clinton,
his economic regulation (whether due to preference or poll-counting) did not
vary that much from conservatives. Many of the regulations imposed during
the 1930s were overturned in an effort to let the market run free, which was
expected to lead to a golden age of prosperity.
Financial research developed many tools that became popular during this
period. The Efficient Market Hypothesis held that asset prices are always and
everywhere at the correct price, a view that fits well with the notion that the
economy is best regulated when it is regulated least.2 Investors prospered during


the 1990s, none more than Long-Term Capital Management (LTCM). LTCM was
built on the financial models of Black, Scholes, and Merton, providing tools to
price derivatives. Billionaires who could afford to buy into LTCM made billions.3
But human economic activity consists of a complex interaction of many actors,
some more exuberant about prospects than others. And yet, while on average for
every buyer there is a seller, and the idea of a stable equilibrium seems reasonable,
human history is full of periods where a market will go crazy and drive prices
beyond reason. These periods are known as bubbles. Consider the following:

• 1630s tulip mania
• 1720 the Mississippi Company
• 1720 the South Sea Company
• 1929 stock market crash.

These are only four of many economic crashes, all preceded by excessive rapid
growth (bubbles).4 There are many studies of these bubbles. Some bubbles
consist of expansion, followed by rising prices, overtrading, and mass par-
ticipation, followed by an event triggering doubt, a subsequent selling flood,
and ultimate collapse. The 1990s saw the crash of LTCM, demonstrating that
theoretical models do NOT cover everything, that every model leaves some-
thing out, and that life is more complex than anyone understands. Trade in
technical stocks made the NASDAQ highly popular, but it also demonstrated
a bubble, crashing the confidence of the brave new world of computer techs.

Real estate in 2008

Yet investors still held great confidence in certain sectors of the economy, such
as real estate, which was as safe as houses. Risk managers created new tools to
make investment safer. Derivatives5 are securities or contracts deriving value
from an underlying natural security, such as a stock, a bond, or a mortgage.
While derivatives provide some security through diversification, their primary
attraction is that they enable high degrees of leverage.
The crisis is generally dated from the collapse of the US subprime
residential mortgage market in 2007, which spread throughout the world due to
exposure to US real estate assets through financial derivatives.6 Causes could be
the growth in asset securitization, US government initiatives to expand home
ownership (thus encouraging fewer loan restrictions), expansionary monetary
policy, and weaker regulatory oversight.7 The real estate price boom was furthered
by financial institution exploitation of loopholes in capital regulation, allowing
them to significantly increase leverage while remaining within required capital-
ization. Mortgage derivatives allowed investment in riskier and non-liquid assets
funded in wholesale markets, without sufficient capital backing. This, along
with high dependence on a short-term view and lax regulatory oversight, has

been credited with inducing the collapse of the bubble in 2008.8 Distress first
appeared in 2007 with losses by US subprime loan originators and those holding
derivatives based upon such mortgages. In late 2007, losses by Northern Rock, a
UK mortgage lender, indicated that the bust was going global.
Wiseman (2013)9 outlined four stages in a real estate cycle, with corre-
sponding activities (see Table 4.1).

Table 4.1 Real estate cycle

Phase Characteristics

Peak Boom Government raises interest rates to cool economy


Consumer confidence high
Higher demand for goods & services
Aggregate supply near aggregate demand
Adequate housing inventory, home prices stable
New home starts and real estate activity stable
Bank-owned inventory and foreclosures low
Low vacancies, high rental prices
Speculators overvalue properties
Contraction Bust Restrictive and expensive credit financing
Decreased consumer confidence, declining consumer demand
Aggregate supply > aggregate demand
Housing inventory builds as sales decline
Home prices start to decline
New home starts decline
Bank-owned inventory and foreclosures increase
Commodity based properties (farmland) decline
Competition declines, speculators sell off
Trough Bust Government lowers interest rates to stimulate
Consumer confidence low, supply > demand
Excess housing inventory, low sales level
Home prices stabilize at low levels
New home starts low
Bank-owned inventory & foreclosures high
Vacancies high, rental prices low
Speculators tend to undervalue properties
Buyer’s market
Recovery Boom Easy, cheap credit
Increasing consumer confidence
Increasing consumer demand for goods & services
Aggregate supply > aggregate demand
Demand for housing exceeds inventory
Commodity-based business properties peak in value
Home sales increase
Home prices increase
More new home starts
Bank-owned inventory & foreclosures decrease
Vacancies decrease, rental prices rise
Prices rise, speculators buy
Seller’s market

Source: Based on Wiseman (2013).



Mortgage system
With deregulation, the mortgage industry became more specialized, with a
number of organizations playing a role in the overall system. Lenders such
as Green Tree Finance led the charge to issue as many mortgages as possible
(mobile homes in the case of Green Tree; Ameriquest, Countrywide, Golden
West and others for conventional homes).10 These mortgages were sold to banks
and other investment agencies, so the mortgage initiators had little concern
other than generating lots of mortgages and making a living out of the fees.
In fact, many home owners saw the inevitable rise in home value to be an
opportunity to make a financial killing by leveraging as many home loans as
they could, making purchases for speculation rather than for residence. The
purchasers of these mortgages often combined various tranches of different
levels of perceived risk, with higher interest rate mortgages associated with
higher probability of loan failure. This evolved into the convoluted logic
that marginal borrowers, who had no choice but to accept the highest interest
rates, were preferred customers. These instruments were sold to investors. Meanwhile,
these banks often covered their risk in innovative ways. A credit default swap
(CDS) is a credit derivative similar in concept to insurance; should the under-
lying asset fail, the purchaser receives payment. A collateralized mortgage obli-
gation (CMO) is a certificate built from tranches of mortgage-backed securities.
Thus it is a tranched instrument of an instrument that is already tranched. A
collateralized debt obligation (CDO) is similar, but can be based on any kind of
debt, not just mortgages.11

Northern Rock

The Northern Counties Permanent Building Society was established in 1850,


serving Newcastle upon Tyne. The Rock Building Society was established in
1865. Both were building societies, and they merged to become Northern Rock,
a mutually owned savings and mortgage bank. The Building Societies Act of
1986 allowed building societies to convert to public banks, allowing them access
to wholesale money markets. Northern Rock went public in 1997, enabling it to
sell shares on the stock market. It borrowed on capital markets, lent this money
to customers, turned the loans into bonds, and sold the bonds.
Sampath wrote about organizational reputation risk, using Northern
Rock as a case in point.12 Basel II addressed operational risk, credit risk, and
market risk. In Pillar 2, it mentioned strategic risk, reputation risk, and non-
standard risk, but these last three categories of risk have no specific capital-
ization provisions, because less data is available, making exposure difficult
to quantify. Sampath argued that Northern Rock demon-
strated failure in management of these less quantifiable risks.

Within months of capitalization in 1997, Northern Rock was accused of


arbitrarily changing the terms of its accounts and reducing the interest it
paid to its depositors. In 1996 Northern Rock was forced to remove the early-
repayment penalty clauses it had inserted into some of its fixed-rate mort-
gages. In 1997 it was criticized for increasing its mortgage rates in anticipation
of rising base rates that did not materialize. In 1998 it was cited as among the
worst mortgage institutions in terms of press coverage. Thus Northern Rock’s
actions jeopardized its reputation. In 2002 it instituted a social and cus-
tomer service commitment in efforts to use corporate social responsibility
as a means to resurrect its image. It also set up programs to support gay and
lesbian foundations, tackled domestic violence, and set aside 5% of pre-tax
profits for charity.
In 2004 Northern Rock floated bonds, shifting away from home loans. It
depended heavily on short-term loans from other banks rather than on deposi-
tors to get the cash it needed to lend in mortgages. In mid-2007 it was caught
by an increase in funding costs, which it was unable to pass on to its clients.
It also overlooked hedging loans for the two months between mortgage initi-
ation and securitization.
The United Kingdom had not seen a bank run since 1866, when London bank
Overend Gurney overextended its resources during the boom in railroad and
dock construction. The US saw many bank runs in the 1930s, which led to pro-
tection by the Federal Deposit Insurance Corporation. In the United Kingdom,
this protection was provided by the banking industry. On September 13, 2007,
the BBC announced that Northern Rock Bank had sought Bank of England
support, which was provided the next morning. While undoubtedly not the
intended outcome, the effect was that Northern Rock depositors lined up to
withdraw their deposits on September 14.13
Northern Rock was a mortgage bank, focusing on prime mortgages. It had
a much heavier proportion of nonretail funding, in the form of short-term
borrowing in capital markets and securitized notes – and the global credit
crisis in the summer of 2007 led to a massive reduction in short-term funding
and interbank lending. The French BNP Paribas closed three investment funds
invested in US subprime mortgages on August 9, 2007, joining the difficulties
many were experiencing in renewal of short-term borrowing. While Northern
Rock had little, if any, subprime lending, it drew from the same pools for short-
term funding. Thus Northern Rock approached the Financial Services Authority
(FSA) on August 13 for help. The FSA and the Bank of England sought to quietly
deal with the crisis by finding a private buyer for Northern Rock, but failed,
and had to announce their assistance to Northern Rock on September 14. Thus the
public learned of Northern Rock’s problems, and depositors joined the lines to
get their money.

Table 4.2 Northern Rock events

2007 Event

July 25 Northern Rock issues optimistic outlook


August 9 BNP Paribas suspends three investment funds with subprime
mortgages
August 13 Northern Rock informs regulators of funding difficulties
August 14 Bank of England alerted of Northern Rock difficulties
September 4 Money market problems increase, LIBOR reaches 9 year peak
September 12 Bank of England announces it would support banks through
short-term loans, but not massive injection of funds
September 13 BBC reveals Northern Rock asked for, will receive BOE aid
September 14 BOE and others reveal Northern Rock will receive help, lines queue
at Northern Rock
September 17 After stock market close, British Government announces guarantee
of all Northern Rock deposits in turbulent period
September 19 BOE announces injection of liquidity into money markets,
extension to mortgage debt
September 20 Government guarantee extended to unsecured wholesale funding
October 9 Government guarantee extended to new retail deposits

Source: Extracted from Goldsmith-Pinkham and Yorulmazer (2010).14

Northern Rock’s difficulties in obtaining short-term funds were not due


to its lending practices, but rather to the system’s inability to provide funds.
This in turn was due to the subprime mortgage issues of 2007. The events are
outlined in Table 4.2.
Table 4.3 shows types of retail deposits at Northern Rock before and after
the run.
Table 4.4 shows the holdings of Northern Rock before (June 2007) and after
(December 2007) the liquidity crisis. As a mutual, Northern Rock dealt with
retail mortgages. Securitized notes were medium- to long-term, which out-
stripped retail deposits after going public. Covered bonds were illiquid long-
term liabilities against segregated mortgage assets. Wholesale liabilities (money
markets) were nonretail funding not covered as covered bonds or securitized
notes.
While Northern Rock did not make subprime loans, it was vulnerable in
a market where housing values were in decline. Because it was overlever-
aged, it suffered the first bank run in Great Britain in over a century.
Northern Rock failed strategically. It shifted away from its traditional market
of mutual mortgage lending, seeking perceived higher profits in broader
lending. Sampath attributes its troubles to underestimation of reputational risk,

Table 4.3 Northern Rock retail deposits (millions of pounds)

December 2006 December 2007 Change

Postal accounts 10,201 4,351 −5,850


Branch accounts 5,573 3,035 −2,538
Offshore & other 4,105 1,371 −2,734
Internet & telephone 2,752 1,712 −1,040
Totals 22,631 10,469 −12,162

Source: Extracted from Shin (2009).

Table 4.4 Northern Rock holdings before and after run (millions of pounds)

June 2007 December 2007 Change

Securitized notes 45,698 43,070 −2,628


Wholesale 26,710 11,472 −15,238
Retail 24,350 10,469 −13,881
Covered bonds 8,105 8,938 +833
BOE Loan – 28,473 +28,473
Totals 104,863 102,422 −2,441

Source: Extracted from Shin (2009).

which jeopardized public confidence. Restrictive and sharp practices destroyed


its credibility and reputation, and it was hit with a run that was withstood only
through Bank of England intervention.

AIG

AIG was at one time the world's largest insurance company. It began in China. In 1926 it


opened operations in the US to write insurance on American risks outside
the US, and in the 1930s it started to buy American insurance companies.15
It became very large, and by the time of the real estate crisis in 2007, this had
a bearing on the risk AIG was exposed to; on paper, it could say that it had
offloaded some risk to a reinsurer, but since it owned the reinsurer, the risk
was retained.16
Starting in 1999, AIG and its subsidiaries issued a large number of CDSs.
These provided a very strong revenue stream for the firm when market condi-
tions were stable, and there were low default rates. A feature of the CDSs that
investment banks purchased from AIG was the credit support annex, a standard
contract attached to swap agreements mandating that the instrument be marked
to market price nightly. The investment banks were buying insurance that the
CDS would not fall below a certain value, and a check every night made it more
likely that AIG would have to pay off the swap.17 Events related to the 2008 real
estate crisis are shown in Table 4.5.

Table 4.5 Key events for AIG

Date                 Event

February 11, 2008    AIG announced write down of $4.88 billion in CDSs
September 15, 2008   AIG reported to be seeking $40 billion in capital to avoid
                     downgrading by credit rating firms
September 17, 2008   Fed authorized loan of $85 billion to AIG, giving
                     Government 79.9% equity in AIG and veto power over
                     dividends, to be repaid in 24 months
October 9, 2008      Fed authorized bailout package of another $37.8 billion in
                     securities in exchange for cash collateral

Source: Extracted from Egginton et al. (2010).
While AIG made a lot of money issuing CDSs before 2008, investment banks
took the opportunity to purchase many CDSs that paid off in 2008. In fact,
they purchased more CDSs than the value of the underlying mortgage assets.
By September 16, 2008, AIG was in severe difficulty, its stock down to $3.75
(it had been $63.44 a year earlier).18 The failure of AIG has been attributed to
high-leverage trading, just as with large banks such as Lehman Brothers and
Bear Stearns.19 Other problems cited were lack of transparency with respect
to the risk of CDSs and CDOs; adverse selection, in that investment banks
knew more about the risks associated with the coverage they purchased from
AIG than AIG did; and the high magnitude of unhedged CDOs held by AIG
($562 billion). The issue with unhedged CDOs was complicated in that the con-
ventional expectation is that they are naturally diversified, but the mortgage
markets upon which they were based turned out to have a highly correlated
downward trend.

Risk management

The Northern Rock case demonstrates that there is more to risk management
than simply dealing with the hedging aspects of financial instruments. Linsley
and Slack focused on the ethical criticisms of Northern Rock.20 After becoming
a publicly traded bank, with the ability to access money market (wholesale)
financing, it found itself financing long-term mortgages with short-term
funding, which created a technical financial issue, as its cash flow was subject
to the variations in short-term financing. In the summer of 2007, the sub-prime

mortgage issues led to less liquidity on the wholesale money markets, placing
Northern Rock under great stress. Linsley and Slack looked at Northern
Rock’s press releases over the period 2005–2008, finding that it emphasized
robustness and strength in the early period, based on its claimed performance
and growth, the strength of the housing market and economy, a sound strategy
and business model, and its ability to manage lending risk. There was no ref-
erence to stakeholder relationships, and Linsley and Slack detected no evidence
of Northern Rock caring about customer welfare. Northern Rock did benefit
favorably from its previous mutual status, its local nature, and The Northern
Rock Foundation, which provided a charitable presence. The crisis led stake-
holders to adjust their view of Northern Rock. Northern Rock’s communica-
tions implied that the crisis was an external problem that would disappear with
time. The press release announcing that Her Majesty’s Treasury had guaranteed
a portion of deposits appears to have had little acknowledgement of depositor
concerns, and to have underestimated the negative impact. The passive nature
of Northern Rock's response was cited as the reason the protective announcement
had to be released, which in turn precipitated the run on the bank.
Clearly, bank risk management involves more than monitoring
VaR, and even more than hedging. Banks by their very nature depend upon
customer confidence. The Northern Rock case demonstrates the negative
impact of not paying attention to developing and maintaining good relation-
ships with its depositors.
5
Financial Risk Forecast Using Machine
Learning and Sentiment Analysis

Introduction

There is a widespread need for effective forecasting of financial risk using


readily available financial measures, but the complicated environment
facing financial practitioners and business institutions makes this very chal-
lenging. The concept of financial volatility, a required parameter for pricing
many kinds of financial assets and derivatives, is critical, because it is widely
expected that financial volatility implies financial risk. Therefore, accurate
prediction of financial volatility is extremely important. Efficient prediction
of financial volatility has been an extremely difficult task, but we can now
offer a scalable and customizable mathematical model to achieve this goal,
employing two approaches to forecast the volatility using financial infor-
mation available online. First, we carry out a comparative study between two
different machine-learning techniques – artificial neural networks (ANN) and
support vector machines (SVM) – to forecast trading volume volatility. Second,
we utilize semantic techniques to probe correlations between information sen-
timent and asset price volatility.

Information volume and volatility

Fluctuation in trading volume, just like that of stock price, vividly reflects
market behavior. We investigate associations between trading volume volatility
and online information volume, with the intent of forecasting the former.
Online financial information volume has been assumed to be an important
element affecting the financial trading volume volatility. We forecast vola-
tility, relying partly on online information, using both an ANN-based and
an SVM-based approach. We compare these forecasting models, to observe
their performances. The basic architecture of this approach is displayed in
Figure 5.1.


Figure 5.1 Flow chart and functional parts of our approach associating information
volume and volatility. Training uses Google Finance news volumes and Yahoo Finance
trading volumes (changing rates and corresponding volatilities) from the immediate past
to generate GARCH-ANN and GARCH-SVM rules; forecasting applies these rules to the
most recent data, and the two sets of forecast results feed a comparative study.

We downloaded online financial information from Google Finance (http://


www.finance.google.com), as shown in Figure 5.1. The data was post-processed
to obtain information volumes for various stocks and indices. Table 5.1 shows
a snippet of the post-processing result; for each symbol, the upper sub-row gives
the dates and the lower sub-row the news volumes. In Table 5.1, N, D, M, I, A, and
G stand for NASDAQ, DOW, MSFT, INTC, AAPL and GOOG, respectively. 0629
stands for June 29, 2006.

Information sentiment and volatility


Representing financial information purely by its volume might be misleading,
thus undermining the efficiency of the forecast. In order to investigate the
impacts of online information upon financial time series more comprehen-
sively, we explore its content as well, specifically emotional or sentimental
polarity. An essential step is to evaluate the sentiment of each news entry, pri-
marily accomplished by the methodology of bags-of-words. We aim to obtain
a real value for each news entry whose sign denotes the authors’ judgmental
state, while the absolute value indicates emotion intensity.
HowNet (http://www.keenage.com/html/e_index.html) is a commonsense
online knowledge base indicating inter-conceptual relations and inter-attribute
relations of concepts utilizing Chinese lexicons and their English equivalents.1

Table 5.1 The information volumes calculated from Google Finance

N  Date:    0629 0630 0703 0704 0705 0706 0707 ...
   Volume:     7    4    5    2    4    4    3 ...

D  Date:    0630 0703 0704 0705 0706 0707 0708 ...
   Volume:     2    3    3    4    5   12    1 ...

M  Date:    0628 0629 0630 0701 0703 0704 0705 ...
   Volume:     5    8    3    1    2    2    4 ...

I  Date:    0703 0705 0706 0707 0710 0711 0712 ...
   Volume:     2    2    2    3    2    1    2 ...

A  Date:    0630 0701 0703 0705 0706 0707 0710 ...
   Volume:     7    1    4    3    6    2    2 ...

G  Date:    0717 0718 0719 0720 0721 0724 0725 ...
   Volume:     3    5    2    8    7    4    5 ...

HowNet is used to calculate the sentiment of news pieces through its set of
keywords. Each news piece is decomposed and converted into a keyword array
in the same sequence as the words appear in the article, each word being
assigned a specific sentiment value based on the HowNet word corpus. The
overall sentiment for the whole article is acquired by combining the sentiment
values of all of its keywords. After the sentiment time series is obtained, it is fed
into the machine-learning system (SVM in particular) as one of the exogenous
inputs. By assigning sentiment as one element of the feature vector for a listed
company, non-linear correlation between online news sentiment and financial
volatility can be quantitatively analyzed, which might eventually lead to more
effective and efficient volatility forecasting. The basic architecture of this
approach can be seen in Figure 5.2.
Financial news used in this phase of the study was acquired from a variety
of online sources, and experiments were carried out on a huge body of com-
panies listed on US stock markets. Aggregated statistical results are the key tool
to substantiate nonlinear correlation between the two entities. Table 5.2 is a
snippet of the financial news entries with their calculated sentiment values; in
the time window, 2007–1 indicates that this is a news entry in the first week
of 2007.

Volatility model and modified non-linear GARCH model


Volatility refers to the standard deviation or variance of the change in value
of a financial instrument within a specific time span. The GARCH system is
widely employed in modeling financial time series that exhibit time-varying
volatility clustering. In this section, we develop a GARCH system by incorpor-
ating financial information into the usual framework.

Figure 5.2 Flow chart and functional parts of our approach to associate information
sentiment and volatility. Webpages crawled from the internet are preprocessed (cleaning,
parsing, segmenting, stemming) and scored against the HowNet corpus to produce a
sentiment time series; together with asset price and trading volume volatility time series
from historical quotes, this drives dynamic GARCH-SVM training and forecasting, yielding
a predicted time series and its prediction error.

The GARCH model, proposed by Bollerslev in 1986,2 can be formulated as

\[ y_t = \mu_t + \varepsilon_t, \tag{5.1} \]

\[ \varepsilon_t \mid \psi_{t-1} \sim N(0, \sigma_t^2), \tag{5.2} \]

\[ \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \sigma_{t-i}^2 + \sum_{j=1}^{q} \beta_j \varepsilon_{t-j}^2, \qquad (\alpha_0 > 0,\ \alpha_i, \beta_j \ge 0) \tag{5.3} \]
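
As a small illustration of equations (5.1)–(5.3), the following sketch simulates a GARCH(1,1) series, keeping the labeling used above with α on the lagged variance and β on the lagged squared shock; the parameter values are illustrative only.

```python
# Minimal sketch: simulating a GARCH(1,1) series following equations
# (5.1)-(5.3), with alpha on the lagged variance and beta on the lagged
# squared shock as written above. Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
mu, alpha0, alpha1, beta1 = 0.0, 0.05, 0.90, 0.05

y = np.zeros(T)
sigma2 = np.zeros(T)
eps = np.zeros(T)
sigma2[0] = alpha0 / (1 - alpha1 - beta1)   # unconditional variance as start

for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * sigma2[t - 1] + beta1 * eps[t - 1] ** 2
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    y[t] = mu + eps[t]

print(y.var(), sigma2.mean())   # both near alpha0 / (1 - alpha1 - beta1)
```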
Table 5.2 A snippet of news entries for the companies ADCT, S and MRO

News ID: 2007010102057465
News title: An Insecure Future for McAfee
Time window: 2007–1    Company symbol: ADCT    News sentiment: 45.4
News body: Perhaps its fatigue with the options scandal that has now spread to more
than 100 technology companies. Perhaps its ...

News ID: 2007010202063354
News title: Stocks end 2006 with best gains in three years
Time window: 2007–1    Company symbol: S    News sentiment: 17.8
News body: NEW YORK (MarketWatch) – U.S. stocks finished the year with strong gains
Friday, with all three major stock averages booking their best performance since 2003.
The Dow Jones Industrial Average ($INDU : $INDUNews)

News ID: 2007050203261590
News title: Marathon Oil's lower earnings top forecasts
Time window: 2007–18    Company symbol: MRO    News sentiment: −0.8
News body: SAN FRANCISCO (MarketWatch) – Marathon Oil Corp. reported Tuesday a drop
in first-quarter earnings, clipped by lower oil and gas prices and a decline in production.
For the three months ended March 31, Marathon (MRO : MRONews)

News ID: 2007060203595695
News title: Still looking good
Time window: 2007–22    Company symbol: ADCT    News sentiment: 11.8
News body: ANNANDALE, Va. (MarketWatch) – It's been a little bit over two months since
the triggering of a rare, and historically very bullish, technical signal. (Read my March 22
column.) Can we count on the bullish winds of that signal blowing into the ...

where the daily return $y_t$ is the sum of the deterministic mean return $\mu_t$ and a sto-
chastic term $\varepsilon_t$, also known as the shock, forecast error, residual, innovation,
etc.,3 $\psi_{t-1}$ represents the information set available at time t − 1, and $\sigma_t^2$ is the time-
varying variance of both $y_t$ and $\varepsilon_t$. In our approach, we substitute $y_t$ with the
daily changing rate of either the trading volume or the asset price.
The GARCH model uses $\varepsilon_t$ as a function of those exogenous inputs that
have some effect on financial volatility. The GARCH model bases its condi-
tional distribution on the information set available at time t − 1. Freisleben and
Ripper4 point out that the parameter $\beta_j$ in Equation (5.3) describes the stock
return's immediate reaction to new events in the market, mostly in the form
of financial news. Meanwhile, the fast development of the internet enables us
to acquire online financial information in a real-time, exhaustive fashion.
Considering these factors, designating financial information volume as one
explanatory variable of $\varepsilon_t$ is justifiable.
Therefore we formulate $\varepsilon_t$ using the following two equations:

\[ \varepsilon_t = y_t - \zeta, \tag{5.4} \]

\[ \varepsilon_t = f_t(W_t, \varepsilon'_t) = g_t(W_t) + \theta_t \varepsilon'_t, \tag{5.5} \]

where $\zeta$ is a constant and $W_t$ is the online financial information volume on
day t. Consequently a modified GARCH model can be expressed as

\[ y_t = \mu_t + \varepsilon_t, \tag{5.6} \]

\[ \varepsilon_t \mid \psi_{t-1} \sim N(0, \sigma_t^2), \tag{5.7} \]

\[ \sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \chi_{t-i}(\sigma_{t-i}^2) + \sum_{j=1}^{q} \beta_j \varphi_{t-j}(y_{t-j}^2) + \sum_{k=1}^{r} \gamma_k \phi_{t-k}(W_{t-k}^2) \tag{5.8} \]

where p, q, r represent the three time lags, and the three unknown functions
$\chi_{t-i}$, $\varphi_{t-j}$, and $\phi_{t-k}$ represent the undetermined nonlinear correlations.
Financial time series exhibit specific features that make a GARCH model a
preferable alternative. We assume that the volatility of financial trading volume
shares similar characteristics to that of stock price in exhibiting GARCH effects.
Figure 5.3 demonstrates that the daily changing rates of the trading volumes of
NASDAQ index within the period October 11, 1984–October 16, 2006 exhibit
volatility clustering. A kurtosis value of 11.53 also implies that there is an
underlying fat tail effect. We have discovered obvious GARCH effects exhib-
ited by online financial information volume time series in tests conducted on
more than 100 stocks in the US stock markets.

Figure 5.3 Daily changing rates of the trading volumes of the NASDAQ index, October 11,
1984–October 16, 2006 (roughly 5,500 trading days; the rates fluctuate mostly between
−1 and 0.5, with visible volatility clustering)

Daily volatility model


In associating information volume with trading volume volatility, we define
daily volatility as the variance of the trading volume's daily changing rates within
a window of recent days ending on the current day. For a specific stock
or index, let $v_t$ denote trading volume on day t, so that the daily changing rate
$y_t$ of the trading volume is defined as

\[ y_t = \ln \frac{v_t}{v_{t-1}}. \tag{5.9} \]

If D is defined as the width of the calculating window, the volatility $\sigma_t^2$ can be
calculated by computing the variance of the $y_t$ within the (t − D + 1)th and the
tth day as

\[ \sigma_t^2 = \frac{\sum_{i=0}^{D-1} (y_{t-i} - \bar{y}_t)^2}{D - 1}, \tag{5.10} \]

where

\[ \bar{y}_t = \frac{\sum_{i=0}^{D-1} y_{t-i}}{D}. \tag{5.11} \]
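
A minimal sketch of equations (5.9)–(5.11): it computes daily changing rates from a volume series and the sliding-window variance with D = 20 (the window width used later in this chapter); the volume series itself is randomly generated for illustration.

```python
# Sketch of equations (5.9)-(5.11): daily changing rates of trading volume
# and their variance over a sliding window of width D (here D = 20 days).
import numpy as np

def daily_changing_rates(volumes):
    v = np.asarray(volumes, dtype=float)
    return np.log(v[1:] / v[:-1])                 # equation (5.9)

def sliding_volatility(y, D=20):
    """Variance of the last D changing rates, per equations (5.10)-(5.11)."""
    vol = np.full(len(y), np.nan)
    for t in range(D - 1, len(y)):
        window = y[t - D + 1 : t + 1]
        vol[t] = window.var(ddof=1)               # divide by D - 1
    return vol

# Hypothetical volume series standing in for downloaded quote data.
volumes = np.random.default_rng(0).lognormal(mean=15, sigma=0.3, size=120)
y = daily_changing_rates(volumes)
print(sliding_volatility(y, D=20)[-5:])
```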

Daily volatilities are calculated based on a sliding volatility window of a certain


length. Each day’s volatility represents the variation in the value of trading
volume over the past few days until the current day.

Using GARCH-based SVM to associate information sentiment with


asset price volatility
This section investigates the relationship between volatility and information
sentiment. We incorporate the average sentiment value into the current fore-
casting model, and predict how volatility will move in the immediate future
using a more comprehensive perspective.

Sentiment analysis of financial news


The sentiment of a news article is based upon the sentiment values of all its
keywords, that is, those containing emotional polarity. We used the English
lexicon released by HowNet to form the essential word sets based upon which
the sentiment calculation for a keyword is implemented. We generated eight
word sets: POSITIVE, NEGATIVE, PRIVATIVE, and MODIFIERi (i = 1, 2, 3, 4, 5). Each
MODIFIERi is bound with a weight value WEIGHTi denoting the intensity of
the words listed in this set. The intensity of the words increases with weight
value. Table 5.3 defines these eight sets.
We calculated keyword sentiment by counting matches, using the algorithm
flowcharted in Figure 5.4.
The sentiment for an entire article is calculated by summing the sentiments
for all the keywords contained in that article.

Table 5.3 The eight word sets we use in this chapter to calculate the keyword sentiment

Word sets Description

POSITIVE A list of English words that have positive emotional polarity, which
includes a set of 4363 words.
NEGATIVE A list of English words that have negative emotional polarity, which
includes a set of 4574 words.
PRIVATIVE A list of privative English words, which includes 14 words.
{no, not, none, neither, never, hardly, seldom, barely, scarcely, ain’t,
aren’t, isn’t, hasn’t, haven’t}
The following five sets are modifiers, whose intensities decrease as i increases.
MODIFIER1  64 modifier words, with WEIGHT1 = 2.
MODIFIER2  25 modifier words, with WEIGHT2 = 1.8.
MODIFIER3  22 modifier words, with WEIGHT3 = 1.6.
MODIFIER4  15 modifier words, with WEIGHT4 = 1.4.
MODIFIER5  11 modifier words, with WEIGHT5 = 0.8.

Figure 5.4 Sentiment calculation process for the current keyword w: the word is scored
+1 if it appears in POSITIVE and −1 if it appears in NEGATIVE (otherwise 0); the sign is
flipped if any of the k words before w appears in PRIVATIVE; otherwise, if any of the
m words before or n words after w appears in a MODIFIERi set, the value is multiplied
by WEIGHTi.
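
A minimal sketch of the keyword-scoring logic in Figure 5.4. The tiny POSITIVE/NEGATIVE/PRIVATIVE/MODIFIER sets and the window parameters k, m, and n are stand-ins for the HowNet-derived sets of Table 5.3, and the branch ordering follows one reading of the flowchart (a privative flips the sign; otherwise a nearby modifier scales the value).

```python
# Sketch of the keyword-scoring logic in Figure 5.4, with tiny stand-in word
# sets; the real POSITIVE/NEGATIVE/MODIFIER lists come from the HowNet lexicon.
POSITIVE = {"gain", "strong", "bullish"}
NEGATIVE = {"loss", "weak", "bearish"}
PRIVATIVE = {"no", "not", "none", "never", "hardly", "seldom", "barely"}
MODIFIERS = {"very": 2.0, "extremely": 2.0, "quite": 1.6, "slightly": 0.8}

def keyword_sentiment(words, i, k=3, m=2, n=2):
    """Score word i: +/-1 for polarity, sign-flipped by a privative within k
    words before it; otherwise scaled by a modifier within m before/n after."""
    w = words[i].lower()
    v = 1.0 if w in POSITIVE else -1.0 if w in NEGATIVE else 0.0
    if v == 0.0:
        return 0.0
    if any(x.lower() in PRIVATIVE for x in words[max(0, i - k):i]):
        return -v
    nearby = words[max(0, i - m):i] + words[i + 1:i + 1 + n]
    for x in nearby:
        if x.lower() in MODIFIERS:
            return v * MODIFIERS[x.lower()]
    return v

def article_sentiment(text):
    words = text.split()
    return sum(keyword_sentiment(words, i) for i in range(len(words)))

print(article_sentiment("very strong gain but not a weak quarter"))
```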



Figure 5.5 Sliding time window learning and forecasting: over the windows W1, ..., Wi−1,
Wi, Wi+1, ..., WT, training pairs window Wi−1 with Wi, and forecasting pairs window Wi
with Wi+1.

Using GARCH-based SVM to associate information sentiment and


volatility
As mentioned, volatility will be calculated on a time window basis, meaning
that the time series will first be segmented into a series of time windows. The
GARCH-based SVM approach is carried out on a sliding window basis; each SVM
is trained on data collected from the current and the previous time window.
The well-trained SVM is then used to predict volatility in the next time window.
Figure 5.5 illustrates the process of sliding time window machine learning,
where Wi−1 and Wi constitute the training input and output respectively, and
Wi and Wi+1 constitute the forecasting input and output respectively.
Each time window in Figure 5.5 corresponds to an input and output matrix for-
matted for machine learning and prediction. These matrices are expanded across
companies in order to generate aggregated statistics for the entire stock market. If
there are M listed companies of interest, let <I, O> denote the training input and
output matrix tuple to forecast the volatility in time window Wi, giving:

\[
I = \begin{bmatrix}
\sigma_{i-1}^{p2}(1) & \bar{y}_{i-1}^{p}(1) & S_{i-1}(1) \\
\sigma_{i-1}^{p2}(2) & \bar{y}_{i-1}^{p}(2) & S_{i-1}(2) \\
\vdots & \vdots & \vdots \\
\sigma_{i-1}^{p2}(M) & \bar{y}_{i-1}^{p}(M) & S_{i-1}(M)
\end{bmatrix}
\]

and

\[
O = \begin{bmatrix}
\sigma_{i}^{p2}(1) \\
\sigma_{i}^{p2}(2) \\
\vdots \\
\sigma_{i}^{p2}(M)
\end{bmatrix},
\]

where $\sigma_{i-1}^{p2}(k)$ (k = 1, 2, ..., M) is the asset price volatility within time window
$W_{i-1}$ for company k, $\bar{y}_{i-1}^{p}(k)$ represents the average daily changing rate
of the asset price of company k in $W_{i-1}$, and $S_{i-1}(k)$ is the sum of the sentiment
values for all the news entries relating to company k within the time
window $W_{i-1}$.
Accordingly, denote by <I ’, O ’> the forecasting input and output matrix tuple
for Wi, and we have

\[
I' = \begin{bmatrix}
\sigma^{p2}_{i}(1) & \bar{y}^{p}_{i}(1) & S_{i}(1) \\
\sigma^{p2}_{i}(2) & \bar{y}^{p}_{i}(2) & S_{i}(2) \\
\vdots & \vdots & \vdots \\
\sigma^{p2}_{i}(M) & \bar{y}^{p}_{i}(M) & S_{i}(M)
\end{bmatrix}
\]

and

\[
O' = \begin{bmatrix}
\sigma^{p2}_{i+1}(1) \\
\sigma^{p2}_{i+1}(2) \\
\vdots \\
\sigma^{p2}_{i+1}(M)
\end{bmatrix}.
\]
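The following Python sketch illustrates one such sliding-window step. It is an assumption-laden illustration rather than the authors' implementation: it uses randomly generated stand-in features, scikit-learn's SVR in place of the SVM toolboxes used in the chapter, and the RBF parameter values (c = 64, g = 1/3) reported later for the 177-company experiment.

```python
# Sketch of one sliding-window training/forecasting step (illustrative assumptions,
# not the authors' code). Each row of the training input I holds, for one of the M
# companies, the previous window's price volatility, average daily price-change rate
# and summed news sentiment; the training output O holds that company's volatility in
# the current window. A support vector regressor trained on <I, O> is then applied to
# the current window's features to forecast volatility in the next window.
import numpy as np
from sklearn.svm import SVR

M = 177                                    # number of listed companies
rng = np.random.default_rng(1)             # random stand-in data for illustration


def window_features():
    """Hypothetical per-company features for one window: (volatility, mean return, sentiment)."""
    return np.column_stack((rng.gamma(2.0, 0.5, M),      # sigma^{p2}(k)
                            rng.normal(0.0, 0.02, M),    # ybar^{p}(k)
                            rng.normal(0.0, 5.0, M)))    # S(k)


I = window_features()                      # features from window W_{i-1}
O = rng.gamma(2.0, 0.5, M)                 # realized volatility in window W_i
I_next = window_features()                 # features from window W_i

svr = SVR(kernel="rbf", C=64, gamma=1.0 / 3)   # parameter values reported later in the chapter
svr.fit(I, O)
O_hat = svr.predict(I_next)                # forecast volatility for window W_{i+1}
print(O_hat[:5])
```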

Empirical results and analysis

The empirical studies are roughly composed of two parts. In our first
experiment, we utilize both GARCH-based ANN and SVM to study the corre-
lations between financial information volume and trading volume volatility,
and conduct a comparative study of different machine-learning techniques in
financial volatility forecasting. The daily volatility model serves as the primary
approach to calculating volatility. The data sets used in this step are limited
to the trading quote and news data for two indices and two listed companies
in the US stock markets. The time horizon spans three months. In the second
experiment, we apply the GARCH-based SVM model to data from all of 2007
for 177 listed companies in the US markets, but in this experiment a time-
window-based volatility model is used instead. For both experiments, historical
financial quotation data is downloaded and formatted from Yahoo Finance
(http://www.finance.yahoo.com).

Trading volume volatility forecasting


We utilize both ANN and SVM to forecast volatilities. The SVM toolbox
adopted in this paper is the LS-SVMlab (http://www.esat.kuleuven.ac.be/sista/
lssvmlab/). The least squares SVM (LS-SVM)5 is a reformulation of the regular
SVM. The kernel function we used is the radial basis function (RBF). Financial
information was acquired from Google Finance, yielding a comprehensive set
of online financial information for the previous three months from more than
500 financial portals. The width of the volatility calculating windows (D) in
both phases is set to 20 days.
We use $\hat{\sigma}^2_t$ to denote the forecast volatility on day t; if $\hat{\sigma}^2_t - \sigma^2_{t-1}$ and $\sigma^2_t - \sigma^2_{t-1}$ share the same sign, we say that an accurate forecast of the volatility trend $\Delta\sigma^2_t$ has been achieved. The accuracy ratio is defined as the percentage of days on which the volatility trend has been forecast accurately.
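A minimal sketch of this accuracy ratio, assuming aligned arrays of realized and forecast daily volatilities (not the authors' code), is:

```python
# Trend-accuracy ratio as defined above: a day counts as correctly forecast when the
# predicted change and the realized change in volatility share the same sign.
import numpy as np


def trend_accuracy(sigma2, sigma2_hat):
    """sigma2: realized daily volatility; sigma2_hat: forecasts aligned to the same days."""
    sigma2, sigma2_hat = np.asarray(sigma2), np.asarray(sigma2_hat)
    forecast_change = sigma2_hat[1:] - sigma2[:-1]     # sigma_hat^2_t - sigma^2_{t-1}
    realized_change = sigma2[1:] - sigma2[:-1]         # sigma^2_t - sigma^2_{t-1}
    return 100.0 * np.mean(np.sign(forecast_change) == np.sign(realized_change))


print(trend_accuracy([1.0, 1.2, 1.1, 1.3], [1.0, 1.1, 1.15, 1.25]))  # hypothetical values -> 100.0
```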
We conducted experiments on two indices (NASDAQ and DOW) and two
stocks (MSFT and INTC) with time spans from June 30, 2006 to September
28, 2006; from June 29, 2006 to September 26, 2006; from June 28, 2006 to
September 26, 2006; and from July 3, 2006 to September 28, 2006, respect-
ively. The optimal parameters for RBF in SVM (the regularization parameter
gam and the bandwidth sig2) are set to sig2 = 50 and gam = 10. The results of
these experiments are shown in Table 5.4, where s/i stands for stock/index,
and N, D, M, and I stand for NASDAQ, DOW, MSFT, and INTC, respectively.
C represents the size of the samples, H the number of the hidden nodes, and p,
q, r the three time lags. In order to compare different results, we altered the par-
ameter values for both GARCH-based ANN and SVM three times. Table 5.4 gives
predicted values of the average forecast error and the volatility trend forecast
accuracy ratio for these three scenarios. Results indicate that GARCH-based
SVM outperforms GARCH-based ANN for volatility forecasts, whereas GARCH-
based ANN achieves a better forecast result for the volatility trend. This is pri-
marily because SVM is characterized by the use of kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin, which gives it better generalization ability and helps it to overcome overfitting. In addition, SVM is preferable for small sample sets, such as the one used in our experiment. Nonetheless, for the volatility trend forecast, the ANN-based approach considerably outperforms the SVM.
Our studies show that if we take the online information volume as an exogenous input, the forecast performance for trading volume volatility is considerably better than that for price return volatility. Therefore trading volume is more likely to be affected by online financial information. In addition, we have found that the larger the value of D, the smaller the average forecast error, which reflects the volatility clustering feature of financial time series.

Table 5.4 Predicted values of the average forecast error and the volatility trend forecast accuracy ratio

s/i   model    C    H   p   q   r    ē (%)   ratio (%)

N     ANN     20    5   4   9   1    11.08    83.33
      SVM     20    –   4   9   1     9.60    58.33
      ANN     10    5   5   8   2     9.33    73.91
      SVM     10    –   5   8   2     7.30    52.17
      ANN     10    4   3   8   3     7.67    69.57
      SVM     10    –   3   8   3     7.62    47.83
D     ANN     10    8   6   6   1    10.17    68.00
      SVM     10    –   6   6   1     9.42    64.00
      ANN     15    5   9   4   2    12.96    64.71
      SVM     15    –   9   4   2    11.34    64.71
M     ANN     20    5   4   9   1     9.72    83.33
      SVM     20    –   4   9   1     8.39    66.67
      ANN     10    8   5   8   2     9.39    82.61
      SVM     10    –   5   8   2     6.69    60.87
      ANN     10    6   5   5   3    11.04    69.23
      SVM     10    –   5   5   3     7.81    57.69
I     ANN     10    6   5   5   3    14.31    61.54
      SVM     10    –   5   5   3    13.55    38.46
      ANN     15    6   9   9   3    16.72    64.71
      SVM     15    –   9   9   3    16.09    41.18

Additionally, a better forecast performance can be achieved if we square the moving average component of the input vector, which substantiates one of the GARCH theory's contentions: that there is significant autocorrelation among the squared residuals of a financial time series.

Volatility forecasting with sentiment analysis


Correlations between financial news sentiment value and stock price volatility
of listed companies are investigated using GARCH-based SVM regression. A
time window with a length of seven days (five trading days) is used. The forecast
is implemented progressively, with sliding time windows, yielding aggregated
statistics for all companies. Before being fed into the SVM model, both quote
and news data are segmented into time windows. The financial news in this
experiment is downloaded from over 200 English portals on the internet.
The SVM toolbox utilized in this experiment is the open source library
LIBSVM (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). The news entries have a
time horizon ranging from January 1, 2007 to December 3, 2007, spanning
49 time windows. In total, a set of 177 listed companies in the US stock markets
are studied. There are 153,468 pieces of news for all the companies, all of which
are tagged with their computed sentiment values. The kernel function used is

the RBF. Two major performance metrics are introduced in this experiment
to evaluate the aggregated forecast performance for all the 177 companies:
squared correlation coefficient (SCC) and volatility trend forecast accuracy
(VTFA). SCC and VTFA are computed based on the forecasting values for each
time window.
The squared correlation coefficient evaluates the correlation of the explanatory variables with the response variable; the closer this value is to 1, the better the regression result.
The volatility trend forecast accuracy is the proportion of companies with an
accurately predicted volatility trend among all the companies. The definition
of volatility trend is consistent with the first experiment.
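The following short sketch, with hypothetical values and not taken from the authors' code, shows how the two metrics can be computed for one time window:

```python
# Illustrative computation of the two aggregate metrics: the squared correlation
# coefficient (SCC) between realized and predicted volatility, and the volatility
# trend forecast accuracy (VTFA), i.e. the share of companies whose predicted
# volatility change has the correct sign relative to the previous window.
import numpy as np


def scc(realized, predicted):
    return np.corrcoef(realized, predicted)[0, 1] ** 2


def vtfa(previous, realized, predicted):
    previous, realized, predicted = map(np.asarray, (previous, realized, predicted))
    return np.mean(np.sign(predicted - previous) == np.sign(realized - previous))


# Hypothetical values for three companies in one time window:
prev, real, pred = [1.0, 0.8, 1.2], [1.1, 0.7, 1.3], [1.2, 0.75, 1.25]
print(scc(real, pred), vtfa(prev, real, pred))
```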
Table 5.5 presents the asset price volatility forecast results for the 177 companies using information sentiment during the year 2007. Note that the penalty parameter c and the RBF kernel parameter g are set as c = 64 and g = 1/3.

Table 5.5 Forecast results for 177 listed companies during the year 2007

Time window for forecast output    Time window for forecast input    Time windows for training samples    SCC (R²)    VTFA

Week3 Week2 Week1–2 0.662534 0.655367


Week4 Week3 Week2–3 0.669676 0.525424
Week5 Week4 Week3–4 0.873588 0.59887
Week6 Week5 Week4–5 0.248732 0.694915
Week7 Week6 Week5–6 0.584079 0.59887
Week8 Week7 Week6–7 0.89559 0.632768
Week9 Week8 Week7–8 0.987398 0.305085
Week10 Week9 Week8–9 0.98691 0.649718
Week11 Week10 Week9–10 0.882611 0.706215
Week12 Week11 Week10–11 0.794255 0.548023
Week13 Week12 Week11–12 0.491824 0.734463
Week14 Week13 Week12–13 0.743331 0.661017
Week15 Week14 Week13–14 0.615906 0.638418
Week16 Week15 Week14–15 0.176804 0.59887
Week17 Week16 Week15–16 0.981393 0.525424
Week18 Week17 Week16–17 0.742617 0.672316
Week19 Week18 Week17–18 0.565846 0.683616
Week20 Week19 Week18–19 0.967438 0.559322
Week21 Week20 Week19–20 0.932697 0.711864
Week22 Week21 Week20–21 0.906246 0.621469
Week23 Week22 Week21–22 0.940825 0.468927
Week24 Week23 Week22–23 0.8779 0.824859
Week25 Week24 Week23–24 0.06059 0.632768
Week26 Week25 Week24–25 0.084561 0.615819


Week27 Week26 Week25–26 0.394098 0.615819


Week28 Week27 Week26–27 0.404327 0.570621
Week29 Week28 Week27–28 0.781131 0.638418
Week30 Week29 Week28–29 0.855221 0.525424
Week31 Week30 Week29–30 0.302073 0.610169
Week32 Week31 Week30–31 0.496343 0.548023
Week33 Week32 Week31–32 0.700048 0.644068
Week34 Week33 Week32–33 0.991294 0.536723
Week35 Week34 Week33–34 0.998906 0.587571
Week36 Week35 Week34–35 0.985648 0.689266
Week37 Week36 Week35–36 0.921044 0.632768
Week38 Week37 Week36–37 0.661739 0.59322
Week39 Week38 Week37–38 0.656383 0.638418
Week40 Week39 Week38–39 0.932568 0.451977
Week41 Week40 Week39–40 0.99876 0.711864
Week42 Week41 Week40–41 0.970877 0.60452
Week43 Week42 Week41–42 0.464741 0.508475
Week44 Week43 Week42–43 0.97924 0.666667
Week45 Week44 Week43–44 0.214432 0.587571
Week46 Week45 Week44–45 0.325735 0.564972
Week47 Week46 Week45–46 0.843826 0.429379
Week48 Week47 Week46–47 0.928572 0.508475
Week49 Week48 Week47–48 0.724147 0.672316
Week50 Week49 Week48–49 0.999941 0.553672

Observe that, across all 177 companies and the 48 forecast time windows, an average of 60.3225% is achieved for the volatility trend forecast accuracy and an average of 71.2593% is achieved for the squared correlation coefficient.
Figures 5.6 and 5.7 illustrate the price volatility forecasts for two specific com-
panies out of the 177, MDT and WAG. Figure 5.8 shows the VTFA for all the
time windows, corresponding to Table 5.5.
As shown in Figures 5.6 and 5.7, the predicted values correspond well to the actual values under most circumstances, although for occasional large oscillations the forecast result is not very good.
For both asset price volatility and trading volume volatility forecasts, an
average of over 60% of VTFA can be achieved, substantiating the existence
of firm correlations between information sentiment and volatility trend. The
experiment also indicates that during the days when there are a large number
of news items, incorporating information sentiment into the machine-learning
model can noticeably improve volatility trend forecasting; an average of over

Figure 5.6 Price volatility forecast result for company MDT over all the time windows (MDT's real and predicted price volatility plotted by time window)

Figure 5.7 Price volatility forecast result for company WAG over all the time windows (WAG's real and predicted price volatility plotted by time window)
Figure 5.8 Price volatility trend forecast accuracies for all the time windows (the proportion of companies with an accurately predicted price volatility trend, by time window)

70% of SCC was achieved for both price volatility and trading volume vola-
tility forecasting, giving convincing evidence of the correlations between
these factors.
This experiment extends existing studies to a large set of companies.
Empirical results are reflected by aggregated statistics, indicating the effects of
information on entire stock markets. The results of this phase, although only
focused upon US markets, provide a vivid description of the macro influences
of financial news on financial volatility. It is of critical value to strategic deci-
sion-making by financial practitioners who seek to gain a panoramic picture
of the general market.

Conclusions

We have introduced GARCH-based ANN and SVM to investigate the correlations


between financial trading volume volatility and online information volume.
This enables effective prediction of financial risk in a network environment.
Both methods are capable of achieving favorable prediction results; GARCH-
based ANN performs better in predicting the volatility trend than GARCH-
based SVM, while GARCH-based SVM outperforms GARCH-based ANN in
forecasting the volatility itself. Moreover, online information is converted to
sentiment values, which constitutes another key input element for the machine-
learning models. Empirical studies indicate solid correlations between asset
price volatility and information sentiment, which is well captured and stored
by the SVM. Aggregated statistics show that a good forecast performance can
be achieved by use of GARCH-based SVM method under sliding time windows.
These empirical studies can be useful to financial investors, portfolio holders,
academicians, etc. in the sense that they provide an alternative tool to forecast
volatility and trend.
6
Online Stock Forum Sentiment Analysis

Introduction

Over the past few decades, behavioral finance and risk management have
attracted a great deal of attention from both researchers and practitioners
seeking to explain investor sentiment behavior, risk and loss perceptions,
factors affecting investment strategy and investor behavior related to ongoing
market trends.1
A great deal of unstructured data and information on public sentiment and opinions about market fluctuations is posted on the internet today. This includes, for example, discussion forums, blogs and message boards, as well as social media platforms such as Facebook and Twitter, which are major sources of big data. Investors' opinions and sentiments greatly affect market volatility.2
This chapter develops a sentiment ontology for conducting context-sen-
sitive sentiment analysis of online opinion posts in stock markets by inte-
grating popular sentiment analysis into machine-learning approaches based
on support vector machine (SVM) and generalized autoregressive conditional
heteroskedasticity (GARCH)3 modeling.

Architectural design of GARCH-SVM based on sentiment index

Figure 6.1 presents the conceptual flow chart for the methodology we use to
conduct context-sensitive sentiment analysis of online opinion posts based on
multiple sources of data. We first manually label sentiment polarity of a subset
of postings. Then using sentiment analysis and these labeled postings, we
identify features from written stock forum text to automatically predict sen-
timent polarity of other postings. We then aggregate postings for each stock on
a daily basis. Thereafter we use an aggregated sentiment index and SVM clas-
sifiers to build GARCH-SVM4 models to predict future stock price volatility.


Figure 6.1 shows internet multi-source data passing through data preprocessing (cleaning, segmenting), sentiment analysis and sentiment aggregation to yield a sentiment time series, which, together with historical time series of price, return and volatility, feeds dynamic training and forecasting by econometrics and data mining (e.g. GARCH-SVM) for simulation, prediction and trading.

Figure 6.1 Conceptual modeling of sentiment for volatility forecast

As can be seen from Figure 6.1, two sources of data are collected and incorporated
into the model: sentiment-related data and historical financial time series.

Sentiment analysis
In market prediction, sentiment analysis technology is employed to automat-
ically classify unstructured reviews as positive or negative, and then identify
investor sentiment as either ‘bullish’ or ‘bearish.’ We consider two approaches
for sentiment analysis: the machine-learning-based approach and the lexicon-
based approach.5
In the machine-learning-based approach, an n-gram model is necessary for
sentiment classification. The n-gram model takes characters (letters, spaces, or
symbols) as the basic units. The advantages of the n-gram method are: (1) it is language-independent, and can be applied to texts in English, traditional Chinese and simplified Chinese; (2) linguistic processing, word segmentation and part-of-speech tagging of the text are unnecessary; (3) it has good fault tolerance for spelling mistakes, and the requirement for prior knowledge of the text is low; (4) dictionaries and hand-crafted rules are unnecessary.
If the n-gram model is selected, classification accuracy declines as the order increases, i.e. 1-grams > 2-grams > 3-grams. We therefore first selected 1-grams as potential features from the training set, and then manually adjusted the features according to the following principle: when selecting 1-grams, many punctuation marks (commas and periods) appear that contribute little to classification, and in order to improve classification accuracy these punctuation marks should be eliminated manually. Table 6.1 demonstrates a simple example, and a short sketch of this feature-selection step follows the table.
Figure 6.2 demonstrates the basic process of the lexicon approach, which
includes pretreatment, word segmentation, POS (part of speech) tagging,
polarity (‘bullish’ or ‘bearish’) tagging, combining and results output.

Table 6.1 Selecting 1-grams as features

File                               1-grams

The large-cap stock is not bad.    The, large-cap, stock, is, not, bad
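The sketch below (illustrative, not the authors' code) shows this 1-gram feature-selection step for the review in Table 6.1, with punctuation stripped before the unigrams are counted as classifier features:

```python
# 1-gram feature selection as in Table 6.1: split the review into unigram tokens and
# strip punctuation marks, which contribute little to classification, before counting.
import string
from collections import Counter


def one_grams(review):
    tokens = (t.strip(string.punctuation) for t in review.split())
    return [t for t in tokens if t]


review = "The large-cap stock is not bad."
print(one_grams(review))            # ['The', 'large-cap', 'stock', 'is', 'not', 'bad']
print(Counter(one_grams(review)))   # unigram counts usable as classifier features
```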

Text/document → pretreatment → word segmentation → POS tagging → polarity tagging → combining → categorized text/document

Figure 6.2 The lexicon approach for sentiment classification

We chose the ICTCLAS System, developed by the Institute of Computing


Technology of the Chinese Academy of Sciences (http://ictclas.org/) for word
segmentation, and POS (part of speech) tagging. Because adjectives play an
important role in the sentiment classification of online reviews,6 we selected
adjectives as the keywords. Meanwhile, in order to take account of the
importance of the negative words, such as ‘no’ and ‘not,’ if negative words
appeared they were given equal importance to adjectives.
After word segmentation and POS tagging, each piece of text was decomposed and converted into an array of keywords, each of which was assigned a specific sentiment value based on its HowNet word score.
The overall sentiment for complete reviews was acquired by combining
sentiment values of all their keywords. Keywords in the same sentences are
grouped together. Then all the sentiment values of the keywords are added
together; the sum represents the overall sentiment of the specific review.

Data
Both stock forum data and financial time series data are used in this study.
The stock forum data utilized a corpus of stock reviews taken from a well-
known finance website, Sina Finance.

• Sina Finance (http://finance.sina.com.cn/) was founded in 1999. It is the first-choice global Chinese financial portal, holding more than one third of the market share of Chinese financial websites.
• Customers registering on this platform can freely seek and share financial
information on a daily basis.

The financial dataset consisted of 50 military-sector stocks.

• Stocks were chosen to focus on the military sector, because their stock forum
showed a wide range of activity.
• For a period of four months, from July to November 2012, we downloaded
every message posted to these forums. We then employed our voting algo-
rithm with the larger training set to assess each review and determine its
sentiment.

Figure 6.3 shows that 96.82% of the total reviews occurred on working days, indicating high volume during trading days and low activity during weekends and holidays.
Figure 6.4 shows that most reviews occurred at 9 am, 10 am, and 4 pm, indi-
cating that investors prefer to communicate with each other at the opening and
closing stages of the stock market. At the same time, many reviews occurred at
7 pm, during non-working hours. This differs from the American stock market.
We analyzed the content of the reviews and found that forecast reviews would
always be updated after 3 pm. One possible reason is that the stock market
in China closes at 3 pm, and investors will then make predictions about the
future stock market.
We chose the reviews updated after 3 pm on the trading day to create a
corpus of online stock reviews on the whole stock market, from July 15, 2012
to November 15, 2012. This corpus was divided into two sets: a training corpus
and a forecast corpus. We first manually labeled sentiment polarity (‘bullish’ or
‘bearish’) in the training corpus. Then, using sentiment analysis and the labeled
training reviews, we identified features from the written text of the stock forum
to automatically predict the sentiment polarity of the other reviews. When cre-
ating the forecast corpus, the number and date of the reviews were recorded.


Figure 6.3 Volume of reviews distributed by time of week




Figure 6.4 Volume of reviews distributed by time of day

Methodology comparison
Classification performance for the machine-learning approach and the lexicon approach is compared. Classification accuracy is the degree of closeness of the computed results to the actual (true) values. The computation shows that the statistical machine-learning approach has a classification accuracy of 81.82%, higher than that of the semantic approach, which has a classification accuracy of 75.58%; this difference is statistically significant at the 95% level (p value = 0.018). Classification accuracy of the statistical machine-learning approach is also reasonably robust with respect to the size of the training set once the size exceeds 600 reviews (Table 6.2).
Reviews were classified by our algorithms into one of three types: bullish
(optimistic), bearish (pessimistic), and neutral. Here the chi-square test shows
that there are significant differences in classification accuracy between the
semantic method and the statistical method. Because the statistical approach
possessed higher accuracy when compared with the semantic approach, we
opted for the statistical approach to assign labels for the reviews: bullish,
bearish and neutral.
When the size of the training set was relatively small, classification accuracy could be improved by expanding the training set. But once the number of training reviews reached a certain level, accuracy declined, and expanding the training set further only increased training time. Therefore, when choosing the size of the training set, it is necessary to balance efficiency and accuracy. To achieve this balance, we first manually labeled about 30,000 reviews with three distinct sentiments, 1 for bullish, −1 for bearish, and 0 for neutral – referred to as ‘manual labels.’

Table 6.2 Relative accuracies by sentiment

                                   Accuracy
Sentiment           Low (small)   Medium   High (large)     X²      p value

Age (High–Low)         66.7        62.5       68.1         1.434     0.488
  Positive             50          58.4       70.5
  Negative             69.1        71.4       78.4
Firm size              73.1        65.7       58.4         9.329     0.009
  Positive             70.5        66.7       41.65
  Negative             76.6        75         67.3
Price-to-book          73.8        65.6       60.9         8.385     0.015
  Positive             66.7        62.2       50
  Negative             75          76.6       69.1

Sentiment and stock price volatility


We analyzed the relationship of sentiment and stock price volatility trend at
the individual stock level. Given that there appears to be a link from sentiment
to the index at the aggregated level, we drilled down to the individual stock
level to explore possible relations. Our analysis used the normalized stock price
and normalized sentiment index for each stock. This normalization allowed us
to stack up all the stocks together and then conduct the analysis using pooled
data.
We selected three kinds of stock features: age, firm size, and price-to-book
value. The price-to-book ratio is a financial ratio used to compare a firm’s
current market price to its book value. A higher price-to-book ratio usually
implies that investors expect management to create more value from a given
set of assets, all else equal. The price-to-book ratio can have greater discrimin-
ating power to identify value stocks and growth stocks.
For each day, we formed ten equal-weighted portfolios according to the stock features of firm size (ME), age, and price-to-book value, and calculated the portfolio stock price volatility prediction accuracy over the individual stocks. Table 6.2 reports the average portfolio accuracy over days on which the sentiment value from the previous day-end was positive, and days on which it was negative; a sketch of this portfolio-formation step is given below.
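A brief pandas sketch of this step, using hypothetical column names rather than the study's actual data layout, is:

```python
# Form equal-weighted decile portfolios by a stock feature each day, then average the
# prediction accuracy separately over days whose previous-day-end sentiment was
# positive or negative. Column names here are illustrative assumptions.
import pandas as pd


def portfolio_accuracy(df, feature="firm_size"):
    """df columns assumed: date, stock, firm_size, accuracy, prev_day_sentiment."""
    df = df.copy()
    df["decile"] = df.groupby("date")[feature].transform(
        lambda x: pd.qcut(x, 10, labels=False, duplicates="drop"))
    df["sentiment_sign"] = df["prev_day_sentiment"].gt(0).map(
        {True: "positive", False: "negative"})
    return df.groupby(["decile", "sentiment_sign"])["accuracy"].mean().unstack()
```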
Table 6.2 shows the following interesting patterns. In comparing the accuracy
of aggregated bullish and bearish daily ticker data, we found no significant dif-
ference in prediction accuracy between young stocks and old stocks, given that
the p value equals 0.488, which is greater than 0.05 at the significance level of
95%. One possible reason is that the Chinese stock market is relatively new, so
market investors are not fully considering stock age as an evaluation factor for
stock returns.

There were significant differences in prediction accuracy between small


stocks and large ones. When the firm size was relatively small, sentiment sen-
sitivity was high and accuracy was high, but when the firm size was rela-
tively large, sentiment sensitivity was low and accuracy was low. This suggests
that our sentiment-based GARCH-SVM approach works better for small com-
panies, which are more sensitive than large ones to various online reviews. We
also examined whether investor sentiment had a significant influence on value
and growth stock returns. Based on data for the period July 1963 to December 2000, Siegel7 divided annual stock returns into size quintiles (small cap to large cap) and price-to-book quintiles (value to growth). For small-cap stocks, growth stocks returned 6.41% per year compared with 23.28% for value stocks; for large-cap stocks, value stocks returned 13.59% and growth stocks returned 10.28%. We obtained results consistent with those of previous studies.8 Our computational results in Table 6.2 suggest that there are significant differences in sentiment-oriented prediction accuracy between low price-to-book stocks (value stocks) and high price-to-book stocks (growth stocks). The model's prediction accuracy and sentiment sensitivity were lower for growth stocks than for value stocks: prediction accuracy and positive sentiment sensitivity were 73.8% and 66.7% respectively for value stocks, higher than the 60.9% and 50% for growth stocks. In comparing
the accuracy of aggregated bullish and bearish daily ticker data, the results
support that bearish labels have higher predictive accuracy than bullish labels.
Furthermore, bullish sentiment was heavily influenced by the phenomenon of
wishful thinking, reducing its predictive accuracy.

Conclusions

There is a growing trend to use investor sentiment as an investment guide in


the stock market. In the equity market, when market sentiment is bullish most investors expect upward movement of stock prices; conversely, when the majority of investors expect downward movement of stock prices, the market sentiment is called bearish.
In the stock market, effective forecasting of financial risk measured by vola-
tility is a critical activity, but is also very challenging. Therefore, accurate pre-
diction of financial volatility is important. Although stock price volatility has
been extensively discussed in the prior stock market literature, little published
research considers the difference in the predictive power between bullish and
bearish stock messages.
We employed sentiment analysis technology based on both a machine-
learning approach and a lexicon approach to automatically classify unstruc-
tured reviews as positive or negative, and then identify investor sentiment

as either ‘bullish’ or ‘bearish.’ Empirical studies indicated solid correlations


between stock price volatility trend and stock forum sentiment, which was
well captured and stored by the SVM. Computational results demonstrated
that the statistical machine-learning approach has a classification accuracy of
81.82%, which is higher than that of the semantic approach, with a classifi-
cation accuracy of 75.58%, significant at the 95% level.
Further analysis shows that:

• The proposed sentiment-based GARCH-SVM approach worked better for


small companies, which are more sensitive than large companies to various
online reviews.
• Investor sentiment had a particularly strong effect for value stocks relative to growth stocks: the model's prediction accuracy and positive sentiment sensitivity were 73.8% and 66.7% respectively for value stocks, higher than the 60.9% and 50% for growth stocks.
7
DEA Risk Scoring Model of Internet Stocks

Introduction

In financial markets, there are many kinds of investments, with stock the most
popular. When investors choose which stock to invest in, they may expect
high returns from investing in high performance companies. However, the
greatest concern for investors is whether their investment has the potential for
high returns, and whether the high performance companies will always yield
high returns.
Even after the dotcom collapse, US internet stocks have remained popular investments. However, investors are still concerned about future internet bubbles. Thus, the US internet stock market is a useful research focus with
respect to financial performance.
From an accounting perspective, the return on equity (ROE) ratio is an
important indicator to measure the performance of a company, because the
goal of a company is to maximum the stockholders’ equity. The DuPont model
breaks ROE into three parts: profit margin, total asset turnover and financial
leverage.1 It enables us to identify the existence of many indicators that influence
the performance of a company. Hence, multiple indicators are considered. Data
Envelopment Analysis (DEA) is a performance evaluation method capable of
considering multiple inputs and multiple outputs. In this research, we aim to
formulate an evaluation process combining the DEA method with the concept
of ROE. Investors can use this as a stock selection method, and managers can
use it for performance evaluation.

Different methods of performance evaluation

Performance evaluation considers a number of attributes (or criteria) and covers


multiple levels. Items chosen for evaluating performance include both quan-
tifiable and non-quantifiable indicators. These may be mutually exclusive,


related or independent of each other. In addition, the problems being faced are
extremely complex and unpredictable.
A number of techniques have been proposed. Objectivity, fairness and feasi-
bility are crucial for performance evaluation. This study reviews seven methods
applicable to the evaluation of performance. They are (1) Multivariate Statistical
Analysis,2 (2) Data Envelopment Analysis,3 (3) Analytic Hierarchy Process,4
(4) Fuzzy Set Theory,5 (5) Grey Relation Analysis,6 (6) Balanced Scorecard,7
and (7) Financial Statement Analysis.8 The fundamental theories of the seven
methods, and their advantages and disadvantages when applied to performance
evaluation, are described in detail below:

Multivariate statistical analysis


A statistical method used to quantify complex issues or events and to arrange
them systematically for the purpose of classification, inference, evaluation and
forecast.

Strengths:

i. It is based on traditional methods of statistics, with solid theoretical


foundation.
ii. The system is complete and could be applied in almost all areas of
research.

Weaknesses:

i. It requires a large sample size and normal distribution.


ii. Methods not including statistical testing cannot be used systematically,
which hampers further interpretation of the results.

Data envelopment analysis


Based on the concept of Pareto optimality. When measuring the efficiency value of a DMU, only the production frontier is required; actual production is then compared with this frontier to calculate the efficiency values.

Strengths:

i. DEA could be used to handle problems with multiple inputs and outputs.
ii. It would not be influenced by different scales.
iii. The result of DEA efficiency evaluation is a composite indicator, which can be related to the concept of total factor productivity in economics.
iv. The weighted value in the DEA model is the product of mathematical cal-
culation, and hence free from human subjectivity.
v. DEA can deal with interval data as well as ordinal data.

vi. The results of the evaluation by DEA could provide more information on
the data used, which could be used as a reference in the decision-making
process.

Weaknesses:

i. It yields the efficient frontier, which may be quite large.


ii. If the sample size is too small, the outcome is less reliable.
iii. There should not be too many variables.
iv. The degree of relation between the input and output variables (indicators)
is not considered.

Analytic hierarchy process


An approach to quantify subjective estimates. Complex and non-systematic
issues are treated systematically in a stepwise process, yielding weighted value
of options (indicators).

Strengths:

i. Easy to apply.
ii. The results are subject to consistency checking.
iii. Solid theoretical foundation and is objective.
iv. Easier to handle qualitative problems.

Weaknesses:

i. When there are great differences across experts, diverse results yield little
value.
ii. Fails to discuss the relation between factors (indicators).

Fuzzy set theory


Provides an overall evaluation of events or phenomena influenced by a number of factors, by building membership functions. Accordingly, the qualitative and quantitative values of the indicators become interchangeable, and a real-numbered value is assigned to each factor under evaluation. Priority is then assessed.

Strengths:

i. It can deal with a large number of uncertain problems.


ii. Since it is a simulation of human thought and decision processing, it is
compatible with human behavior.

Weaknesses:

i. The degree of membership is indicated by a value between 0 and 1, so the results of the evaluation are subject to the choice of membership function.
ii. The relation between variables (indicators) is not discussed.

Grey relation analysis


Based on the homogeneity or heterogeneity of the trend development of factors
to find out if there is grey relation between two indicators and, if so, the extent
of this relationship.

Strengths:

i. No rigid requirement for sample size.


ii. Can still be applied when the distribution of data is uncertain.
iii. Is based on data analysis, and is free from traditional subjectivity in
decision making.
iv. The method of calculation is simple and easy to apply.

Weaknesses:

i. Cannot directly handle qualitative issues (is non-quantifiable).


ii. The criteria for choosing grey relation coefficient value would directly
affect the final evaluation result.

Balanced scorecard
A performance evaluation system containing four components for evaluation.
This is also called a strategic management system, which could help firms
translate strategy into actions. The four components are finance, customer,
internal process, and learning and growth.

Strengths:

i. Can integrate information, and put various key factors for the success of
the organization into a single report.
ii. Avoids information overload, since the indicators used for performance
measurement are the key indicators.

Weaknesses:

i. The procedure for the application of BSC is complex and time consuming.

Financial statement analysis


People use this approach with the belief that the results of the firm's business activities will be reflected in its financial statements.

Strengths:

i. Objective: It is the reflection of actual events.


ii. Concrete: All data in the financial statement can be quantified.
iii. Measurable: Since the data in the financial statement can be quantified,
they are measurable.

Weaknesses:

i. There is no criterion for selecting a ratio that is agreeable by all users.


ii. The figures in the financial statement have been added or simplified, and
cannot satisfy the needs of all users.
iii. The financial statement cannot express qualitative information, such as
ability, morale, potential and trust.

Each of the above seven methods can be independently applied to evaluating


performance. However, none of them is perfect. There is a saying, ‘Whenever
there is an advantage, it entails a drawback.’ Researchers can only choose a
method to evaluate performance that has the least number of drawbacks for
that study’s particular situation. In contrast to other approaches such as AHP,
Multivariate Statistical Analysis and Grey Relation Analysis, DEA requires little
assumption about a functional form among variables; no prior information
on weight assigned to input/output variables is required. Thus, DEA provides
a very good tool to objectively gauge the DMU performance. DEA has been
widely used to yield new insights into activities (and entities) previously evalu-
ated by other methods such as TOPSIS and fuzzy methods.9

Basics of data envelopment analysis

DEA is a non-parametric approach to build an efficiency frontier to measure


relative efficiency for a set of homogeneous decision-making units (DMUs)
between multiple inputs and outputs. The theory of DEA can be traced back
to Farrell, who proposed using a production frontier to evaluate the tech-
nical efficiency. He divides efficiency into overall efficiency (OE) (or eco-
nomic efficiency), technical efficiency (TE), and allocative efficiency (AE).
Overall efficiency is composed of technical efficiency and allocative effi-
ciency. Technical efficiency shows the maximum output that a producer can generate from its given inputs. Allocative efficiency shows the input mix that an enterprise should use, given input prices and production technology. If we multiply technical efficiency by allocative efficiency, we get economic efficiency. That is:

OE = AE × TE

This is the efficiency measuring model that Farrell proposed in 1957.10 There are
two major DEA models: One is the CCR model proposed by Charnes, Cooper
and Rhodes,11 and the other is the BCC model proposed by Banker, Charnes
and Cooper, allowing variable returns to scale.12 The CCR model is input-
oriented, and the BCC is output-oriented. In this research, the output-oriented
BCC model is adopted because the variable returns to scale assumption is more
realistic, and the goal of companies is to maximize their outputs.
\[
\min \; h_s = \sum_{j=1}^{m} V_j X_{js} + D_s
\]

subject to

\[
\sum_{k=1}^{p} U_k Y_{ks} = 1,
\]

\[
\sum_{k=1}^{p} U_k Y_{ki} - \sum_{j=1}^{m} V_j X_{ji} - D_s \le 0, \quad i = 1, 2, \ldots, n,
\]

\[
V_j \ge 0, \; j = 1, 2, \ldots, m; \qquad U_k \ge 0, \; k = 1, 2, \ldots, p.
\]

Here Ds is a constant, free in sign; the value Di obtained when DMU i is evaluated can be used as an index of the returns to scale of that DMU. The standard is as follows:

Di > 0 → the DMU is under decreasing returns to scale
Di = 0 → the DMU is under constant returns to scale
Di < 0 → the DMU is under increasing returns to scale.

Using duality theory and slack variables to transform the model, we get:

\[
\max \; H_s + \varepsilon \left( \sum_{j=1}^{m} SV_{js}^{-} + \sum_{k=1}^{p} SV_{ks}^{+} \right)
\]

subject to

\[
H_s Y_{ks} - \sum_{i=1}^{n} Y_{ki} \lambda_i + SV_{ks}^{+} = 0, \quad k = 1, 2, \ldots, p,
\]

\[
X_{js} - \sum_{i=1}^{n} X_{ji} \lambda_i - SV_{js}^{-} = 0, \quad j = 1, 2, \ldots, m,
\]

\[
\sum_{i=1}^{n} \lambda_i = 1,
\]

\[
\lambda_i \ge 0, \; i = 1, 2, \ldots, n; \quad SV_{js}^{-} \ge 0, \; j = 1, 2, \ldots, m; \quad SV_{ks}^{+} \ge 0, \; k = 1, 2, \ldots, p.
\]

We can see that, compared with the CCR model, the BCC model adds the convexity constraint \(\sum_{i=1}^{n} \lambda_i = 1\), which means the production frontier need not pass through the origin, allowing variable returns to scale.
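To make the computation concrete, the following Python sketch solves the output-oriented BCC envelopment model with scipy.optimize.linprog. It is an illustrative implementation under stated assumptions, not the authors' code: the epsilon/slack terms are omitted for brevity, X is an m × n input matrix and Y a p × n output matrix with one column per DMU, and a reported score of 100 (taken here as 100/Hs) corresponds to a BCC-efficient DMU.

```python
# Minimal output-oriented BCC (variable returns to scale) envelopment model, without
# the epsilon/slack phase, solved as a linear program. Illustrative sketch only.
import numpy as np
from scipy.optimize import linprog


def bcc_output_efficiency(X, Y, s):
    """Return H_s for DMU s; H_s = 1 means BCC-efficient, H_s > 1 inefficient."""
    m, n = X.shape          # m inputs, n DMUs
    p, _ = Y.shape          # p outputs
    # Decision variables: [H_s, lambda_1, ..., lambda_n]; linprog minimizes, so use -H_s.
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Output constraints: H_s * Y[k, s] - sum_i lambda_i * Y[k, i] <= 0
    A_out = np.hstack((Y[:, [s]], -Y))
    b_out = np.zeros(p)
    # Input constraints: sum_i lambda_i * X[j, i] <= X[j, s]
    A_in = np.hstack((np.zeros((m, 1)), X))
    b_in = X[:, s]
    A_ub = np.vstack((A_out, A_in))
    b_ub = np.concatenate((b_out, b_in))
    # Convexity constraint (the BCC addition to CCR): sum_i lambda_i = 1
    A_eq = np.hstack(([[0.0]], np.ones((1, n))))
    b_eq = [1.0]
    bounds = [(1.0, None)] + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[0]


# Example: 5 DMUs with 3 inputs and 2 outputs (random illustrative data).
rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(3, 5))
Y = rng.uniform(1, 10, size=(2, 5))
scores = [100.0 / bcc_output_efficiency(X, Y, s) for s in range(X.shape[1])]
print([round(v, 2) for v in scores])   # 100 indicates a BCC-efficient DMU
```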

The proposed approach

DEA can deal with multiple inputs and outputs simultaneously, and DEA
models are broadly used in many fields. DEA is believed to be one of the most
commonly used approaches to measure company performance in the financial
industry. In this section, we propose a model to combine DEA models with a
financial analysis tool to evaluate efficiency of online companies.
Financial ratio analysis has been the standard technique used in economics
to examine business and managerial performances.13 Due to its simplicity and
ease of understanding, the analytical ratio measure has been widely applied
in many areas such as in financial investment and insurance industries.
Two of the most preferred analytical ratios are return on equity (ROE) and
return on assets (ROA), both providing insight into a financial institution
that allows management to make strategic decisions that can dramatically
affect its structure and profitability. ROA is defined as the ratio of net income
divided by total assets, and estimates how efficient we are at earning returns
per dollar of assets. ROA has been merged into DEA to evaluate efficiency and
effectiveness of an organization.14 ROE is calculated by dividing net income
by average equity, and identifies how efficiently we use our invested capital.
Companies that boast a high ROE with little or no debt are able to grow
without large capital expenditures, allowing the owners of the business to
withdraw cash and reinvest it elsewhere. ROE is just as comprehensive as ROA,
and could be a better indicator than ROA in terms of identifying a firm’s prof-
itability and potential growth, that is, the potential risk that a firm can take.
Moreover, from the accounting perspective, when we use the ROA ratio to
measure company performance, the ROE ratio has to be used simultaneously
to see whether a high ROE ratio arises from financial leverage, or whether the
company ROA is high. So in this research, we use the ROE ratio to make the
evaluation process more complete.

\[
\text{ROA} = \frac{\text{Net Income}}{\text{Assets}}, \qquad \text{ROE} = \frac{\text{Net Income}}{\text{Equity}}
\]

Many investors fail to realize, however, that two companies can have the same
return on equity, yet one can be a much better business. The DuPont model

provides a tool to decompose ROE into three elements: the net profit margin, the asset turnover, and the equity multiplier. By
examining each input individually, we can discover the sources of a company’s
return on equity and compare it to its competitors.
Using the DuPont model, the ROE ratio can be decomposed into:

\[
\text{ROE} = \underbrace{\frac{\text{Net Income}}{\text{Sales}}}_{\text{effectiveness}} \times \underbrace{\frac{\text{Sales}}{\text{Assets}}}_{\text{efficiency}} \times \underbrace{\frac{\text{Assets}}{\text{Equity}}}_{\text{equity multiple}} = \text{ROA} \times \text{Equity multiple}
\]
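As a small numerical illustration of the identity above (hypothetical figures, not drawn from the study's data):

```python
# DuPont decomposition of ROE with hypothetical figures (in millions).
net_income, sales, assets, equity = 120.0, 1500.0, 1000.0, 400.0

profit_margin = net_income / sales       # effectiveness
asset_turnover = sales / assets          # efficiency
equity_multiple = assets / equity        # financial leverage

roe = profit_margin * asset_turnover * equity_multiple
assert abs(roe - net_income / equity) < 1e-12   # identical to Net Income / Equity
print(f"ROE = {roe:.1%} = {profit_margin:.1%} x {asset_turnover:.2f} x {equity_multiple:.2f}")
```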

Asset turnover (efficiency) is used to measure the ability of the firm to use its
assets, and can be deemed as the operational efficiency of a company. Profit
margin (effectiveness) is used to diagnose the effectiveness of a company; it
measures not only the competitiveness of the product but the expense control
ability of a company.
The equity multiple can be used to understand the capital structure of a company, and companies can use financial leverage to control their capital structure; higher leverage, however, means investors take on higher risk in pursuit of higher returns. Hence, we use the concept of ROE to test whether investing in companies with high financial performance yields high returns, because it is important to consider return and risk simultaneously when choosing an investment target.
Based on the concept of measuring firm performance by efficiency and
effectiveness, this research adopts the two-stage DEA model15 to evaluate the
performance of online companies. The two-stage DEA approach is shown in
Figure 7.1.
The risk-scoring model is depicted in Figure 7.2, to include two sub-processes.
In contrast to Figure 7.1, Figure 7.2 introduces another process to understand
whether investing in companies with high financial performance can get high
returns or not.
These two processes can be used to evaluate the company from both enter-
prise (company performance) and investor (the returns per unit of risk available)
perspectives; hence this evaluation process can give investors and managers
more accurate criteria to make decisions.

Variable selection
In order to measure DMU efficiency, the selection of the input variables and
output variables is very important. In the internet industry, existing literature
defines a good set of variables to measure online company performance,
including both financial data and non-financial data.

Input variable → Stage 1 (efficiency) → intermediate variable → Stage 2 (effectiveness) → output variable

Figure 7.1 Two-stage DEA model

Process I: input variable → Stage 1 (efficiency) → intermediate variable → Stage 2 (effectiveness) → output variable
Process II: input variable → return level per unit of risk → output variable

Figure 7.2 Proposed evaluation process

Some criteria are set up in order to facilitate the input and output selection
as follows:

1. The variables adopted by a paper measuring the internet industry using the
DEA approach can be considered.
2. There are limited papers measuring internet industry performance using
the DEA approach, so a two-stage DEA approach will be used for measuring
operating efficiency and effectiveness.
3. For measuring the investing risk, there is only one paper using the DEA
approach to measure the relationship between return and risk. Hence, the
variables adopted by that paper have been considered.
4. All the variables must conform to the ROE concept.

We based our variable selection on various financial measures employed in


the literature for evaluation of the efficiency of a financial institution.16 In
Evaluation Process 1, we chose operating expense, employees, total assets,

revenue, gross profit, EPS and net income as the main variables to measure the
performance of online companies based on the literature review and sugges-
tions by experts. In order to measure efficiency, we used total assets, total
equity and operating expense as input variables to measure how much money
they can earn (revenue) and how many profits (gross profit) they can generate.
Total assets was chosen as an input because it is the sum of intangible, current and fixed assets, and the intangible assets shown on the balance sheet as a result of a merger or takeover are hard to measure separately. Operating expense was chosen as an input because many internet companies do not report marketing expense or R&D expense separately. In
order to measure effectiveness, we used revenue and gross profits as input vari-
ables, to see how well companies controlled their expenses to generate money
(net income) and how much income was shared with stockholders (EPS).
In Evaluation Process 2, we chose Beta, book value to market value (BV/
MV), and rate of return as the main variables to measure how much return
a company can generate given the same risk. Beta was chosen as an input
because investing risk can be divided into systematic risk and non-systematic
risk. For investors, the non-systematic risk can be dispersed by diversification
effect. So, if systematic risk is the only concern, then the Beta coefficient is a
better way to measure the risk. BV/MV was chosen as an input because it is also
a commonly used ratio to measure investing risk.

Empirical study

The sample for this study includes listed online companies in the United
States. There are 127 such companies in NASDAQ categories, and they are
grouped into three categories: internet service providers, internet information
providers, and internet software and services providers. Because DEA cannot handle negative data, companies with negative values had to be excluded, and because some accounting periods differed across companies, we retained only 27 listed online companies. Data was collected from Yahoo! Finance
(http://finance.yahoo.com/) and EDGAR Online (http://edgar.brand.edgar-
online.com/default.aspx) for 2006.

The DEA result


The DEA efficiency scores are a percentage value which varies between 0%
and 100%. If the efficiency score is equal to 100%, then the score is the best
efficiency and hence the unit is the most efficient unit. The DEA result for
Evaluation Process 1 is shown in Table 7.1.
We can see that 10 out of 27 listed online companies, namely GOOG, TZOO,
JCOM, AMZN, UNTD, DTAS, ADAM, EGOV, EBAY, and RATE, are BCC-efficient
on operating efficiency. There are 6 out of 27 listed online companies, namely

Table 7.1 BCC-efficient scores on performance

Company name                          Stock code   Operating efficiency score   Effectiveness score

Google, Inc. GOOG 100 100


A.D.A.M., Inc. ADAM 100 100
j2 Global Communications, Inc. JCOM 100 100
eBay, Inc. EBAY 100 65.88
Bankrate, Inc. RATE 100 64.24
Travelzoo, Inc. TZOO 100 50.54
Digitas, Inc. DTAS 100 40.03
NIC Inc. EGOV 100 35.16
United Online, Inc. UNTD 100 34.96
Amazon.com, Inc. AMZN 100 30.44
Varsity Group, Inc. VSTY 99.03 93.39
Citrix Systems, Inc. CTXS 93.81 56.05
Yahoo!, Inc. YHOO 91.64 100
Websense, Inc. WBSN 90.67 76.26
Aquantive, Inc. AQNT 89.54 36.69
Digital River, Inc. DRIV 86.64 96.26
IAC/InterActive Corp IACI 84.27 67.88
Sabre Corp TSG 82.46 32.18
Sohu.com Inc. SOHU 82.12 79.56
Priceline.Com, Inc PCLN 81.63 100
eCollege.com ECLG 79.94 23.92
Open Solutions, Inc. OPEN 75.56 59.98
Sina Corp SINA 72.87 67.65
iPass, Inc. IPAS 71.68 20.92
Online Resources Corp ORCC 68.58 100
Corillian Corp CORI 67.17 13.37
SupportSoft, Inc. SPRT 54.22 17.19

ADAM, ORCC, JCOM, PCLN, GOOG, and YHOO, which are BCC-efficient on
effectiveness.
Moreover, of the ten companies that are BCC-efficient on operating effi-
ciency, seven companies are not BCC-efficient on effectiveness. These are
TZOO, AMZN, UNTD, DTAS, EGOV, EBAY, and RATE. These seven companies
can use their resources to generate profits very well; however, they cannot use
their profits to generate income very well. This may be because, compared with the companies that are BCC-efficient on both dimensions, these seven companies do not control their expenses well, or they issue more stock so that EPS becomes diluted.
On the other hand, of the six companies that are BCC-efficient on effect-
iveness, three are not BCC-efficient on operating efficiency. They are ORCC,
PCLN, and YHOO. These companies can use their profits to generate income

very well; however, they cannot use their resources to generate profits very well.
This may be because when compared with the companies that are BCC-efficient
on both dimensions, these companies spend more costs on their products, or
the sales volume or price is lower than market, or their capital has mainly come
from inside (stockholder’s equity), not borrowed from outside (liabilities).
In order to see the total efficiency of each company, we multiply the operating efficiency score by the effectiveness score; the resulting total efficiency scores are also percentages between 0% and 100%. It can be observed that only three companies, ADAM, GOOG and JCOM, perform best in both dimensions, being BCC-efficient in both operating efficiency and effectiveness. These three companies can use their resources
to generate profits, as well as using their profits to generate income.
The DEA result for Evaluation Process 2 is shown in Table 7.2.

Table 7.2 BCC-efficient scores on the level of returns per unit of risk

Company name Stock code Risk score

j2 Global Communications, Inc. JCOM 100


Priceline.com, Inc. PCLN 100
Varsity Group, Inc. VSTY 100
A.D.A.M., Inc. ADAM 100
Online Resources Corp ORCC 60.89
Yahoo!, Inc. YHOO 45.93
Google, Inc. GOOG 42.39
Digital River, Inc. DRIV 39.62
Sohu.com, Inc. SOHU 37.20
Websense, Inc. WBSN 30.88
Bankrate, Inc. RATE 30.25
Travelzoo, Inc. TZOO 26.70
Amazon.com, Inc. AMZN 24.11
Aquantive, Inc. AQNT 23.62
Citrix Systems, Inc. CTXS 23.27
United Online, Inc. UNTD 22.48
Sina Corp SINA 22.40
eBay, Inc. EBAY 19.20
NIC, Inc. EGOV 18.92
Digitas, Inc. DTAS 15.67
Sabre Corp TSG 15.67
Open Solutions, Inc. OPEN 14.76
IAC/InterActiveCorp IACI 10.94
eCollege.com ECLG 10.24
Corillian Corp CORI 9.04
iPass, Inc. IPAS 8.86
SupportSoft, Inc. SPRT 5.02

We can see there are 4 out of 27 listed online companies, namely JCOM,
PCLN, VSTY, and ADAM, which are BCC-efficient on investing risk. These four
companies have the highest level of returns per unit of risk, which means that
if investors choose them, they can expect the highest level of returns.
There is a clear relationship between total efficiency and investing risk. Table 7.3 shows
the total efficiency and risk for the top ten and bottom four companies. If
investors choose companies with high scores in total efficiency, they can enjoy
a higher level of returns. Otherwise, if the investors choose the companies
with low scores in total efficiency, they will get a lower level of returns.
Moreover, through the comparison in Table 7.4, we can see the companies
that are BCC-efficient on operating efficiency but not BCC-efficient on effect-
iveness will perform worse in total efficiency. For example, TZOO, AMZN,
UNTD, DTAS, EGOV, EBAY, and RATE are BCC-efficient on operating effi-
ciency, but their performance on effectiveness is bad, and so the total effi-
ciency of these companies will be lower.
On the other hand, the companies that are BCC-efficient on effectiveness
but not on operating efficiency will perform better in total efficiency than the
companies that are BCC-efficient on operating efficiency but not on effect-
iveness. For example, ORCC, PCLN, and YHOO are BCC-efficient on effect-
iveness, but their performance on operating efficiency is not good. However,
these companies have good scores in total efficiency.
So, we can see that the main dimension that influences the total efficiency
of a company is its effectiveness. In the internet industry, the effectiveness of a
company is more important than its operating efficiency.

Table 7.3 Ranking of the BCC-efficient scores of total efficiency and investing risk

Rank    Total efficiency    Investing risk

1 ADAM ADAM
2 JCOM JCOM
3 GOOG PCLN
4 VSTY VSTY
5 YHOO ORCC
6 DRIV YHOO
7 PCLN GOOG
8 WBSN DRIV
9 ORCC SOHU
10 EBAY WBSN
24 ECLG ECLG
25 IPAS CORI
26 SPRT IPAS
27 CORI SPRT

Table 7.4 Ranking of the BCC-efficient scores of the whole model

Rank Operating efficiency Marketability Total efficiency Investing risk

1 ADAM ADAM ADAM ADAM


2 JCOM JCOM JCOM JCOM
3 GOOG GOOG GOOG PCLN
4 AMZN PCLN VSTY VSTY
5 UNTD ORCC YHOO ORCC
6 DTAS YHOO DRIV YHOO
7 TZOO DRIV PCLN GOOG
8 EGOV VSTY WBSN DRIV
9 EBAY SOHU ORCC SOHU
10 RATE WBSN EBAY WBSN
24 IPAS ECLG ECLG ECLG
25 ORCC IPAS IPAS CORI
26 CORI SPRT SPRT IPAS
27 SPRT CORI CORI SPRT

There are only 2 out of the 27 listed online companies, namely ADAM
and JCOM, that are BCC-efficient on operating efficiency, effectiveness and
investing risk. These companies operate well, and investors can get the highest
returns by choosing them.
GOOG is BCC-efficient on operating efficiency and marketability; however,
its efficiency score on investing risk is low. This means that GOOG can use its
resources to generate profits, as well as using its profits to generate income, but
investors cannot get the high returns they hope for by investing in GOOG.
On the other hand, there are 3 of the 27 listed online companies, namely
IPAS, CORI, and SPRT, which are BCC-inefficient on operating efficiency,
effectiveness and investing risk. These companies operate their company
inefficiently, and investors get the lowest returns by choosing them.

Conclusions

This chapter proposed a new performance measurement model, combining the ROE and DEA approaches, for both investors and managers. From the empirical study, we draw the following conclusions:

1. The main contribution of this study is to propose an accurate evaluation process combining the ROE concept with the DEA model. The evaluation process considers not only the operating performance of a company but also the return to investors as part of the overall measure of a company. For investors, this model can be used as a stock-selection strategy: based on the relative score of each DMU, investors can easily rank the priority of the stocks. For managers, it can be used as a performance measurement model: based on the relative score of each DMU, managers can see where their company and its competitors stand, and can identify, dimension by dimension, whether the company is performing well or needs improvement.
2. Based on the research results, the dimension with the greatest influence on a company's total efficiency is effectiveness. In the internet industry, a company's effectiveness is more important than its operating efficiency in determining performance. Hence, investors may focus on net income and EPS when they evaluate the US internet stock market. For managers, most of the internet companies perform well on operating efficiency, which means they can use their resources to generate profit. However, most of the internet companies have lower efficiency scores on effectiveness, so companies should control their expenses in order to raise their net income.
3. If investors choose companies with high scores in operating performance,
they should gain a higher level of returns. If the investors choose companies
with low scores in operating performance, they are likely to gain a lower
level of returns.

As with any study, this research is not without limitations. Three limitations, together with a direction for future research, are noted:

1. The DEA model cannot use negative numbers.17 However, the listed online
companies can have negative net income or equity. In order to compare
these listed online companies on the same basis, companies with different
financial periods were excluded, leaving only 27 companies. The number of
decision-making units (DMUs) is greater than twice the sum of the number of input and output variables, so the sample size used in this study still complies with the requirements of the DEA approach.
2. Non-financial data are not included, which is also an important dimension
to measure online companies. This is because some data cannot be meas-
ured or may be confidential.
3. There are few previous research reports using DEA to evaluate the performance of the internet industry, resulting in a lack of theoretical support for the input and output variable selection.
4. DEA can be combined with classical risk management such as value-at-risk18
to develop new methodologies for optimizing risk management.19
8
Bank Credit Scoring

Introduction

The concept of enterprise risk management (ERM) developed in the mid-1990s


in industry, expressing a managerial focus. ERM is a systematic, integrated
approach to managing all risks facing an organization.1 It has been encouraged
by traumatic recent events such as 9/11/2001 and business scandals, including
Enron and WorldCom. Consideration of risk has always been with business,
manifesting itself in 17th-century coffee houses such as Lloyd’s of London
spreading risk related to cargoes on the high seas. Businesses exist to cope with
specific risks efficiently. Uncertainty creates opportunities for businesses to
make profits. Outsourcing can offer many benefits, but also has a high level of
inherent risk, and ERM seeks to provide means to recognize and mitigate risks.
The field of insurance developed to cover a wide variety of risks, both external
and internal, covering natural catastrophes, accidents, human error, and even
fraud. Financial risk has been controlled through hedge funds and other tools
over the years, often by investment banks. With time, it was realized that many
risks could be prevented, or their impact reduced, through loss-prevention and
control systems, leading to a broader view of risk management.
The subprime crisis has made companies increasingly insistent that ERM function effectively. The failure of the credit rating mechanism troubles companies that need timely signals about the underlying risks of their financial assets. Major credit rating agencies such as Standard & Poor's (S&P) and Moody's have recently integrated ERM as an element of their overall analysis of corporate creditworthiness.

Risk modeling

It is essential to use models to handle risk in enterprises. Risk-tackling models


can be (1) an analytical method for valuing instruments, measuring risk


and/or attributing regulatory or economic capital; (2) an advanced or complex


statistical or econometric method for parameter estimation or calibration used
in the above; or (3) a statistical or analytical method for credit risk rating or
scoring.
Risk management tools can include creative risk financing solutions, blending
financial, insurance and capital market strategies (AIG).2 Capital market instru-
ments include catastrophe bonds, risk exchange swaps, derivatives/options,
catastrophe equity puts (cat-e-puts), contingent surplus notes, collateralized
debt obligations, and weather derivatives.
Many risk studies in banking involving analytic (quantitative) models have
been presented. Crouhy et al. (1998) provided comparative analysis of such
models.3 Value-at-risk models have been popular,4 partially in response to
the Basel II banking guidelines. Other analytic approaches include the simu-
lation of internal risk rating systems using past data. Jacobson et al. found that
Swedish banks used credit rating categories, and that each bank reflected its
own risk policy.5 One bank was found to have a higher level of defaults but
without adversely affecting profitability, due to its constraining high-risk loans
to low amounts. Elsinger et al. (2006) examined the systemic risk from overall
economic systems as well as the risk from networks of banks with linked loan
portfolios.6 The overall economic system risk was found to be much more
likely, while linked loan portfolios involved high impact but very low prob-
ability of default.
The use of scorecards has been popularized by Kaplan and Norton (1992)7
in their balanced scorecard, as well as other similar efforts to measure per-
formance on multiple attributes.8 In the Kaplan and Norton framework, four
perspectives are used, each with possible goals and measures specific to each
organization. Table 8.1 demonstrates this concept in the context of bank risk
management:
This framework of measures was proposed as a means to link intangible
assets to value creation for shareholders. Scorecards provide a focus on strategic
objectives (goals) and measures, and have been applied in many businesses and
governmental organizations with reported success. Papalexandris et al. (2005)9
and Calandro and Lane (2006)10 both have proposed use of balanced score-
cards in the context of risk management. Specific applications to finance,11
homeland security,12 and auditing13 have been proposed.
Model risk pertains to the risk that models are incorrectly implemented (with errors), or that they rely on questionable assumptions or on assumptions that no longer hold in a particular context. It is the responsibility of the executive
management in charge of areas that develop and/or use models to determine
what models this policy applies to.
Lhabitant (2000)14 summarized a series of cases where model risk led to large
banking losses. These models vary from a trading model in pricing stripped

Table 8.1 Balanced scorecard perspectives, goals, and measures (goal: measures)

FINANCIAL
  Survive: Cash flow
  Succeed: Quarterly sales, growth, operating income by division
  Prosper: Increase in market share, increase in Return on Equity
CUSTOMER
  New products: % sales from new products, % sales from proprietary products
  Responsive supply: On-time delivery (customer definition)
  Preferred suppliers: Share of key accounts' purchases, ranking by key accounts
  Customer partnerships: # of cooperative engineering efforts
INTERNAL BUSINESS
  Technology capability: Benchmark vs. competition
  Manufacturing excellence: Cycle time, unit cost, yield
  Design productivity: Silicon efficiency, engineering efficiency
  New product innovation: Schedule: actual vs. planned
INNOVATION & LEARNING
  Technology leadership: Time to develop next generation
  Manufacturing learning: Process time to maturity
  Product focus: % products equaling 80% of sales
  Time to market: New product introduction vs. competition

mortgage-backed securities, to risk and capital models in deciding on the structured securities, to decision models in issuing a gold card. Table 8.2 summarizes some model risk events in banking.
Sources of model risk arise from the incorrect implementation and/or use
of a performing model (one with good predictive power) or the correct imple-
mentation/use of a non-performing model (one with poor predictive power).
To address these risks, the review of a statistical model comprises two main components: vetting and validation. Vetting focuses on analytic model components, includes a methodology review, and verifies any implementation, while
validation follows vetting and is an ongoing systematic process to evaluate
model performance and to demonstrate that the final outputs of the model are
suitable for the intended business purpose.

Performance validation in credit rating

Performance validation/back testing focuses on credit rating in two key aspects:


discriminatory power (risk discrimination) and predictive accuracy (model
calibration). Discriminatory power generally focuses on the model’s ability to

Table 8.2 Model risk events in banking

Trading and position management models: booking with a model that does not incorporate all features of the deal; booking with an unvetted or incorrect model; incorrect estimation of model inputs (parameters); incorrect calibration of the model; etc.
Decision models in retail banking: incorrect statistical projections of loss; making an incorrect decision (e.g. lending decision) or incorrectly calculating and reporting the Bank's risk (e.g. default and loss estimation) as a result of an inadequate model; etc.
Risk and capital models: use of an unvetted or incorrect model; poor or incorrect estimation of model parameters; testing limitations due to a lack of historic data; weak or missing change control processes; etc.

rank-order risk, while predictive accuracy focuses on the model’s ability to


predict outcomes accurately (e.g. probability of defaults, loss given defaults,
etc.). Various statistical measures can be used to test the discriminatory power and predictive accuracy of a model. Commonly used measures in credit rating include divergence, the Lorenz/CAP curve, and the Kolmogorov–Smirnov (KS) statistic.15

Case study: credit scorecard validation

The predictive scorecard currently in use at a large Ontario bank is validated here. The bank has a network of more than 8,000 branches and 14,000 ATMs operating across Canada. It was created through the merger of two prominent financial institutions in 2000 and became Canada's leading retail banking organization; it has also become one of the top three online financial service providers, serving more than 2.5 million online customers. Because of this merger, the scorecard system used in the retail banking strategy needed to be validated promptly. The scorecard system under evaluation predicts the likelihood that a 60–120-day delinquent account (mainly on personal secured and unsecured loans and lines of credit) will be cured within the subsequent three months.
By breaking up funded accounts into three samples based on their limit issue
date, the model’s ability to rank order accounts based on creditworthiness
was validated for individual samples and compared to the credit bureau score.
Tables 8.3, 8.4, and 8.5 give the sample size, mean, and standard deviation of
these three samples: Sample 1 involves accounts from January 1999 to June 1999,
Sample 2 from July 1999 to December 1999, and Sample 3 from January 2000

Table 8.3 Scorecard performance validation, January–June 1999

Columns: Scorecard; Beacon; Beacon/Empirical; Scorecard (No Bureau score); Application alone; Bureau alone

Good N 26783 25945 26110 673 26783 26783


Mean 250 734 734 222 42 208
Std. Dev 24 55 55 22 9 21
Bad N 317 292 296 21 317 317
Mean 228 685 685 204 40 188
Std. Dev 23 55 55 13 9 22
Total N 27100 26237 26406 694 27100 27100
Mean 249 733 733 221 42 207
Std. Dev 24 55 55 22 9 21
KS% KS Value 39 37 37 44 14 36
Score 240 733 735 215 45 204
Divergence 0.869 0.792 0.790 0.877 0.070 0.814
Bad% 1.17 1.11 1.12 3.03 1.17 1.17

Table 8.4 Scorecard performance validation, July–December 1999

Columns: Scorecard; Beacon; Beacon/Empirical; Scorecard (No Bureau score); Application alone; Bureau alone

Good N 20,849 20,214 20,302 547 20,849 20,849


Mean 248 728 728 222 42 206
Std. Dev 24 54 54 23 9 21
Bad N 307 296 297 10 307 307
Mean 231 691 692 208 40 191
Std. Dev 23 55 55 12 9 22
Total N 21,156 20,510 20,599 557 21,156 21,156
Mean 248 727 727 222 42 206
Std. Dev 24 54 54 22 9 21
KS% KS value 33 26 26 42 10 29
Score 246 731 731 216 43 200
Divergence 0.528 0.450 0.435 0.624 0.040 0.498
Bad% 1.45 1.44 1.44 1.80 1.45 1.45

to June 2000. Cases of 90 days’ delinquency or worse, and accounts that were
closed with a ‘NA (non-accrual)’ status or that were written off were included as
bad performance. Good cases were defined as those that did not meet the def-
inition of ‘bad.’ The ‘bad’ definition is evaluated at 18 months. Three samples of
cohorts were created and compared. Specified time periods refer to month-end
dates. For the performance analyses, the limit issue dates were considered,

Table 8.5 Scorecard performance validation, January–June 2000

Columns: Scorecard; Beacon; Beacon/Empirical; Scorecard (No Bureau score); Application alone; Bureau alone

Good N 23,941 23,254 23,361 580 23,941 23,941


Mean 246 723 723 223 41 205
Std. Dev 24 54 54 21 9 21
Bad N 533 490 495 38 533 533
Mean 225 683 683 216 38 187
Std. Dev 21 51 51 16 9 20
Total N 24,474 23,744 23,856 618 24,474 24,474
Mean 245 723 723 222 41 204
Std. Dev 24 54 54 20 9 21
KS% KS value 38 33 33 26 14 34
Score 239 704 704 215 43 202
Divergence 0.843 0.606 0.598 0.147 0.078 0.789
Bad% 2.18 2.05 2.07 6.15 2.18 2.18

while the population analyses used the application dates. In order to validate
the relative effectiveness of the scorecard, we conducted statistical analysis, and
reported results for the following statistical measures: divergence test, Lorenz
curve, Kolmogorov–Smirnov (K–S) test, and population stability index.

Statistical results and discussion


Tables 8.3, 8.4 and 8.5 document scorecard performance validation for
January–June 1999, July–December 1999 and January–June 2000 respectively.
Table 8.6 presents the summary analysis for performance samples. The numbers
in these tables are rounded off to demonstrate the nature of score values
assigned to different customer accounts. This also helps prevent revealing the
bank’s business details, for security. Using the rounded number values, we can
easily compute the divergence values close to those in the last row of each
table. For example, relating to the scorecard in Table 8.3: Mean Good = 250,
Std. Dev. Good = 24, Mean Bad = 228, Std. Dev. Bad = 23. The difference (Mean
Good – Mean Bad) is equal to 22, and the average variances sum is equal to
552.5. The divergence is the fraction 484/552.5 = 0.876, which is very close to
the 0.869 based on non-rounded values given in Table 8.3. The corresponding
Lorenz curves are depicted in Figures 8.1, 8.2, 8.3 and 8.4. Note that all figures
are based on best applicant. In a dataset that has been sorted by the scores in
ascending order, with a low score corresponding to a risky account, the perfect
model would capture all the ‘bads’ as quickly as possible. The Lorenz curve
assesses a model’s ability to effectively rank order these accounts. For example,

Table 8.6 Summary of performance samples

January–June 99 July–December 99 January–June 00

Good N 26,783 20,849 23,941


Mean 250 248 246
Std. Dev 24 24 24
Bad N 317 307 533
Mean 228 231 225
Std. Dev 23 23 21
Total N 27,100 21,156 24,474
Mean 249 248 245
Std. Dev 24 24 24
KS% KS value 39 33 38
Score 240 246 239
Divergence 0.869 0.528 0.843
Bad% 1.17 1.45 2.18
* Low score override 574 504 533
** Instore accounts 459 569 1,353

Figure 8.1 Lorenz curve, January–June 1999 sample (percent of bads captured versus percent into the score distribution; curves shown for Scorecard, Beacon, Beacon/Empirica, Scorecard (No Bureau score), Applicant Alone, Bureau Alone, Random, and Exact)

if 15% of the accounts were bad, the ideal or exact model would capture all
these bads within the 15th percentile of the score distribution (the x-axis).
Similarly, the K–S and divergence statistics determine how well the models
distinguished between ‘good’ and ‘bad’ accounts by assessing the properties of
their respective distributions.
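To make these validation measures concrete, the following is a minimal Python sketch of how the divergence, the K–S statistic, and the Lorenz-curve points used above can be computed from account scores and good/bad labels; it is our own illustration rather than the bank's code, and the function names are ours.

import numpy as np

def divergence(scores, bad):
    # Divergence = (mean_good - mean_bad)^2 / average of the two variances
    scores = np.asarray(scores, dtype=float)
    bad = np.asarray(bad, dtype=bool)
    good_s, bad_s = scores[~bad], scores[bad]
    return (good_s.mean() - bad_s.mean()) ** 2 / ((good_s.var() + bad_s.var()) / 2.0)

def ks_statistic(scores, bad):
    # Maximum gap between the cumulative 'bad' and 'good' score distributions,
    # scanning scores in ascending order (a low score corresponds to a risky account)
    order = np.argsort(scores)
    bad = np.asarray(bad, dtype=bool)[order]
    cum_bad = np.cumsum(bad) / bad.sum()
    cum_good = np.cumsum(~bad) / (~bad).sum()
    gap = np.abs(cum_bad - cum_good)
    i = int(np.argmax(gap))
    return gap[i], np.sort(scores)[i]   # K-S value and the score at which it occurs

def lorenz_points(scores, bad):
    # Percent of bads captured versus percent into the ascending score distribution
    order = np.argsort(scores)
    bad = np.asarray(bad, dtype=bool)[order]
    pct_into = np.arange(1, len(bad) + 1) / len(bad)
    pct_bads_captured = np.cumsum(bad) / bad.sum()
    return pct_into, pct_bads_captured

The divergence function implements exactly the hand calculation above: for the rounded January–June 1999 figures (mean good 250, std 24; mean bad 228, std 23) the same arithmetic gives 484/552.5 ≈ 0.876.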
The results indicate that the scorecard is a good predictor of risk. Among the three sampling periods, January–June 99 and January–June 00 show slightly better predictive ability than July–December 99. The scorecard also performs better than the credit bureau score, though not by a significant margin.
Figure 8.2 Lorenz curve, July–December 1999 sample (percent of bads captured versus percent into the score distribution; same set of curves as Figure 8.1)

Figure 8.3 Lorenz curve, January–June 2000 sample (percent of bads captured versus percent into the score distribution; same set of curves as Figure 8.1)

Figure 8.4 Performance comparison of three samples (percent of bads captured versus percent into the score distribution for the Jan–Jun '99, Jul–Dec '99 and Jan–Jun '00 samples, with Random and Exact reference curves)



The performance statistics for the three selected samples shown in Tables 8.3,
8.4 and 8.5 indicate the superiority of the scorecard as a predictive tool. The
scorecard was found to be a more effective assessor of risk for the earlier sample,
January–June 99, and the latest sample, January–June 00, but was slightly less
effective for the July–December 99 sample. There was a more distinct separation
between ‘goods’ and ‘bads’ for the above-mentioned first two samples than the
last: the maximum difference between the ‘good’ and ‘bad’ cumulative distri-
butions was 39% and 38% respectively, versus 33% for the remaining sample.
Similarly, the divergence values were 0.869 and 0.843, versus 0.528 for the less
effective sample.
It is possible that the scorecard was better able to separate ‘good’ accounts
from ‘bad’ ones for the earlier sample. On the other hand, the process to clean
up delinquent unsecured line of credit accounts starting from mid-2001 may
result in more bad observations for the latest sample (those accounts booked
between January 00 and June 00 with an 18-month observation window will
catch up with this clean-up process). This can be evidenced by the bad rate
of 2.18% for the January–June 00 sample, compared to 1.45% for the July–
December 99 sample, and 1.17% for the January–June 99 sample. If most of
these bad accounts in the clean-up have a low initial score, the predictive
ability of the scorecard on this cohort will be increased.

Population distributions and stability


We conduct a comparison analysis between the initial sample used to develop the
model and subsequent sampling periods, which provides insight into whether
or not the scorecard is being used to score a different population. The analyses
considering all applicants are included, but outliers have been excluded, that is,
invalid scorecard points. We consider four sampling periods for the cumulative
and interval population distribution charts: the FICO development sample,
January–June 99, July–December 99, and January–June 00. There was a very notable population shift across the samples, with recent applicants clearly scoring lower than before. Moreover, the development sample was markedly distinct from the three selected samples.
We now use the population stability index to estimate the change between
the samples. As mentioned in Section 4, a stability index of < 0.10 indicates an
insignificant shift; 0.10–0.25 requires some investigation; and > 0.25 means
that a major change has taken place between the populations being compared.
Tables 8.7, 8.8 and 8.9 present a detailed score distribution report together
with the six-month population stability index for each of the above three
selected samples, including funded accounts only. Computation shows that the
indexes for the three samples on funded accounts are greater than 0.1, and
the more recent samples score a lower index than the older samples: 0.2027
for the January–June 99 sample, 0.1461 for the July–December 99 sample, and
Table 8.7 Population stability, January–June 1999

Columns: (1) Score range; (2) FICO development (#); (3) January–June 99 (#); (4) FICO development (%); (5) January–June 99 (%); (6) Change = (5)–(4); (7) Ratio = (5)/(4); (8) Weight of evidence = ln[(7)]; (9) Contribution to index = (8)×(6); (10) Ascending cumulative of FICO (%); (11) Ascending cumulative of January–June 99 (%)

<170 37601 1430 7.13 1.96 −0.0517 0.2749 −1.2912 0.0668 7.13 1.96
170–179 25093 1209 4.76 1.66 −0.0310 0.3483 −1.0546 0.0327 11.89 3.62
180–189 30742 1888 5.83 2.59 −0.0324 0.4440 −0.8119 0.0263 17.72 6.21
190–199 37128 3284 7.04 4.50 −0.0254 0.6394 −0.4471 0.0114 24.77 10.71
200–209 42055 4885 7.98 6.70 −0.0128 0.8398 −0.1746 0.0022 32.74 17.41
210–219 46355 5735 8.79 7.86 −0.0093 0.8944 −0.1116 0.0010 41.53 25.27
220–229 49068 6716 9.31 9.21 −0.0010 0.9895 −0.0106 0.0000 50.84 34.48
230–239 48577 7543 9.21 10.34 0.0113 1.1226 0.1156 0.0013 60.06 44.83
240–249 48034 8762 9.11 12.02 0.0290 1.3187 0.2767 0.0080 69.17 56.84
250–259 46023 9121 8.73 12.51 0.0378 1.4328 0.3596 0.0136 77.90 69.35
260–269 40541 8826 7.69 12.10 0.0441 1.5739 0.4535 0.0200 85.59 81.45
270–279 37940 8310 7.20 11.40 0.0420 1.5835 0.4596 0.0193 92.78 92.85
>280 38050 5216 7.22 7.15 −0.0006 0.9910 −0.0090 0.0000 100.00 100.00
Total 527207 72925 100 100 .2027

Note: Population Stability Index (sum of contribution): 0.2027


The contribution index can be interpreted as follows:
≤ 0.10 indicates little to no difference between the FICO development score distribution and the current score distribution;
0.10 to 0.25 indicates some change has taken place;
≥ 0.25 indicates a shift in the score distribution has occurred.
Table 8.8 Population stability, July–December 1999

Columns: (1) Score range; (2) FICO development (#); (3) July–December 99 (#); (4) FICO development (%); (5) July–December 99 (%); (6) Change = (5)–(4); (7) Ratio = (5)/(4); (8) Weight of evidence = ln[(7)]; (9) Contribution to index = (8)×(6); (10) Ascending cumulative of FICO (%); (11) Ascending cumulative of July–December 99 (%)

<170 37601 1447 7.13 2.12 −0.0502 0.2968 −1.2146 0.0609 7.13 2.12
170–179 25093 1352 4.76 1.98 −0.0278 0.4156 −0.8781 0.0244 11.89 4.10
180–189 30742 2106 5.83 3.08 −0.0275 0.5284 −0.6379 0.0175 17.72 7.18
190–199 37128 3609 7.04 5.28 −0.0176 0.7498 −0.2880 0.0051 24.77 12.46
200–209 42055 5452 7.98 7.98 0.0000 0.9999 −0.0001 0.0000 32.74 20.43
210–219 46355 6169 8.79 9.03 0.0023 1.0265 0.0261 0.0001 41.53 29.46
220–229 49068 7009 9.31 10.25 0.0095 1.1018 0.0969 0.0009 50.84 39.71
230–239 48577 7454 9.21 10.91 0.0169 1.1836 0.1685 0.0029 60.06 50.62
240–249 48034 7908 9.11 11.57 0.0246 1.2699 0.2389 0.0059 69.17 62.19
250–259 46023 7774 8.73 11.37 0.0264 1.3029 0.2646 0.0070 77.90 73.56
260–269 40541 7362 7.69 10.77 0.0308 1.4007 0.3370 0.0104 85.59 84.33
270–279 37940 6716 7.20 9.83 0.0263 1.3654 0.3114 0.0082 92.78 94.16
>280 38050 3993 7.22 5.84 −0.0138 0.8094 −0.2114 0.0029 100.00 100.00
Total 527207 68351 100 100 .1461

Note: Population Stability Index (sum of contribution): 0.1461.


Table 8.9 Population stability, January–June 2000

Columns: (1) Score range; (2) FICO development (#); (3) January–June 00 (#); (4) FICO development (%); (5) January–June 00 (%); (6) Change = (5)–(4); (7) Ratio = (5)/(4); (8) Weight of evidence = ln[(7)]; (9) Contribution to index = (8)×(6); (10) Ascending cumulative of FICO (%); (11) Ascending cumulative of January–June 00 (%)

<170 37601 1928 7.13 2.46 −0.0467 0.3448 −1.0648 0.0498 7.13 2.46
170–179 25093 1838 4.76 2.34 −0.0242 0.4925 −0.7082 0.0171 11.89 4.80
180–189 30742 3136 5.83 4.00 −0.0183 0.6859 −0.3770 0.0069 17.72 8.80
190–199 37128 4784 7.04 6.10 −0.0094 0.8664 −0.1434 0.0013 24.77 14.91
200–209 42055 6505 7.98 8.30 0.0032 1.0401 0.0393 0.0001 32.74 23.20
210–219 46355 7212 8.79 9.20 0.0041 1.0462 0.0451 0.0002 41.53 32.40
220–229 49068 8250 9.31 10.52 0.0122 1.1306 0.1227 0.0015 50.84 42.92
230–239 48577 8762 9.21 11.18 0.0196 1.2129 0.1930 0.0038 60.06 54.10
240–249 48034 8769 9.11 11.18 0.0207 1.2276 0.2050 0.0043 69.17 65.28
250–259 46023 8451 8.73 10.78 0.0205 1.2348 0.2109 0.0043 77.90 76.06
260–269 40541 7850 7.69 10.01 0.0232 1.3020 0.2639 0.0061 85.59 86.07
270–279 37940 6736 7.20 8.59 0.0140 1.1939 0.1772 0.0025 92.78 94.67
>280 38050 4182 7.22 5.33 −0.0188 0.7391 −0.3024 0.0057 100.00 100.00
Total 527207 78403 100 100 .1036

Note: Population stability index (sum of contribution): 0.1036.


Table 8.10 Total population stability index (monthly indexes for total applications)

January 2000 to December 2000: three monthly indexes fell between 0.10 and 0.25 (.1097, .1313, .1236); the remaining nine were ≤ 0.10 (.0959, .0962, .0826, .0999, .0919, .0940, .0926, .0693, .0656).
January 2001 to January 2002 (in order): .0940, .0898, .0787, .0979, .0829, .0615, .0696, .0701, .0907, .0816, .0817, .0915, .0771 (all ≤ 0.10).

0.1036 for the January–June 00 sample. We also computed the monthly popu-
lation stability, showing the monthly index for total applications (funded or
not funded) in the two years starting from January 2000. This result further
confirms the declining trend with the monthly indexes for the past 20 months
all resting within 0.1 (see Table 8.10).
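The population stability index behind Tables 8.7–8.10 is simple to compute: it sums, over the score bands, (current% − development%) × ln(current% / development%). A minimal Python sketch (our own illustration, using the band counts of Table 8.7) follows.

import numpy as np

def population_stability_index(dev_counts, cur_counts):
    # PSI = sum over score bands of (current% - development%) * ln(current% / development%).
    # Rule of thumb used in the text: <= 0.10 insignificant shift, 0.10-0.25 some change,
    # >= 0.25 a major shift between the populations being compared.
    dev = np.asarray(dev_counts, dtype=float) / np.sum(dev_counts)
    cur = np.asarray(cur_counts, dtype=float) / np.sum(cur_counts)
    return float(np.sum((cur - dev) * np.log(cur / dev)))

# Score-band counts from Table 8.7: FICO development sample versus January-June 99 funded accounts
fico_development = [37601, 25093, 30742, 37128, 42055, 46355, 49068,
                    48577, 48034, 46023, 40541, 37940, 38050]
jan_jun_99 = [1430, 1209, 1888, 3284, 4885, 5735, 6716,
              7543, 8762, 9121, 8826, 8310, 5216]
print(round(population_stability_index(fico_development, jan_jun_99), 4))  # approximately 0.2027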
As indicated in Figures 8.5 and 8.6, more of the latest sample accounts had lower scores than in the older samples, revealing a tendency for scores to drop over time. All three samples had a score distribution higher than the development sample.
The stability indices revealed that the greatest population shift occurred
when the scorecard was originally put in place, then the extent of shift reduced
gradually across time. The indexes stayed within 0.1 for the past 20 months.

Conclusions

Maintaining a certain level of risk has become a key strategy to make profits in
today’s economy.
Risk in enterprise can be quantified and managed using various models.
Models also provide support to organizations seeking to control enterprise
risk. We have discussed risk modeling and reviewed some common risk measures. Using these measures, we demonstrate support for risk management through the validation of predictive scorecards for a large bank. The scorecard model is validated and compared to credit bureau scores. A comparison of the K–S value and the divergence value between the scorecard and the bureau score across the three samples indicated the scorecard to be the better tool for distinguishing the 'bads' from the 'goods.'

Figure 8.5 Cumulative population distribution on all applicants (cumulative distribution by score range for the FICO Development, Jan–Jun99, Jul–Dec99, and Jan–Jun00 samples)



Figure 8.6 Interval population distribution on all applications (interval distribution by score range for the FICO Development, Jan–Jun99, Jul–Dec99, and Jan–Jun00 samples)

In practice, however, the vetting and validation of models may encounter many challenges. These challenges become apparent, for example, when retail models under vetting are relatively new to the enterprise, when there are large numbers of variables and large volumes of data to manipulate but limited access to the datasets because of privacy restrictions, or when validation tests are not standardized and there are demands to change the measure if results do not look favorable.

Appendix A8.1: Informal Definitions

a. Bad accounts refer to cases 90 days delinquent or worse, accounts closed


with a ‘NA (non-accrual)’ status or that were written off.
b. Good accounts were defined as those that did not meet the ‘bad’ definition.
c. Credit score is a number that is based on a statistical analysis of a person’s
credit report, and is used to represent the creditworthiness of that person –
that is, the likelihood that the person will pay his or her debts.
d. A credit bureau is a company that collects information from various sources
and provides consumer credit information on individual consumers for a
variety of uses.
e. Custom score refers to the score assigned to existing customers or new
applicants.
f. Beacon score is the credit score produced by Equifax, the most recognized credit agency in Canada.
g. The FICO score is the credit score from Fair Isaac Corporation, a publicly
traded corporation that created the best-known and most widely used credit
score model in the United States.
9
Credit Scoring using Multiobjective
Data Mining

Introduction

The technique for order preference by similarity to ideal solution (TOPSIS) is a


classical method first developed by Hwang and Yoon,1 subsequently discussed
by many.2 TOPSIS is based on the concept that alternatives should be selected
that have the shortest distance from the positive ideal solution (PIS) and the
longest distance from the negative ideal solution (NIS), or nadir. The PIS has
the best measures over all attributes, while the NIS has the worst measures over
all attributes.
TOPSIS provides a mechanism that is attractive in data mining,3 because it
can consider a number of attributes in a systematic way without very much
subjective human input. TOPSIS does include weights over the attributes that
are considered. However, such weights can be obtained through regression of
standardized data (where measurement scale differences are eliminated).4 This
allows machine learning, in the sense that data can be analyzed without sub-
jective human input. This chapter demonstrates the method to automatically
classify credit score data into groups of high expected repayment and low
expected repayment, based upon the concept of TOPSIS.

TOPSIS for data mining

The overall approach is to begin with a set of data, which in traditional data
mining practice is divided into training and test sets. The data can consist of
continuous or binary numeric data, with the outcome variable being binary.
The training set data is used to identify maximum and minimum measures
for each attribute. This training set is then standardized over the range of
0 to 1, with 0 reflecting the worst measure and 1 the best measure over each
attribute. Then relative weight importance is obtained by regression over the
standardized data to explain outcome performance in the training data set.


(An intermediate third data set could be created for generation of weights if
desired.)

Steps of the TOPSIS data mining method


The algorithm we propose thus consists of the following steps:

Step 1: Standardize data


In accordance with the presentation above, the training data set is standard-
ized so that each observation j over each attribute i is between 0 and 1. Let the
decision matrix X consist of m indicators over n observations. The normalized
matrix transforms the X matrix. For indicator i = 1 to m, identify the minimum $x_i^-$ and the maximum $x_i^+$. Then each observation $x_{ij}$ for j = 1 to n can be normalized by the following formulas:
For measures to be maximized:

$$y_{ij} = \frac{x_{ij} - x_i^-}{x_i^+ - x_i^-} \qquad (9.1)$$

For measures to be minimized:

$$y_{ij} = 1 - \frac{x_{ij} - x_i^-}{x_i^+ - x_i^-} \qquad (9.2)$$

which yields values between 0 (the worst) and 1 (the best).
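A minimal Python sketch of this standardization step (our own illustration; maximize is a boolean vector marking which attributes are to be maximized):

import numpy as np

def standardize(X, x_min, x_max, maximize):
    # Formulas (9.1)/(9.2): scale each attribute to [0, 1], with 1 the best value.
    # X is an (n, m) array of observations; x_min and x_max are the training-set
    # minima and maxima per attribute; maximize flags attributes where larger is better.
    Y = (np.asarray(X, dtype=float) - x_min) / (np.asarray(x_max, dtype=float) - x_min)  # Formula (9.1)
    minimize = ~np.asarray(maximize, dtype=bool)
    Y[:, minimize] = 1.0 - Y[:, minimize]                                                # Formula (9.2)
    return np.clip(Y, 0.0, 1.0)   # test observations outside the training range are truncated (Step 7)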

Step 2: Determine ideal and nadir solutions


The ideal solution consists of standardized values of 1 over all attributes, while
the nadir solution consists of values of 0 over all attributes.

Step 3: Calculate weights


In decision analysis, these weights would reflect relative criterion importance
(as long as scale differences are eliminated through standardization). Here we
are interested in the relative value of each attribute in explaining the outcome
of each case. These m weights wi will all be between 0 and 1, and will have a
sum of 1. Because weights are continuous, we use ordinary least squares (OLS)
regression over the standardized data to obtain the i = 1 to m different weights
from regression βi coefficients.

$$0 \le w_i \le 1, \qquad \sum_{i=1}^{m} w_i = 1$$
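A minimal Python sketch of this step (our own illustration): the weights are taken as the absolute OLS coefficients on the standardized attributes, normalized to sum to one, which is the normalization shown in the proportional weight column of Table 9.2 later in the chapter.

import numpy as np

def topsis_weights(Y, outcome):
    # Regress the binary outcome (0 = default, 1 = repaid) on the standardized attributes
    # with ordinary least squares, then convert the coefficients into weights that lie in
    # [0, 1] and sum to 1 by normalizing their absolute values (as in Table 9.2).
    A = np.column_stack([np.ones(len(Y)), np.asarray(Y, dtype=float)])  # intercept + attributes
    beta, _, _, _ = np.linalg.lstsq(A, np.asarray(outcome, dtype=float), rcond=None)
    abs_beta = np.abs(beta[1:])            # drop the intercept term
    return abs_beta / abs_beta.sum()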

Step 4: Calculate distances


TOPSIS operates by identifying $D_j^+$, the weighted distance from the ideal, and $D_j^-$, the weighted distance from the nadir. Different metrics, such as L1, L2, or L∞, could be used.5 Least absolute value regression (L1) has been found to be
useful when it is desired to minimize the impact of outlying observations,
and has been shown to be effective in a variety of applications, such as real
estate valuation6 and sports ratings.7 The ordinary least squares metric is L2,
widely used. The Tchebychev metric (L∞) focuses on the extreme performance
among the set of explanatory variables. Each metric focuses on the different
features described. Olson8 found L1 and L2 to provide similar results, both
better than L∞. The weights from Step 3 are used. Lee and Olson9 compared
different metrics for predicting outcomes of binary games, and found L2 and
L∞ to provide similar results, both better than L1. Thus, none of these metrics is
clearly superior to the others for any specific set of data. For the L1 metric, the
formula for the weighted distance from the ideal is:
$$D_j^{1+} = \sum_{i=1}^{m} w_i \times (1 - y_{ij}) \quad \text{for } j = 1 \text{ to } n \qquad (9.3)$$

The weighted distance from the nadir solution is:


$$D_j^{1-} = \sum_{i=1}^{m} w_i \times y_{ij} \quad \text{for } j = 1 \text{ to } n \qquad (9.4)$$

The formulas for the L2 metric are very similar:


$$D_j^{2+} = \sqrt{\sum_{i=1}^{m} w_i^2 \times (1 - y_{ij})^2} \quad \text{for } j = 1 \text{ to } n \qquad (9.5)$$

The weighted distance from the nadir solution is:


$$D_j^{2-} = \sqrt{\sum_{i=1}^{m} w_i^2 \times y_{ij}^2} \quad \text{for } j = 1 \text{ to } n \qquad (9.6)$$

The L∞ metric (the Tchebychev metric) by formula involves the infinite root of
an infinite power, but this converges to emphasizing the maximum distances.
The weights become irrelevant. Thus the L∞ distance measures are:

$$D_j^{\infty-} = \max_i \{ y_{ij} \}; \qquad D_j^{\infty+} = \max_i \{ 1 - y_{ij} \} \qquad (9.7)$$

Step 5: Calculate closeness coefficient


Relative closeness considers the distances from the ideal (to be minimized) and
from the nadir (to be maximized) simultaneously through the TOPSIS formula:
$$C_j = \frac{D_j^{L-}}{D_j^{L-} + D_j^{L+}} \qquad (9.8)$$

Step 6: Determine cutoff limit for classification


The training data set contained a subset of observations in each category of
interest. In a binary application (such as segregating training observations into
loans that were defaulted, Neg, and loans that were repaid, Pos), the proportion
of Neg observations, PNeg, is identified. The closeness coefficient, Cj, has high
values for cases that are close to the ideal and far from the nadir, and thus
can be sorted with low values representing the worst cases. Thus the rank of
the largest sorted observation in the Neg subset JNeg would be PNeg × (Neg + Pos).
The cutoff limit, CLim, can be identified as a value greater than that of ranked
observation, JNeg, but less than that of the next largest ranked observation.

Step 7: Apply formula


For new cases with unknown outcomes, the relative closeness coefficient Cj can be calculated by Formula (9.8) and compared with the cutoff limit obtained
in Step 6. The only data feature that needs to be considered is that it is pos-
sible for the test data to contain observations outside the range of data used to
determine training parameters.

IF yij < 0 THEN yij = 0

IF yij > 1 THEN yij = 1

This retains the standardized features of the test data. The model application is
then obtained by applying the rules to the test data:

IF Cj < CLim THEN classification is Negative

IF Cj > CLim THEN classification is Positive

Model fit is tested by traditional data mining coincidence matrices.
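Putting Steps 4–7 together, the following Python sketch (our own illustration, reusing the standardize and topsis_weights helpers sketched above) computes the closeness coefficients under the L1, L2 or L∞ metric, sets the cutoff from the training data as in Step 6, and classifies new cases:

import numpy as np

def closeness(Y, w, metric="L1"):
    # Distances to the ideal (all 1s) and the nadir (all 0s), then Formula (9.8).
    if metric == "L1":
        d_plus = np.sum(w * (1.0 - Y), axis=1)                   # Formula (9.3)
        d_minus = np.sum(w * Y, axis=1)                          # Formula (9.4)
    elif metric == "L2":
        d_plus = np.sqrt(np.sum((w * (1.0 - Y)) ** 2, axis=1))   # Formula (9.5)
        d_minus = np.sqrt(np.sum((w * Y) ** 2, axis=1))          # Formula (9.6)
    else:                                                        # L-infinity, Formula (9.7)
        d_plus = np.max(1.0 - Y, axis=1)
        d_minus = np.max(Y, axis=1)
    return d_minus / (d_minus + d_plus)                          # Formula (9.8)

def training_cutoff(c_train, outcome):
    # Step 6: place the cutoff between the sorted closeness value whose rank equals the
    # number of negative (defaulting) training cases and the next larger value.
    n_neg = int(np.sum(np.asarray(outcome) == 0))
    c_sorted = np.sort(c_train)
    return 0.5 * (c_sorted[n_neg - 1] + c_sorted[n_neg])

def classify(c_new, cutoff):
    # Step 7: closeness below the cutoff -> negative (default), otherwise positive (repaid).
    return np.where(c_new < cutoff, 0, 1)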

Dataset

A company’s financial performance can be represented by various ratios taken


from financial statements.10 Such ratios provide useful information to describe
credit conditions from various perspectives, such as financial conditions and
credit status. The diagnostic process involves multiple criteria. We present a
real set of loan cases from Canadian banking. The data reflects operations in
1995 and 1996; there are 177 observations for 1995 (17 defaulting, 160 good),
and 126 (11 defaulting, 115 good) for 1996. While the dataset is unbalanced
(banks would hope that it was), it is typical. Models for decision trees can be
susceptible to degeneration, as they often classify all observations in the ‘good’
category.11 This did not prove to be a problem with this dataset, but Laurikkala

Table 9.1 Independent variables for Canadian banking data set

Columns: Variable; abbreviation; training set minimum value; training set maximum value; goal

Total Assets TA 332 421,029 Maximize


Capital Assets CA 107 269,188 Maximize
Interest Expense IE 0 70,938 Minimize
Stability of Earnings INSTAB 34.781 74,672.86 Maximize
Working Capital WC −403,664 169,523 Maximize
Total Current Liabilities CL 33 578,857 Minimize
Total Liabilities TL 33 584,698 Minimize
Retained Earnings RE −486,027 225,719 Maximize
Shareholders Equity SE −430,935 298,903 Maximize
Net Income NI −238,326 97,736 Maximize
Earnings before Tax & Depr. EBITDA −132,388 158,401 Maximize
Cash Flow from Operations CF −41,387 95,427 Maximize

(2002)12 and Bull (2005)13 provide procedures to deal with such problems of
unbalanced data if they detrimentally affect data mining models. The dataset
consisted of the outcome variable (categorical: default, good) and 12 continuous
numeric independent variables, as given in Table 9.1.
This dataset demonstrates many features encountered with real data. Most
variables are to be maximized, but here, three of the twelve variables would
have the minimum as preferable. There are negative values associated with the
dataset as well.

TOPSIS model over training data

Step 1: Standardize data


The data set was standardized using Formulas (9.1) and (9.2).

Step 2: Determine ideal and nadir solutions


The ideal solution here is a vector of standardized scores of 1:

{1 1 1 1 1 1 1 1 1 1 1 1}

reflecting the best performance identified in the training set for each variable.
The nadir solution is conversely:

{0 0 0 0 0 0 0 0 0 0 0 0}.

All n observations would have a standardized score vector consisting of m (here


m = 12) values between 0 and 1.

Table 9.2 Standardized data regression

Columns: Variable; regression coefficient βi; P-value; absolute value of βi; proportional weight

TA −1.4205 0.926 1.4205 0.103


CA 0.5263 1.000 0.5263 0.038
IE −1.7013 0.043 1.7013 0.123
INSTAB −0.3245 0.453 0.3245 0.023
WC −0.1028 1.000 0.1028 0.007
CL 0.3010 1.000 0.3010 0.022
TL −0.6058 0.977 0.6058 0.044
RE 0.3551 0.477 0.3551 0.026
SE 2.1597 0.935 2.1597 0.156
NI 3.3446 0.051 3.3446 0.242
EBITDA −2.2372 0.084 2.2372 0.162
CF 0.7510 0.056 0.7510 0.054
Totals 13.8298 1.000

Step 3: Calculate weights


Weights were obtained by regressing over the standardized data with the
outcome of 0 for default and 1 for no default. Table 9.2 shows the results of that
regression (using ordinary least squares).
This model had an R-Square value of 0.287 (adjusted R-Square of 0.235) –
relatively weak. Correlation analysis indicated a great deal of multicollinearity
(demonstrated by the many insignificant beta coefficients in Table 9.2), so a
trimmed model using the three uncorrelated variables of NI, EBITDA, and CF
was run. This trimmed model had an R-Square of 0.245 (adjusted R-Square of
0.232), but the predictive capability of this model was much weaker than that of
the full model. Multicollinearity would be a problem with respect to variable β
coefficient significance, but since our purpose is prediction of the overall model
rather than interpretation of the contribution of each independent variable,
this is not a problem in this application. Therefore, the full regression model
was used. Weights obtained in Step 3 are therefore given in the last column of
Table 9.2. However, these weights should not be interpreted as accurate reflec-
tions of variable prediction importance due to the model’s multicollinearity,
which makes these weights unstable given overlapping information content.

Step 4: Calculate distances


Three metrics were used for the TOPSIS models in this study.
For the L1 model, the values for $D_j^{1+}$ were obtained by applying Formula (9.3) to each observation over each variable in the training set, and $D_j^{1-}$ by Formula (9.4). Formulas (9.5) and (9.6) were used for the L2 model, and the formulas given in (9.7) for the L∞ model.

Step 5: Calculate closeness coefficient


Formula (9.8) was applied to the distances obtained in step 4 for the training set.

Step 6: Determine cutoff limit for classification


The 177 closeness coefficient values were then sorted, obtaining a 17th ranked
closeness coefficient and an 18th ranked closeness coefficient. For the L1
model, these were 0.56197 and 0.561651. Thus an L1 cutoff limit of 0.5615 was
obtained for application on the test set and for classification of future values.
For the L 2 model, the corresponding numbers were 0.410995 and 0.412294,
yielding a cutoff limit of 0.411. For the L∞ model, these numbers were 0.624159
and 0.624179, and a cutoff limit of 0.62416 was used.

Step 7: Application of model


The last step is to apply the models to the test data. Results are given below.

Model comparisons

The original raw data was used with two commercial data mining software
tools (PolyAnalyst and See5) for decision tree models. The PolyAnalyst decision
tree model used only two variables, NI and WC. The decision tree is given in
Figure 9.1.
This model had a 0.937 correct classification rate over the test set of 126 observations, as shown in Table 9.3.

IF NI < 1250
AND IF WC < 607 THEN 0 (Neg)
ELSE IF WC >= 607 THEN 1 (Pos)
ELSE IF NI >= 1250 THEN 1 (Pos)

Figure 9.1 PolyAnalyst decision tree (flowchart representation of the rules above)



Table 9.3 Coincidence matrix – PolyAnalyst decision tree

Model 0 (Neg) Model 1 (Pos)

Actual 0 (Neg) 9 2 11
Actual 1 (Pos) 6 109 115
15 111 126

IF NI > 1256 THEN 1 (Pos)


ELSE IF NI <= –26958
AND IF CF > 24812 THEN 1 (Pos)
ELSE IF CF <= 24812 THEN 0 (Neg)
IF NI > –26958 AND CA <= 828 THEN 0 (Neg)
ELSE IF CA > 828 AND IE <= 2326 THEN 1 (Pos)
ELSE IF IE > 2326 THEN 0 (Neg)

Figure 9.2 See5 decision tree (flowchart representation of the rules above)

Relative to the number of cases in each class, this model's errors were somewhat more frequent in the bad case, that of assigning actual default cases to the predicted on-time payment category. However, cost vectors were not used, so there was no reason to expect the model to reflect this.

See5 software yielded the following decision tree, using four independent
variables.
Table 9.4 shows the results for the decision tree model obtained from See5 software, which had a correct classification rate of 0.873, a little worse than the PolyAnalyst model (although this is for the specific data, and in no way is generalizable):

Table 9.4 Coincidence matrix – See5 decision tree

Model 0 (Neg) Model 1 (Pos)

Actual 0 (Neg) 8 3 11
Actual 1 (Pos) 13 102 115
21 105 126

Finally, the TOPSIS models were run. Results for the L1 model are given in
Table 9.5, with a correct classification rate of 0.944.

Table 9.5 Coincidence matrix – TOPSIS L1 model

Model 0 (Neg) Model 1 (Pos)

Actual 0 (Neg) 6 5 11
Actual 1 (Pos) 2 113 115
8 118 126

The results for the L 2 model are given in Table 9.6, with a correct classifi-
cation rate of 0.921.

Table 9.6 Coincidence matrix – TOPSIS L2 model

Model 0 (Neg) Model 1 (Pos)

Actual 0 (Neg) 6 5 11
Actual 1 (Pos) 5 110 115
11 115 126

The results for the L∞ model are given in Table 9.7, with a correct classification
rate of 0.944. Here all three metrics yield similar results (for this data, superior
to the decision tree models; but that is not a generalizable conclusion).

Table 9.7 Coincidence matrix – TOPSIS L∞ model

Model 0 (Neg) Model 1 (Pos)

Actual 0 (Neg) 6 5 11
Actual 1 (Pos) 2 113 115
8 118 126

The results for the different models are given in Table 9.8.

Table 9.8 Comparison of model results

Columns: Model; errors with Actual 0 classified as 1; errors with Actual 1 classified as 0; proportion correct

Polyanalyst decision tree 2 6 0.937


See5 decision tree 3 13 0.873
TOPSIS L1 5 2 0.944
TOPSIS L 2 5 5 0.921
TOPSIS L∞ 5 2 0.944

These models were applied to one data set, demonstrating how TOPSIS prin-
ciples can be applied to data mining classification. In this one small (but real)
data set for a common data mining application, the TOPSIS models gave better
fit to test data than did two well-respected decision tree software models. This
does not imply that the TOPSIS models are better, but it provides another tool
for classification. The TOPSIS models are easy to apply in spreadsheets, however
much data has been fit into that spreadsheet. Any number of independent vari-
ables could be used, limited only by database constraints.

Simulation of model results

Monte Carlo simulation provides a good tool to test the effect of input uncertainty on output results.14 Simulation was applied to examine the sen-
sitivity of the five models to perturbations in test data. Each test data variable
value was adjusted by adding an adjustment equal to:

Perturbation × Uniform random number × Standard normal variate

The perturbations used were 0.25, 0.5, 1, and 2. These values reflect
increasing noise in the data. The adjustments are standard normal variates
with mean 0 and standard deviation found in the training dataset for that
variable. Simulation results are shown in Table 9.9.

Table 9.9 Simulation results

PADT PADT C5 C5 L1 L1 L2 L2 L∞ L∞
Perturbation Min Max Min Max Min Max Min Max Min Max

0 0.9365 0.9365 0.8730 0.8730 0.9444 0.9444 0.9206 0.9206 0.9444 0.9444
0.25 0.7381 0.9365 0.6746 0.8651 0.7619 0.9286 0.7619 0.9127 0.6825 0.8889
0.50 0.7063 0.9444 0.5952 0.8413 0.7143 0.8968 0.6190 0.8730 0.6349 0.8492
1.0 0.6905 0.8968 0.5317 0.7937 0.6349 0.8492 0.5873 0.8333 0.4683 0.7460
2.0 0.6587 0.8968 0.5238 0.7857 0.5714 0.8175 0.5476 0.7937 0.3810 0.6508
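A minimal Python sketch of the perturbation scheme used in these runs (our own illustration; predict stands for any one of the five fitted models and is an assumed callable, and train_std holds the training-set standard deviation of each variable):

import numpy as np

def perturb(X_test, train_std, level, rng):
    # Adjustment = perturbation level x uniform(0,1) random number x N(0, sigma_i) variate,
    # with sigma_i taken per variable from the training data set, as described above.
    noise = level * rng.uniform(size=X_test.shape) * rng.normal(scale=train_std, size=X_test.shape)
    return X_test + noise

def simulated_accuracy(predict, X_test, y_test, train_std, level, runs=100, seed=0):
    # Minimum and maximum correct-classification rates over repeated perturbed replications,
    # in the spirit of the Min/Max columns of Table 9.9.
    rng = np.random.default_rng(seed)
    rates = [np.mean(predict(perturb(X_test, train_std, level, rng)) == y_test)
             for _ in range(runs)]
    return min(rates), max(rates)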

The first decision tree model was quite robust, and in fact retained its predictive power the best of all five models as perturbations were increased. The
second decision tree model included more variables and a more complex
decision tree. However, it not only was less accurate without perturbation, it
also degenerated much faster than the simple two-variable decision tree. While
this is not claimed as generalizable, it is possible that simpler trees could be
more robust. (As a counterargument, models using more variables may have
less reliance on specific variables subjected to noise, so this issue merits further
exploration.)
The L1 and L 2 TOPSIS metrics had less degeneration than the four-variable
decision tree, but a little more than the two-variable decision tree. The L1
TOPSIS model was less affected by perturbations than was the L 2 model, which
in turn was quite a bit less affected than the L∞ model. This is to be expected,
as the L1 model is less affected by outliers, which can be generated by noise.
The L∞ model focuses on the worst case, which is a reason for it to be adversely
impacted by noise in the data.

Conclusions

TOPSIS is attractive in that it follows automatic machine learning principles.


TOPSIS was originally presented in the context of multiple-criteria decision-
making, where the relative importance of the decision-maker’s preference was
a factor, and subjective weights were input. In the data mining application
presented here, the weights are obtained from the data itself, removing the
subjective element. Weights here reflect how much each independent variable
contributes to the best ordinary least squares fit to the data. Standardizing
the data removes differences in scale across independent variables. Thus the
TOPSIS models provide a straightforward way to classify data with any number
of independent variables and observations.
The classical methods for classification, decision trees, are valuable tools.
Decision trees have a useful feature in that they provide easy-to-interpret sets of rules, as shown in the model comparisons above. In the spirit of data mining, the TOPSIS models
presented can provide an additional tool for comparative analysis of data.
Three metrics were presented here. The L2 metric is traditionally used, although
the L1 and L∞ metrics are just as valid. The L1 metric is usually considered less
susceptible to the influence of outlier data, as squaring the distance from
the measure of central tendency in the L2 metric has a greater impact. In the
Tchebychev L∞ metric, the greatest difference determines the outcome, which is
attractive in some contexts. If outliers are not intended to have greater influence,
the L1 metric might therefore be preferred. If all variables are to be considered to
the greatest degree, the L∞ metric is attractive. Here, however, we confirm prior
results cited, and find that the L2 metric seems to perform quite well.

Simulation was used to demonstrate relative model performance under


different levels of noise. While simulation of data mining models involves
quite a bit of extra computation, it can provide insight as to how robust models
are expected to be.
Nevertheless, future research with TOPSIS data mining is suggested. Possible directions include developing new techniques to derive weights, instead of the linear regression approach used in this chapter.
10
Online Banking Efficiency and Risk
Evaluation with Principal Component
Analysis

Introduction

Online banking has always been important from different stakeholder


perspectives.1 Improving the efficiency of Internet banking is now consid-
ered to be important. Banks hope that internet banking will help them
maintain profitable growth by enabling them to automate work, reduce
costs, and retain customers simultaneously. 2 Internet banking may help
reduce expenditure on ‘bricks and mortar,’ and reduce capital expendi-
tures. 3 Internet banking can give customers 24-hour access, and provide
convenience for customers. Cost-effective use of the internet can attract
many users to online banking services, but there has been little research
examining the superiority of banks providing online banking services over
those that do not. The role of online banking in leading to better decisions
and creating more profit needs study.
The Economist reported that online banking cost approximately $0.01 per transaction in 2000, while online transactions accounted for only 1% of all banking transactions.4 Online banking use is increasing rapidly. Jupiter5 reported that
there were an estimated 30 million families in the US using internet banking
in 2004, and forecast that the numbers would reach 56 million in 2008. The
growth of internet banking is due to economic globalization and the mat-
uration of computer technology. Economic globalization encourages banks
to serve and exploit international markets. Opening up a new virtual bank
branch requires high expenditure and faces limitations, but online banking
websites can provide services to customers throughout the world as long as
customers are able to surf the internet. The reduction of computer prices makes
their purchase accessible to most people. Meanwhile, perceived improvement
in computer security makes customers and banks sufficiently comfortable to
use them for their business. The consequent rapid increase of online banking
transactions creates an urgent need to examine the efficiency of online banking.

99
100 Enterprise Risk Management in Finance

Furthermore, efficiency evaluation is a source of ideas for bank managers, and


motivates them to improve the quality of online banking services.
Serrano-Cinca et al. (2005)6 suggested that financial information alone might
not always be sufficient to judge an online business. Therefore, constructing
performance evaluation methods that can combine a number of possible inputs
and outputs is attractive. Banks can evaluate their online banking performance
using online performance measurement considering an efficient frontier of
tradeoffs across selected criteria. Previous literature on online banking per-
formance evaluation includes the following approaches: linear regression (e.g.
the logit model in Furst et al.), DEA (e.g. Sherman and Gold, 19857; Soteriou and
Zenios, 19998), free disposal hull (e.g., Tulkens, 19939), the stochastic frontier
approach (also called econometric frontier approach, e.g. Berger and Humphrey,
199210), the thick frontier approach (e.g. Berger and Humphrey, 199711), the dis-
tribution free approach (e.g., Berger et al., 199312), and others. The main differ-
ences between these approaches lie in how much restriction is imposed on the
specification of the best practice frontier and the assumptions of random errors
and inefficiency.13 DEA is recognized as useful for performance analysis in
the banking industry because of its advantages in allowing for dynamic efficiency
without requirements for prior assumptions on the specification of the best
practice frontier. This translates into practical advantages for DEA over other
methods, in that an explicit specification of a mathematical form for the pro-
duction function is not needed, and DEA can provide an integrated efficiency
score to decide the level of efficiency of a specific bank. Conversely, there can be
computational difficulty if there are too many input and output variables.
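For readers who want to see the mechanics, the following Python sketch (our own illustration rather than the exact model of this chapter) computes an input-oriented BCC efficiency score for one decision making unit as a linear program; X holds the input quantities and Y the output quantities, one row per bank.

import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, o):
    # Input-oriented BCC (variable returns to scale) model for DMU o:
    #   minimize theta
    #   subject to  sum_j lambda_j x_ij <= theta * x_io   for every input i
    #               sum_j lambda_j y_rj >= y_ro           for every output r
    #               sum_j lambda_j = 1,  lambda_j >= 0
    n, m = X.shape                       # number of DMUs, number of inputs
    s = Y.shape[1]                       # number of outputs
    c = np.r_[1.0, np.zeros(n)]          # decision variables: [theta, lambda_1..lambda_n]
    A_inputs = np.hstack([-X[o].reshape(m, 1), X.T])      # lambda'x_i - theta*x_io <= 0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])       # -lambda'y_r <= -y_ro
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)          # convexity constraint of the BCC model
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
    return res.fun                        # efficiency score; 1.0 indicates a BCC-efficient DMU

Looping this over all ten banks with the Table 10.2 inputs and outputs yields BCC-style efficiency scores of the kind tabulated in Table 10.3 (the exact values depend on the variable combination chosen).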
Principal component analysis (PCA) is a flexible approach that can be used
to reduce the number of variables and to classify independent variables. PCA
can thus provide powerful support to DEA by providing a means to reduce the
number of input variables and integrating output variables. PCA has been applied
in many scientific disciplines, including statistics, economics, finance, biology,
physics, chemistry, and so on. In internet banking, Eriksson et al. (2008)14 used
PCA to identify determinants of customer satisfaction. PCA has also been used to
study the acceptance of internet banking by customers, with a primary conclusion suggesting that banks should put effort into making internet banking user-friendly.

Data and variables

To analyze efficiency, we looked into a few banks chosen from the UK and the
US. Data was gathered from 2007 annual reports (see the reference list for such
reports) of these banks divided into the input variables and the output vari-
ables. Table 10.1 gives input and output variables:
Table 10.2 presents online banking data for ten large banks, six from the
US and four from the UK: Bank of America,15 Citigroup,16 HSBC,17 Barclays,18
Performance Evaluation and Risk Analysis of Online Banking 101

Table 10.1 Online banking DEA variables

INPUT variables Designator OUTPUT variables Designator

Total deposits A Total revenue 1


Operating cost B Daily visits 2
Employees C
Equipment D

Table 10.2 Online banking data

Total Operating Daily


Bank deposits cost Employees Equipment Revenue reach

Bank of America 805177 89881 210000 9404 68068 4524000


Citigroup 826230 61488 387000 8191 81698 623000
HSBC 278693 39042 330000 6054 87601 136000
Barclays 657058 26398 135000 5992 40410 1083000
Chase 740728 110560 180667 3779 71400 2030000
Wells Fargo 344460 22824 159800 1294 39390 1051000
Lloyds 393092 11134 70000 4014 58180 623000
Royal Bank of 595908 28106 226400 3524 108934 134000
Scotland
SunTrust 119877 5234 32323 1739 8251 129000
Wachovia 449120 9465 29940 6605 17653 551000

Chase,19 Wells Fargo,20 Lloyds,21 Royal Bank of Scotland,22 SunTrust,23 and


Wachovia.24

Results and analysis

We first conducted a plain DEA analysis and then used PCA scenario analysis
and PCA-DEA modeling.
Table 10.3 presents the scores produced from normal DEA models based on
all 45 combinations of variables in the data. A score of 1.000 (in bold) indi-
cates efficiency. It can be seen in the ABDCD12 column that seven of the ten
banks are found efficient. The problem is that using more variables obviously
finds more efficient solutions. The ABCD12 model generates too many effi-
cient DMUs/ties due to many input and output variables. But many variables
and factors can affect each other in real world; for example, operating cost is
usually related to the number of employees in the corporation. Therefore PCA
is applied to reduce these measures when applying DEA.
To conduct PCA analysis, we use various combinations of variables to see if we
can reduce the number of variables and/or detect structure in the relationships
between variables. Table 10.4 presents maximum component loadings matrix in
different models with different data. All models are weighted with a positive sign
Table 10.3 DEA combinations and their efficiencies

A1 B1 C1 D1 AB1 AC1 AD1 BC1 BD1 CD1 ABC1 ABD1 ACD1 BCD1 ABCD1

0.269 0.145 0.390 0.234 0.309 0.521 0.366 0.390 0.234 0.447 0.521 0.366 0.521 0.447 0.521
0.315 0.254 0.254 0.323 0.447 0.501 0.458 0.254 0.339 0.400 0.501 0.458 0.501 0.400 0.501
1.000 0.429 0.319 0.468 1.000 1.000 1.000 0.429 0.555 0.526 1.000 1.000 1.000 0.555 1.000
0.196 0.293 0.360 0.218 0.362 0.403 0.294 0.360 0.343 0.414 0.403 0.362 0.414 0.414 0.414
0.307 0.124 0.476 0.611 0.307 0.605 0.611 0.476 0.611 0.751 0.605 0.611 0.751 0.751 0.751
0.364 0.330 0.297 0.985 0.545 0.582 0.985 0.330 0.985 0.985 0.582 0.985 0.985 0.985 0.985
0.471 1.000 1.000 0.469 1.000 1.000 0.677 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.582 0.742 0.579 1.000 1.000 1.000 1.000 0.742 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.219 0.302 0.307 0.154 0.390 0.420 0.271 0.307 0.311 0.319 0.420 0.390 0.420 0.319 0.420
0.125 0.357 0.709 0.087 0.357 0.709 0.154 0.709 0.357 0.709 0.709 0.357 0.709 0.709 0.709

A2 B2 C2 D2 AB2 AC2 AD2 BC2 BD2 CD2 ABC2 ABD2 ACD2 BCD2 ABCD2

1.000 0.865 1.000 0.592 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.134 0.174 0.075 0.094 0.198 0.134 0.149 0.174 0.199 0.135 0.198 0.199 0.149 0.199 0.199
0.087 0.060 0.019 0.028 0.087 0.087 0.087 0.060 0.068 0.038 0.087 0.087 0.087 0.068 0.087
0.293 0.705 0.372 0.223 0.764 0.372 0.344 0.705 0.774 0.375 0.764 0.774 0.375 0.774 0.774
0.488 0.315 0.522 0.661 0.488 0.522 0.786 0.522 0.661 0.952 0.522 0.786 0.952 0.952 0.952
0.543 0.791 0.305 1.000 0.892 0.543 1.000 0.791 1.000 1.000 0.892 1.000 1.000 1.000 1.000
0.282 0.961 0.413 0.191 1.000 0.413 0.308 0.961 1.000 0.413 1.000 1.000 0.413 1.000 1.000
0.040 0.082 0.028 0.047 0.090 0.040 0.060 0.082 0.094 0.062 0.090 0.094 0.062 0.094 0.094
0.192 0.423 0.185 0.091 0.462 0.192 0.192 0.423 0.445 0.185 0.462 0.462 0.192 0.445 0.462
0.218 1.000 0.854 0.103 1.000 0.854 0.218 1.000 1.000 0.854 1.000 1.000 0.854 1.000 1.000

A12 B12 C12 D12 AB12 AC12 AD12 BC12 BD12 CD12 ABC12 ABD12 ACD12 BCD12 ABCD12

1.000 0.865 1.000 0.592 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.395 0.254 0.254 0.324 0.492 0.514 0.507 0.254 0.399 0.423 0.514 0.525 0.542 0.434 0.542
1.000 0.429 0.319 0.468 1.000 1.000 1.000 0.429 0.555 0.526 1.000 1.000 1.000 0.555 1.000
0.403 0.707 0.476 0.223 0.764 0.522 0.462 0.707 0.774 0.558 0.764 0.774 0.558 0.774 0.774
0.652 0.316 0.644 0.661 0.652 0.789 0.834 0.644 0.661 1.000 0.789 0.834 1.000 1.000 1.000
0.747 0.794 0.391 1.000 0.924 0.783 1.000 0.794 1.000 1.000 0.924 1.000 1.000 1.000 1.000
0.651 1.000 1.000 0.472 1.000 1.000 0.786 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.582 0.742 0.579 1.000 1.000 1.000 1.000 0.742 1.000 1.000 1.000 1.000 1.000 1.000 1.000
0.348 0.432 0.337 0.155 0.488 0.466 0.365 0.432 0.445 0.353 0.488 0.488 0.466 0.445 0.488
0.280 1.000 1.000 0.103 1.000 1.000 0.280 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000

Notes: A: Deposit; B: Operation cost; C: The number of employees; D: Equipment; 1: Revenue; 2: Web reaches.
Performance Evaluation and Risk Analysis of Online Banking 103

on the first component and other variables, thus, the first component is named
‘overall measure of efficiency,’ and is the higher-weighted value in general. In all
models, variables can be reduced so that three principal components can be used
to explain more than 85% of the variation in the raw data.
In this research we chose the principal components with the accumulative
contribution ratio equal to or above 90%, and applied them to reevaluate the
efficiency of the online banking. For instance, in model ABCD12 we can con-
clude that only three principal components are used, because the accumulative
contribution of three principal components is 90.2%. The calculation is easy
due to the reduction of data complexity, and the ranks are reasonable.
The data extraction process is shown in Figure 10.1, where every vector
represents each of the six variables. How each variable contributes to the three
principal components is indicated by the direction and length of the vector.

Table 10.4 Maximum component loadings matrix in different models

Model PC1 PC2 PC3

ABCD1 0.851 −0.561 0.504


ABCD2 0.895 0.750 −0.598
ABC12 0.892 0.719 −0.467
ABD12 0.906 0.804 0.640
ACD12 0.868 0.705 −0.526
BCD12 0.849 0.739 0.662
ABCD12 0.881 −0.675 0.674
Notes: A: Deposit; B: Operation cost; C: Number of employees; D: Equipment; 1:
Revenue; 2: Web reaches.

1
Component 3

0
Employees
Total revenue
–1 Equipment
–1
Total deposits

–0.5 Operating cost

Daily visits
0

0.5
1
Component 2 0.5
1 –0.5 0
–1
Component 1

Figure 10.1 3D plot of PCA analysis (ABCD12 model)


104 Enterprise Risk Management in Finance

In the three-dimensional solid area it is shown that the first principal com-
ponent, represented by the Component 1 axis, has positive coefficients for all
six variables. The second principal component, represented by the Component
2 axis, has negative coefficients for variables Total deposits (A), Operating
cost (B), Equipment (D) and Daily visits (2), and positive coefficients for the
remaining two variables. The Component 3 axis has all positive and negative
coefficients for the variables. This figure indicates that these components dis-
tinguish between online banking that has high values for the three sets of
variables, and low for the rest. This approach can effectively achieve dimen-
sionality reduction without losing too much information.

PCA-DEA analysis
This section analyzes online banking using an integrated PCA-DEA model.
As internet usage grew, it considerably changed the channel between banks
and clients. We used both financial and non-financial variables. The main
objective is to construct a framework with DEA and PCA approaches, and use
it for the measurement of online banking based on data collected from annual
reports and web metrics.
The basic DEA model in Table 10.2 shows us that we have seven banks that
are efficient. To reduce the number of variables and obtain more accurate
results, we ran a PCA-DEA analysis at 75%. This left us with three efficient
banks: Bank of America, Lloyds and the Royal Bank of Scotland. Table 10.5
presents integrated PCA-DEA scores, and Table 10.6 gives variance explained
by integrated PCA-DEA.
Only the Bank of America gets 100% efficiency in all models. Note that in
Model ABCD1, where all inputs and outputs except for Daily Reach are taken

Table 10.5 Integrated PCA-DEA score

PCA-DEA – 75% ABD12 ABC12 ABCD1 ACD12 ABCD12

Bank of America 100% 100% 100% 100% 100%


Citigroup 48% 43% 14% 50% 48%
HSBC 86% 80% 5% 76% 75%
Barclays 47% 56% 39% 53% 57%
Chase 65% 60% 54% 99% 77%
Wells Fargo 100% 75% 67% 100% 98%
Lloyds 87% 100% 38% 97% 100%
Royal Bank of Scotland 100% 94% 5% 100% 100%
SunTrust 36% 45% 20% 39% 41%
Wachovia 29% 50% 28% 32% 38%
Min 29% 43% 5% 32% 38%
Max 100% 100% 100% 100% 100%
Mean 70% 70% 37% 75% 73%
Standard Deviation 0.2808 0.2239 0.3005 0.2805 0.2579
Performance Evaluation and Risk Analysis of Online Banking 105

into account, only one bank gets a score of 100% and the average efficiency is
only 37%. In the other models where the daily reach is taken into account the
average ranges from 70 to 75%.
To analyze all efficiency scores in Table 10.2, we ran PCA on all 45 sets of
scores for all banks. Figure 10.2 gives a plot of principal component loadings
in different DEA models, where different scores can be understood from a
different perspective. The plots of principal component loadings are converted
from the matrix of component loadings and show a set of directional vectors.
The computed component loadings result in meaningful naming for both
horizontal (the first component) and vertical axis (the second component).
The horizontal axis in Figure 10.2 is from west to east, representing the ‘overall

Table 10.6 Variance explained in integrated PCA-DEA

Variance ABD12 ABC12 ABCD1 ACD12 ABCD12

A 65.53 73.66 71.45 65.53 68.21


B 15.05 19.39 19.65 15.05
C 14.39 0.00 8.89 14.39 20.02
D 5.04 6.95 0.00 5.04 11.77
1 53.85 53.85 53.85 100.00 53.85
2 46.15 46.15 46.15 0.00 46.15

1.0
AB1 ACD1&ABC1
0.9 ABD1
0.8 A1 AD1 ACD1&ABCD1

0.7 D1 BD12
0.6 B1 BCD1
AD12
Cost oriented (Vertical)
0.5 D12 CD1
AC12 ACD12
0.4 ABD12
Component 2

BC1 AB12
0.3 ABC12
A12 ABCD12
iency
0.2 ll Effic
Overa
0.1 C1
CD12
BD12
0.0 (Horizontal)
B12 BCD12
–0.1
–0.2
BC12
C12
–0.3 D2
–0.4
AD2
–0.5 Online oriented (Vertical) CD2&ACD2
–0.6 A2 (AB2&BC2&ABC2&ABD2
B2
C2 AC2 &BCD2&ABCD2)
–0.7 BD2
–0.1 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
Component 1

Figure 10.2 Plots of principal component loadings in different DEA models


106 Enterprise Risk Management in Finance

Table 10.7 Multivariate linear regression analysis – DV bank revenue

Standardized Correlation
coefficients T Sig. coefficients

Constant NA 0.710 0.517


A TotalDeposits 0.319 0.632 0.561 0.487
B Cost −0.011 −0.017 0.987 0.473
C Employees 0.749 1.698 0.165 0.783
D Equipment −0.246 −0.553 0.610 0.275
2 Daily Reach −0.030 −0.056 0.958 0.077

measure of efficiency’; the more efficient ones overall will be located in the
right direction. From the origin to north and south are the ‘cost oriented’ and
‘online oriented’ models respectively. Interestingly enough, such a finding is
consistent with existing work based on data from other nations.

Risk factors
This section seeks to detect the key variables that contribute the most to bank
revenue. We ran both multivariate linear regression and correlation analysis
and present computed values in Table 10.7. It can be seen that the Employees
variable has the largest effect on revenue with a regression coefficient value of
0.749 and correlation coefficient value of 0.783. This means that allocation of
Employees will affect profit or loss the most, holding the other variables con-
stant. That is to say, misutilization of Employees will bring huge potential risks,
which should be considered by banks’ managers when they decide to enter
the online banking market. This is because the risks have become important
factors affecting the survival of enterprises. Meanwhile, the Basel Committee
on Banking Supervision requires that every bank must have an efficient regu-
lation and risk management system where enterprise risk managers should
take an important role on the board of directors.25 Three other variables, that
is, total deposits, cost, and equipment, have less effect on revenue, and daily
reach has no effect on revenue at all. It may be easy to understand that the
people who click the online banks’ websites do not necessarily conduct trans-
actions over the websites. Therefore, future research should examine whether
the number of transactions will significantly affect revenue.

Conclusions

Various financial and non-financial variables have been used in this study
to analyze the online banking service of some giant banks. PCA is employed
to identify the variables contributing the most information content for DEA
Performance Evaluation and Risk Analysis of Online Banking 107

models. The results enable identification of the most efficient banks in terms of
the particular variables selected through DEA. This information is then further
analyzed in terms of multivariate linear regression, which enables significance
and correlation to be seen across variables.
The combination of models applied to banking risk management issues (in
this case, online banking) can provide useful tools to benchmark banking oper-
ations and to identify opportunities for improvement in those operations.
11
Economic Perspective

The traditional economic view

An early economic view of risk was addressed by Frank H. Knight,1 who focused
on the difference between uncertainty (a domain evading accurate meas-
urement: ‘cases of the non-quantitative type’) and risk: (‘a measurable uncer-
tainty, or “risk” proper’). Risk applied to cases where knowledge is available
about future outcomes and their probabilities, while uncertainty applies to
cases where there is knowledge about future outcomes but not about their
probabilities. Risk was viewed as important, drawing upon Courcelle-Seneuil’s
view2 that profit is due to the assumption of risk, compatible with von Thünen’s
view3 that profit was, in part, payment for certain risks. This was also expressed
by F. B. Hawley,4 who argued that risk-taking was the essential function of
the entrepreneur. Knight viewed risk as the objective form of the idea, and
reserved subjective elements to uncertainty. Since risk was measurable, those
things to which it applied had statistical distributions available.
Ganegoda and Evans (2012)5 proposed a framework for uncertainty assess-
ment in financial markets. This framework is displayed in Table 11.1.
Ignorance covers cases where there is no reliable evidence. Ambiguity covers
cases where subjectivity reigns, allowing different humans to interpret the
same term differently;such cases arise frequently in banking and investment.
Ganegoda and Evans use Li’s Gaussian copula model6 to price collateralized
debt obligations as a case in point. Although Li’s model (which sought to
identify the risk of a CDO by a correlation metric) seemed to work quite well
before the 2008 crisis, it failed under the conditions of stress that that crisis
entailed. Much financial forecasting is actually ambiguous in nature, because
while we gather statistics on past performance, we bet on future outcomes, and
if the underlying conditions of the future vary a great deal from those of the
past (and things are always somewhat different over time), statistical assump-
tions need to be considered in light of changed conditions. The difference

108
Economic Perspective 109

Table 11.1 Realms of uncertainty

Label Realm Example

Ignorance Future events are unknown Value of real estate in a war zone
Ambiguity Future events vaguely defined Credit rating of asset-backed
securities
Uncertainty Events known but probability Risk of natural disaster
distribution is not High-degree unique events (fines)
Risk Events and associated probability Most insurance, market risk for
distribution known banks

Source: Based on Ganegoda and Evans (2012).

between uncertainty and risk is somewhat clearer, although the boundary


between uncertainty and ambiguity is fuzzy. The definition of ambiguity
would emphasize an infinity of possible outcomes, while uncertainty could be
reserved for the realm of definable outcomes.
The conventional economic and financial theory of risk is represented by
Harry Markowitz’s7 definition of risk as variance. By analyzing the statistical
properties of investment alternatives, wise investors can minimize their risk
by diversifying, especially across investment alternatives with low or negative
correlations. Risk introduces a second criterion of investment along with
profit. This leads to the concept of an efficient frontier, the set of investment
alternatives which have the lowest variance for a given return, or conversely
the highest return for a given variance. Related ideas of William Sharpe (1970)
lead to the capital asset pricing model (CAPM), evaluating investment alterna-
tives in terms of risk and return relative to the market as a whole. In CAPM,
the riskier a stock, the greater the profit potential (variance would thus be
opportunity). Efficient market theory8 posed that the market price incorpo-
rated perfect information, with random variations about an accurate price.
This approach assumes a realm of risk, as defined in Table 11.2.
For other realms, other tools are needed. Obviously, research and development
would alleviate uncertainty. The argument would be that if something was
important to know, a firm should invest up to the value of learning that some-
thing, in order to find out – but the catch is that the outcome of research is always
uncertain. Scenario analysis using plausible future scenarios has been proposed,
using the idea of maximizing expected utility in the worst-case scenario. A
variant is stress testing, studying expected performance under extreme scenarios.
However, one can always imagine an outcome that would make any decision
fail – such as a nuclear holocaust, or a massive asteroid collision.
In the case of risk, where outcomes and probability distributions are known,
logic trees (decision trees of outcomes and associated probabilities) can apply.
110 Enterprise Risk Management in Finance

Table 11.2 Evolution of risk management

Time Theory Essence Relevance

Late 50s, State preference theory Efficient allocation of Underpins derivatives,


early 60s Arrow, Debreu resources & risks requires ultimate role of
complete set of securities securities markets to
permitting hedging efficiently allocate risk
1952 Mean variance Investors can analyze Basis for portfolio
Harry Markowitz risk & expected return choices to optimize risk
level at given return
1958 Indifference theory In perfect market, value Not true, suggesting
Modigliani, Miller of company independent need for efficient capital
of capital structure structure and risk
mitigation through
hedging
1960s Capital asset pricing Markets compensate Hedging should be left
model investors for systematic to investors
Sharpe et al. risk, but not idiosyncratic
risk – can eliminate latter
through hedging
1973 Options-pricing model Volatility of a security a Allows risk transfer,
Black, Scholes, Merton key factor in options companies can price
price waiting
1976 Arbitrage pricing Price of a security driven Segmentation of CAPM
theory by a number of factors systematic risk into
Stephen Ross factors – if prices diverge
from expected, arbitrage
can bring back into line
1977 Underinvestment Stockholders avoid There is a shareholder
problem low-risk/low return to value in better risk
Myers, Smith, Stulz avoid shifting wealth to management
debt holders
1979 Binomial option Variations in price over Allows deeper markets
pricing model time can be used to more for long-dated options
Cox, Ross, Rubinstein accurately price options and options paying
dividends
1993 Risk management Goal of risk management Managers try to control
framework to ensure firm has cash risk as a strategic set of
Froot, Scharfstein, available for value- choices
Stein enhancing investments

Source: Extracted from Buehler et al. (2008).

If there are many branches to logic trees, Monte Carlo simulation is widely
used for analysis. The good feature of Monte Carlo simulation is that any dis-
tribution assumption (and any systemic property) can be assumed. However,
interpreting the output can easily become challenging.
Economic Perspective 111

Buehler et al. (2008)9 provided a review of their view of the evolution of risk
management theory. Table 11.2 displays their timeline.
Risk has been a fundamental topic of economic theory for centuries, and for
many years, risk management was pretty much fully described by insurance
and ad hoc hedging. The evolution outlined in Table 11.2 shows the appear-
ance of derivatives which came to be used as a risk management tool. Financial
theory has evolved to a much greater level of sophistication. Markowitz gave
us the mean-variance approach, defining risk as variance, and providing a
tool to identify the efficient frontier where risk and return were traded off.
Markowitz’s model also allows correlations to be included, realizing that
investment returns are not independent of the returns of other investments.
Sharpe (1970)10 extended Markowitz’s work to the capital asset pricing model,
where investments are evaluated in terms of risk and return relative to the
market as a whole. In this view, the riskier the stock, the greater the expected
profit. Thus this leads to the conclusion that risk is opportunity. Efficient
market theory views the market price as incorporating perfect information.
Prices then vary randomly around their appropriate equilibrium value.
However, the complexities of life demonstrated in 2008 that every model
leaves some bit of reality out.

The human factor

Humans are known to have limited abilities to deal with extreme event proba-
bilities.11 Sometimes, conventional statistical training does not aid this ability.
Humans have been found to have biases, some of which affect their decision-
making when they are trying to manage risks. A common mistake is over-
confidence in one’s ability and knowledge. This also applies to experts. The
problem of anchoring involves bias in estimations dependent upon initial
estimates. Availability involves reliance upon memory of past experience
as a guide in estimating event probability. Risk managers are called upon to
estimate the likelihood of possible scenarios in stress testing, but they may
be biased by recent events, especially those that make the news. Framing is
a cognitive bias in which a view of risk depends upon context. For instance,
gamblers who are ahead are notably risk averse, while those that are behind
are risk seeking (determined to break even). Risk managers may do the same,
depending upon company profitability in the recent past. Herding involves a
tendency to mimic others, which often is found in investments. Essentially,
herding can lead to bubbles, or to bank runs.

Reality
Humans have always been susceptible to bubbles. Charles Mackay (1841)12
reviewed a number, to include the Dutch tulip mania in the early 17th century,
112 Enterprise Risk Management in Finance

the South Sea Company bubble of 1711–1720, and the Mississippi Company
bubble of 1719–1720. Patterson (2010)13 gave Isaac Newton’s famous quote
complaining of losing £20,000 in the South Sea Bubble in 1720: ‘I can calculate
the motion of heavenly bodies but not the madness of people.’
More recent problems include the London Market Exchange (LMX) spiral.
In 1983, excess-of-loss reinsurance was popular, especially with Lloyd’s of
London. Syndicates unintentionally paid themselves to insure themselves
against ruin.14 These risks were viewed as independent – but in fact they were
not, because they involved a cycle of hedging to the same pool of insurers.
Hurricane Alicia was very damaging in 1983, and nearly brought down
Lloyd’s of London, much of the damage being blamed on the LMX cycle of
reinsurance.
Black Monday, October 19, 1987 was another critical event, when the stock
exchange nearly melted down. Some blamed portfolio insurance, based on
efficient-market theory and models implementing computer trading to take
advantage of what were viewed as temporary diversions from a fundamental
price reflecting value.
Yet another recent problem arose with Long Term Capital Management
(LTCM), a firm created to benefit from Black–Scholes formulation of the value
of derivatives. Lowenstein (2000)15 reviewed the case in depth, beginning
with the theoretical model of the value of the new instrument of derivatives,
LTCM’s spectacular success, and its ultimate collapse due to positions held
in Russian banking in the latter part of 1998. LTCM was bailed out by the
Federal government in the US because it was viewed as too big to be allowed
to collapse.
In 2002, highly popular information technology stocks plummeted. In the
1990s, venture capitalists were highly amenable to throwing money at any
proposal suggesting a way to implement computer technology. Stock prices
for many new startups skyrocketed, in hopes that each was the new IBM, or
Microsoft, SAP, or Oracle. But most were not viable, and this market sector
proved to be yet another bubble.
Today, we are still suffering the aftereffects of the subprime mortgage collapse.
Banks seemingly prospered in a climate of deregulation and merger begun in
1981. Many invested heavily in financial instruments created out of pools,
including mortgages, many of which were generated by aggressive marketing
to those who were offered homes beyond their ability to pay for. This was not
viewed as a problem, as the housing market in most of the United States had
little evidence of decline in value, and in some regions (California, Florida,
Nevada) were vastly outperforming other types of investment return. However,
in 2008 house values proved capable of declining in value, and many of the
most aggressive investing banks were stuck with mammoth deficits. Some (not
Economic Perspective 113

all) such banks were bailed out by the federal government, and most credit this
action with saving the US economy (if not the world economy).
Difficulty in grasping extreme event probabilities has long been noted. Taleb
(2007)16 notes that we are trained to consider fair coin flips to have a 0.5 prob-
ability of heads as well as tails. He proposes that if we observe 99 consecutive
coin flips, statistical training steers us to assume that the next coin flip will
have a probability of 0.5 for both heads and tails. Taleb argues that a more
pragmatic estimate of events is that something is crooked. Taleb also discussed
casino treatment of risk, one of their definite core competencies. Casinos have
mechanisms in place (risk management) to assure they don’t go broke. But
operating a casino is not completely immune to risk. Taleb related the four
biggest losses casinos experienced in recent times:

• A tiger bit a member of the Siegfried and Roy entertainment team, costing
the casino about $100 million by one estimation.
• A contractor was constructing a hotel annex, suffered losses in the project,
and sued. He lost the suit, and tried to dynamite the casino.
• Casinos (rightfully so) are required to file tax returns with the Internal
Revenue Service. An employee charged with this duty failed to perform –
not once but over a number of years. When the malfeasance was discovered,
the casino was liable, and had to pay a huge fine as its license was at risk.
• A casino owner’s daughter was kidnapped. In violation of law, he used casino
money to raise the ransom demanded. While this was understandable, the
casino was liable.

These four examples represent very rare (hopefully) events, with little basis
for accurate actuarial calculation. They fall into the scale of ignorance, in that
they have outcomes that were probably deemed beyond the realm of like-
lihood. But firms must operate in environments that include the possibility of
meteors hitting the earth and causing the end of life as we know it, of wars, of
terrorism, and global warming threatening islands and port cities.
Taleb presented the Black Swan problem. Most humans try to be scientific,
and learn from their observations and history. But while nobody in Europe had
seen a black swan, and had thus assumed they didn’t exist, when they settled
Australia they found some, disproving their empirical hypothesis. Taleb also
noted fallacies on the part of investors, who assume data is normally distrib-
uted. In practice, especially during bubble bursts, fat tails with higher extreme
probabilities are often observed. Cognitive psychology can explain some of
this. Kahneman and Tversky (2000)17 emphasized human biases from framing,
with different attitudes toward risk found during winning and losing streaks.
Humans also have been found to overestimate the probability of rare events,
114 Enterprise Risk Management in Finance

such as the odds of the next asteroid impacting the earth, or the risk of terror-
ists on airplanes. Akerlof and Shiller (2009)18 argued that standard economic
theory makes too many assumptions; when human decisions are involved, his-
torical data is not a good predictor of future performance.

Risk mitigation
ERM seeks to provide means to recognize and mitigate risks. The field of
insurance developed to cover a wide variety of risks, external and internal,
covering natural catastrophes, accidents, human error, and even fraud.
Financial risk has been controlled through hedge funds and other tools over
the years, often by investment banks. With time, it was realized that many
risks could be prevented, or their impact reduced, through loss-prevention and
control systems, leading to a broader view of risk management.

Risk tolerance
A key concept of risk management is risk tolerance. Risk tolerance bounds the
risk a firm is willing to assume. It can be calculated as the maximum amount
of surplus a company is willing to lose over a given period for a specific event.
It also could express the most the company is willing to lose per year. Yet
another definition is to express the probability that a capital adequacy ratio
(the ratio of reserves to potential payouts) will fall below a given level.
Another key concept is that organizations are in business to cope with risks
in their area of expertise (core competence), but should shed risks that are not
in this core. Insurance is the most common means to shed these non-core
risks.

Recent events
Doherty et al. (2009)19 sought to explain the underlying causes of the 2008
global financial crisis, concluding that values, incentives, decision processes,
and internal controls all played a role. Oversight and control were found
lacking in the years prior to 2008, with heavy use of leverage on the part of
investment firms that amplified risks, which in turn were hidden by the strong
profit performance measurement. A negative view of risk management would
be to ensure loss avoidance. A positive view would emphasize risk management
as part of value creation.
The insurance industry by its nature is focused on risk transfer. During the
2008 and subsequent period, while the insurance industry remained profitable
for the greater part, it experienced significant declines in return as well as
losses of risk capital. Doherty et al. proposed approaches to managing risk in
this market:
Economic Perspective 115

• Solvency management focuses on limiting the probability of failure to


levels in line with the organization’s risk tolerance. Solvency costs include
lost future cash flow, opportunity costs of liquidation, and costs of regulatory
intervention. Demand for insurance also is affected by credit ratings, which
are in turn impacted by solvency. Thus the firm’s access to capital markets
is affected, as is the firm’s reputation. Managing solvency risk requires bal-
ancing actual capital with the capital required to support portfolio risks as
a specified risk tolerance. Strategies and tools to deal with unknown risks
include:
• slack (having extra cash), contingent equity (could be an option to put
newly issued preferred shares in the case of stock price falling below a
strike level and an event such as reaching a threshold for payment of
claims).
• mutualization (which apportions losses over a large pool of members).
The insured parties end up sharing the losses of the insurer. A related
concept is a pari-mutuel market allowing hedgers and speculators to
place bets on events.
• Profitable growth ensures funding for strategic investments, particu-
larly after major losses. The problem is that when cash flow is short, man-
agement tends to pass up promising investment opportunities. Insurance
against adverse events provides new capital in those cases. Reinsurance is a
macro-level pooling of risks aimed at developing portfolios of diverse risks
expected to have low correlations. Insurance operates on the assumption
that the following anomalies are not present:
• Moral hazard, which arises when the party being insured has prior
knowledge that the probability of claim will be much larger than the
norm, which is unknown by the insurer. An example is subprime loan
initiators in the mid-2000s selling mortgages to parties known to have a
high risk of default, and passing this risk on to other investors.
• Adverse selection, which is a market process where undesired results
arise due to imbalanced information access by contracting parties. An
individual might buy cancer insurance if their doctor tells them they
have unfavorable symptoms, but they don’t share this information with
the insurer. Sometimes this exact circumstance involves the government
requiring insurers to provide coverage to all applying parties regardless of
pre-existing conditions (regulatory adverse selection).
• Transparency reduces investor uncertainty, and can be obtained in part
by eliminating non-core activity impact on reported earnings and capital.
Market values were contended to be associated with stable earnings. If
noncore risks are transferred to the nonrecurring category in corporate
earnings reports, investors should receive a more accurate approximation of
116 Enterprise Risk Management in Finance

firm value. The issue of defining what things belong in recurring or nonre-
curring categories does involve a level of subjectivity, lending itself to abuse.
The basic argument is that honesty pays in the long run.

Conclusions

Risk is traditionally modeled as the product of probability times severity. These


two dimensions can be considered in different contexts.20 A formulation of
expected frequency plotted against expected severity is appropriate for loss
portfolios adding independent events. For single random events, it is more
appropriate to view probability versus severity through mechanisms such as
risk matrices. The conventional risk finance paradigm would suggest three
strategies: If expected severity and expected frequency were both high, risk
should be avoided through some control. If expected severity was high but
expected frequency low, risk transfer through hedging, insurance, or risk miti-
gation would be appropriate. If expected frequency were high and expected
severity low, diversification through pooling through frequency mitigation
would be useful. If both expected frequency and expected severity were low,
informal diversification would protect against these rare events.
Powers et al. noted two problems with this strategic approach:

1. Low frequency/high severity events (ambiguity/uncertainty) call for risk


transfer. This assumes that there will always be a counterparty willing to
accept the transferred risk. However, in dire times, insurance companies
have been observed to reject business.
2. Under conditions of high expected frequency, pooling through diversifi-
cation is suggested if expected severity is low, while risk avoidance is recom-
mended if expected severity is high. Higher expected severity events are
often observed to have fat tails, which inhibit diversification.

Powers et al. extended their analysis to the continuous case, where it is not
necessary to establish boundaries between high and low expected frequencies
or severities.
Financial theory has developed a number of tools to supplement insurance
and hedging as means of implementing risk management. Derivatives are
intended as means to hedge in a more sophisticated manner. But the human
factor enters into the equation when customers do not understand what they
are purchasing (and possibly salespeople don’t either). Humans have a ten-
dency to be exuberant that has manifested itself over and over with time, cre-
ating high variance in market prices that we know as bubbles.
Buehler et al. concluded that companies have their own appropriate ratio
of debt-to-equity related to the probability of incurring loss. Greater equity
Economic Perspective 117

capital than required will lead to inefficient capital use, as more profits will
be needed to maintain its average per-share profitability. Insufficient equity
capital risks default or financial stress, as well as limiting the firm’s ability to
take advantage of new growth opportunities. Optimal debt level is a function of
a firm’s key market, financial, and operating risks. The firm’s ability to mitigate
those risks varies. The risks within the firm’s competence to readily deal with
should be retained, while the risks that they are less capable of dealing with
should be transferred.
12
British Petroleum Deepwater Horizon

Introduction

The Macondo well, operated by BP, aided by driller Transocean Ltd. and
receiving cement support from Halliburton Co., blew out on April 20, 2010,
leading to 11 deaths. The subsequent 87-day flow of oil into the Gulf of Mexico
dominated news in the US for an extensive period of time, polluting fisheries in
the Gulf as well as the coastal areas of Louisiana, Mississippi, Alabama, Florida,
and Texas. The cause was attributed to defective cement in the well.
Subsequent studies by regulatory agencies could detect no formal risk assess-
ment by BP.1 Risk management in this context is not traditional insurance risk
management, nor the financial risk management that made hedging famous.
Rather, risk management in this broader sense means responsibility to rec-
ognize hazards and take action to prevent them to the extent possible. The
Deepwater oil spill demonstrates such a context, and the disastrous conse-
quences of not accomplishing such risk management.
Our economy gets involved in many extraction activities, most of which
involve high degrees of risk. In the 19th century, the prevailing attitude was
that miners were paid above the market level of wages in compensation for
the risks to their lives involved in their work. We still have mining, but the
20th century saw more activity in petroleum extraction, again with high levels
of life-threatening events. Government has reflected society’s changing atti-
tudes. This evolution is still going on, but we feel that the trend is for some
group to take the attitude that for every catastrophe, ‘we will not rest until that
never happens again.’ The Occupational Safety and Health Act of 1972 went
far to make workplaces safer. It has taken many years for our society to work
out this transition to safer work practices as a requirement, even at the expense
of operating cost. (In fact, we remember when one of the initial motivators for
firms to accept OSHA was that ‘it pays’ through reduced legal suits and fines.
There have been many acts passed in the US to regulate all industry, including
all phases of petroleum production.

118
British Petroleum Deepwater Horizon 119

Deepwater horizon

The Macondo Mississippi Canyon Block 252 oil well was 5000 feet deep, 40
miles southeast of the Louisiana shore. It was owned by Transocean Ltd.,
leased to British Petroleum. On April 20, 2010, the well erupted after a blowout
resulting in explosion and fire. Eleven workers were killed, another 17 injured.
Two days after the explosion, the entire rig sank. Gas and mud from the well
had triggered an explosion that sank the platform and cut the well pipe at
the sea floor. A containment cap was in place, but failed. The wellhead was
compromised, discharging crude at the rate peaking at 9000 barrels per day
into the Gulf of Mexico. The oil spill shut down Louisiana seafood.2
The oil spill persisted from April until July, gushing nearly 5 million barrels
of oil before it was finally stopped. The response effort was massive, with some
2600 vessels deployed. Shoreline protection efforts included 4.4 million feet of
sorbent boom, and there was heavy use of oil dispersants. Silves and Comfort
provided a listing of conditions describing BP risk management with respect to
Deepwater Horizon (Table 12.1).

Table 12.1 BP risk factors

Conditions Tolerated conditions ensuing in accident

Technical factors Technological and environmental learning advanced in


response and recovery stage;
Corporate information sharing grew;
Government regulation was reformed.
Outside expertise Drew on world-class expertise and equipment, but decision
making was not highly adaptive;
Information sharing satisfactory at best;
Public relations failures.
Outside consulting Consulted marine science community.
Government role BP had difficulty recognizing interdependency with
government.
Interorganizational Spill planning sophisticated;
factors State & Federal officials had plan and system, but not well
prepared for discharge a mile under sea level that continued for
months.
Socio-technical issues Principal–agent problems between BP & Transocean;
Profit/safety conflicts;
Overconfidence in sea floor containment cap;
Experiment in drilling 1 mile deep, 4 further miles into the sea;
Problems using National Incident Management System in
parallel with National Contingency Plan and Oil Pollution Act.

Source: Extracted from Sylves and Comfort (2014).


120 Enterprise Risk Management in Finance

The Oil Pollution Act of 1990


Under this US act, spillers were liable for much greater amounts than previ-
ously, with stiffer civil and criminal penalties. Spillers were required to pay
for the cleanup of oil spills and to compensate those economically injured.
States were allowed to impose unlimited liability on shippers. A federal fund
financed from an existing 5 cent per barrel oil tax was created to cover cleanup
and compensation costs that spillers did not cover. Shippers were responsible
for drafting worst-case oil spill response plans for quick cleanup. Oil tankers
crossing US waters were required to be double-hulled by 2010. Federal oil spill
response capability was reinforced by positioning response teams across the
country, coordinated by a national command center. The US President was
authorized to take control of oil spill cleanup. A multi-agency oil pollution
panel was established to coordinate federal research.

Recovery factors in Macondo


As is normal in highly visible disasters, a great deal of blame was passed on
to BP, as well as to the Minerals Management Service (MMS), the regulatory
agency most involved. Government regulatory planning was in place, but the
mile-deep sea floor well was beyond regulatory experience. While spill plan-
ning had existed, BP expertise failed. BP was accused of tolerating conditions
resulting in what Perrow3 referred to as ‘normal accidents.’ The reaction after
the event, however, was positive in that BP invested a high level of effort, and
government regulation was reformed. BP accessed whatever equipment and
expertise it could obtain. The technical issues involved are shown in part in
Table 12.2.

Table 12.2 Factors in recovery

Oil properties Light and gaseous oil, evaporating quickly in warm climate

Volume Months of discharge created immense volume, very difficult to stop


Challenges Protection of migratory water fowl, fisheries, wetlands, shorelines
Interference with water freight, tourism, recreation
Confidence in oil platform safety jeopardized
Environmental Skimming
remediation In situ burning (to a slight degree)
Berming
Absorbent booms
Animal rescue
Massive use of dispersants on sea floor
Reliance in natural forces

Source: Extracted from Sylves and Comfort (2014).


British Petroleum Deepwater Horizon 121

The BP oil spill led to a number of socio-technical mitigation and/or pre-


vention measures for the industry. There was more regulation and testing of
containment cap technology. There was a moratorium imposed on deep-water
drilling in the Gulf. Oil platform drilling and operations were more stringently
regulated. Corporate safety culture improvement was induced. Oil companies
shared more information on deepwater drilling technology. Oil spiller liability
was increased.

Risk management factors


Borison and Hamm4 addressed risk management in the BP case. They cited the
dominant publicly accepted views to be:

1. the disaster was a black swan, which could not have been foreseen; and
2. BP and its regulators were incompetent black sheep.

Borison and Hamm considered a third perspective, seeking insight into what
better risk management could have done.
The uncertainty (ambiguity) of the case makes traditional tools such as VaR
analysis inappropriate; while there is a statistical database of oil spills, they are
hopefully rare events. So tools such as stress testing were proposed as more
appropriate. MMS was responsible for regulation, and had a system in place
to deal with oil spills that included tools to estimate likelihood and impact
based on historical data. However, MMS was not required to quantify risk of
a large spill from the Macondo well, and appears not to have done so. MMS
relied on qualitative analysis in the form of developers specifying a worst-case
scenario and giving detailed plans of response. Thus BP was aware of a large oil
spill, although no probability estimates were recorded. Thus BP did not under-
estimate the probability of the spill – rather, BP didn’t estimate it at all. When
risk analysis is solely qualitative, there is a great deal of personal interpretation.
For instance, a ‘worst-case’ could imply a 1 in 100 event, or a 1 in 1,000,000
event.
Borison and Hamm suggested a Bayesian analysis instead. Eckle and Burgherr
(2013)5 applied Bayesian analysis to oil chain fatalities, using a Poisson dis-
tribution for accident frequency and a gamma distribution for accident
severity. Monte Carlo simulation modeling was used to combine these factors.
Abbasinejad et al. (2012)6 studied Iranian energy consumption since 1983,
finding it unbalanced, with excessive use of oil and gas damaging the envir-
onment and encouraging a shift to natural gas and electricity. High growth in
energy demand was expected, and a Bayesian vector autoregressive forecasting
model of energy consumption was applied. Bayesian analysis bases probabil-
ities on judgment rather than on empirical observations. While statistically it
122 Enterprise Risk Management in Finance

is of course preferable to have data as a basis, risk management often involves


situations where the relevant data is not available. Judgment can be based upon
expert opinion, which is the preferred source. Oil production was given as a
case where data was widely available. Oil price, however, depends upon many
highly variable factors, including war, piracy, dry wells, pipeline rupture, train
wrecks, and drilling company bankruptcy. Bayesian analysis relies upon expert
judgment as the basis for estimating the magnitude and/or probability of these
factors.

Conclusions

The BP spill is only the most recent salient event demonstrating the risk to
the environment from energy policy. One alternative is nuclear energy, with
far less probability of a spill. However, there is of course the fear of such a spill
being far more dangerous than mere petroleum. Our global economy requires
a great deal of transportation, and we have developed a complex system relying
on petroleum-based energy. There are known relative risks of spills in the
system. Drilling in the ocean taps vast reserves, and occurs at a distance from
population centers. But as the BP case demonstrates, spills can be dramatic.
Conversely, land-based drilling creates a need to transport oil. Pipelines are
safer than trains, but political issues make it difficult to create new pipelines
while existing (old) railroad track becomes the major shipment technique. We
could switch to electric cars and trucks to vastly reduce the use of petroleum
for transportation – but the added strain on the electrical generation system
would open new opportunities for disaster. The problem is essentially that our
economies are complex and interrelated.
Borison and Hamm inferred three major lessons from the BP oil spill:

1. Assessment of unusual events is becoming important enough to call for


greater rigor. The nature of such events means that we have relied on quali-
tative assessment. Borison and Hamm say we need something better. We
conclude that the nature of such rare events mean that we probably have to
live with qualitative assessment, but techniques such as scenario analysis,
stress testing, and Bayesian inference can be applied.
2. Borison and Hamm call for greater formal attention to developing and evalu-
ating customized preparations and responses. They accuse the response
to the Macondo well event to have been ‘off-the-shelf.’ We would call for
having a variety of such off-the-shelf remedies available that can be quickly
implemented. Since it is nearly impossible to anticipate unusual catastro-
phes, a flexible response would seem appropriate.
British Petroleum Deepwater Horizon 123

3. Borison and Hamm call for systematic learning to ensure that risk man-
agement improves over time. We think that they are absolutely correct.
Understanding the interactions of complex systems is critical in enabling
effective response.

The BP oil spill demonstrates a common risk context, involving massive engin-
eering undertakings such as seen in construction projects of many kinds. Tools
to plan for risk that were suggested were stress testing (war games, what-if
scenario analysis, many other names) and simulation using subjective inputs.
13
Bank Efficiency Analysis

Introduction

In today’s economy, the banking industry is of great importance. With the


availability of new technology and the internet, more and more organizations
are entering some aspect of the banking business and this results in intense
competition in the financial services markets. Major domestic banks continue
to pursue all the opportunities available to enhance their competitiveness.
Consequently, performance analysis in the banking industry has become part
of their management practices. Top bank management wants to identify and
eliminate the underlying causes of inefficiencies, thus helping their firms to
gain competitive advantage, or, at least, meet the challenges from others.
Traditionally, banks have focused on various profitability measures to
evaluate their performance. Usually multiple ratios are selected to focus on
the different aspects of operations. However, ratio analysis provides a rela-
tively insignificant amount of information when considering the effects
of economies of scale, the identification of benchmarking policies, and
the estimation of overall performance measures of firms. As alternatives to
traditional bank management tools, frontier efficiency analyses allow man-
agement to objectively identify best practices in complex operational envi-
ronments. Five different approaches, namely, Data Envelopment Analysis
(DEA), Free Disposal Hull (FDH), Stochastic Frontier Approach (SFA), Thick
Frontier Approach (TFA), and Distribution Free Approach (DFA) have been
reported in the literature as methods to evaluate bank efficiency.1 These
approaches primarily differ in how much restriction is imposed on the spe-
cification of the best practice frontier and the assumption of random error
and inefficiency. Compared to the other approaches, DEA is a better for
organizing and analyzing data since it allows efficiency to change over time
and requires no prior assumption on the specification of the best practice
frontier. Thus, DEA is a leading approach for the performance analysis of the

124
Bank Efficiency Analysis 125

banking industry in the academic literature. However, the DEA frontier is


very sensitive to the presence of the outliers and statistical noise, which indi-
cates that the frontier derived from DEA analysis may be warped if the data
are contaminated by statistical noise. On the other hand, DEA is not good for
predicting the performance of other decision-making units. As a result, arti-
ficial neural networks (ANNs) have recently been introduced as a good alter-
native for estimating efficiency frontiers for decision makers (Wang, 2003). 2
The idea of combination of neural networks and DEA for classification and/
or prediction was first introduced by Athanassopoulos and Curram (1996). 3
They treated DEA as a preprocessing methodology to screen training cases in
a study of forecasting the number of employees in the healthcare industry.
After selecting samples, the ANNs are then trained as tool to learn a non-
linear forecasting model. Costa and Markellos (1997)4 analyzed London
underground efficiency with time series data. They explained how the
ANNs results are similar to corrected ordinary least squares (COLS) and DEA.
However, ANNs offer advantages in decision-making by denoting the impact
of constant vs. variable returns to scale or congestion areas. Fleissig et al.
(2000)5 employed neural networks for cost functions estimation. They found
convergence problems when the properties of symmetry and homogeneity
were imposed on the ANNs. Santin et al. (2004)6 used a neural network for
a simulated nonlinear production function, and compared its performance
with traditional alternatives like stochastic frontier and DEA under condi-
tions of different numbers of observations and noise. Pendharkar and Rodger
(2003)7 used DEA for data screening to create a subsample training data set
that is ‘approximately’ monotonic, which is a key assumption in certain
forecasting problems. Their results indicated that the predictive power of an
ANN that is trained on the ‘efficient’ training data subset is stronger than
the predictive performance of an ANN that is trained on an ‘inefficient’
training data subset. As two nonparametric models, there are many similar-
ities between ANNs and DEA models such as:8

• Neither DEA nor ANNs make assumptions about the functional form that
links its inputs to outputs.
• DEA seeks a set of weights to maximize technical efficiency, whereas ANNs
seek a set of weights to derive the best possible fit through observations of
the training dataset.

Bank branch efficiency is a comprehensive measure using various performance


aspects with a number of financial variables. This indicates that the rela-
tionship between bank branch efficiency and multiple variables is highly
complicated and nonlinear. For example, an efficiency improvement for a bank
branch from 0.5 to 0.6 might simply be the result of personnel cost reduction.
126 Enterprise Risk Management in Finance

But the improvement of its efficiency from 0.8 to 0.9 could be due to many
causes, for example, scale economy, or an increase in several outputs. ANNs
have been viewed as a good tool to approximate numerous nonparametric and
nonlinear problems. Thus, the banking industry provides good opportunities
for the applications of ANNs. To the best of our knowledge, there are no studies
using ANNs dealing with bank branch efficiency, but this chapter presents
a DEA-NN approach to evaluate the branch performance of a big Canadian
bank. The results are also compared with the corresponding efficiency ratings
obtained from DEA. The fact that the DEA property of unit invariance is similar
to the property of scale preprocessing required by NNs validates the rationale
to implement a comparison between pure DEA results and DEA-NN results.
Based on this analysis, similarities and differences between ANNs and DEA
models are further investigated.

Data envelopment analysis (DEA)

DEA is used to establish a best practice group amongst a set of observed units
and to identify the units that are inefficient when compared to the best practice
group. DEA also indicates the magnitude of the inefficiencies and improve-
ments possible for the inefficient units. Consider n DMUs to be evaluated,
DMUj (j = 1, 2, ..., n), that consume the amounts Xj = {xij} of m different inputs
(i = 1, 2, ..., m) and produce the amounts Yj = {yrj} of s different outputs (r = 1, ..., s).
The input-oriented efficiency of a particular DMU0 under the assumption
of variable returns to scale (VRS) can be obtained from the following linear
program (the input-oriented BCC model):9

$$
\begin{aligned}
\min_{\theta,\,\lambda,\,s^{+},\,s^{-}} \quad & z_0 = \theta - \varepsilon\,\mathbf{1}^{T}s^{+} - \varepsilon\,\mathbf{1}^{T}s^{-} \\
\text{s.t.} \quad & Y\lambda - s^{+} = Y_0 \\
& \theta X_0 - X\lambda - s^{-} = 0 \qquad \text{(13.1)}\\
& \mathbf{1}^{T}\lambda = 1 \\
& \lambda,\, s^{+},\, s^{-} \ge 0
\end{aligned}
$$

where $s^{+}$ and $s^{-}$ are the slack vectors in the system.


Performing a DEA analysis requires the solution of n linear programming
problems of the above form, one for each DMU. The optimal value of the
variable θ indicates the proportional reduction of all inputs for DMU0 that
will move it onto the frontier which is the envelopment surface defined by
the efficient DMUs in the sample. A DMU is termed efficient if and only if the
optimal value θ* is equal to 1 and all the slack variables are zero. This model

allows variable returns to scale. The dual program of the above formulation is
illustrated by:

$$
\begin{aligned}
\max_{\mu,\,\nu} \quad & w_0 = \mu^{T}Y_0 + u_0 \\
\text{s.t.} \quad & \nu^{T}X_0 = 1 \\
& \mu^{T}Y - \nu^{T}X + u_0\mathbf{1}^{T} \le 0 \qquad \text{(13.2)}\\
& -\mu^{T} \le -\varepsilon\,\mathbf{1}^{T} \\
& -\nu^{T} \le -\varepsilon\,\mathbf{1}^{T} \\
& u_0 \ \text{free}
\end{aligned}
$$

If the convexity constraint ($\mathbf{1}^{T}\lambda = 1$) in (13.1) and the variable $u_0$ in (13.2) are
removed, the feasible region is enlarged, which results in a reduction in the
number of efficient DMUs, and all DMUs are operating at constant returns to
scale (CRS). The resulting model is referred to as the CCR model.10 DEA has a
rich literature base of over 3000 papers and several books for those who require
detailed information on this technology.
In summary, each DEA model seeks to determine which of the n DMUs
define an envelopment surface that represents best practice, referred to as the
empirical production function or the efficient frontier. Units that lie on the
surface are deemed efficient in DEA, while those units that do not are termed
inefficient. DEA provides a comprehensive analysis of relative efficiencies
for multiple input-multiple output situations by evaluating each DMU and
measuring its performance relative to an envelopment surface composed of
other DMUs. Those DMUs are the peer group for the inefficient units known
as the efficient reference set. As the inefficient units are projected onto the
envelopment surface, the efficient units closest to the projection and whose
linear combination comprises this virtual unit form the peer group for that
particular DMU. The targets defined by the efficient projections give an indi-
cation of how this DMU can improve to become efficient.
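
As a concrete illustration of the envelopment computation just described, the sketch below solves a simplified (phase-one) version of the input-oriented BCC program (13.1) for one DMU with an off-the-shelf LP solver, dropping the non-Archimedean slack term for brevity. The function name and the small input-output arrays are illustrative assumptions, not the branch data analyzed later in this chapter.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, j):
    """Phase-one input-oriented BCC score (theta*) for DMU j.

    X is an (m, n) input matrix and Y an (s, n) output matrix; columns are DMUs.
    The epsilon slack term of model (13.1) is omitted for brevity.
    """
    m, n = X.shape
    s = Y.shape[0]
    # decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                          # minimise theta
    A_out = np.hstack([np.zeros((s, 1)), -Y])            # Y lambda >= Y_j
    b_out = -Y[:, j]
    A_in = np.hstack([-X[:, [j]], X])                    # X lambda <= theta * X_j
    b_in = np.zeros(m)
    A_eq = np.hstack([np.array([[0.0]]), np.ones((1, n))])   # sum(lambda) = 1 (VRS)
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# toy data: 2 inputs, 1 output, 4 DMUs (made-up numbers)
X = np.array([[2.0, 3.0, 6.0, 4.0],
              [4.0, 2.0, 5.0, 8.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.0]])
print([round(bcc_input_efficiency(X, Y, j), 3) for j in range(4)])
```

One such LP is solved per DMU; a score of 1 (together with zero slacks in the full two-phase version) marks a unit on the efficient frontier.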

Neural networks

Neural networks provide a new way for feature extraction (using hidden layers)
and classification (e.g., multilayer perceptrons). In addition, existing feature
extraction and classification algorithms can also be mapped into neural
network architectures for efficient (hardware) implementation.
Backpropagation neural network (BPNN) is the most widely used neural
network technique for classification or prediction.11 Figure 13.1 provides the
structure of the backpropagation neural network.

Figure 13.1 Backpropagation neural networks (input layer X, hidden layer Z reached through input-to-hidden weights W, output ŷ produced through hidden-to-output weights V; the error y − ŷ is propagated back)

With backpropagation, the related input data are repeatedly presented to the
neural network. The output of the neural network is compared to the desired
output and an error is calculated in each iteration. This error is then back-
propagated to the neural network, and used to adjust the weights so that the
error decreases with each iteration and the neural model gets closer and closer
to producing the desired output. This process is known as training.
When the neural networks are trained, three problems should be taken into
consideration. First, it is very challenging to select the learning rate for the
nonlinear network; if the learning rate is too large, it leads to unstable learning,
but on the contrary, if the learning rate is too small, it results in incredibly
long training iterations. Second, settling in a local minimum may be good or
bad, depending on how close the local minimum is to the global minimum
and how small an error is required. In either case, backpropagation may
not always find the weights of the optimum solution, so we may reinitialize
the network several times to improve the chance of reaching a good solution. Finally,
the network is sensitive to the number of neurons in its hidden layers: too few
neurons can lead to under-fitting, whereas too many neurons can cause over-
fitting, in which case all training points are fitted well but the fitting curve
oscillates wildly between them. To mitigate these problems, we pre-
process the data before training. The scale of the data values is bounded between 10 and
100 by dividing by a constant, such as 10 or 100. The weights are initial-
ized with random decimal fractions ranging from –1 to 1. Moreover, there are
about 12 training algorithms for BPNN. After preliminary analyses and trial,

we chose the fastest training algorithm, the Levenberg–Marquardt algorithm,
which can be considered as a trust-region modification of the Gauss–Newton
algorithm.
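
The sketch below is a rough stand-in for the training setup just described: a 5–10–1 network with tanh hidden units fitted on synthetic data, with inputs rescaled as in the chapter's preprocessing. scikit-learn offers no Levenberg–Marquardt trainer, so the quasi-Newton 'lbfgs' solver is substituted here, and the data and target are invented for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic stand-in for five branch-level inputs and an efficiency-like target
X_raw = rng.uniform(1e4, 5e5, size=(100, 5))
y = np.clip(1.1 - X_raw.sum(axis=1) / 3e6 + rng.normal(0, 0.05, 100), 0.2, 1.0)

X = X_raw / 1000.0          # scale preprocessing: raw data divided by a constant

# 5-10-1 architecture with tanh hidden units; 'lbfgs' stands in for
# Levenberg-Marquardt, which scikit-learn does not provide
net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh", solver="lbfgs",
                   max_iter=20000, random_state=0)
net.fit(X, y)
print("training R^2:", round(net.score(X, y), 3))
```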

The data we use

One hundred and forty-two branches of a big Canadian bank in the Toronto
area were involved in the analysis. The data covered the period October to
December, 2001. Summary statistics for the inputs and outputs are reported
in Table 13.1.
From the table, no consistent trend in the data was found over the time
horizon of analysis. There are no significant variations in terms of deposits
and loans.

Bank branch efficiency analysis


Comparing the performance of the ANN for both efficient and inefficient
training data subsets in the healthcare forecasting problem, Pendharkar and
Rodger (2003)12 indicate that the predictive performance of an ANN that is
trained on the efficient training data subset is higher than the predictive per-
formance of an ANN that is trained on the inefficient training data subset.
Therefore, DEA-efficient branches are all selected as training data in building
a NN for branch efficiency analysis. Troutt et al. (1995)13 suggest that training

Table 13.1 Summary statistics of data

| Period | Statistic | Personnel | Other general expenses | Deposits | Loans | Revenues |
|---|---|---|---|---|---|---|
| October 2001 | Average | 55,222 | 38,239 | 90,967,688 | 98,321,387 | 150,972 |
| | Standard deviation | 39,284 | 36,023 | 59,801,588 | 123,239,727 | 93,409 |
| | Min | 9,323 | 4,100 | 2,912,171 | 6,456,600 | 25,666 |
| | Max | 401,584 | 337,833 | 535,562,721 | 1,147,686,344 | 918,611 |
| November 2001 | Average | 52,427 | 29,015 | 91,887,319 | 98,261,715 | 163,285 |
| | Standard deviation | 38,719 | 29,467 | 60,326,161 | 123,526,412 | 96,536 |
| | Min | 13,286 | 5,032 | 2,821,525 | 6,471,055 | 25,618 |
| | Max | 413,439 | 283,236 | 542,933,642 | 1,159,694,084 | 956,827 |
| December 2001 | Average | 54,156 | 27,654 | 92,382,692 | 98,350,514 | 159,142 |
| | Standard deviation | 39,897 | 26,958 | 61,138,567 | 123,919,287 | 94,745 |
| | Min | 14,206 | 3,029 | 2,832,913 | 6,549,843 | 33,409 |
| | Max | 433,591 | 264,904 | 558,736,191 | 1,177,876,733 | 889,346 |

data for nonparametric models should be at least ten times the number of input
variables. Since we have five inputs in the ANNs, a minimum of 50 training
examples was necessary to have reasonable learning of connection weights.
Since we had less than ten efficient branches (efficiency score is 1) for each
training, and a minimum of 50 branches for training was desirable, we used a
grouping technique by roughly pre-specified cutoff efficiency threshold values
of 0.98, 0.8 and 0.5. Thus, we obtained sufficient branches in our ‘efficient’
set. Note that the word efficient in the DEA context means DMUs with an effi-
ciency score of 1. Since in our case the ‘efficient’ set does not only include all
DMUs with an efficiency score of 1, we have used quotation marks to indicate
that the word ‘efficient’ has a slightly different meaning from the DEA context.
The same logic applies for the word inefficient (efficiency score < 1). The per-
formance of ‘efficient’ and ‘inefficient’ branches was then tested on the entire
dataset so that industry efficiency could be predicted and analyzed.
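
A minimal sketch of this grouping step is given below, assuming the cutoff values quoted above (0.98, 0.8 and 0.5); the example scores and the subset labels S1–S4 are placeholders rather than the study's actual DEA results.

```python
import numpy as np

def group_by_efficiency(scores, cutoffs=(0.98, 0.8, 0.5)):
    """Split branch indices into subsets S1-S4 by DEA efficiency score."""
    scores = np.asarray(scores)
    s1 = np.where(scores > cutoffs[0])[0]                            # 'efficient'
    s2 = np.where((scores > cutoffs[1]) & (scores <= cutoffs[0]))[0]
    s3 = np.where((scores > cutoffs[2]) & (scores <= cutoffs[1]))[0]
    s4 = np.where(scores <= cutoffs[2])[0]
    return s1, s2, s3, s4

dea_scores = [1.0, 0.97, 0.83, 0.71, 0.55, 0.42, 1.0, 0.66]          # illustrative
S1, S2, S3, S4 = group_by_efficiency(dea_scores)
train_idx = np.concatenate([S1, S2, S3])   # e.g., an S1 ∪ S2 ∪ S3 training subset
```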
The neural network was trained for each month using different combina-
tions of subsets. A computer program was written using Matlab language as
well as the neural network add-in module of Matlab.
Table 13.2 presents the parameters of the estimated neural networks in the
algorithm. The estimated neural network incorporates four tanh hidden units,
and the Levenberg–Marquardt algorithm is employed for training.
After training, we obtained three estimated neural networks for each month,
the best two of which were used for the branch efficiency prediction for each
month. The best two estimated neural networks are denoted as DEA-NN1 and
DEA-NN2, using training data subset S1 ∪ S2 ∪ S3 or S1 ∪ S2 ∪ S4, respectively.
The results are consistent with Pendharkar and Rodger's (2003) findings that it is
better to train ANN on an efficient training data subset in order to improve the
predictive performance of an ANN.

Table 13.2 Estimated neural network parameters

| Concept | Result |
|---|---|
| Data pre-processing | Input: raw data/1000 (scale preprocessing), bounded to [10, 500]; Output: efficiencies, invariant in [0, 1] |
| Network architecture | 5–10–1 |
| Activation function (hidden/output) | tanh / linear |
| Algorithm | Levenberg–Marquardt |
| Epochs (max.) | 20000 |
| R² | 0.985 |
| Learning rate | 0.6 |
| Mean square error | 0.0001 |

Based upon the best two estimated neural networks, the branch efficiencies
are calculated. The results are shown in Tables 13.3 and 13.4. Table 13.3 gives
the number of branches corresponding to each efficiency interval for the
three months. We grouped the dataset into four categories depending on the
efficiency value intervals (0.98, 1], (0.8, 0.98], (0.5, 0.8] and (0, 0.5], and then
performed a statistical analysis of the DEA-NN efficiencies.
Table 13.4 shows that the efficiency scores for some DMUs are larger than 1
in the DEA-NN model, which is not allowable in the DEA context. This occurs
in the DEA-NN model since NNs actually generate a stochastic frontier based
upon the ‘efficient’ DMUs due to the statistical and probabilistic (and thus
varying) properties embedded in NNs. Neural network models have the ability
to approximate complex nonlinear functions in a semi-parametric fashion. We
observe that the bank branch performance is very close in the three-month
period, since there is no significant change in the bank’s policy and the eco-
nomic conditions during the examined period.

Table 13.3 Number of branches corresponding to each efficiency interval

| | (0.98, 1] | (0.8, 0.98] | (0.5, 0.8] | (0, 0.5] |
|---|---|---|---|---|
| No. of branches by CCR in October | 5 | 11 | 115 | 11 |
| No. of branches by DEA-NN1 in October | 1 | 20 | 115 | 6 |
| No. of branches by DEA-NN2 in October | 5 | 11 | 113 | 13 |
| No. of branches by CCR in November | 9 | 26 | 78 | 29 |
| No. of branches by DEA-NN1 in November | 9 | 31 | 77 | 25 |
| No. of branches by DEA-NN2 in November | 2 | 27 | 95 | 18 |
| No. of branches by CCR in December | 9 | 26 | 78 | 29 |
| No. of branches by DEA-NN1 in December | 2 | 47 | 62 | 31 |
| No. of branches by DEA-NN2 in December | 6 | 26 | 89 | 21 |

Table 13.4 Statistical results corresponding to each efficiency interval

| | Max | Min | Mean | Median | Standard deviation |
|---|---|---|---|---|---|
| Statistic results by CCR of October | 1.00 | 0.38 | 0.66 | 0.66 | 0.12 |
| Statistic results by DEA-NN1 of October | 1.01 | 0.47 | 0.67 | 0.66 | 0.12 |
| Statistic results by DEA-NN2 of October | 1.02 | 0.40 | 0.66 | 0.65 | 0.13 |
| Statistic results by CCR of November | 1.00 | 0.21 | 0.66 | 0.64 | 0.18 |
| Statistic results by DEA-NN1 of November | 1.04 | 0.28 | 0.67 | 0.63 | 0.18 |
| Statistic results by DEA-NN2 of November | 0.99 | 0.40 | 0.66 | 0.68 | 0.17 |
| Statistic results by CCR of December | 1.00 | 0.21 | 0.66 | 0.64 | 0.18 |
| Statistic results by DEA-NN1 of December | 1.04 | 0.36 | 0.68 | 0.69 | 0.18 |
| Statistic results by DEA-NN2 of December | 1.17 | 0.26 | 0.68 | 0.69 | 0.17 |

Inefficient branches can improve their performance by mimicking the prac-
tices of their efficient reference set. Furthermore, even a small improvement
can result in large monetary savings. However, it is difficult to improve a
branch from 0.90 to 0.95, while it is relatively easier to implement practices
that will improve the branch efficiency score from 0.6 to 0.7, which can result
in rapid improvement and substantial cost savings.
This concept can be demonstrated using the DEA-NN2 model as an example.
Currently only 2 of the 142 branches are ‘efficient’ in the DEA-NN2 model. The
remaining branches are distributed as follows:

Table 13.5 Efficiency score distribution

| Efficiency score intervals | (0.98, 1] | (0.8, 0.98] | (0.5, 0.8] | (0, 0.5] |
|---|---|---|---|---|
| No. of branches by DEA-NN2 in November | 2 | 27 | 95 | 18 |

If the four least efficient branches could improve their efficiency rating to the
0.5– 0.8 range while all the other branches remained the same, the branches
involved could save 18% of their costs on average (from Table 13.6). The DEA-NN
approach can provide guidance for inefficient branches to improve their per-
formance to any efficiency rating that management thinks necessary.
To verify the rationality of our proposed DEA-NN approach, a regression ana-
lysis between the efficiency result achieved by our current DEA-NN approach
and that by the normal DEA model is conducted. In the regression, a unity
(1) slope, a zero value of intercept and unity R2 coefficient indicate that our
current DEA-NN results provide a good estimation for the DEA result. The
regression results based on three months' data are shown in Table 13.7,
where the slope, intercept and r-squared coefficient are reported. The
predicted efficiency has a strong correlation with that calculated by DEA,
which indicates that the predicted efficiency is a good proxy to classical DEA
results.
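
The slope, intercept and R² check can be computed in a few lines; the two short vectors below are placeholders standing in for the CCR and DEA-NN efficiency scores of the same branches.

```python
import numpy as np
from scipy import stats

dea = np.array([1.00, 0.92, 0.75, 0.66, 0.58, 0.43])      # placeholder CCR scores
dea_nn = np.array([1.01, 0.90, 0.77, 0.65, 0.60, 0.47])   # placeholder DEA-NN scores

fit = stats.linregress(dea, dea_nn)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, R2={fit.rvalue**2:.2f}")
# a slope near 1, an intercept near 0 and an R2 near 1 indicate that the DEA-NN
# scores are a good proxy for the classical DEA results
```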

Table 13.6 Implication of slight efficiency improvement on branch costs

| Transit no. | Current score | Improved score | Current expenses (p.a.) | Difference (p.a.) |
|---|---|---|---|---|
| 21 | 0.27 | 0.6 | $80,295 | $26,554 |
| 115 | 0.39 | 0.6 | $41,518 | $8,652 |
| 1 | 0.45 | 0.6 | $90,721 | $14,053 |
| 34 | 0.48 | 0.6 | $157,170 | $18,483 |
| Total | | | | 18% |

From Table 13.7, the DEA-NN results are highly correlated with those
obtained from the DEA CRS (CCR) model.

Short-term efficiency prediction


For short-term efficiency prediction, another neural network (DEA-NN3)
is applied. We used the October data set for training and DEA-NN3 is then
applied to the November and December datasets to predict the bank branches’
efficiency ratings. Results are shown in Tables 13.8 and 13.9 respectively.
Postprocessing the calculated efficiencies is accomplished by regression ana-
lysis between the DEA-NN3 results and the CCR DEA results. On the whole, the
predicted efficiency has a similar correlation with that calculated by DEA, espe-
cially the DEA-NN3 results for November with an r-squared coefficient of 0.71,
which indicates that the predicted efficiency is to some extent a proxy to clas-
sical DEA results.

Table 13.7 Regression analysis for branch efficiency prediction using October data

| Parameter | Slope | Intercept | R² coefficient |
|---|---|---|---|
| DEA-NN1 for October | 0.88 | 0.09 | 0.92 |
| DEA-NN2 for October | 0.98 | 0.01 | 0.95 |
| DEA-NN1 for November | 0.90 | 0.08 | 0.89 |
| DEA-NN2 for November | 0.85 | 0.10 | 0.91 |
| DEA-NN1 for December | 0.81 | 0.14 | 0.80 |
| DEA-NN2 for December | 0.64 | 0.26 | 0.67 |

Table 13.8 Number of branches in each efficiency interval

| Efficiency score interval | (0.98, 1] | (0.8, 0.98] | (0.5, 0.8] | (0, 0.5] |
|---|---|---|---|---|
| No. of branches by CCR of November | 9 | 26 | 78 | 29 |
| No. of branches by DEA-NN3 of November | 16 | 38 | 84 | 4 |
| No. of branches by CCR of December | 9 | 26 | 78 | 29 |
| No. of branches by DEA-NN3 of December | 9 | 38 | 84 | 11 |

Table 13.9 DEA-NN3 results

| Factor | Max | Min | Mean | Median | Standard deviation |
|---|---|---|---|---|---|
| Statistic results by CCR of November | 1.00 | 0.21 | 0.66 | 0.64 | 0.18 |
| Statistic results by DEA-NN3 of November | 1.33 | 0.27 | 0.77 | 0.76 | 0.17 |
| Statistic results by CCR of December | 1.00 | 0.21 | 0.66 | 0.64 | 0.18 |
| Statistic results by DEA-NN3 of December | 1.31 | 0.26 | 0.74 | 0.73 | 0.16 |

Table 13.10 Regression analysis for short-term efficiency prediction

| Parameter | Slope | Intercept | R² coefficient |
|---|---|---|---|
| DEA-NN3 for November | 0.68 | 0.32 | 0.71 |
| DEA-NN3 for December | 0.53 | 0.39 | 0.56 |

Table 13.11 Comparison of best-practice branches

| Month | DEA efficient | DEA-NN3 efficient |
|---|---|---|
| November | #3, #13, #31, #40, #49, #64, #81, #92 | #3, #49, #64, #40, #81, #92, #104, #131, #93, #110, #16, #29, #135, #91 |
| December | #3, #4, #49, #64, #81, #91, #127 | #3, #49, #64, #81, #93, #91, #131, #110 |

Table 13.11 presents the comparison of best-practice branches by DEA and
DEA-NN3 in November and December respectively. It can be seen that DEA-NN
always has more efficient units on the frontier, since neural networks have the
flexibility to solve complex problems where the main information, or ‘know-
ledge,’ lies implicitly in the data. More good performance patterns in the ‘effi-
cient’ (but not pure DEA-efficient) region are explored so that these inefficient
DMUs by DEA are termed efficient by NNs. Neural networks have the ability
to approximate complex nonlinear functions in a semi-parametric fashion and
provide the main basis for adaptive learning systems. Therefore, by capturing
performance patterns and self-learning, neural networks can always generate
more efficient units on the frontier.

Conclusions

This chapter presents a DEA-NN study applied to the branches of a big Canadian
bank. The results are comparable to the normal DEA results. However, the
DEA-NN approach produces a more robust frontier and identifies more effi-
cient units since more good performance patterns are explored. Furthermore,
the DEA-NN approach identifies those less-than-optimal performers, and
suggests areas in which their performance can be improved to attain better
efficiency ratings. We conclude this section with a comparison of the two
methodological approaches in Table 13.12. In summary, the neural network
approach requires no assumptions about the production function (the major
drawback of the parametric approach) and is highly flexible.

Table 13.12 Comparison of DEA and DEA-NN approaches to efficiency measurement

| | DEA | Neural network |
|---|---|---|
| Similarities | Nonparametric | Nonparametric |
| | No assumptions about the functional form that links inputs to outputs | No assumptions about the functional form that links inputs to outputs |
| | Optimal weights to maximize the efficiency | Optimal weights to derive the best possible fit |
| | Unit and scale invariant | Scale preprocessing |
| Differences | Medium assumptions about functional form and data | Low assumptions about functional form and data |
| | Medium flexibility | High flexibility |
| | Many theoretical studies/applications on efficiency | Few theoretical studies |
| | Low cost of software, estimation time | High cost of software, estimation time |
14
Catastrophe Bond and Risk Modeling

Introduction

On May 12, 2008, the Wenchuan earthquake occurred in Sichuan province of
China, killing at least 69,000. This great disaster caused widespread damage to
the infrastructure and huge economic losses to Chinese society. The loss to the
Chinese insurance sector was in excess of 65 million RMB about 70 days after
the quake. Figure 14.1 displays insurance claim payoffs relative to the days
elapsing after the Wenchuan earthquake.
Catastrophe (cat) events such as the Wenchuan earthquake and the Swine
Flu epidemic of recent years have motivated (re)insurance companies to create
many cat risk instruments in order to hedge high risk exposures from natural
disasters. The first cat instrument, a cat equity put option, was issued to offer
the cat option owner the right to issue convertible preferred shares at a fixed
price. This instrument was issued by RLI Corp. in 1996. Since then, the con-
tingent capital market has grown very rapidly due to the increase of unantici-
pated catastrophic events. Therefore, there is a great demand for insurance and
reinsurance companies to appropriately price contingent instruments such as
cat bonds.

Catastrophe risk instruments

Catastrophe bonds, or cat bonds, are the most common type of cat risk-linked
securities. Cat bonds have complicated structures, and refer to a financial
instrument devised to transfer insurance risk from insurance and reinsurance
companies to the capital market. The payoff from cat bonds is dependent on the
qualifying trigger event(s): natural disasters such as earthquakes, floods
and hurricanes, or manmade events such as fire, explosions and terrorism. We
will review the modeling approaches of cat bonds as follows.


Figure 14.1 Insurance payoffs (million RMB) for the Wenchuan earthquake, plotted against days elapsed and showing total, property insurance and life insurance payoffs (from www.circ.gov.cn)

Traditional derivative pricing approaches use Gaussian assumptions, but
these are not appropriate when applied to instruments such as cat bonds
because of the properties of the underlying contingent stochastic processes.
There is evidence that catastrophic natural events have (partial) power-law
distributions associated with their loss statistics.1 This is not compatible with
the traditional log-normal assumption of derivative pricing models. There
are also well-known statistical difficulties associated with the moments of
power-law distributions, thus rendering it impossible to employ traditional
pooling methods and consequently the central limit theorem. Several studies
have examined pricing models with respect to catastrophe derivatives such
as cat bonds. Geman and Yor2 analyze catastrophe options with payoff
(L(T)−K) + where L(T) is the aggregate claim process modeled by a jump-
diffusion process.3 Dassios and Jang4 used a doubly-stochastic Poisson process
for claim processes to price catastrophe reinsurance contract and derivatives.
Jaimungal and Wang5 studied the pricing and hedging of catastrophe put options
(CatEPut) under stochastic interest rates with a compound Poisson process. In
contrast, Cox and Pedersen6 priced a cat bond under a term structure model,
together with an estimation of the probability of catastrophic events. Lee and
Yu7 adopted a structural approach to value the reinsurance contract, using the
idea of credit risk modeling in corporate finance.8 This allows the reinsurer to
transfer the risk to the capital market via cat bonds and, in effect, to reduce the
reinsurer's default risk. Since the payments from cat bonds cannot be replicated
by the ordinary types of securities available in financial markets, the pricing
has to be done using an incomplete market model.
The most important feature of cat bonds is their conditional payment. Trigger
conditions are generally divided into three categories: indemnity-based trigger
conditions, index-based trigger conditions and parametric trigger conditions.
An indemnity trigger involves the actual losses of the bond-issuing insurer.
It was very popular when the catastrophe bond market emerged. An industry
index trigger involves an index created from property claim service (PCS) loss
estimates. A parametric trigger is based on quantitative parameters of the catas-
trophe event, for example earthquake magnitude, central pressure, wind pres-
sure, wind speed, hurricane rainfall, and so on. This chapter focuses on the
choice of catastrophe loss model.

Loss model

The catastrophe loss model is very important to catastrophe derivatives pricing.
Table 14.1 presents various loss models used in the literature. There are three
sets of popular models that are widely discussed in recent catastrophe bond
pricing literature: the compound Poisson model, the jump-diffusion model
and the double exponential jump-diffusion model.9

Demonstration of computation
Earthquake loss data obtained from the China Statistics Yearbook is presented
in Table 14.2, from which we can see that the biggest earthquake was in
2008 in Sichuan province. In this earthquake catastrophe, although the loss
claims were mainly from the government and public endowment, insurers did
claim high losses. For example, according to some government surveys (www.
circ.gov.cn), the earthquake-risk exclusion clauses of some life insurance
policies were waived by most insurers. For modeling catastrophe
loss, log-normal performs better than normal distribution. This is consistent
with the majority of published research.13
Next, we analyze the statistical characteristics of the logarithm of catas-
trophe loss: mean reversion and fat tail. We use the presence of an autoregres-
sive (AR) feature to test mean reversion in the data. In linear processes with

Table 14.1 Catastrophe loss models

| Model | Literature |
|---|---|
| Compound Poisson model | Vaugirard (2003);10 Jaimungal and Wang (2005) |
| Jump-diffusion model | Cox (2000); Geman and Yor (1997) |
| Double exponential jump-diffusion model | Zhu (2008);11 Chang and Hung (2009)12 |

Table 14.2 Chinese earthquake loss data, 1966–2008

Year Frequency Magnitude Loss (10,000 RMB)

1966 1 7.2 210300


67,68 0 NA 0
1969 2 6.4 21460
1970 1 7.7 64380
71,72,73 0 NA 0
1974 1 7.1 19287
1975 1 7.3 173259
1976 2 7.4,7.8 2861420
77,78 0 NA 0
1979 1 6.0 50346
80,81,82 0 NA 0
1983 1 5.9 55146
1984 0 NA 0
1985 2 5.0,7.4 16555
1986 1 5.4 24330
1987 0 NA 0
1988 1 7.6 330825
1989 4 6.7,6.6,6.1,5.4 125005
1990 3 5.1,6.9,6.2 55827
1991 0 NA 0
1992 1 5.0 9404
1993, 94 0 NA 0
1995 1 6.5 45692
1996 3 7.0,6.9,6.4 242480
1997 2 6.6,6.6 59198
1998 5 6.2,6.6,5.3,6.2,5.1 88924
1999 1 5.6 7524
2000 2 6.5,5.5 59747
2001 1 5.2 25583
2002 0 NA 0
2003 6 6.8,6.2,5.9,6.1,5.5,6.1 231980
2004 3 5.9,5.6,5.0 36197
2005 3 6.2,5.3,5.7 105681
06,07 0 NA 0
2008 1 8.0 26432976

normal shocks, this amounts to checking stationarity. If stationarity holds, this can be
a symptom of mean reversion, and a mean-reverting process can be adopted as
a first assumption for the data.
Mean reversion can be defined as the property of always reverting to a
certain constant as time passes. This property is true for an AR(1) process if
the absolute value of the autoregression coefficient is less than one, that is,
|α| < 1. Since for the AR(1) process |α| < 1 is also a necessary and sufficient con-
dition for stationarity, testing for mean reversion is equivalent to testing for

stationarity. Note that in the special case where |α| = 1, the process behaves like
a pure random walk with constant drift. There is a growing number of station-
arity statistical tests available and ready for use in many econometric pack-
ages. The most popular are: the Dickey and Fuller (DF) test; the Augmented
DF (ADF) test; the Phillips–Perron (PP) test; the Variance Ratio (VR) test;
and the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test. We choose the ADF
test due to its robust features. Computation is shown in Table 14.3, indicating
a t-statistic value of −4.123 and probability value of 0.0021. This means the
earthquake loss data pass the ADF test, indicating mean reversion.
To test for fat tails we first use a graphical tool called QQ-plot, which
compares the tails in the data with those from the Gaussian distribution. The
QQ-plot gives immediate graphical evidence of the possible presence of fat
tails. Further evidence can be obtained by considering the skewness and excess
kurtosis values in order to see how the data differ from the Gaussian distri-
bution. Computations indicate skewness and excess kurtosis values of 4401
and 23 respectively, suggesting the earthquake loss data do have fat tails.
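
Both diagnostics can be reproduced in outline as follows, taking the logarithm of the positive annual losses in Table 14.2 as the series; the chapter's own series construction (for example, its treatment of zero-loss years) may differ, so these statistics need not match the reported values exactly.

```python
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller
import matplotlib.pyplot as plt

# logarithm of the positive annual earthquake losses in Table 14.2 (10,000 RMB)
log_loss = np.log([210300, 21460, 64380, 19287, 173259, 2861420, 50346, 55146,
                   16555, 24330, 330825, 125005, 55827, 9404, 45692, 242480,
                   59198, 88924, 7524, 59747, 25583, 231980, 36197, 105681,
                   26432976])

# mean reversion / stationarity: Augmented Dickey-Fuller test
adf_stat, pvalue, *rest = adfuller(log_loss, autolag="AIC")
print("ADF statistic:", round(adf_stat, 3), "p-value:", round(pvalue, 4))

# fat tails: skewness, excess kurtosis and a QQ-plot against the Gaussian
print("skewness:", round(stats.skew(log_loss), 2),
      "excess kurtosis:", round(stats.kurtosis(log_loss), 2))
stats.probplot(log_loss, dist="norm", plot=plt)
plt.show()
```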

Parameter estimation
Our parameter estimation is based on Markov chain Monte Carlo (MCMC)
approaches, which are a class of algorithms for sampling from probability
distributions based on constructing a Markov chain that has the desired dis-
tribution as its equilibrium distribution. MCMC methods are particularly well
suited for financial pricing applications with stochastic process problems, for
several reasons. First, state variables solve stochastic differential equations,
which are built from Brownian motions, Poisson processes, or other i.i.d.
shocks. Therefore, standard tools of Bayesian inference can be directly used
here. Second, MCMC is a unified estimation procedure which simultaneously
estimates parameters and latent variables. MCMC directly computes the distri-
butions of the latent variables and parameters given the observed data. This
is a strong alternative to the usual approach of applying approximate filters
or latent variable proxies. Finally, MCMC is based on conditional simulation
without any optimization. MCMC provides a strategy for generating samples
x0:t, while exploring the state space using a Markov chain mechanism. This
mechanism is constructed so that the chain spends more time in the most

Table 14.3 Results of ADF testing

| | t-Statistic | Prob. |
|---|---|---|
| Augmented Dickey–Fuller test statistic | −4.123695 | 0.0021 |
| Test critical values: 1% level | −3.571310 | |
| 5% level | −2.922449 | |
| 10% level | −2.599224 | |

important regions. The MCMC approach in this chapter uses random walk
MH (Metropolis–Hastings) algorithm which assumes uniform distribution in
[−0.5, 0.5]. Parameter vector θ is updated by following θ′ = θ + τε, where ε is an
error term, and τ is a harmonic parameter. Variance of error term is rectified
by changing τ. For details of approximating parameters, readers can refer to
Andrieu and Freitas.14 Codes are written and implemented in Matlab language.
Table 14.4 presents the computational result of parameter estimation using all
three sets of models.
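
As a minimal sketch of the random-walk Metropolis–Hastings update θ′ = θ + τε with uniform shocks on [−0.5, 0.5], the code below estimates the mean and standard deviation of a simple log-normal severity model from the Table 14.2 losses. The target model, the flat priors and the tuning constant τ are simplifying assumptions; the chapter's full jump-diffusion estimation involves more parameters and latent variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# log of the positive annual earthquake losses from Table 14.2 (10,000 RMB)
y = np.log([210300, 21460, 64380, 19287, 173259, 2861420, 50346, 55146, 16555,
            24330, 330825, 125005, 55827, 9404, 45692, 242480, 59198, 88924,
            7524, 59747, 25583, 231980, 36197, 105681, 26432976])

def log_post(theta):
    """Log-posterior of a normal model for the log-losses, flat priors on (mu, log sigma)."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)             # working on the log scale keeps sigma positive
    return -len(y) * np.log(sigma) - np.sum((y - mu) ** 2) / (2.0 * sigma ** 2)

def random_walk_mh(n_iter=20000, tau=0.5):
    theta = np.array([y.mean(), np.log(y.std())])
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        eps = rng.uniform(-0.5, 0.5, size=2)       # uniform shock on [-0.5, 0.5]
        prop = theta + tau * eps                   # theta' = theta + tau * eps
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                           # accept the proposal
        chain[t] = theta
    return chain

chain = random_walk_mh()[5000:]                    # discard burn-in draws
print("posterior mean mu:", round(chain[:, 0].mean(), 3))
print("posterior mean sigma:", round(np.exp(chain[:, 1]).mean(), 3))
```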
Figures 14.2 and 14.3 depict the fitted curve of a compound Poisson model
and different jump-diffusion processes using historical and simulated data

Table 14.4 Results of parameter estimation

| Model | Distribution | Parameter estimates |
|---|---|---|
| Compound Poisson model | Normal | λ = 1.1628, μ = 632469, σ = 374451 |
| | Log-normal | λ = 1.1628, μ = 10.4256, σ = 1.5701 |
| | Gamma | λ = 1.1628, a = 0.2433, b = 259921 |
| | Loglogistic | λ = 1.1628, μ = 0.185159, σ = 0.76712 |
| Jump-diffusion model | Normal | μ = 0.0586, σ = 0.4568, λ = 0.9426, μY = 7.9434, σY = 3.6244 |
| | Log-normal | μ = 0.0566, σ = 0.4011, λ = 0.9034, μY = 2.0797, σY = 1.1976 |
| | Gamma | μ = 0.0566, σ = 0.4011, λ = 0.9312, a = 1.1381, b = 10.4235 |
| | Loglogistic | μ = 0.0566, σ = 0.4011, λ = 0.9034, μY = 0.9524, σY = 2.0759 |
| Double exponential jump-diffusion process | | μ = 0.0968, σ = 2.1346, λ = 1.0136, η1 = 10.4950, η2 = 6.1976, p = 0.9735, q = 0.0265 |

Figure 14.2 Frequency histogram of logarithm of earthquake loss with normal fit

Figure 14.3 Historical vs. simulated data distribution of compound Poisson model (loss density under normal, log-normal, gamma and loglogistic severity distributions)

distributions respectively. Figure 14.3 suggests that for compound Poisson
models, the loglogistic distribution is the best choice. Figure 14.4 implies
that for jump-diffusion models, log-normal distribution is the best choice;
however, the double exponential jump-diffusion model fits Chinese earth-
quake loss data best. These conclusions will be further verified in the next
section.
To validate our conclusion, we test the goodness of fit by the Kolmogorov–
Smirnov (K–S) test, which is based on the absolute value of the maximum
difference Dmax between the cumulative distributions of two data samples.15
One of the advantages of the K–S test is that it leads to a graphical
representation of the data, which enables the user to detect departures from
the normal distribution. For larger datasets, with a sample size of, say, greater than
40, the Central Limit Theorem suggests that the t-test will produce valid
results even in the presence of non-normally distributed data; however,
highly non-normal datasets can cause the t-test to produce fallible results
even for large N (failures have been observed at sample sizes as large as 80).
Table 14.5 gives the computation result of the K–S
test, where the absolute value of the maximum difference Dmax and the
p value are given. The result of the K–S test supports Figures 14.3 and 14.4.
The double exponential jump-diffusion model is the best fit for Chinese
earthquake data.
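
A sketch of the two-sample K–S comparison is shown below; the 'simulated' sample is drawn from a log-normal severity fitted by moments, standing in for the losses generated by one of the three candidate models, and the observed series is again taken from Table 14.2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

observed = np.array([210300, 21460, 64380, 19287, 173259, 2861420, 50346, 55146,
                     16555, 24330, 330825, 125005, 55827, 9404, 45692, 242480,
                     59198, 88924, 7524, 59747, 25583, 231980, 36197, 105681,
                     26432976], dtype=float)

# simulated losses from a candidate severity model (here a moment-fitted log-normal)
mu, sigma = np.log(observed).mean(), np.log(observed).std()
simulated = rng.lognormal(mu, sigma, size=10000)

d_max, p_value = stats.ks_2samp(observed, simulated)
print("D_max:", round(d_max, 4), "p-value:", round(p_value, 4))
# a small D_max with a large p-value indicates a good fit of the simulated
# distribution to the observed loss data
```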

Figure 14.4 Historical vs. simulated data of stochastic processes (loss density under the jump-diffusion model with normal, log-normal, gamma and loglogistic jumps, and under the double exponential jump-diffusion model)

Table 14.5 Results of K–S test

| Model | Distribution | Dmax of K–S test | P value of K–S test |
|---|---|---|---|
| Compound Poisson model | Normal | 0.3323 | 0.2832 |
| | Log-normal | 0.1817 | 0.7041 |
| | Gamma | 0.2541 | 0.5651 |
| | Loglogistic | 0.1778 | 0.7415 |
| Jump-diffusion model | Normal | 0.1531 | 0.6718 |
| | Log-normal | 0.0946 | 0.8092 |
| | Gamma | 0.1845 | 0.7146 |
| | Loglogistic | 0.1124 | 0.7485 |
| Double exponential jump-diffusion model | | 0.0951 | 0.8356 |

Error analysis

This section provides an error analysis, based on simulation, in order to validate
the conclusions of the previous sections. The goodness of fit can also be assessed
from this error analysis. We first simulate 10,000 paths of earthquake loss
data using the Monte Carlo simulation. Then we calculate averages of 10,000

Table 14.6 Error analysis

| Model | Distribution | msd_Error | avg_Error | max_Error |
|---|---|---|---|---|
| Compound Poisson | Normal | 5.39623 | 2.4758 | 3.64896 |
| | Log-normal | 0.64278 | 0.8751 | 0.78545 |
| | Gamma | 1.82371 | 1.3045 | 1.54816 |
| | Loglogistic | 0.31423 | 0.6026 | 1.59713 |
| Jump-diffusion | Normal | 2.45201 | 1.52306 | 1.62546 |
| | Log-normal | 0.52460 | 0.19821 | 1.57233 |
| | Gamma | 2.24742 | 1.0564 | 1.81238 |
| | Loglogistic | 0.28252 | 0.8991 | 1.61375 |
| Double exponential jump-diffusion model | | 0.13571 | 0.1024 | 1.2963 |

paths to estimate loss data. Finally, we compute their mean square deviation,
mean absolute error and maximal absolute error. The computational results are
shown in Table 14.6.
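
The simulate-and-compare step can be sketched as follows for the compound Poisson model with log-normal severities, using the frequency and severity parameters reported in Table 14.4. The construction of the annual paths and the log-scale error definitions are simplifying assumptions, so the figures produced need not reproduce those in Table 14.6.

```python
import numpy as np

rng = np.random.default_rng(3)

# observed annual losses 1966-2008 (10,000 RMB), zero-loss years included (Table 14.2)
observed = np.array([210300, 0, 0, 21460, 64380, 0, 0, 0, 19287, 173259, 2861420,
                     0, 0, 50346, 0, 0, 0, 55146, 0, 16555, 24330, 0, 330825,
                     125005, 55827, 0, 9404, 0, 0, 45692, 242480, 59198, 88924,
                     7524, 59747, 25583, 0, 231980, 36197, 105681, 0, 0, 26432976],
                    dtype=float)

lam, mu, sigma = 1.1628, 10.4256, 1.5701         # Table 14.4, log-normal severities
n_paths, n_years = 10000, len(observed)

annual = np.zeros((n_paths, n_years))
counts = rng.poisson(lam, size=(n_paths, n_years))
for p in range(n_paths):                         # simulate 10,000 loss paths
    for t in range(n_years):
        if counts[p, t] > 0:
            annual[p, t] = rng.lognormal(mu, sigma, counts[p, t]).sum()

estimate = annual.mean(axis=0)                   # average of the simulated paths
err = np.log1p(observed) - np.log1p(estimate)    # compare on a log scale
print(f"msd={np.mean(err**2):.3f}, avg={np.mean(np.abs(err)):.3f}, "
      f"max={np.max(np.abs(err)):.3f}")
```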
As can be seen from Table 14.6, the mean square deviation value and
absolute average error value of the double exponential jump-diffusion model
are 0.13571 and 0.1024 respectively, which are less than the corresponding
values of other models. Second, the maximal absolute error value in the double
exponential jump-diffusion model is 1.2963, lower than that of the loglogistic
jump-diffusion model, which is 1.61375. These values suggest that the double
exponential jump-diffusion model is the best fit for Chinese earthquake
loss data.

Conclusions

In this chapter, we have reviewed the state-of-the-art approaches in modeling
catastrophe losses for cat bond modeling and pricing. We tested Chinese earth-
quake loss data using three models: the compound Poisson model, the jump-
diffusion model and the double exponential jump-diffusion process, where
normal, log-normal, gamma and loglogistic distributions were employed
for comparison. Markov chain Monte Carlo (MCMC) was used for parameter
estimation, and Monte Carlo simulation was employed to generate simulated
data for error analysis. Results indicate that for compound Poisson models the
loglogistic distribution is the best choice, while for jump-diffusion models the
log-normal distribution is the best. Results also suggest that the double expo-
nential jump-diffusion model is the best fit for Chinese earthquake loss data.
15
Bilevel Programming Merger Analysis
in Banking

Introduction

In modern organizations, component elements (members) are mutually
dependent on a common set of finite resources.1 These organizational resources
include funds, personnel, time, effort, and information.2 As a result, organiza-
tions have been described as large pools of scarce shared resources, for which
component elements (subgroups) compete.3 MEI, a global provider of trade pro-
motion management solutions, surveyed 52 consumer packaged goods (CPG)
manufacturers in May 2011, finding that ‘Trade promotions budgets do not
grow and IT budgets are still clamped down, yet these organizations somehow
need to improve promotion effectiveness. They are no longer concerned with
streamlining the deduction reconciliation process, but they do want better
visibility into where their scarce dollars are being spent.’4 Due partly to pres-
sure from competition and shareholders, many corporations, including banks
and other financial institutions, seek ways to rearrange their organizational
structure and to widen their geographical reach and product variety. These
changes often aim to improve efficiency through potentially higher economies
of scale and wider scope. This chapter aims to examine how such merger per-
formance is gauged in the presence of scarce shared resources, using a banking
organization as an example. In this evaluation, we concentrate on the within-
firm competition for the common resources.
Mergers involve a series of decision processes at different stages of M&A
activity. Jemison and Sitkin (1986)5 identified four impediments to effective
decision-making during M&A: activity segmentation, escalating momentum,
ambiguity, and misapplication of acquiring company systems in the acquired
company. They emphasized the complexity and ambiguity present in the M&A
process, and pointed out that activity segmentation helps executives manage
that complexity. In this context, in order to gain a competitive advantage, bank
merger decision-makers should identify and benchmark their true managerial


efficiency with respect to network-merging and restructuring processes such as
the merging of bank branches.6 For example, when United Overseas Bank had
to rationalize its operations from retail to wholesale banking in late April 2005,
66 out of its 67 branches were merged with Banco de Oro. This chapter offers
an efficiency evaluation of financial operations viewed as a series of supply
chain operations.
We take a view of mergers differing from traditional inter-organizational
concerns by considering the context of a single hierarchical firm. In our study,
a merger refers to a combination of operations of different parallel business
units, each with two levels of decision-making, with occasionally conflicting
objectives. The leader at the upper level of operations and the follower at the
lower level seek to optimize their individual objectives, and make their own
set of decisions. The hierarchical process means that the leader sets the value of
their decisions first and then the follower reacts, bearing in mind the selection
of the leader. The goal of the leader is also to optimize their specific objective
while incorporating the reaction of the follower to their course of action. For
example, we consider an investment bank which operates in both a primary
and secondary capital market. The bank deals with transactions in long-term
instruments with maturity longer than one year, such as corporate debentures,
government bonds and preference shares. Banks receive payments based on
these long-term instruments from the primary capital market and sell them in
the secondary capital market, which provides liquidity and marketability. This
investment banking operation can then be viewed as a supply chain where the
primary and secondary markets are upstream and downstream chain members
respectively. Within the same bank, the front office is responsible for the sec-
ondary capital market business. The middle office or marketing division is
responsible for collecting loans from the primary market. The two divisions
must compete for scarce resources, for example, a budget for marketing activity
and IT maintenance. From this viewpoint, M&A in the banking industry
requires a framework that takes into account both performance in the primary
and secondary markets and the competition for common resources among the
different subsystems.

A conceptual banking chain with constrained resources

A typical serial (supply) chain includes a stream of processes (operational
activities) of goods and services that starts with the customer order, goes
from raw materials through the supply and production stages, and ends with
the distribution of products to the customer.7 We next discuss and analyze
banking functions with constrained resources from this serial chain per-
spective. Banking activities can be roughly divided into the two markets
mentioned earlier: the primary market and the secondary market. In Wu and

Birge (2012),8 the primary-market operations are to initiate mortgage loans
which are then delivered to residential and commercial borrowers, all with
associated costs and consumed resources. The secondary-market business
operations include selling the mortgage loans obtained from the primary
market to investors as whole loans, or pooled as mortgage-backed secur-
ities. In contrast to Wu and Birge, the conceptual banking chain model here
includes other operations, such as IT-intensive or industry-specific products
where sub-chains compete for necessary resources.9 Examples of players in
both the primary and secondary markets might include Canadian Imperial
Bank of Commerce (CIBC) and Air Canada as examples of industry-specific
players involved in the selling (CIBC as an initiator) and buying of asset-
backed commercial paper.
Figure 15.1 displays a conceptual model of the market-level banking
process with constrained resources from a supply chain perspective. For any
investment bank branch that manages business in both markets, the physical
function of the chain is entirely focused on the conversion of a primary
market sale (the act of originating a residential or commercial loan) into a
secondary market sale (the act of selling and delivering that loan to a capital
markets investor).
To evaluate the potential gains of a merger involving banking institutions
and their divisions, we analyze the banking chain merging problem as a
serial (supply) chain including a leader–follower relationship. In the following
sections, we build this model and illustrate its applicability. We note that the
leader in the model can be at either the upstream or the downstream level, and

Figure 15.1 Supply chain model of the banking process with constrained resources (finite resources such as the IT budget and personnel are shared by the primary-market business, covering borrower selection and underwriting, and the secondary-market business, covering securitization, loan recovery and derivative trading; the two are linked by flows of loans, information and products)

the follower can be correspondingly positioned at the downstream or upstream
level. This yields two possible game structures:

• The bank puts more emphasis on the primary-market business than the
secondary-market business. The primary-market business unit seizes
constrained resources to reach its goal, and the secondary-market business
unit attempts to maximize its profits, defined and solved as the follower. We
denote this as an upper-leader (UL) game.
• The bank places greater emphasis on the secondary-market business than
on the primary-market business. The secondary-market business unit serves
as a leader, and the primary-market business unit serves as a follower in the
Stackelberg game. We denote this as a lower-leader (LL) game. The problem for
the leader at the lower level is to find an optimal resource allocation and
weighting scheme that maximize the total profit subject to the follower’s
optimal strategy. The follower at the upper level attempts to find an optimal
weighting scheme to maximize its total profit given the resource allocating
scheme determined by the leader.

Mathematical model

A bilevel programming problem is a hierarchical optimization problem
consisting of two levels, in which the constraints of one optimization problem are
also determined by the other optimization problem. The upper level, which is
also termed the leader’s level, is dominant over the lower level which is also
seen as the follower’s level. The leader makes the choice first, to optimize his
objective function. Observing the leader’s decisions, the follower makes their
own decisions which in turn affect the leader’s strategy. A bilevel linear pro-
gramming problem (BLP) given by Bard (1998)10 is formulated as follows:

$$
\begin{aligned}
\min_{x} \quad & F(x, y) = p_1^{T}x + q_1^{T}y \\
\text{s.t.} \quad & A_1 x + B_1 y \le b_1 \\
& \min_{y} \quad f(x, y) = p_2^{T}x + q_2^{T}y \qquad \text{(I)}\\
& \;\text{s.t.} \quad A_2 x + B_2 y \le b_2
\end{aligned}
$$

where $x \in \mathbb{R}^{n}$ and $y \in \mathbb{R}^{m}$ are the decision variables of the upper and lower level respectively, $p_1, p_2 \in \mathbb{R}^{n}$, $q_1, q_2 \in \mathbb{R}^{m}$, $b_1 \in \mathbb{R}^{c}$, $b_2 \in \mathbb{R}^{d}$, $A_1 \in \mathbb{R}^{c \times n}$, $B_1 \in \mathbb{R}^{c \times m}$, $A_2 \in \mathbb{R}^{d \times n}$, $B_2 \in \mathbb{R}^{d \times m}$, and $T$ denotes the transpose. Existing work such as Hansen et al. (1992)11 and Bard (1998)12 provides methods to transform the linear bilevel programming problem into a single-level programming problem, which is standard mathematical programming and relatively easy to solve.
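
To make the leader-follower logic of problem (I) concrete without reproducing those reformulations, the toy sketch below simply enumerates a scalar leader decision on a grid and solves the follower's LP as a best response for each candidate. All coefficients are invented, and the grid search is used only for illustration, not as a general solution method.

```python
import numpy as np
from scipy.optimize import linprog

# toy instance of problem (I) with scalar decisions x (leader) and y (follower):
#   leader:   min F = -x - 2y   s.t.  x + y <= 8,  0 <= x <= 6
#   follower: min f = -y        s.t.  y <= 4,  y <= 10 - 2x,  y >= 0
def follower_best_response(x):
    res = linprog(c=[-1.0], A_ub=[[1.0], [1.0]], b_ub=[4.0, 10.0 - 2.0 * x],
                  bounds=[(0, None)], method="highs")
    return res.x[0] if res.success else None

best = None
for x in np.linspace(0.0, 6.0, 601):             # leader enumerates its decision
    y = follower_best_response(x)                # follower reacts optimally
    if y is None or x + y > 8.0 + 1e-9:          # leader's own constraint
        continue
    F = -x - 2.0 * y
    if best is None or F < best[0]:
        best = (F, x, y)

print("leader objective F* = %.2f at x = %.2f, follower responds y = %.2f" % best)
```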

Data envelopment analysis (DEA) is a linear programming methodology to
measure the efficiency of multiple organizations and indicate the differences
between the inefficient ones and the best-practice ones. DEA is a widely used
between the inefficient ones and the best-practice ones. DEA is a widely used
technique to evaluate the performance of various organizations in public and
private sectors. In DEA, the organization is also called a decision-making unit
(DMU). Generically, a DMU is regarded as the entity responsible for converting
inputs into outputs. For example, banks, supermarkets, car makers, bank
branches etc. can all be deemed as DMUs. Consider n DMUs that use a vector
of p inputs: xi = (xi1, ... ,xip) to produce a vector of q outputs yi = ( yi1, ... ,yiq). The
profit efficiency for DMU j can be evaluated based on a linear programming
model proposed by Cooper et al. (2000)13

$$
\begin{aligned}
\max \quad & \sum_{r=1}^{q} d_r \tilde{y}_{jr} - \sum_{s=1}^{p} c_s \tilde{x}_{js} \\
\text{s.t.} \quad & \sum_{i=1}^{n} \lambda_i x_{is} \le \tilde{x}_{js} \quad (s = 1, \ldots, p), \qquad \text{(II)}\\
& \sum_{i=1}^{n} \lambda_i y_{ir} \ge \tilde{y}_{jr} \quad (r = 1, \ldots, q), \\
& \lambda_i \ge 0 \quad (i = 1, \ldots, n),
\end{aligned}
$$

where $(\tilde{x}_{j1}, \ldots, \tilde{x}_{jp}, \tilde{y}_{j1}, \ldots, \tilde{y}_{jq})$ are decision variables, $c = (c_1, \ldots, c_p)$ and $d = (d_1, \ldots, d_q)$ are the unit price vectors attached to the input vector $\tilde{x}_j = (\tilde{x}_{j1}, \ldots, \tilde{x}_{jp})$ and the output vector $\tilde{y}_j = (\tilde{y}_{j1}, \ldots, \tilde{y}_{jq})$ respectively, and $\lambda = (\lambda_1, \ldots, \lambda_n)$ is a nonnegative multiplier vector used to aggregate existing production activities. Based on an optimal solution $(x^{*}_{j1}, \ldots, x^{*}_{jp}, y^{*}_{j1}, \ldots, y^{*}_{jq})$ of the above model, the profit efficiency of DMU $j$ ($PE_j$) is computed as follows:

$$
PE_j = \frac{\sum_{r=1}^{q} d_r y_{jr} - \sum_{s=1}^{p} c_s x_{js}}{\sum_{r=1}^{q} d_r y^{*}_{jr} - \sum_{s=1}^{p} c_s x^{*}_{js}} \qquad \text{(III)}
$$

where $y_j = (y_{j1}, \ldots, y_{jq})$ and $x_j = (x_{j1}, \ldots, x_{jp})$ are the vectors of observed values for DMU $j$. Under the positive profit assumption, i.e., $\sum_{r=1}^{q} d_r y_{jr} - \sum_{s=1}^{p} c_s x_{js} > 0$, we have $0 < PE_j \le 1$, and DMU $j$ is profit-efficient if and only if $PE_j = 1$.
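
A minimal sketch of the profit-efficiency computation in (II) and (III) for a single DMU is given below. To keep the toy LP bounded, the bounds $\tilde{x} \le x_j$ and $\tilde{y} \ge y_j$ from Cooper et al.'s profit model are imposed as an additional assumption, and the unit prices and data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def profit_efficiency(X, Y, c, d, j):
    """Profit efficiency of DMU j via model (II) and ratio (III).

    X: (p, n) inputs, Y: (q, n) outputs, c/d: unit input/output prices.
    The bounds x_tilde <= x_j and y_tilde >= y_j (an assumption borrowed from
    Cooper et al.'s profit model) keep the LP bounded.
    """
    p, n = X.shape
    q = Y.shape[0]
    # decision vector: [x_tilde (p), y_tilde (q), lambda (n)]
    obj = np.r_[np.asarray(c), -np.asarray(d), np.zeros(n)]     # min c.x - d.y
    A_in = np.hstack([-np.eye(p), np.zeros((p, q)), X])         # X lambda <= x_tilde
    A_out = np.hstack([np.zeros((q, p)), np.eye(q), -Y])        # y_tilde <= Y lambda
    bounds = ([(0, X[s, j]) for s in range(p)] +
              [(Y[r, j], None) for r in range(q)] +
              [(0, None)] * n)
    res = linprog(obj, A_ub=np.vstack([A_in, A_out]), b_ub=np.zeros(p + q),
                  bounds=bounds, method="highs")
    best_profit = -res.fun
    observed_profit = float(np.dot(d, Y[:, j]) - np.dot(c, X[:, j]))
    return observed_profit / best_profit                        # ratio (III)

# toy data: 2 inputs, 2 outputs, 4 DMUs, all unit prices set to 1 (illustrative)
X = np.array([[2.0, 4.0, 5.0, 3.0],
              [3.0, 2.0, 6.0, 4.0]])
Y = np.array([[10.0, 12.0, 11.0, 9.0],
              [6.0, 5.0, 8.0, 4.0]])
c = d = np.ones(2)
print([round(profit_efficiency(X, Y, c, d, j), 3) for j in range(4)])
```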


The bilevel programming and DEA model are combined to create an inte-
grated bilevel programming-DEA model to evaluate the performance of a hier-
archical system and its sub-levels under two game situations. The NP-hard
bilevel-programming-DEA model is reformulated into a standard linear bilevel

programming form, which can then be easily transformed to a more tractable
single-level programming problem.

Merger evaluation

Evaluation of a merger with a bilevel structure can be accomplished in two
stages. First, a firm is evaluated using the average input bundle of existing pro-
duction. Second, the average firm production is doubled in scale to reach the
merged firm production. Then the performance of the two (the merged and the
virtual firm) is compared using the average input bundle.14 The first stage is called
the harmony effect and the second is called the scale effect. The harmony
effect is useful because, if firms shared the combined input equally and used
the identical average bundle, each would produce this higher level of output,15
assuming the relationships within the DEA framework.
To analyze the potential gains from a merger of n bilevel systems, the
following five steps are proposed to compute the harmony efficiency, scale effi-
ciency, and merger efficiency.16 We use the CRS assumption as a demonstration
example. Similarly, the VRS assumption can also be employed to analyze the
post-merger returns to scale.

Step 1: Solve the bilevel programming DEA problem for each DMU, using inte-
grated bilevel programming-DEA model to obtain optimal solutions. Using
nonnegative multipliers, solutions of integrated bilevel programming-DEA
give the best-practice frontier through a convex combination of existing pro-
duction activity.
Step 2: For each variable, compute the average slack-adjusted input-output
bundle, and compute the profits of the leader and the follower at the average
input-output bundle.
Step 3: Solve the bilevel programming DEA problem with the average input-
output bundle to generate efficiency values and record the corresponding
optimal input-output bundle for the leader, follower and system.
Step 4: Compute the total (slack-adjusted) input and output bundles of the n
systems, and compute the profits of the leader, follower and system using the
total input-output bundle.
Step 5: Solve the bilevel programming DEA problem with the total input-output
bundle. With the solution, we can compute merger efficiency, harmony and
scale efficiency of the whole bilevel system as well as for the subsystems.

If the computed merger efficiency is greater than 1, then the n merger members
will benefit from the potential profit generated by the merger; otherwise, it
would be more efficient to keep these units separate.

Incentive incompatibility problems occur in merger evaluation from two
primary sources. First, there is a possibility of double marginalization, which
occurs when both upstream and downstream firms (or firm divisions) have
monopoly power and each firm reduces output from the competitive level
to the monopoly level, creating two deadweight losses.17 Second, the leader
can use their stronger market power to seize more resources than the follower
receives. If this is achieved by sacrificing the follower’s benefit, the follower
will not participate in the merger. Therefore, there is a need to revise the
strategy to yield an incentive-compatible strategy. To motivate an inefficient
follower to participate in the merger activity, the leader promises to share at
least α% of their profit with the follower. Therefore, the total profit that the fol-
lower would receive is at least their actual profit plus α% of the leader’s profit.
The total profit of the system remains unchanged under this profit-sharing
strategy. This profit-sharing strategy adds an incentive-compatible constraint
to the bilevel programming DEA model.

A numerical example for incentive incompatibility


To illustrate the incentive compatibility issue, we consider a hypothetical
example with eight bilevel systems to be merged. Each system consists of the
leader and the follower. For the leader, we employ three inputs (two direct
inputs X D1 and one shared input X1) and three outputs (two direct outputs Z1
and one intermediate output Y ). For the follower, we utilize four inputs (two
direct inputs X D2, one shared input X2 and one intermediate input Y ) and two
direct outputs Z2. The raw data is shown in Table 15.1.
A premerger analysis suggests that the leader’s gain can be improved by
around 25% of the observed profits, while the follower can gain nothing
since their relative efficiency scores are 1. This implies that a potential merger
activity might not be favored by the followers. We therefore employ the profit-
sharing strategy to treat this problem. Under the first situation with the CRS
assumption, to find a suitable α for the leader in this strategy, we must calculate

Table 15.1 Input and output data for the 8 branches in the numerical example

| Branch | XD1 | XD1 | X1 | Z1 | Z1 | Y | XD2 | XD2 | X2 | Z2 | Z2 | PL | PF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DMU1 | 2.5 | 13 | 4 | 35 | 60 | 30 | 1.5 | 12 | 16 | 55 | 65 | 105.5 | 60.5 |
| DMU2 | 7 | 12 | 13.4 | 76 | 53 | 55 | 5.6 | 13 | 6.6 | 87 | 45 | 151.6 | 51.8 |
| DMU3 | 3 | 7 | 9.8 | 52 | 42 | 40 | 4 | 15.4 | 10.2 | 65 | 56 | 114.2 | 51.4 |
| DMU4 | 9 | 18 | 4.6 | 63 | 71 | 70 | 8.8 | 11.2 | 15.4 | 78 | 89 | 172.4 | 61.6 |
| DMU5 | 2.3 | 12.5 | 5 | 33 | 62 | 35 | 1.6 | 12.3 | 15 | 52 | 65 | 110.2 | 53.1 |
| DMU6 | 7.4 | 11.7 | 14 | 73 | 50 | 53 | 5.8 | 13 | 6 | 85 | 42 | 142.9 | 49.2 |
| DMU7 | 3.5 | 7.5 | 10 | 57 | 45 | 38 | 4 | 15.6 | 10 | 69 | 56 | 119 | 57.4 |
| DMU8 | 8.8 | 17.9 | 5 | 60 | 70 | 72 | 9 | 11.5 | 15 | 75 | 90 | 170.3 | 57.5 |

the profit efficiency values for both leaders and followers, and then change
the value of the profit-sharing parameter α from 0 to 0.1. A numerical analysis
suggests that both leaders and followers are better off when α ∈ (0, α̂), where
α̂ ≈ 0.01.

Case study: banking chain illustration

This section conducts a banking chain merger efficiency analysis using our
proposed approach. The Canadian banking industry experienced an increas-
ingly dynamic market environment due to a change in the legislative regime
of the Canadian government in the early 1990s. Benefiting from new and
cost-effective technology, Canadian banks have in many ways increased per-
formance measurements and reduced operating costs. They have maintained or
even increased the quality of their services while expanding to a broader cus-
tomer base in order to be more competitive in the global banking market. For
example, GIS-based technologies have been employed by Canadian banks for
merger evaluation, particularly for derivation of market boundaries and market
share estimation.18 Both negative and positive effects of mergers need to be taken
into consideration along with uncertainties and risks stemming from multiple
sources. Gauging the potential gains from mergers, and the decomposition of
these gains into harmony and scale effects, provide support for decisions made
by banks on whether to green-light a merger with its underlying conditions.
The hierarchical structure of the banking chain is similar to Figure 15.1. For
clarity of exposition, in the bilevel banking chain model we extract two direct
inputs (personnel costs, and other expenses); one intermediate output (loans);
and two final outputs (profit, and loan recovery). We consider the annual
IT budget as a constrained input resource to support computers, required
software and systems, data network rentals, and any maintenance and repair.
The large Canadian bank we consider has the greatest market share in the
Canadian e-banking business, which relies heavily on IT support. Personnel
costs include salaries and benefit payments for full-time, part-time, and con-
tract employees. Other expenses are costs other than personnel and IT budget,
such as marketing and advertising expenditures, training and education costs,
and communication expenses. Loans are composed of credit notes issued to
individual customers.
Thirty branches of this bank in Ontario are selected to consider mergers of the
branches as a form of intra-firm reorganization. The potential for performance
improvement in these branches provides the incentive to consider gains from
merging independently operated branches with other banking processes. We
chose branches from the same bank for two primary reasons. First, we expect
high relative efficiency levels because of the similarity of the technologies and
locations and the cooperation among the branches. Second, existing work from

Table 15.2 Raw data for 30 branches

Columns (all ×10^5): Branch; Other expense (XD1); Personnel cost (XD1); IT budget (X1); Loan (Y); IT budget (X2); Profit (Z2); Loan recovery (Z2); PL; PF

DMU1 71.3 1.5 0.133 1447.8 2.5 523.2 1427.7 1374.9 500.6
DMU2 107.1 1.7 0.169 1950.2 2.3 534 1923.3 1841.2 504.8
DMU3 122.4 2.35 0.24 2095.2 1.65 536.3 2066 1970.2 505.4
DMU4 41 1.1 0.159 1364.4 2.9 495.4 1324.8 1322.1 452.9
DMU5 36.3 2.11 0.156 1390.2 1.89 521.1 1365.2 1351.6 494.2
DMU6 40.9 1.33 0.18485 1520.6 2.67 523.7 1496.3 1478.2 496.7
DMU7 91.8 0.6 0.5642 8118.6 3.4 610.3 8005.2 8025.6 493.5
DMU8 123.5 0.71 0.12 1144.1 3.29 519.9 1126.9 1019.8 499.4
DMU9 182.1 1.2 0.198 1742.5 2.8 527.4 1712.9 1559 495
DMU10 191.5 1.2 0.198 1742.5 2.8 527.4 1712.9 1549.6 495
DMU11 302.8 2 0.137 3153.7 2 442.9 2980.6 2848.8 267.8
DMU12 544 3.8 0.297 4517.7 0.2 386 4300.9 3969.6 169
DMU13 87.4 0.5 0.131 1434.2 3.5 517.7 1412.7 1346.2 492.7
DMU14 691.8 3.7 0.125 3249.1 0.3 564.8 3070.4 2553.5 385.8
DMU15 458 4 0.138 2622 0 402 2283.8 2159.9 63.84
DMU16 124.1 1.1 0.144 1749.3 2.9 524.3 1728.3 1624 500.4
DMU17 45 0.53 0.076 951.2 3.47 506.7 932.18 905.59 484.2
DMU18 589.2 3.45 0.155 4246.9 0.55 600.2 4026.1 3654.1 378.8
DMU19 713.8 3.82 0.14 3915.8 0.18 372.5 3559.5 3198 15.98
DMU20 97.3 1.28 0.126 1898.7 2.72 524.3 1870.4 1800 493.3
DMU21 229.4 1.36 0.12843 1876.5 2.64 487.1 1805.2 1645.6 413.2
DMU22 44.4 0.55 0.059 754.6 3.45 515.3 744.79 709.59 502
DMU23 50.8 0.57 0.057 759.5 3.43 512.3 749.78 708.07 499.1
DMU24 37 0.98 0.141 1690.6 3.02 523.3 1658.5 1652.5 488.2
DMU25 39.5 1.04 0.146 1726.4 2.96 526.3 1697.1 1685.7 494
DMU26 268 2.06 0.196 3643 1.94 560.1 3577.4 3372.7 492.6
DMU27 78.1 0.67 0.105 1158.1 3.33 512 1143 1079.2 493.6
DMU28 87.2 1 0.121 2220.7 3 524.8 2158.5 2132.4 459.6
DMU29 175.7 0.106 0.127 2067 3.894 525.3 2042.2 1891.1 496.6
DMU30 193.9 1.72 0.165 2132.5 2.28 493.5 2030.1 1936.7 388.9

the theory of transfer effects suggests that merger performance is higher when
business units operate in similar industries, due to a positive transfer effect; the
merger performance could suffer if it was made in a different industry, where a
negative transfer effect may occur.19 In general, similar organizational cultures
may enhance the chances of gains from operational mergers.
Considering this situation, we expect potential gains from mergers, in par-
ticular from the harmony effect due to new market conditions, and envir-
onmental regulations may create new potential demand for the extension
of services. The qualifications and structures of the extension offices may
adapt more slowly than more established offices due to union restrictions,
recruitment difficulties, etc. We set the output price or weight vectors to unity,
which means that all output variables are equally weighted in the compu-
tation. The codes were written in Matlab 7.1 Release 14, and the computation
was performed on a Pentium PC at 1.79 GHz with 1.99 GB of RAM under
Windows XP; solving the models required 11 minutes of computation time.
To conduct a pre-merger analysis, we investigate the profit efficiency of the
system, the leader, and the follower for the 30 existing branches in order to
assess potential gains. The models are implemented under both the CRS and VRS
assumptions. Table 15.3 gives the mean profit efficiency, the standard devi-
ation of the profit efficiencies, the number of fully efficient bank branches,
and the lowest and highest profit efficiency among all bank branches. From
the summary of different estimations, the level of profit inefficiency (1-PE)
of the bank from the Greater Toronto Area is 20–30% in most specifications.
The interpretation of this result is that if everyone learned best practices, total
profits could be improved by 20–30% without changing the organization of
the bank.
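To make the mechanics concrete, the following is a minimal sketch of a standard output-oriented DEA envelopment program of the kind these efficiency scores build on, written in Python with scipy rather than the Matlab code used for the chapter's computations. The function name and data layout are ours, and the sketch treats a branch as a single-stage black box rather than the bilevel leader–follower chain developed in this chapter.

```python
# A minimal sketch (not the chapter's bilevel programming-DEA model):
# output-oriented envelopment DEA for one branch o against the frontier
# spanned by all n branches.  X is (n, m) inputs, Y is (n, s) outputs.
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o, vrs=False):
    """Farrell output efficiency of branch o, in (0, 1]; 1 = efficient."""
    n, m = X.shape
    s = Y.shape[1]
    # decision vector: [phi, lambda_1, ..., lambda_n]; maximise phi
    c = np.concatenate(([-1.0], np.zeros(n)))
    A_in = np.hstack([np.zeros((m, 1)), X.T])        # sum_j lam_j x_ij <= x_io
    A_out = np.hstack([Y[o].reshape(-1, 1), -Y.T])   # phi y_ro <= sum_j lam_j y_rj
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[o], np.zeros(s)])
    A_eq = b_eq = None
    if vrs:  # VRS adds the convexity constraint sum(lambda) = 1
        A_eq = np.concatenate(([0.0], np.ones(n))).reshape(1, -1)
        b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return 1.0 / res.x[0]

# Adding the VRS constraint can only lower the optimal phi, so the VRS
# score 1/phi is never below the CRS score for the same branch.
```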
We can make three observations from the efficiency values in Table 15.3.
First, the DEA efficiency increases from the CRS assumption to VRS assump-
tion, which is consistent with existing DEA literature20 (and naturally follows
from the reduced constraint set). Second, there exist substantial inefficiencies
for most bank branches both in the full chain and its members. Moreover,
if we assume a CRS technology, we see no branch that is profit efficient as a
system though some branches are profit-efficient in their leader operations
(e.g., Branch 7, 11 and 28) and others are profit-efficient in their follower opera-
tions (e.g., Branch 22 and 23). If we assume a VRS technology instead, three
bank branches, i.e., Branch 2, 22 and 23, are profit-efficient in the system, the
leader, and the follower. This validates Proposition 1, namely, that the system
is profit-efficient only if both the leader and the follower are profit-efficient.
Third, profit efficiency scores of the system are close to those of the leader but

Table 15.3 Profit efficiency values

Statistics            PEL-CRS    PEF-CRS    PES-CRS    PEL-VRS    PEF-VRS    PES-VRS

Mean                  0.727833   0.565767   0.7017     0.798533   0.861167   0.809433
Standard deviation    0.177358   0.202374   0.151079   0.189591   0.256551   0.162195
Minimum 0.432 0.029 0.467 0.456 0.036 0.524
Maximum 1 1 0.963 1 1 1
#{E=1} 3 2 0 8 3 3

quite different from those of the follower. This seems to suggest that the system
profit efficiency is mainly affected by the leader.
To examine how resource allocation impacts the firm’s performance, it is
necessary to align resource allocation to a company’s bottom-line business
goals. For this assessment, banking operations consist of two main value-
added activities in two markets. Under the UL game structure, the upstream-
level sub-division has stronger market power and decides on the amount of
resources it needs to maximize efficiency. Hence, we identify stage 1 as a
resource allocation-related value-added activity under the UL game scenario.
In contrast, stage 2 is related to resource allocation and value-added activity
under the LL game scenario. This assumption, although possibly more
restrictive than in reality, may be reasonable and necessary because of the
lack of information about how management can allocate and spend annual
IT budgets.
Under the UL game scenario and the VRS assumption of the 30 branches,
three achieved overall efficiency, eight were rated efficient in stage 1 (resource
allocation-related activity), and three were rated efficient in stage 2. The results
show that five branches are rated efficient in the resource allocation-related activity
(stage 1) without achieving overall efficiency. The following classification of
branches aims to provide a means for management to better evaluate their
resource allocation-related operations.21

• Type 1 represents branches efficient in the resource allocation-related value-
added activity, but which cannot achieve overall efficiency. Branches 7, 18,
19, 24, and 28 belong to this classification.
• Type 2 represents branches which achieved overall efficiency in spite of
inefficiency in the resource allocation-related activity. None of the branches
belongs to this classification.
• Type 3 represents branches which are efficient both in resource allocation
and overall. Branches 5, 22, and 23 belong to this classification.
• Type 4 represents branches which are inefficient both in the resource allo-
cation and overall. Branches 1–4, 6, 8–17, 20, 21, 25–27, 29, and 30 belong
to this classification.

Different types of branches can now be studied in more detail to identify the
reasons for performance differences. To explain the systematic differences
between different types of branches, management may refer to factors such as
the type of system used, management practices, and implementation proce-
dures. Similar to Wang et al. (1997),22 an interesting observation from this
classification is that inefficiency in resource allocation always leads to overall
inefficiency; equivalently, overall efficiency seems to imply efficiency in the
resource allocation-related activity.
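As a small illustration, the four-type classification above can be written directly as a decision rule over two indicators per branch; the helper below is our own sketch, not code from the study.

```python
# Hypothetical helper (not from the study) expressing the Type 1-4
# classification from two flags per branch: efficiency in the resource
# allocation-related activity (stage 1) and overall efficiency.
def classify_branch(stage1_efficient, overall_efficient):
    if stage1_efficient and not overall_efficient:
        return "Type 1"   # efficient in resource allocation only
    if overall_efficient and not stage1_efficient:
        return "Type 2"   # overall efficient despite stage-1 inefficiency
    if stage1_efficient and overall_efficient:
        return "Type 3"   # efficient in both
    return "Type 4"       # inefficient in both

# e.g. Branch 7 (stage-1 efficient, not overall efficient) -> "Type 1"
print(classify_branch(True, False))
```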
We now compare the proposed approach with existing efficiency evaluation
tools: normal DEA and stochastic frontier analysis (SFA). In the SFA framework,
we estimate linear and log-linear specifications of the mean structure and trun-
cated normal distribution for the inefficiency error term. In the normal DEA
framework, we compute efficiency values for the scale assumptions CRS (CCR)
and VRS (BCC). These existing tools simply consider a firm as an input-output
black box. That is, they are not concerned with the inner intermediate inputs/
outputs or any decision hierarchy or game structure. For each estimation
method, Table 15.4 shows the mean Farrell efficiency, the standard deviation
of the Farrell efficiencies, the number of fully efficient bank branches, and the
lowest and highest Farrell efficiency among all bank branches. From Table 15.4,
the level of profit inefficiency (1-PE) in the bank from the Greater Toronto
Area is 0–10% in most specifications if we ignore inner activities (e.g., loan
collection) among the leader and follower business units, which results in sig-
nificant overestimates of bank performance.
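Estimating the truncated-normal SFA specification requires maximum-likelihood routines; as a much simpler, deterministic stand-in, a corrected-OLS (COLS) log-linear frontier conveys the same black-box idea. The function and variable names below are ours, and this is a sketch, not the estimator reported in Table 15.4.

```python
# A much simpler, deterministic stand-in for the log-linear frontier:
# corrected OLS (COLS) rather than maximum-likelihood SFA with a
# truncated-normal inefficiency term.  inputs is (n, m), output is (n,).
import numpy as np

def cols_efficiency(inputs, output):
    """Log-linear frontier efficiencies via corrected OLS, in (0, 1]."""
    logX = np.log(inputs)
    logy = np.log(output)
    Z = np.column_stack([np.ones(len(logy)), logX])   # intercept + log inputs
    beta, *_ = np.linalg.lstsq(Z, logy, rcond=None)   # ordinary least squares
    resid = logy - Z @ beta
    # shift the fitted line up so it envelops every observation, then read
    # efficiency as the distance below that frontier
    return np.exp(resid - resid.max())
```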
We further compute Pearson correlation coefficients and significance values
in Table 15.5. The correlation analysis shows that PE-CRS and PE-VRS generate
highly correlated results, with a Pearson correlation value of 0.842, but do not
seem to produce correlated results with existing DEA and SFA. We also note
that the correlation between the individual efficiencies calculated in the SFA
linear model and the CCR model is 0.127, while it is −0.031 for the SFA
log-linear model. The correlation between BCC DEA and SFA cannot be
computed, because the BCC efficiency score is constant at unity, suggesting no
discriminating power. In general, then, these models do not suggest agreement
in the individual evaluations. This implies that existing models analyzed here
cannot directly be used as authoritative efficiency analysis models for bank
branches with bilevel structures. Traditional DEA or SFA provide a measure
of overall performance, but the overall efficiency measure derived cannot
provide insight into the efficiencies of sub-systems (either the leader or the
follower) operations and their importance in realizing final outputs of both
pre-merger and post-merger entities. In the analysis of specific merger cases,

Table 15.4 A comparison with existing DEA and SFA

Statistics            PE-CRS     PE-VRS     CCR        BCC    SFA linear   SFA log-linear

Mean                  0.7017     0.809433   0.9829     1      0.99167      0.915705
Standard deviation    0.151079   0.162195   0.030204   0      0.000691     0.087243
Minimum 0.467 0.524 0.889 1 0.989663 0.658106
Maximum 0.963 1 1 1 0.993046 0.988997

Table 15.5 Correlations

                              PE-CRS   PE-VRS   CCR     BCC   SFA LINEAR   SFA LOGLINEAR

PE-CRS          Correlation   1.000
                Sig. (2-tailed)   .
PE-VRS          Correlation   .842**   1.000
                Sig. (2-tailed)   .000     .
CCR             Correlation   .483**   .544**   1.000
                Sig. (2-tailed)   .007     .002     .
BCC             Correlation   .        .        .       NA
                Sig. (2-tailed)   .        .        .       .
SFA LINEAR      Correlation   .099     .222     .127           1.000
                Sig. (2-tailed)   .602     .238     .502              .
SFA LOGLINEAR   Correlation   .011     .113     −.031          .917**       1.000
                Sig. (2-tailed)   .953     .553     .873           .000             .

Notes: ** Pearson Correlation is significant at the 0.01 level (two-tailed); NA cannot be computed
because at least one of the variables is constant.

it is then important to open the black box and develop good underlying pro-
duction models of the technology accommodating inner intermediate inputs/
outputs and game structure.
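The agreement checks reported in Table 15.5 can be reproduced for any pair of score vectors with standard routines; the sketch below (names ours) computes both Pearson and Spearman rank correlations and makes clear why a constant score vector such as BCC yields no usable value.

```python
# Minimal sketch of the agreement analysis behind Table 15.5: Pearson
# (and Spearman rank) correlations between two vectors of efficiency
# scores.  A constant vector, such as the all-unity BCC scores, has no
# defined correlation, which is why that cell is reported as NA.
import numpy as np
from scipy import stats

def score_agreement(scores_a, scores_b):
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    pearson_r, pearson_p = stats.pearsonr(a, b)
    spearman_r, spearman_p = stats.spearmanr(a, b)
    return {"pearson": (pearson_r, pearson_p),
            "spearman": (spearman_r, spearman_p)}
```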

Post merger
To respect the fact that most branches favor merging two adjacent branches,
we examine potential profit gains by merging two branches at a time. This
leads to a total of 435 possible mergers involving two branches. Therefore, the
relative profit efficiency of these 435 possible mergers is computed with ref-
erence to the original DMUs by our bilevel programming-DEA model. We tested
the merger gains from all of these combinations using both the CRS and VRS
bilevel DEA chain merger models.
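The enumeration itself is mechanical; the sketch below generates the C(30, 2) = 435 candidate pairs, assuming pooled inputs and outputs are formed by simple addition and leaving the actual merger-efficiency evaluation to an abstract score_fn (both assumptions are ours, standing in for the bilevel chain-merger scoring against the original 30 branches).

```python
# Sketch of the pairwise enumeration: all C(30, 2) = 435 two-branch merger
# candidates.  Pooling inputs and outputs by simple addition and the abstract
# score_fn are our assumptions; the chapter scores each pooled unit with its
# bilevel programming-DEA chain model against the original 30 branches.
from itertools import combinations
import numpy as np

def pairwise_merger_scores(X, Y, score_fn):
    """Return {(i, j): score} for every two-branch merger candidate."""
    X, Y = np.asarray(X), np.asarray(Y)
    scores = {}
    for i, j in combinations(range(X.shape[0]), 2):
        x_m = X[i] + X[j]          # pooled inputs of the merged pair
        y_m = Y[i] + Y[j]          # pooled outputs of the merged pair
        scores[(i, j)] = score_fn(x_m, y_m)
    return scores

# For 30 branches the dictionary has 435 entries.
```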
We examined two kinds of merger activities: a merger of individual sub-
chain (either leader or follower) members and a chain merger. Table 15.6 gives
the computational statistics under both the CRS and VRS assumptions: the
number of the efficient and coordinated (using profit-sharing strategy) mergers,
and the average merger efficiency scores Em. Tables 15.7 and 15.8 present merger
efficiencies of the top ten most promising mergers under both CRS and VRS
assumptions respectively.
If we believe the estimated CRS technology, then under the first situation, with
UL game structure, we see that at an overall scale the average potential profit
gain in the (435–12) bilevel system mergers is 2.2% (=1.022–1), while this
number decreases to 0.7% (=1.007–1) under the second situation, with LL game
structure. Indeed, in the detailed results, 100 out of the 225 effective system

Table 15.6 Statistics under both CRS and VRS assumptions

                                                           CRS               VRS
Game          Measure (>100%)                         Number  mean Em   Number  mean Em

The UL game   Efficient mergers for the leader          218    1.023      213    1.779
              Efficient mergers for the follower        191    1.02        13    1
              Efficient mergers for the whole system    225    1.022      213    1.61
              Coordinated efficient mergers             163    1.02         7    1.45
The LL game   Efficient mergers for the new leader      299    1            70   1
              Efficient mergers for the new follower    249    1.035       113   1.263
              Efficient mergers for the whole system    335    1.007       112   1.194
              Coordinated efficient mergers             193    1.006         3   1.2

Table 15.7 Top ten promising mergers under UL game structure

RTS   Merger   E_m^L   H^L     S^L     E_m^F   H^F     S^F     E_m^S   H^S     S^S

CRS   22,23    1.327   1.327   1       1.138   1.138   1       1.253   1.253   1
      2,22     1.188   1.188   1       1.051   1.051   1       1.146   1.146   1
      2,23     1.186   1.186   1       1.056   1.056   1       1.147   1.147   1
      22,29    1.173   1.173   1       1.061   1.061   1       1.138   1.138   1
      16,22    1.155   1.155   1       1.005   1.005   1       1.109   1.109   1
      16,23    1.147   1.147   1       1.055   1.055   1       1.119   1.119   1
      4,22     1.137   1.137   1       1.001   1.001   1       1.095   1.095   1
      17,23    1.132   1.132   1       1.076   1.076   1       1.112   1.112   1
      17,22    1.129   1.129   1       1.076   1.076   1       1.111   1.111   1
      13,23    1.106   1.106   1       1.019   1.019   1       1.081   1.081   1
VRS   5,24     4.111   1       4.111   0.996   1       0.996   3.334   1       3.333
      5,27     4.084   1.237   3.3     0.997   1       0.997   3.335   1.18    2.827
      20,22    3.867   1.295   2.985   0.997   1       0.997   3.087   1.215   2.54
      1,5      3.715   1.212   3.064   0.997   1       0.996   3.1     1.164   2.662
      22,30    3.48    1.261   2.76    0.998   1       0.998   2.947   1.205   2.446
      24,27    3.398   1.11    3.063   0.996   1       0.996   2.851   1.085   2.628
      17,30    3.311   1.116   2.966   0.998   1       0.998   2.836   1.092   2.596
      1,17     3.213   1.002   3.208   0.999   1       0.999   2.655   1.001   2.652
      23,29    3.168   1.137   2.786   0.997   1.001   0.997   2.639   1.104   2.39
      16,22    3.136   1.088   2.882   0.997   1       0.997   2.629   1.068   2.463

pairs have an improvement potential of more than 5% under the first situ-
ation, with UL game structure, and 128 out of the 335 effective system pairs
have an improvement potential of more than 2% under the second situation,
with LL game structure.

Table 15.8 Top ten promising mergers under LL game structure

RTS   Merger   E_m^L   H^L     S^L     E_m^F   H^F     S^F     E_m^S   H^S     S^S

CRS   15,29    1.047   1.047   1       1.033   1.033   1       1.001   1.001   1
      21,29    1.027   1.027   1       1.019   1.019   1       1.001   1.001   1
      15,23    1.002   1.002   1       1.002   1.002   1       1.004   1.004   1
      14,23    1.001   1.001   1       1.031   1.031   1       1.111   1.111   1
      15,17    1.001   1.001   1       1.002   1.002   1       1.005   1.005   1
      28,29    1.001   1.001   1       1.001   1.001   1       1.003   1.003   1
      15,24    1.001   1.001   1       1.002   1.002   1       1.006   1.006   1
      21,25    1.001   1.001   1       1.001   1.001   1       1.005   1.005   1
      13,21    1.001   1.001   1       1.001   1.001   1       1.004   1.004   1
      12,16    1.001   1.001   1       1.009   1.009   1       1.036   1.036   1
VRS   21,27    1.008   1.708   0.59    0.961   0.872   1.101   0.969   1.024   0.946
      4,21     1.008   1.977   0.51    0.818   0.928   0.881   0.847   1.092   0.776
      28,30    1.007   1.819   0.554   0.858   0.944   0.909   0.883   1.089   0.81
      3,29     1.007   1.999   0.504   1.004   1.177   0.853   1.005   1.326   0.758
      21,28    1.007   1.755   0.574   0.887   0.938   0.946   0.908   1.077   0.843
      13,21    1.007   1.802   0.559   0.682   0.818   0.833   0.733   0.974   0.752
      7,21     1.006   2.481   0.406   0.384   0.973   0.395   0.435   1.097   0.397
      20,21    1.006   1.773   0.567   0.804   0.795   1.01    0.836   0.95    0.88
      15,19    1.005   1.603   0.627   0.78    0.916   0.851   0.811   1.01    0.803
      4,30     1.005   2.033   0.494   0.627   0.934   0.671   0.685   1.103   0.621

If we assume a VRS technology instead, the corresponding results are given
in the last two columns of Table 15.6. In the VRS calculations, 12 mergers
under the first situation, with UL game structure, have improvement potential.
There were no mergers under the second situation, with LL game structure.
These merged branches are outside the technology determined by the 30 bank
branches. The explanation is that when two branches are merged, they become
very large compared to the existing branches (with a similar mix of resources
and services) and consequently are above the estimated optimal scale size for
this mix. Therefore, the existing best-practice production does not seem to
suggest that the resulting production plans are feasible.
As shown in Table 15.6, considerable potential gains are observed from
merging the bilevel system. If a CRS technology is assumed, we see that 51.7%
(=225/435) of all possible chain merger scores are larger than one under the
first situation, with UL game structure, and 77% (=335/435) of merger scores
are larger than one under the second situation, with LL game structure.
Under the VRS assumption, Table 15.6 indicates that only 49% and 25.7%
of merger scores in two game structures are larger than unity, and the gains
from merging (as opposed to the gains from individual improvements) are
considerably less.
To examine how a profit-sharing strategy solves the incentive incompati-
bility problem, in Table 15.6 we show the number of coordinated mergers
using profit-sharing strategy and their respective mean profit efficiency scores
under both CRS and VRS assumptions. From Table 15.6, it can be seen that
the number of the coordinated efficient mergers under the CRS assumption is
greater than that under the VRS assumption in both of the game situations. In
particular, 44.3% of the total 435 mergers, which is 193 mergers, are coordi-
nated efficient under CRS in the second situation. In the first situation, with
UL game structure, 218 mergers are found to be efficient from the leader’s per-
spective, and 74.8% of them, 163 mergers, are coordinated efficient. The mean
efficiency scores of the leader in general decrease, but the mean efficiency
scores of the follower improve.
To further examine the most promising mergers, Tables 15.7 and 15.8
provide the top ten most promising mergers under both the CRS and VRS
technologies for two game structures where we report merger efficiency,
the harmony effect, and the scale effect for both leaders and followers. We
make three observations from these results. First, both tables illustrate that
there are potential gains from mergers. Second, with CRS, the scale effects
SL , SF and SS are unity, as there is no gain in resizing with constant returns to
scale. Under the VRS technology, both Tables 15.7 and 15.8 indicate that the
harmony effect favors mergers while the scale effects of the followers appear
to work against mergers. Third, if a CRS technology is assumed, all ten mergers
are coordinated efficient under both game structures, since both leaders and
followers yield potential gains from mergers. However, if a VRS technology is
assumed, all ten mergers are non-coordinated since followers yield efficiency
scores that are smaller than unity. From Table 15.6, we see that under the UL
game structure and VRS technology, seven mergers are found to be coordi-
nated efficient.
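For a single candidate pair, the multiplicative split reported in these tables can be sketched as follows, in the spirit of the Bogetoft–Wang style decomposition that the harmony and scale effects here resemble; eval_eff stands in for whichever efficiency evaluator is used against the pre-merger technology, and the notation and helper are ours rather than the chapter's formulation.

```python
# Sketch of splitting a merger score into a harmony (mix) term and a scale
# (size) term, in the spirit of the Bogetoft-Wang decomposition that these
# effects resemble.  eval_eff(x, y) is whichever efficiency evaluator is used
# against the pre-merger technology; the notation and helper are ours.
def harmony_scale_split(x_i, y_i, x_j, y_j, eval_eff):
    x_m, y_m = x_i + x_j, y_i + y_j            # pooled (merged) bundle
    e_merger = eval_eff(x_m, y_m)              # overall merger score E_m
    harmony = eval_eff(x_m / 2.0, y_m / 2.0)   # mix effect on the averaged bundle
    scale = e_merger / harmony                 # residual size effect: E_m = H * S
    return e_merger, harmony, scale
```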
In the second situation, with LL game structure, where the leader and the
follower are interchanged, under the CRS assumption 299 mergers are found
to be efficient from the new leader’s perspective, and 193 mergers are found
coordinated efficient. Under the VRS assumption, 70 mergers are found effi-
cient from the leader perspective, but only three of these mergers are found to
be coordinated efficient.

Managerial insights

A multi-methodological approach is developed to evaluate the potential gains
of a banking operations merger. This multi-methodological approach incor-
porates both analytical model development, such as bilevel programming
and DEA, and a case study based on real banking operations data. The theory
and the case study results for an intra-firm analysis of merger possibilities are
meant to provide management with a tool to identify potential improvement
areas resulting from combining units within a common supply chain subject
to constrained resources. The quantitative theory development and the case
study have a two-way relationship: the case study is used for understanding
the metrics and their relationships in quantitative analysis. On the other
hand, quantitative theory development is used to understand the phenomena
observed in the case study. The case study examines a phenomenon in the
natural setting of banking operations merger, employing multiple methods of
data collection from several sources. Based on this theory, management can
find possible rewarding new alignments with incentive-compatible merger
activities. The demonstrative case study shows how the models can provide
management with information about potentially promising merger cases that
respect constrained resources and sub-unit incentives.
Existing work 23 suggests that various internal or external factors influence a
firm’s decision to become an ‘acquirer’ or a ‘target.’ Potential internal factors
involve economic factors (e.g., the financial profile of the firm) or non-
economic factors (e.g., managerial motives). Potential external factors include
macro or industrial conditions such as growth, capacity utilization, market
share, regulations, antitrust policies, tax structure etc. The case study illumi-
nates internal factors of interest to management. These factors may dominate
external factors in this analysis for two reasons: first, the DEA approach by
default assumes that all entities under evaluation are homogeneous, which
implies that the external factors are most likely to affect all the merger entities
in a similar manner. Second, the external factors may be a function of internal
factors which are included in the two-stage system of the study. Based on the
computations here, we can recommend potential areas to combine to save IT,
facility, and other costs, and to ensure that the firm’s internal supply chain
activities operate more efficiently.

Conclusions

Evaluation of potential gains from merging firms with chain operations
subject to constrained resources is a multi-criteria decision problem for many
organizations. In some situations, the evaluation of the performance of chain
operations involves factors which simultaneously play the role of inputs and
outputs. A concurrent consideration of multiple criteria complicates the per-
formance evaluation of such merger decisions. Competing business divisions,
indeed, have different levels of achievement under multiple criteria.
We have studied banking operations with a leader–follower game structure
from a series chain perspective, where the primary and secondary markets are
upstream and downstream chain members respectively. We described bilevel
programming series-chain DEA models to evaluate the potential merger. We
also defined merger efficiency concepts for both single units and multiple
units under this structure, and developed an approach to solve the NP-hard
bilevel programming-DEA model. In addition, we discussed the decomposition
of merger efficiency into a harmony effect and a scale effect at both the chain
and sub-chain levels. In this framework, we have shown that the supply chain
with constrained resources and a leader–follower relationship is efficient if and
only if both leader and follower are efficient. We proposed a profit-sharing
strategy to address incentive incompatibility problems that might be present in
merging firms with such a leader–follower structure. Both leaders and followers
benefit under the proposed incentive-compatible strategy.
A case study of potential intra-firm banking-chain mergers with limited
input resources was also presented to illustrate the proposed approach. Using
435 potential mergers involving branches merging in pairs, the results show
significant potential gains from these mergers in banking chains with a leader–
follower structure and constrained resources. The case study demonstrates
that bank branches achieve potential gains by conducting intra-firm mergers,
which can be incentive-compatible. The findings of the case study also provide
insights into the consequences of different pairings of firm entities and the
results of different types of M&A deals. This allows a deeper understanding of
mergers in the financial sector and their implications for the acquiring banking
entities with chain operations.
16
Sustainability and Risk in Globalization

We all are aware that we live in a world of change, and that there are many
things going on that appear to be problematic. There are islands in the Pacific
that are going underwater (well, they aren’t sinking, but the water level is
rising). There are glaciers that are disappearing. Europeans spent a good part
of the second millennium AD looking for a Northwest Passage to Asia – but
nature appears to be providing one for us in the third. Climate appears to be
changing, although weather being what it is, long-range trends are elusive at
best. But North Dakota might someday be a winter haven, and Omaha might
be a seaport. We need to worry about the environment. We will, of course,
argue about how to cope. Some want to ban coal and petroleum use NOW.
Others prefer to consider moving uphill.
The use of high technology in loss-prevention and control systems for natural
disasters, fires, and accidents, and of quantitative models for derivatives in
insurance and finance, has expanded substantially in the past decade. Spurred
by traumatic events such as 9/11/2001 and business scandals including Enron
and WorldCom,1 risk management has not only developed a control focus but,
more importantly, has become a tool to enhance the value of systems, both
commercial and communal. Integrated approaches to manage
risks facing organizations have been developed along with new business phil-
osophies of enterprise risk management.

Enterprise sustainability

Enterprise sustainability is a term addressing the need to consider an organ-
ization's practices in light of economic, social, and environmental impacts.2
Businesses need to maintain their customer base and their brand reputation.
This is related to corporate social responsibility and to the need for busi-
nesses to contribute toward the solution of social problems, as well as to risk
management’s aim to identify present and future threats and to devise plans
to mitigate or eliminate these risks.
All human endeavors involve uncertainty and risk. Mitroff and Alpaslan
(2003)3 classified emergencies and crises into three categories: natural disas-
ters, malicious activities, and systemic failures of human systems. Nature does
many things to us, disrupting our best-laid plans and undoing much of what
humans have constructed. Events such as earthquakes, floods, fires and hurri-
canes are manifestations of the majesty of nature. Recent events, including the
tsunami in the Indian Ocean in 2004 and Hurricane Katrina in New Orleans in
2005 demonstrate how powerless humans can be in the face of nature’s wrath.
Over the last decade, global economic crisis risks have been profound and widespread.
Businesses in fact exist to cope with risk in their area of specialization. But
chief executive officers are responsible for dealing with any risk fate throws at
their organization.
Malicious acts are intentional on the part of fellow humans who are either
excessively competitive or who suffer from character flaws. Examples include
the Tylenol poisonings of 1982, syringes being placed in Pepsi cans in 1993,
the bombing of the World Trade Center in 1993, Sarin gas attacks in Tokyo
in 1995, terrorist destruction of the World Trade Center in New York in 2001,
and corporate scandals within Enron, Andersen, and WorldCom in 2001.
More recent malicious acts include terrorist activities in Spain and London,
and in the financial realm, the Ponzi scheme of Bernard Madoff uncovered
in 2009. Wars fall within this category, although our perceptions of what is
sanctioned or malicious are colored by our biases. Criminal activities such
as product tampering or the blend of kidnapping and murder are clearly not
condoned. Acts of terrorism are less easily classified, as what is terrorism to
some of us is the expression of political behavior to others. Similar gray cat-
egories exist in the business world. Marketing is highly competitive, and
positive spinning of your product often tips over to malicious slander of com-
petitor products. Malicious activity has even arisen within the area of infor-
mation technology, in the form of identity theft or tampering with company
records.
Probably the most common source of crises is unexpected consequences
arising from overly complex systems.4 Examples of such crises include Three
Mile Island in Pennsylvania in 1979 and Chernobyl in 1986 within the nuclear
power field, the chemical disaster in Bhopal, India, in 1984, the Exxon Valdez
oil spill in 1989, the Ford-Firestone tire crisis in 2000, and the Columbia space
shuttle disaster in 2003. The financial world is not immune to systemic
failure, as demonstrated by the Barings Bank collapse in 1995, the failure of
Long-Term Capital Management in 1998, and the subprime mortgage bubble
implosion leading to near-failure in 2008. Electric cars, which are
viewed as a solution to global warming, have systemic problems related to
the shortage of batteries with sufficient power and duration at reasonable
cost, as well as the paucity of recharging stations that would make their use
practical.
All organizations need to prepare themselves to cope with crises from
whatever source. Managers are expected to anticipate everything bad that
could happen to their organization, and ideal risk management would have a
plan for each possible contingency. It is, of course, a good idea to be prepared.
However, crises by definition are almost always the result of things not going
according to plan, whether the result of nature, malicious humans, or other
systemic features catching us unprepared. We cannot expect to cope with
every contingency, but it is prudent to develop sufficient spare resources to
deal with the unexpected.

Types of risk

In general, risk is defined as the unknown change in the future value of a
system. Risks can be viewed as threats, but businesses exist to cope with spe-
cific risks. Thus, if they encounter a risk that they are specialists in dealing
with, the encounter is viewed as an opportunity. Risks have been categorized
into five groups:5

Opportunities – events presenting a favorable combination of circumstances
giving rise to the chance of beneficial activity;
Killer risks – events presenting an unfavorable combination of circumstances
leading to hazard or major loss or damage resulting in permanent cessation
of operations;
Other perils – events presenting an unfavorable combination of circumstances
leading to hazard of loss or damage leading to disruption of operations with
possible financial loss;
Cross-functional risks – common risks leading to potential loss of reputation;
Business process unique risks – risks occurring within a specific operation or
process, such as withdrawal of a particular product for quality reasons.

Opportunities should be capitalized upon in most circumstances. Not taking
advantage of opportunities leads to the growth of competitors, and thus
increased risk; but if opportunities are pursued, enterprise strategy can be
modified to manage the particular risks involved. Killer risks are threats to
enterprise survival, and call for continuous risk treatment, monitoring, and
reporting. The other perils require analysis to assess ownership, treatment,
residual risk, measurement, and reporting.

Contexts of sustainable risk


Risks arise from everything that humans attempt. Life is worthwhile because
of its challenges. Doing business has no profit without risk, rewarding those
who best understand systems and take what turns out to be the best way to
manage these risks. We have addressed risk management as applied to pro-
duction in the food we eat, the energy we use to live, and the manifestation of
the global economy via supply chains.

What we eat
One of the major issues facing human culture is the need for quality food. Two
factors that need to be considered are: human population growth, and threats
to the environment. We have understood since Malthus that the human popu-
lation cannot continue to grow exponentially. Some countries, such as China,
have been proactive in seeking to control their population growth. Other areas,
such as Europe, are documenting decreases in population growth, probably
due to societal consensus. But other areas, which include India and Africa,
continue to experience rapid increases in population. This may change as these
areas become more affluent (see China and Europe). But there is no universally
acceptable way to control human population growth. Thus, we expect to see a
continued increase in demand for food.
Agricultural science has been relatively effective in developing better strains
of crops through a number of methods, including bioengineering and genetic
science; this led to what was expected to be a ‘green revolution’ a generation
ago. As with all of mankind’s schemes, the best-laid plans of humans involve
many complexities and unexpected consequences. North America has devel-
oped the means to increase production of food that is free from many of the
problems that existed a century ago. However, people in Europe, Australia,
Asia, and Africa, are concerned about new threats arising from genetically
modified agricultural crops. Many US citizens are concerned about the risks of
genetically modified food. This is another example of human efforts to reach
improvements that lead to new dangers, or unintended consequences, with
great disagreement about what is seen to be the truth.
A third factor complicating the food issue is distribution. North America and
Ukraine have long been fertile producing centers, generating surpluses of food.
This connects to supply chain issues. But the fundamental issue is the inter-
connected global human system with surpluses in some locations and food
scarcities in others. Technically, this is a supply chain issue. But more import-
antly it is an economic issue of sharing food, which is a series of political issues.
The heavy reliance of contemporary businesses on international collaborative
supply chains leads to many risks arising from shipping, as well as to other
factors such as political stability, physical security from natural disaster, piracy
on the high seas, and changing regulations. Sustainable supply chain man-
agement has become an area of pressure potentially applied by governing agen-
cies, customers, and the various corporate entities involved within a supply
chain. These interests can include industry cartels such as OPEC, regulatory
environments such as the Eurozone, industry lobbies as in the sugar industry,
and so forth that complicate international business on the scale induced by
global supply chains.
Water is one of the most abundant assets on the Earth, probably next to
oxygen, which chemists know is a related element. Rainwater used to be consid-
ered pure. The industrial revolution caused the unintended production of acid
precipitation with numerous unanticipated consequences, locally, regionally
and globally. Water used to be free in many places; only 30 years ago, paying
for water in those places would have been considered the height of idiocy.
Managing water is recognized as a major issue in that less than 3% of the
world’s water is fresh. Lambooy (2011)6 called for attention to waste water man-
agement, management of freshwater consumption, and groundwater control
management. Water management is increasingly an economic issue, leading to
the political arena. Wherever there is a scarcity of water, this induces political
efforts to gain a greater share for each political entity, involving allocations not
only of drinking water for cities, but also for irrigation of agriculture lands and
even for sustainable levels to enable river navigation.

The energy we use


The generation of energy in various forms is a major issue leading to political
debate over the tradeoff between those seeking to expand the extraction of
fossil fuels to meet expanding demand and those seeking to reduce the climate
change caused by the release of fossil carbon dioxide, by working toward a
transition to renewable energy and a strong emphasis on reducing
inefficiencies in the entire system. Oil continues to be a major source of
energy, but its extraction, refining, transportation, and usage not only cause
numerous environmental risks but also related catastrophic risks such as
oil spills and market risks such as highly fluctuating prices.
Increasingly, many nations, and regions within nations, are expanding their
efforts to transition to post-fossil-carbon societies by changing to renewable
energy-based systems. For example, Ng and Goldsmith (2010)7 developed a
conceptual and dynamic programming model to explain the entry behaviors
of different types of bioenergy businesses, and to demonstrate that bioenergy
entry decisions involve a basic trade-off between the gains from committing
to specialized, and correspondingly higher-cost, assets and the gains from
remaining flexible with lower levels of fixed and less-specific assets. Meyler
et al. (2007)8 developed insights into complex landscapes of risk in which the
natural environment and well-being of residents were largely ignored as fuel
prices and energy security were debated.
The impact of oil exploration in the Mexican rain forest was reported
by Santiago (2011):9 urbanization and civilization were highly localized,
collapsing in Northern Veracruz for reasons not well understood. Cost
risks in alternative energy resources have been studied,10 and financial
management using the Pinch concept has proven useful in developing
tools at the preliminary design stage for rough target setting, alternatives
evaluation, and decision-making. Risks have also been examined in the
impact of gasoline blending11 and of ethanol production and usage.12
Technology choices in process residue handling and in fuel combustion
are key, whilst site-specific environmental management tools should
address the broader biodiversity issues.
Mining is a field which traditionally faces high production risks, such as
uncertain supply yield or marginal cost variability. Akcil (2006)13 reported on
practices in gold and silver mining in Turkey, identifying the importance of
cyanide management in that environment.

Globalization

Living and working in today’s environment involves many risks. The processes
used to make decisions in regard to these risks must consider both the need
to keep people gainfully and safely employed through increased economic
activity and to protect the earth from threats arising from human activities.
We need to consider that there are many risks, and we have challenges in
developing strategies, controls, and regulations designed to reduce the risks
while seeking to achieve other goals.
Globalization has played a major role in expanding the opportunities for
many manufacturers, retailers, and other business organizations to be more
efficient. The tradeoff has always been the cost of transportation, as well as the
added risk of globalizing.
In 2010 the Eyjafjallajökull volcano in Iceland shut down air transportation
across most of Europe. Some Europeans spent a full week waiting for some
means to travel across Europe. Supply chains were also disrupted, as transpor-
tation (logistics) is key to linking production facilities in supply chains; many
in Europe found their supermarkets short of fresh fruit and flowers. Supply
chains often depend on lean manufacturing, requiring just-in-time delivery
of components. These systems are optimized, which means elimination of
the slack that would cover contingencies such as volcanic disruption of air
transport. Multiple sourcing, flexible manufacturing strategies, and logistics
networks capable of alternative routing are clearly needed.

On March 11, 2011, an earthquake off the Pacific coast of Japan led to a cata-
strophic tsunami that destroyed most of a rich area of advanced technology
manufacturing. It also severely damaged a nuclear power plant, which at the
time of writing still saw damage control efforts. While the worst impact was
in terms of Japanese lives, there also was major impact on many of the world’s
supply chains. Organizations such as Samsung, Ford Motor Company, and
Boeing found production disrupted due to lack of key components from Japan.
Japanese plants produced about 20% of the semiconductors used worldwide,
and double that for electronic components; for example, Toshiba produced
one-quarter of the NAND flash chips used, and on March 14, 2011, it had to halt
operations due to power outages.
Modern supply chains need to develop ways to work around any kind of
disruption. Wars of course lead to major disruption in supply chains, but tariff
regulations can have an impact as well. In 2002, Honda Motors had to spend
$3,000 per ton in airlifting carbon sheet steel to the US after tariff-related
supply disruptions. In January 2011, the Volkswagen, Porsche and BMW supply
chains in Germany were taxed by surging demand; Volkswagen actually had
to halt production due to engine and other part shortages. This was not due
to natural disaster or war, or any other negative factor, but rather to booming
demand in China and the United States. Lean manufacturing and modern
consumer retailing operations require maintenance of supply.
Supply chains can offer great value to us as consumers: competition has led to
better products at lower costs enabled by shipping (by land and air as well as sea)
through supply chains; outsourcing allows producers to access the best materials
and process them at the lowest cost. Lean manufacturing enables cost efficiency
as well. But both of these valuable trends lead to greater supply chain exposure.
There is a need to gain flexibility, which can be obtained in a number of ways:

• Use of diversified sources to enable use of alternatives in quick response to
disruptions;
• Flexible manufacturing strategies allowing options to produce critical prod-
ucts in multiple locations with rapid changeover capability;
• Flexible product design to reduce complexity and leverage common plat-
forms and parts, thus reducing exposure to supply disruption;
• Global logistics networks to access low cost and low risk through multiple
routes and contingency shipping plans.

Economically efficient supply chains push the tradeoff between cost and risk.
The lowest cost alternative usually is vulnerable to some kind of disruption.
Some of the economic benefit from low cost has to be invested in means to
enable flexible coping with disruption.

Supply chain risk management

Supply chain risk management can be described as a systematic, integrated
approach to manage all risks facing an organization operating a supply chain.
The benefit is the development of means to anticipate, measure, and control risk.
The internet allows business to be conducted all over the globe; this presents
many new opportunities for organizations to market to new customers, and
thus improve their business opportunities.
It is interesting to compare the old way of organizing business by vertical
integration, made so successful by John D. Rockefeller and Standard Oil, by
U.S. Steel, Alcoa, and others. They took the idea of system logistics developed
by the military, and applied it to business, taking the approach that if there
was any profit to be made in their supply chain, they wanted it. This led to ver-
tical supply chains connecting mines, processing, transportation, and various
forms of production to different levels of marketing for massive monopolies.
Enforcement of such monopolies was easiest in businesses calling for high
capital investment.
The modern way of conducting business is quite different. Many of the for-
merly adversarial relationships of 19th and early 20th century businesses have
been replaced by cooperative arrangements among supply chain members. The
focus is on being more competitive, and thus emphasizing services related to
the products being made. There also is an emphasis on linking together special-
ists, with a dynamic integration of often reasonably independent entities to
work together to deliver goods and services. Goods and services seem ever less
distinguishable, making the old dichotomy of operations passé.
Global competition, technological change, and continual search for com-
petitive advantage have motivated risk management in supply chains. Supply
chains are often complex systems of networks, reaching hundreds or thou-
sands of participants from around the globe in some cases (such as Wal-Mart
or Dell). The term has been used both at the strategic level (coordination and
collaboration) and the tactical level (management of logistics across functions
and between businesses). In this sense, risk management can focus on identi-
fication of better ways and means of accomplishing organizational objectives
rather than simply preserving assets or avoiding risk. Supply chain risk man-
agement is interested in coordination and collaboration of processes and activ-
ities across functions within a network of organizations. Supply chains enable
manufacturing outsourcing to take advantages of global relative advantages, as
well as increase product variety. But there are many risks inherent in this more
open, dynamic system.
The petroleum supply chain is critical to the world economy. Disruption
of petroleum markets has had great financial impact on a number of econ-
omies. Downstream users of petroleum products feel at the mercy of upstream
providers; but even providers have suffered extreme stress. Nigeria’s export
revenues were dramatically impacted by production shutdown from December
2005 to April 2007.14 Risk management in the petroleum industry is managed
through hedging, using futures, forwards, options, or swaps. Modeling support
in the form of Monte Carlo simulation is often applied. Optimization is some-
times applied as well, but encounters obstacles in the form of fitting distribu-
tions (specifically, the fat tail problem).
There is an increasing tendency toward an integrated or holistic view of risks.
Enterprise Risk Management (ERM) is an integrated approach to achieving the
enterprise’s strategic, programmatic, and financial objectives with acceptable
risk. The philosophy of ERM generalizes these concepts beyond financial risks
to include all kinds of risks beyond disciplinary silos.
Contingency management has been widely systematized in the military.
Systematic organizational planning recently has been observed to include
scenario analysis, giving executives a means of understanding what might go
wrong, thus giving them some opportunity to prepare reaction plans. A compli-
cating factor is that organization leadership is rarely a unified whole, but rather
consists of a variety of stakeholders with potentially differing objectives.

Global business risks

Globalization involves more cross-organizational supply chains. Supply chains
involve many risks, which can be categorized as internal (involving issues such
as capacity variations, regulations, information delays, and organizational
factors) and external (market prices, actions by competitors, manufacturing
yield and costs, supplier quality, and political issues).15 Examples of internal
failures are not widely publicized, although they certainly exist.
Supply chain organizations need to worry about risks from every direction.
In any business, opportunities arise from the ability of that organization to
deal with risks. Most natural risks are dealt with either through diversification
and redundancy, or through insurance, both of which have inherent costs.
As with any business decision, the organization needs to make a decision
considering tradeoffs. Traditionally, this has involved the factors of costs
and benefits. Society is moving toward ever more complex decision-making
domains, requiring consideration of ecological factors as well as factors of
social equity.
Internal risk management is more directly the responsibility of the supply
chain organization and its participants. Any business organization is respon-
sible for managing financial, production, and structural capacities. It is respon-
sible for programs to provide adequate workplace safety, which has proven to
be cost-beneficial to organizations as well as fulfilling social responsibilities.
Within supply chains, there is need to coordinate activities with vendors, and
to some degree with customers (through bar-code cash register information
providing instantaneous indication of demand). Information systems tech-
nology provides a new era of effective tools to keep on top of supply chain
information exchange. Another factor of great importance is the responsibility
of supply chain core organizations to manage the risks inherent in the tradeoff
between wider participation made possible through internet connections (pro-
viding a larger set of potential suppliers leading to lower costs) with the reli-
ability provided by long-term relationships with a smaller set of suppliers that
have proven to be reliable.
Dealing with external risks involves more opportunities to control risk
sources. In the past, some supply chains have had an influence on political
systems; arms firms like that of Alfred Nobel come to mind, as well as pet-
roleum businesses. While most supply chain entities are not expected to be
able to control political risks to include wars and regulations, they do have
the ability to create environments leading to labor unrest. But it is expected
that supply chain organizations have an even greater influence over economic
factors; while they are not expected to be able to control exchange rates, the
benefit brought by monopolies or cartels is their ability to influence price.
Business organizations also are responsible for developing technologies pro-
viding competitive advantage, and developing product portfolios in dynamic
markets with product life cycles. The risks arise from competitors’ abilities in
never-ending competition.
Ritchie and Brindley (2007) viewed five major components of a framework in
managing supply chain risk.16

1. Risk context and drivers: Risk drivers arising from the external environment
will affect all organizations, and can include elements such as the potential
collapse of the global financial system, or wars. Industry specific supply
chains may have different degrees of exposure to risks. A regional grocery
will be less impacted by recalls of Chinese products involving lead paint
than will those supply chains carrying such items. Supply chain configur-
ation can be the source of risks. Specific organizations can reduce industry
risk by the way they make decisions with respect to vendor selection. Partner
specific risks include consideration of financial solvency, product quality
capabilities, and compatibility and capabilities of vendor information
systems. The last level of risk drivers relate to internal organizational proc-
esses in risk assessment and response, and can be improved by better equip-
ping and training of staff and improved managerial control through better
information systems.
2. Risk management influencers: This level involves actions taken by the organ-
ization to improve their risk position. The organization’s attitude toward
risk will affect its reward system, and mold how individuals within the
organization will react to events. This attitude can be dynamic over time,
responding to organizational success or decline.
3. Decision makers: Individuals within the organization have risk profiles. Some
humans are more risk averse, others more risk seeking. Different organiza-
tions have different degrees of group decision making. More hierarchical
organizations may isolate specific decisions to particular individuals or
offices, while flatter organizations may stress greater levels of participation.
Individual or group attitudes toward risk can be shaped by their recent
experiences, as well as by the reward and penalty structure used by the
organization.
4. Risk management responses: Each organization must respond to risks, but
there are many alternative ways in which the process used can be applied.
Risk must first be identified. Monitoring and review requires measurement
of organizational performance. Once risks are identified, responses must be
selected. Risks can be mitigated by an implicit tradeoff between insurance
and cost reduction. Most actions available to organizations involve knowing
what risks the organization can cope with because of their expertise and
capabilities, and which risks they should outsource to others at some cost.
Some risks can be dealt with, others avoided.
5. Performance outcomes: Organizational performance measures can vary widely.
Private for-profit organizations are generally measured in terms of profit-
ability, short-run and long-run. Public organizations are held accountable in
terms of effectiveness in delivering services as well as the cost of providing
these services.

In normal times, there is more of a focus on high returns for private organiza-
tions, and lower taxes for public institutions. But risk events can make their
preparations to deal with risk exposure much more important, focusing on
survival.

Conclusions

Technology has grown rapidly, a characteristic of our advancing civilization.
We have seen tremendous gains in computer technology, in technology for
war machinery, and in the use of technology to gain strategic innovation.
Globalization is made possible by the ability to communicate worldwide over
the internet, enabling supply chain operation through the exchange of files
and information. This technology has enabled improved production methods
and the development of global supply chains.
While we all recognize and appreciate these benefits of technology, we have
all seen cases where technology rebounds upon us with unexpected conse-
quences. There are many downside risks to technology. Nuclear energy, which
provides excellent characteristics with respect to global warming, is widely
adopted in Europe and Japan, although viewed very negatively in the United
States. Genetically engineered food is viewed in the US as a potential sal-
vation for many starving people, but is viewed as unacceptably risky in Europe
and Africa. Chinese manufacturing is considered a very important element
in manufacturing survival by most retailers throughout the world, although
subcontracting risks have arisen on occasion. Technology provides many
valuable tools, but also introduces new risks.
The history of risk management has evolved since time immemorial.
Levantine and Chinese traders before the Christian era undoubtedly coped with the risks
of sailing in order to trade, as the Egyptians and Babylonians did before them.
Lloyd's coffee house in London developed as a meeting place for the
seeds of the insurance industry. But adopting insurance has a cost.
Risk is what businesses exist to deal with. Frederick Barnard Hawley (1907)17
declared risk-taking to be the essential function of the entrepreneur, and thus
the basis of his income. Frank Knight (1921)18 argued for profit as due to the
assumption of risk. Risk management is therefore not the avoidance of risk,
but rather avoiding risks that the organization is not competent to cope with,
while seeking risk in its area of expertise.
A number of psychological-based researchers have emphasized that the role
of human preference expands the interest of risk management beyond objective
data concerning probabilities to the more complex judgmental forum requiring
subjectivity. The works of Kahneman and Tversky (2000)19 have led to many
studies in the rich field studying the psychological impact of rational decision-
making under conditions of uncertainty. We continue to be challenged by the
complexities of the interacting natural and social systems which generate the
risks that keep us concerned and active.
17
Risk from Natural Disasters

Introduction

Natural disasters by definition are surprises, causing a great deal of damage and
inconvenience. Earthquakes are among the most terrifying and destructive
natural disasters threatening humans. Emergency management has been
described as the process of coordinating an emergency or its aftermath through
communication and organization for deployment and the use of emergency
resources. This chapter provides the state-of-the-art studies of risk and emer-
gency management related to the Wenchuan earthquake that happened in
China in May 2008.
Natural disasters are the biggest challenge that risk managers face, due to the
threats that go with them.1 Some things we do to ourselves,
such as revolutions, terrorist attacks and wars; terrorism led to the gassing of
the Japanese subway system, to 9/11/2001, and to the bombings of the Spanish
and British transportation systems. Some things nature does to us – volcanic
eruptions, tsunamis, hurricanes and tornados. The SARS virus disrupted public
and business activities, particularly in Asia.2 More recently, the H1N1 virus has
sharpened the awareness of the response system world-wide. Some disasters
combine human and natural causes – we dam up rivers to control floods, to
irrigate, to generate power – and even for recreation, as at Johnstown, PA, at
the turn of the 20th century. We have developed low-pollution, low-cost elec-
tricity through nuclear energy, as at Three-Mile Island in Pennsylvania and
Chernobyl. We have built massive chemical plants to produce low cost chemi-
cals, as at Bhopal, India.
Lee and Preston provide a review of high-impact, low-probability events,
focusing on analysis of the Eyjafjallajökull volcano.3 This event was representative
of "black swans",4 that is, impossible-to-predict events with a very low
likelihood but severe consequences and high costs of mitigation. Other examples include Hurricane


Katrina in New Orleans, the Japanese earthquake and tsunami of 2011, and
the 2003 SARS outbreak. What we don’t do to ourselves in the form of wars and
economic catastrophes, nature trumps with overwhelming natural disasters.
These disruptions are major concerns for supply chain systems, which have
become key components of today's global economy. The ability to cope with
unexpected events has proven critical for global supply chain success, as
demonstrated by Nokia's handling of a supplier fire in 2000, as well as by the
production halts experienced by Toyota and Sony due to the 2011 earthquake
and tsunami in Japan.

Preparing for high-impact, low-probability events

In a natural disaster, there will inevitably be many who feel that whatever
the authorities did was overkill and unnecessary, just as there will be many
who feel that the authorities didn’t do enough to (1) prevent the problem, and
(2) mitigate the problem after it occurred. It is the nature of emergency man-
agement to be unthanked. Transparency, especially during and after a crisis,
helps to assure the public that decisions are made on the best available evi-
dence in order to gain public confidence and to manage vested interests.
Globalized supply chains, particularly those based on just-in-time methods,
are vulnerable. Famous historical examples include Nokia's response to a
March 2000 lightning strike in Albuquerque, NM, which led to a fire in a Royal
Philips Electronics fabrication line that supplied radio-frequency chips for mobile phones. Both Nokia and
its competitor Ericsson were served by this key supply chain link,5 and Philips
estimated that at least a week would be required to return to full production.
Ericsson passively waited – but Nokia proactively arranged for alternative
supplies, as well as redesigning products to avoid the need for those chips.
Nokia gained significant advantage in this market, turning in a profit while
Ericsson suffered an operating loss.6 Lee and Preston state that the maximum
tolerance for supply chain disruption in a just-in-time global economy is one
week.
Lee and Preston draw the following recommendations from the
Eyjafjallajökull experience:

1. Stress-testing risk mechanisms: This recommendation includes specifics calling for broad coordination with governments and businesses to determine as much as possible that the costs and risks of worst-case situations are identified. They also call for scenario-building exercises and sharing of best practices.
2. Crisis communication: Care should be given to develop robust communication, including websites, especially to keep the public and the media informed of risks. Independent risk notification hubs supported by governments, businesses and industry associations were called for.
3. Enhancing business resilience and shock response: Governments should set up global pooling systems for reinsurance. A reference library of observations from past events was also called for. Businesses were cited as needing preparedness for management continuity, and to conduct cost-benefit analyses for alternative disruption actions.

Integrated supply chains have delivered improvements in efficiency and


improved the value of products we have available to us as consumers. However,
highly optimized supply chain networks are inherently risky, in part because
they eliminate most system slack in order to lower costs. The impact of unex-
pected events (Lee and Preston use SARS in 2003 and the Tōhoku earthquake/
tsunami in 2011 as examples) can be highly disruptive. The white paper does a
good job of classifying the risks of various exported products to the European
Union. Analysis of the impact of extended disruption was noted. Additional
impact of global warming was also evaluated.

Be prepared
While natural disasters come as surprises, we can nevertheless be prepared. In
some cases, such as Hurricane Katrina or Mount Saint Helens, we get warning
signs, but we never completely know the extent of what is going to happen.7
Emergency management has been described as the process of coordinating
an emergency or its aftermath through communication and organization for
deployment and the use of emergency resources.8 Emergency management is
a dynamic process conducted under stressful conditions, requiring flexible
and rigorous planning, cooperation, and vigilance. During emergencies, a
variety of organizations are often involved, and commercial rivalry can lead to
competition and mutual distrust rather than cooperation. At the governmental level,
one would expect cooperation in attaining a common goal, but often so many
diverse agencies get involved that attention to the overriding shared goal is
dimmed by specific agency goals. Cooperation is also hampered by differences
in technology.

Risks and emergencies

Risks exist in every aspect of our lives. In the food production area, science
has made great strides in genetic management. But there are concerns about
some of the manipulations involved, with different views prevailing across the
globe. In the United States, genetic management is generally viewed as a way
to obtain better and more productive sources of food more reliably. However,
there are strong objections to bioengineered food in Europe and Asia. Some
natural diseases, such as mad cow disease, have appeared that are very dif-
ficult to control. The degree of control accomplished is sometimes disputed.

Europe has strong controls on bioengineering, but even there a pig breeding
scandal involving hazardous feed stock and prohibited medications has arisen.9
Bioengineering risks are important considerations in the food chain.10 Genetic
mapping offers tremendous breakthroughs in the world of science, but involves
political risks when applied to human resources management.11 Even applying
information technology to better manage healthcare delivery involves risks.12
Reliance on computer control has been applied to flying aircraft, but
hasn't always worked.13
Risks can be viewed as threats, but businesses exist to cope with risks.
Different disciplines have different ways of classifying risks. We propose the
following way of classifying risks: field-based and property-based.

● Field-based classification: Financial risks include all sorts of risks related to the
financial sector and to financial aspects of other sectors; these include, but are
not restricted to, market risk, credit risk, operational risk, and liquidity risk.
Nonfinancial risks come from sources that are not related to finance; these
include, but are not restricted to, political risks, reputational risks,
bioengineering risks, and disaster risks.

● Property-based classification: We think risks have three properties: probability,
dynamics, and dependence. The first two properties have been widely recognized
in inter-temporal models from behavioral decision theory and behavioral
economics.14 The last property is well studied in the finance discipline.

The probability property, covering the occurrence and severity of risks, mainly
involves the use of probability theory and various distributions to model risks.
This can be dated back to the 1700s, when the distributions associated with
Bernoulli, Poisson, and Gauss were used to model normal events; generalized
Pareto distributions and the generalized extreme value distribution are used to
model extreme events. The dynamics of risks mainly deals with stochastic
process theory in risk management; this can be dated back to the 1930s, when
Markov processes, Brownian motion and Lévy processes were developed. The
dependence of risks mainly deals with correlation among risk factors; various
copula functions are built, and Fourier transformations are also used here.
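
To make the probability property concrete, the following minimal Python sketch (numpy and scipy assumed available; the loss data are simulated purely for illustration) fits a normal distribution to the bulk of a heavy-tailed loss sample and a generalized Pareto distribution to the exceedances over a high threshold, then compares the tail probabilities the two models imply:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    losses = rng.standard_t(df=4, size=5000)          # hypothetical heavy-tailed daily losses

    # "Normal" events: fit a normal distribution to the whole sample
    mu, sigma = stats.norm.fit(losses)

    # Extreme events: fit a generalized Pareto distribution to exceedances over a threshold
    threshold = np.quantile(losses, 0.95)
    excess = losses[losses > threshold] - threshold
    shape, _, scale = stats.genpareto.fit(excess, floc=0.0)

    # Tail probability of a loss three units beyond the threshold under each model
    x = threshold + 3.0
    p_normal = stats.norm.sf(x, mu, sigma)
    p_gpd = 0.05 * stats.genpareto.sf(x - threshold, shape, loc=0.0, scale=scale)
    print(f"P(loss > {x:.2f}): normal {p_normal:.2e}, GPD tail {p_gpd:.2e}")

The heavier GPD tail illustrates why extreme-value distributions, rather than the normal distribution alone, are used for extreme events.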

Technical tools

Many tools have been developed to aid emergency management. Reported


examples from geoscience include image enhancement through combining
multiple images into a clearer composite image (mosaicing).15 Televideos and

wireless communications devices have been applied to aid quick response to


disasters in the oil and natural gas sector.16 Statistical analysis of growth models
has been used to categorize disasters into three categories of growth (damped
exponential, normal, fluctuating), so that news stories can be monitored at the
onset of a disaster to better predict which events will dissipate, as opposed
to those that will grow into serious disasters.17
of multi-objective evolutionary algorithms in combination with geographical
information systems have been developed to support evacuation planning.18
Even open source software products play a role.19 SAHANA (Sinhalese for
relief) is a Sri Lankan open source system built after the 2004 Asian tsunami.
SAHANA supports finding missing people, managing volunteers and aid
resources, and other disaster-related activities. SAHANA was deployed in the
2008 Burma cyclone and the 2008 Sichuan earthquake. Another open source
system is Innovative Support to Emergencies, Diseases and Disasters (InSTEDD),
started in 2006 by Larry Brilliant of the Google Foundation. InSTEDD is
designed to process data from multiple sources (weather reports, news, field
reports, sensor data), to detect and manage disease outbreaks.

Emergency management

Local, state and federal agencies in the United States are responsible for
responding to natural and man-made disasters. This is coordinated at the federal
level through the Federal Emergency Management Agency (FEMA). While
FEMA has done much good, it is almost inevitable that more is expected of it
than it delivers in some cases, such as hurricane recovery in Florida in various
years and the Gulf Coast from Hurricane Katrina in 2005. National security
is the responsibility of other agencies, military and civilian (Department of
Homeland Security – DHS). They are supported by non-governmental agen-
cies such as the American Red Cross. Again, these systems seem to be effective
for the greater part, but are not failsafe, as demonstrated by Pearl Harbor and
9/11/2001.
Disasters are abrupt, calamitous events that cause great damage, loss of lives,
and destruction. Emergency management is accomplished in every country
to some degree. Disasters occur throughout the world, in every form: natural,
man-made, and combination. Disasters by definition are unexpected, and tax
the ability of governments and other agencies to cope. A number of intelli-
gence cycles have been promulgated, but all are based on the idea of:

1. Identification of what is not known;
2. Collection – gathering information related to what is not known;
3. Production – answering management questions;
4. Dissemination – getting the answers to the right people (Mueller, 2004).

Information technology has been developing at a very rapid pace, creating a


dynamic of its own. Many technical systems have been designed to gather,
process, distribute, and analyze information in emergencies. These systems
include communications and data. Tools that help emergency planners
communicate include telephones, whiteboards, and the internet. Tools to aid in dealing
with data include database systems (for efficient data organization, storage, and
retrieval), data mining tools (to explore large databases), models to deal with
specific problems, and combinations of these resources into decision support
systems to assist humans in reaching decisions quickly or expert systems to
make decisions rapidly based on human expertise.

Emergency management support systems

A number of software products have been marketed to support emergency


management. These are often various forms of a decision support system. The
Department of Homeland Security in the US developed a National Incident
Management System. A similar system used in Europe is the Global Emergency
Management Information Network Initiative (Thompson et al., 2006). While
many systems are available, there are many challenges due to unreliable inputs
at one end of the spectrum, and overwhelmingly massive data content at the
other extreme.
Decision support systems (DSS) have been in existence since the early 1970s.
A general consensus is that DSSs consist of access to tailored data and custom-
ized models with real-time access for decision makers. With time, as computer
technology has advanced and as the internet has become more available, there
has been a great deal of change in what can be accomplished. Database systems
have seen tremendous advances since the original concept of DSS. Now weather
data from satellites can be stored in data warehouses, as can masses of point-
of-sale scanned information for retail organizations, and output from enter-
prise information systems for internal operations. Many kinds of analytical
models can be applied, ranging from spreadsheet models through simulations
and optimization models. While the idea of DSS is now over 30 years old, it can
still be very useful in support of emergency management. It still can take the
form of customized systems accessing specified data from internal and external
sources, as well as a variety of models suitable for specific applications needed
in emergency management situations. The focus is still on supporting humans
making decisions, but if problems can be so structured that computers can
operate on their own (HAL in 2001: A Space Odyssey comes to mind), decision support systems
evolve into expert systems. Expert systems can be, and have been, used to
support emergency management.
An example decision support system directed at aiding emergency response
is the Critical Infrastructure Protection Decision Support System (CIPDSS).20

CIPDSS was developed by Los Alamos, Sandia, and Argonne National


Laboratories, sponsored by the Department of Homeland Security in the US.
The system includes a range of applications to organize and present infor-
mation, as well as system dynamics simulation modeling of critical infra-
structure sectors, such as water, public health, emergency services, telecom,
energy, and transportation. The system also includes multiattribute utility
functions based upon interviews with infrastructure decision-makers. CIPDSS
thus serves as an example of what can be done in the way of an emergency
management support system.
Other systems in place for emergency management include the US National
Disaster Medical System (NDMS), providing virtual centers designed as a focal
point for information processing, response planning, and inter-agency coord-
ination. Systems have been developed for forecasting earthquake impact21 or
the time and size of bioterrorism attacks. This demonstrates the need for DSS
support not only during emergencies, but also in the planning stage.

Conclusions

Emergencies of two types can arise. One is repetitive – hurricanes have


hammered the Gulf Coast of the US throughout history, and will continue to
do so (just as tornadoes will hit the Midwest and typhoons the Pacific). A great
deal of experience and data can be gathered for those events, and our weather
forecasting systems have done a very good job of providing warning systems
for actual events over the short term of hours and days. However, humans will
still be caught off-guard, as with Hurricane Katrina. The other basic type of
emergency is the surprise. These can be natural (such as the tsunami of 2004) or
human-induced, such as September 11, 2001. We cannot hope to anticipate,
nor will we find it economic to massively prepare for, every surprise; we don’t
think that, for example, a good asteroid collision prevention system would be a
wise investment of our national resources. On the other hand, there is growing
support for an effective global warming prevention system.
The first type of emergency is an example of risk – we have data to estimate
probabilities. The second type is an example of uncertainty – we can’t accur-
ately estimate probabilities for the most part. (People do provide estimates of
the probability of asteroid collision, but the odds are so small that they don’t
register in our minds. Global warming probabilities are near certainty, but
the probability of a compensating cooling event in the near future currently
evades calculation.)
We have reviewed some of the tools that have been reported for use in
supporting disaster or emergency management. This chapter draws on papers that
report on the effectiveness of response systems in the 2008 Sichuan earthquake,
on two papers addressing tools, one developed to improve evaluation of damage,
the other applying genetic algorithm models to aid rapid decision-making in
emergencies, and on a fourth paper addressing financial tools to deal with the
insurance aspects of emergency response.
Thus the crux of the problem in supporting emergency management is that
tools exist to gather data, and tools such as data mining exist to try to make some
sense of it, but we usually won't have the particular data that would be useful
to make decisions in real time. It is also reasonably certain that after any event,
critics will be able to review what data was available and point to tell-tale
information that could have enlightened decision-makers at the time but didn't.
For example, after World War II the US was flooded with people who thought
that the US Navy should have known the Japanese would bomb Pearl Harbor.
CNN and national networks have very predictable scripts for every emergency,
with reporters playing to the camera, pointing out the gross malfeasance of
those in control in not knowing, preparing for, and countering whatever
happened. That is how they raise their ratings – the audience likes conspiracy
theories. But having data is not enough – human minds have to comprehend
the core information, and the more information that is provided, the harder
that is. The solution is not less information; rather, filters that focus attention
on the critical core would help. The next problem, though, is that we don't
know how to create such filters, especially in new problem domains.
Emergency management is thus a no-win game. However, someone has to
do it. They need to do the best they can in preplanning:

1. gathering and organizing data likely to be pertinent;
2. developing action plans that can be implemented at the national, regional, and local level; this can include the development and implementation of building codes, environmental awareness, and insurance systems;
3. organizing people into teams to respond nationally, regionally, and locally, trained to identify events and to respond with all needed systems (rescue, medical, food, transportation, control, etc.); this can include the training and development of planners and managers of response teams.
18
Pricing of Carbon Emission Exchange
in the EU ETS

Introduction

Carbon emission exchange originated from emission trading proposed by


economists in the 1970s. Carbon trading, an important environmental policy
in market economy countries, has emerged as the foremost policy instrument
for reducing worldwide greenhouse gas emissions. The United Nations
Framework Convention on Climate Change (UNFCCC) was opened for signature
on June 4, 1992. The Kyoto Protocol, passed in December 1997 as the first
additional protocol to the convention, uses the market mechanism as a new way
to resolve the issue of greenhouse gas reduction, of which carbon emission is the
most prominent. Thus carbon emission rights became a tradable commodity,
leading to the emergence of a carbon emission exchange mechanism.
In accordance with the reduction commitment made in the Kyoto Protocol,
some countries from the European Union (EU) signed a burden-sharing
agreement in June 1998. At the same time, the European Commission
released a report, entitled Climate Change: Towards an EU Post-Kyoto Strategy,
which called for an exchange system in the European Union before 2005.
A draft of the European Union Emission Trading System (EU ETS) was
submitted and discussed formally in 2001, and passed by the European
Parliament in October 2002. One year later, the revised version was passed by
the European Parliament and Council in July, and Emissions Trading
Directive 2003/87/EC came into effect on October 13, 2003, authorizing the EU ETS
to start exchanging carbon emissions from January 2005. Thus the
European emission exchange system was established.
The European Union Emission Trading System (EU ETS) is not only the largest
multinational emission trading scheme in the world, but is also a major pillar
of the EU climate policy aimed at efficiently reaching the European emission


commitment targets assigned by the Kyoto Protocol at the minimum cost. The
EU ETS comprises three trading periods or phases, the first from January 2005
to December 2007, the second from January 2008 to December 2012, and the
third from January 2013 to December 2020.
Under the EU ETS, each member state is assigned a national emission quota,
precisely stated in its National Allocation Plan (NAP), which has to be approved
by the EU Commission. National allowances are then distributed among
industrial operators with permits, and actual emission amounts are supervised
in line with the NAP. Allowances expire at the end of each year and may be
traded or privately reassigned, on the over-the-counter market or on one of the
European climate exchanges, such as the European Climate Exchange, BlueNext,
PowerNext, Nord Pool and others.
As a commodity, the supply and demand of allowances in the EU ETS market
determines the price for carbon, like other bulk commodities. A greater supply
of allowances will lead to a lower carbon price. On the other hand, too much
demand for allowances will result in a higher carbon price.
During Phase I, all the participating countries allocated most of their allowances
for free. However, the share of auctioned allowances increased greatly during
Phase II, which was more market-driven. In Phase III, a substantial number of
permits are being allocated centrally, with a large share auctioned, a different
method from that used in the National Allocation Plan.
Within a trading phase, banking and borrowing are allowed. For example, a
2009 EUA could be used in 2010 (banking) or in 2008 (borrowing). However,
cross-phase borrowing and banking are not allowed: member states do not have
the discretion to bank EUAs from Phase I to Phase II.
The price mechanism of the carbon emission exchange is one of the crucial
factors for its success, given the increasingly intense worldwide attention to
carbon emission reduction. The advanced price mechanism of the EU ETS is
based on five years of experience, and a few scholars have conducted initial
theoretical and empirical research studying it. Cities in China, such as Beijing,
Tianjin, Shanghai, Wuhan, Changsha, Shenzhen and Kunming, have gradually
set up carbon emission exchanges, but they are only at a preliminary stage and
trading volumes are expected to be low. Therefore, studying the EU ETS
experience seems useful in establishing the Chinese carbon emission exchange
system.

Literature review

There is growing interest in carbon emission exchange for scholars, espe-


cially with the development of the EU ETS over the past five years. Several
scholars have studied the EU ETS price mechanism, which covers three parts:

micro-simulation models, empirical research, and determinants of carbon


emission exchange.
We begin with simulation models. Scholars have studied numerical micro
simulation models regarding carbon emission exchange, frequently using the
marginal abatement cost curve (MACC) as an analysis tool. Kainuma et al.
(1999)1 generated an Asia-Pacific Integrated Model (AIM), which treats carbon
rights as a constraint on a production function, and as a result emission targets
and trading channels become key factors in influencing the price schemes of
carbon emission. The Regional Integrated Model of Climate and the Economy
(RIMCE) proposed by Nordhaus and Boyer (1999)2 and by Nordhaus (2001)3
differs from AIM in the way that it includes the participation of the USA and
that the emission targets of each country are determinants of CO2 prices. Other
researchers4 simulate the allowance price, and their calculations yield between
€6 and €35 per ton of CO2, depending on their models and settings. However,
these models only reflect the equilibrium price of carbon emission allowances
and cannot simulate price fluctuations over periods.
Carbon emission trading develops quickly, and some scholars have conducted
empirical research on the price mechanism using the abundance of trading data.
Daskalakis et al. (2009)5 found that market participants adopt non-arbitrage
standard pricing in order to check emission allowance prices and derivatives.
Uhrig-Homburg and Wagner (2006)6 studied the optimal design of deriva-
tives based on emission allowances. Paolella and Taschini (2006)7 integrated
both the EU ETS market and the US Clean Air Act Amendments, providing
an econometric analysis investigating the unconditional tail behavior and the
heteroskedastic dynamics in the returns on CO2 and SO2 allowances in order
to set up hedging and purchasing strategies. Benz and Trück (2006)8 argued
that the emission allowance market is different from the classical stock market
as the value of stocks depends on the profit expectations of firms, but the
price of carbon emission allowances is determined by the supply and demand
of carbon permits in the market. Using the concept of convenience yields,
Borak et al. (2006)9 focused on term structure and stochastic properties, and
showed that the carbon emission allowances market differs from the existing
commodities markets. Seifert et al. (2008)10 set an equilibrium model appro-
priate to features of the EU ETS to analyze spot price dynamics of CO2. They
found that an adequate CO2 process did not necessarily follow any seasonal
patterns. Daskalakis et al. (2009) analyzed the effect on pricing of banking and
borrowing design among different phases in three exchanges under the EU
ETS and provided corresponding interphase and intraphase pricing and arbi-
traging strategies. But none of the scholars provided both a pricing model and
an empirical analysis considering the two phases of the EU ETS.
Identifying the determinants of allowances price is yet another research area.
Burtraw (1996)11 categorized the influencing factors into three groups: policy

issues, market fundamentals and the demand and supply of carbon emission
allowances. Burniaux (2000)12 alternatively assumed that the price of fuels,
the average carbon content in energy usage, energy substitution possibilities,
and the price and availability of clean substitute fuels are the driving factors
for CO2 pricing. Considine (2000)13 studied emission movements considering
the impact of weather. The hotter and colder seasons have a great impact on
energy consumption, and as a result the demand for carbon emission allow-
ances becomes much larger. Therefore, temperature variation is one of the
driving factors. On the other hand, the prices of oil and gas have a positive
effect on carbon emission allowances prices. Sijm et al. (2005)14 come to a
similar conclusion, and find that energy prices, characteristics of the energy
sector, demand and supply of allowances, and economic structure play key
roles in the formation of allowances price. Ciorba et al. (2001)15 pointed out
that the price of energy, the level of emission, the geographical features of a
country and its climatic conditions and temperature are the most influential
factors in setting allowances prices. Springer (2003)16 indicated that the
determinant of prices mainly lies in the level of emissions. Springer and Varilek
(2004)17 explore other factors that could affect the cost of compliance with
the Kyoto Protocol, such as the number of sectors in the economy that do
not participate in either the trading of allowances or the inter-period transfer
of allowances. From an econometric perspective, Mansanet-Bataller, Pardo and
Valor (2007)18 estimated the effects of some determinants of the EU ETS daily
forward prices in 2005. Explanatory variables were oil, natural gas and coal
prices. That study also included weather variables, which are generally important
determinants of allowances prices. In summary, most studies
concluded that the level of emission, energy price and weather variables are the
main driving factors in the formation of the carbon emission price.
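
As a rough illustration of how such determinant studies are set up (not a reproduction of any of the papers cited above), the sketch below regresses simulated EUA returns on simulated oil, gas, coal and temperature series using ordinary least squares in Python with statsmodels; all variable names and data are hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500                                            # hypothetical daily observations
    X = pd.DataFrame({
        "oil": rng.normal(0, 0.02, n),                 # oil price returns
        "gas": rng.normal(0, 0.03, n),                 # natural gas price returns
        "coal": rng.normal(0, 0.015, n),               # coal price returns
        "temp_anomaly": rng.normal(0, 1.0, n),         # temperature deviation from seasonal norm
    })
    # EUA returns simulated as a linear function of the drivers plus noise
    eua = (0.3 * X["oil"] + 0.2 * X["gas"] + 0.1 * X["coal"]
           + 0.001 * X["temp_anomaly"] + rng.normal(0, 0.02, n))

    results = sm.OLS(eua, sm.add_constant(X)).fit()
    print(results.summary())                           # signs and significance of each determinant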
In general, very few studies have used data from carbon emission exchanges
to test quantitative models for allowances pricing. Furthermore, there is no
literature on exploring variations of price dynamics with different phases of
the EU ETS. Therefore, in this study we have tried to investigate the EUA price
dynamics and select an appropriate model in order to investigate the distinc-
tions and improvements between the two phases.

Price movements

In this section, the price movements in the EU ETS over five years are described
and explained. Data and news are taken from news and weekly trading reports of
Point Carbon and Climate Group, two key websites in carbon emission research.
Figure 18.1 shows the price movements of carbon emission in Phase I, a
trial phase with large price volatility. It consisted of five stages in price move-
ments. The first stage was recorded from the start of the phase to the middle

Figure 18.1 EUA price movement in Phase I

of July 2005; the price was driven up to €28.9 by the European Commission's
announcement of further cuts in the National Allocation Plans (NAPs) for Italy,
Poland and the Czech Republic, and its denial of the United Kingdom's request
to extend its amount of certificates. The second stage was from the middle of
July to December 2005; early participants in the EU ETS sold their allowances
at high prices before the price dropped when new members from Eastern
Europe entered the market. The third stage covered January to April 2006;
most research reports announced that CO2 discharges exceeded the EUA supply,
and the carbon emission price went up. The fourth stage was the period from
May to December 2006; auditing and supervision by the European Union was
minimal, and every country distributed excessive free allowances to its industrial
operators. The 2005 CO2 data then indicated that the allowances in the market
were greatly oversupplied, and the price dropped dramatically over a few trading
days. The fifth stage was between January and December 2007; European
countries released their Phase II NAPs and the rule prohibiting banking and
borrowing between Phases I and II was clarified. The EUA price fell to zero.
The price movement in Phase II was stable (Figure 18.2) with reduction
commitment in accordance with the Kyoto Protocol. The period from January
2008 to 2010 could be divided into three stages. The first between January
and June 2008; with weak economic expectations and rising coal prices, the
EUA price was reduced to €18.84. However, it rose back up to €28.59 due to the
impact of oil and gas prices. The second stage was from July 2008 to February
2009; as the financial crisis burst out all over the world, demand for oil and
gas dropped significantly, which led to reduced demand for EUA. At the same
time, the ratio for free allotments and tradable allowances rose. The EUA
Figure 18.2 EUA price movement in Phase II

price declined, with low trading volumes in this stage. The third stage was
from March 2009 to 2010; oil, gas and power prices went up with economic
recovery and the EUA price increased from €7.9 to €15, over one year fluctu-
ating between €12 and €15.
These data show that the price volatility in Phase I was much larger than in
Phase II, and was primarily influenced by political issues and trading rules. First,
dispensed allowances exceeded discharged CO2, which distorted the EUA price,
and the price mechanism was especially disturbed by political announcements:
for example, the European Union announcement regarding the oversupply of
allowances led the EUA price to drop to €10 within several trading days. Second,
micro-level statistical data were absent, which is one of the reasons for the
oversupply of EUAs. Consequently, the high portion of free allowances distributed
to enterprises pushed the EUA price down. Power enterprises received much
higher free allowances than other enterprises, and sold their allowances for
profit. This made the existing EUAs superfluous, working against the formation
of a rational price in Phase I.
In Phase II, the EU ETS improved the rules on NAP supervision, distribution of
allowances and allotment methods. As a result, the price mechanism returned
to fundamentals: energy price, power price, and weather variables. An advanced
trading and price mechanism has preliminarily been formed.

Model, data and sample

Generally, the right for carbon emission is a public good with externality,
and its impact is not directly reflected in market cost and price. However,
political conventions, such as the United Nations Framework Convention on
Climate Change (UNFCCC) and the Kyoto Protocol, led to market scarcity and

added economic value as a special commodity. Carbon emission exchange


makes use of basic market functions in governing climate change, and the
price of carbon emission can reflect resource scarcity and governance cost.
The rights to carbon emission take on attributes of a commodity through the
exchange, which means every participant treats the cost of carbon discharge
as an important factor in investment, with price signals providing guidance to
internalize external environmental costs. With the expansion of the carbon
emission market, improvement of trading transparency, and monetization of
carbon, carbon emission rights become a financial asset with high liquidity
due to the participation of more financial institutions. The carbon emission
market is an important emerging financial market, and it is appropriate to use
financial market models to study its price mechanism. We use EGARCH19 to
model the price volatility of carbon emission exchange in the EU ETS.
The EU ETS includes many carbon emission exchanges: PowerNext, Nord
Pool and the European Climate Exchange (ECX) are the top three. We examine
the spot close price of carbon emission exchange on the ECX over the period April
2005 to April 2010. This period is divided into two parts, treated as two independent
samples based on the European Commission directive. The first part is from
April 2005 to December 2007, the second from January 2008 to April 2010. All
data are collected from Bloomberg.
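
The preprocessing behind the analysis that follows is simply the logreturn transformation r_t = ln(P_t / P_{t-1}) and a split at the Phase I/Phase II boundary. A minimal Python sketch (pandas and scipy assumed; the file and column names are hypothetical, standing in for the Bloomberg extract):

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical CSV of ECX daily spot close prices with columns: date, close
    prices = pd.read_csv("eua_ecx_spot.csv", parse_dates=["date"], index_col="date")["close"]

    logreturns = np.log(prices / prices.shift(1)).dropna()

    # Split at the directive boundary between the two phases
    phase1 = logreturns.loc[:"2007-12-31"]
    phase2 = logreturns.loc["2008-01-01":]

    def describe(r):
        # Descriptive statistics of the kind reported in Tables 18.1 and 18.2
        return {"N": r.size, "mean": r.mean(), "median": r.median(),
                "max": r.max(), "min": r.min(), "std": r.std(),
                "skew": stats.skew(r), "kurtosis": stats.kurtosis(r, fisher=False)}

    print(pd.DataFrame({"Phase I": describe(phase1), "Phase II": describe(phase2)}))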

Analysis of EUA logreturns


We analyze the volatility of EUA logreturns in order to study its price mech-
anism. Figures 18.3 and 18.4 show logreturns of EUA. The volatility of logre-
turns in the first phase is apparently much larger than in the second. In the
first phase, logreturn fluctuates between 1.0 and −1.5 and some peaks appear
in April 2006, in the middle of the time frame, and at the end of 2007. In the


Figure 18.3 Logreturns of EUA in Phase I




Figure 18.4 Logreturns of EUA in Phase II

Table 18.1 Descriptive statistics of EUA logreturns in Phase I

Series                       N     Mean     Median   Max     Min      Std dev   Skew     Kurt
In-sample (2005.4–2006.12)   433   −0.002   0        0.418   −0.396   0.046     −0.489   32.792
Out-of-sample (2007.1–12)    237   −0.027   0        0.916   −1.386   0.145     −2.589   41.592

Table 18.2 Descriptive statistics of EUA logreturns in Phase II

Series                       N     Mean     Median     Max     Min      Std dev   Skew     Kurt
In-sample (2008.1–2009.12)   458   −0.001   0          0.111   −0.127   0.029     −0.204   4.770
Out-of-sample (2010.1–4)     68    0.003    4.26E-05   0.047   −0.033   0.019     0.261    2.552

second phase, volatility is much smaller than in the first, fluctuating between
0.15 and −0.15.
To estimate and test the price mechanism of EUA, each phase is divided into
two sub-periods, one for estimation and the other for forecasting. The estimation
period of Phase I is from April 2005 to December 2006, with the remainder used
for forecasting. For the second phase, the estimation period is from January 2008
to December 2009, with the remainder used for forecasting. Tables 18.1 and 18.2
give descriptive statistics for the two phases.

The skewness of the estimation and forecasting samples in Phase I is −0.489 and
−2.589, while the kurtosis measures are 32.792 and 41.592 respectively. Similarly,
the skewness of the estimation and forecasting samples in Phase II is −0.204 and
0.261, with kurtosis of 4.770 and 2.552. These data show large skewness and
kurtosis. Both samples in Phase I are left-skewed, while in Phase II the estimation
sample is left-skewed and the forecasting sample right-skewed. Due to asymmetry,
excess kurtosis and heavy tails, the normal distribution does not fit the data very
well. Given the large volatility of logreturns, the model used must convey and
explain the data characteristics described above.

Time series test


An augmented Dickey–Fuller (ADF) unit root test is conducted on the logreturns
and on their first-order differences in order to test time series stationarity. For
Phase I, the test statistics are larger in absolute value than the critical values at
the 1%, 5% and 10% levels; the tests are significant at all of these levels and the
null hypothesis of a unit root is rejected. We can conclude that both the logreturns
and their first-order difference series are stationary.
Similarly, the ADF tests of the logreturns and their first-order differences in
Phase II are also significant at the 1%, 5% and 10% levels, and the null hypotheses
are rejected. The logreturns and their first-order difference series in Phase II are
also inferred to be stationary.
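
A sketch of this stationarity check in Python, using the augmented Dickey–Fuller test from statsmodels and reusing the phase1 and phase2 logreturn series built in the earlier sketch:

    from statsmodels.tsa.stattools import adfuller

    def adf_report(series, label):
        # adfuller returns: statistic, p-value, lags used, observations, critical values, best IC
        stat, pvalue, _, _, crit, _ = adfuller(series, autolag="AIC")
        print(f"{label}: ADF statistic {stat:.3f}, p-value {pvalue:.4f}")
        for level, cv in crit.items():
            print(f"  critical value ({level}): {cv:.3f}")

    adf_report(phase1, "Phase I logreturns")
    adf_report(phase1.diff().dropna(), "Phase I first difference")
    adf_report(phase2, "Phase II logreturns")
    adf_report(phase2.diff().dropna(), "Phase II first difference")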

Table 18.3 The ADF test of EUA logreturns in Phase I

t-Statistic Probability

Augmented Dickey–Fuller test statistic −24.631 0.0000


Test critical value 1% level −3.440
5% level −2.866
10% level −2.569

Table 18.4 The ADF test of EUA first-order difference in Phase I

t-Statistic Probability

Augmented Dickey–Fuller test statistic −17.751 0.0000


Test critical value 1% level −3.440
5% level −2.866
10% level −2.569

Table 18.5 The ADF test of EUA logreturns in Phase II

t-Statistic Probability

Augmented Dickey–Fuller test statistic −17.023 0.0000


Test critical value 1% level −3.440
5% level −2.866
10% level −2.569

Table 18.6 The ADF test of EUA first-order difference in Phase II

t-Statistic Probability

Augmented Dickey–Fuller test statistic −15.201 0.0000


Test critical value 1% level −3.440
5% level −2.866
10% level −2.569

[Correlogram output omitted: for each phase, columns give lag, autocorrelation (AC), partial autocorrelation (PAC), Q-statistic and probability.]

Figure 18.5 Series correlation of EUA logreturns in Phases I and II

Time series correlation tests are also conducted. Figure 18.5 shows that the
logreturns in Phase I and Phase II are not autocorrelated. The Durbin–Watson
statistics are 1.9996 and 1.9852 respectively, both close to 2, indicating that
there is no autocorrelation.

GARCH effect test


In the third step, an ARCH LM test is used to check whether there is a GARCH
effect. Figures 18.3 and 18.4 show a clustering effect in the EUA logreturns,
suggesting heteroskedasticity. An ARCH LM test is conducted on the residual
series of EUA logreturns in Phase I and Phase II respectively. A GARCH effect
was found in both phases (see Table 18.7).
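
A minimal Python version of this diagnostic, applying statsmodels' ARCH LM test to the demeaned logreturns of each phase (again reusing the series from the earlier sketches; the lag length is an assumption here):

    from statsmodels.stats.diagnostic import het_arch

    for label, r in [("Phase I", phase1), ("Phase II", phase2)]:
        resid = r - r.mean()                           # residuals from a constant-mean model
        lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(resid, nlags=1)
        print(f"{label}: ARCH LM statistic {lm_stat:.4f}, p-value {lm_pvalue:.4f}")

A small p-value would indicate remaining ARCH effects, motivating a GARCH-type variance specification.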

Table 18.7 The ARCH LM test for EUA logreturns in Phases I and II

ARCH test

ARCH-statistic Critical value Probability

Phase I (2005.4 –2007.12) 6.4223e-004 3.842 0.980


Phase II (2008.1–2010.4) 2.3563e-004 3.842 0.867

Note: Significance level α = 0.05.

Table 18.8 The Akaike info criterion (AIC) and Schwarz criterion (SC) for the estimated
model in Phase I

Model Akaike info criterion (AIC) Schwarz criterion (SC)

Egarch (1, 1)-t −4.290 −4.243


Garch (1, 1)-normal −4.121 −4.092
Garch (1, 1)-GED −4.272 −4.235

Table 18.9 The Akaike info criterion (AIC) and Schwarz criterion (SC) for the estimated
model in Phase II

Model Akaike info criterion (AIC) Schwarz criterion (SC)

Egarch (1, 1)-t −4.406957 −4.361904


Garch (1, 1)-normal −4.362285 −4.335253
Garch (1, 1)-GED −4.389862 −4.353820

Method selection
The above evidence shows that GARCH-family models fit the price mechanism
of the carbon emission exchange well. Three typical models, EGARCH (1, 1)-t,
GARCH (1, 1)-normal and GARCH (1, 1)-GED, are chosen for comparison in
order to select the best model in the GARCH family. To evaluate the estimated
models, the Akaike info criterion (AIC) and Schwarz criterion (SC) are used,
preferring the most parsimonious model; the EGARCH (1, 1)-t model is found
to be the best. In Phase I, the AIC values are −4.290, −4.121 and −4.272
respectively, while the SC values are −4.243, −4.092 and −4.235 (see Table 18.8).
Although the differences between EGARCH (1, 1)-t and GARCH (1, 1)-GED are
relatively small, EGARCH (1, 1)-t is still the best model on both criteria.
Similarly, in Phase II the AIC values are −4.407, −4.362 and −4.390, and the
SC values are −4.362, −4.335 and −4.354 respectively (see Table 18.9). The
differences between EGARCH (1, 1)-t and GARCH (1, 1)-GED are again very
small, and EGARCH (1, 1)-t is still preferred.
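
This comparison can be reproduced in outline with the Python arch package (an illustrative alternative to the Eviews workflow used here, assuming the package is installed and reusing the phase1 series from the earlier sketches). Returns are scaled by 100, as the package recommends, so the AIC/SC levels will not match the tables, but the ranking logic is the same:

    from arch import arch_model

    candidates = {
        "EGARCH(1,1)-t":     dict(vol="EGARCH", p=1, o=1, q=1, dist="t"),
        "GARCH(1,1)-normal": dict(vol="GARCH", p=1, q=1, dist="normal"),
        "GARCH(1,1)-GED":    dict(vol="GARCH", p=1, q=1, dist="ged"),
    }

    for name, spec in candidates.items():
        res = arch_model(100 * phase1, mean="Constant", **spec).fit(disp="off")
        print(f"{name}: AIC {res.aic:.3f}, BIC {res.bic:.3f}")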

In conclusion, EGARCH (1, 1)-t is an adequate approach for modeling EUA


logreturns.

Estimation and forecasting


As mentioned above, GARCH models are appropriate, and EGARCH (1, 1)-t is
the best of the three specifications for EUA price estimation and forecasting.
Therefore, an EGARCH (1, 1)-t model is used to estimate and forecast EUA price
in Phase I and Phase II. All estimation and forecasting were carried out using
Eviews 6.0 and MATLAB 2009a.

(1) Estimation and forecasting of logreturns in Phase I


In Phase I, the estimation period is from April 2005 to December 2006, and the
forecasting period is from January to December 2007.
Table 18.10 shows that all the coefficients are significant at the α = 0.05 level.
The T-DIST.DOF value of 3.569 means that the EGARCH(1,1)-t model with a
low degree of freedom can describe EUA pricing very well. The AIC and SC
values are −4.290 and −4.243, while the Durbin–Watson statistic of 1.696 shows
a low level of autocorrelation. An EUA estimation model for Phase I is thus
obtained.
Figure 18.6 shows residual, standard deviation and logreturns series in Phase
I. As expected, the trend of logreturns and residual fit closely, because they
have the same distribution.
The Phase I estimation model is used to forecast returns from January to
December 2007, and the results are tested. Root mean squared error, mean
absolute error and mean absolute percent error values are 0.148, 0.061 and
43.038 respectively (see Table 18.11). This estimation model can predict the EUA
price trend very well and can be used to forecast EUA price in Phase I.
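
A hedged sketch of this estimation/forecast split and the error measures, again with the Python arch package and the phase1 series from the earlier sketches (an illustration of the procedure, not the authors' Eviews/MATLAB implementation). For a constant-mean EGARCH model the point forecast of the return is the estimated mean, so the error measures below evaluate that simple benchmark:

    import numpy as np
    from arch import arch_model

    est = 100 * phase1.loc[:"2006-12-31"]              # estimation window
    hold = 100 * phase1.loc["2007-01-01":]             # forecasting window

    fit = arch_model(est, mean="Constant", vol="EGARCH",
                     p=1, o=1, q=1, dist="t").fit(disp="off")

    forecast = np.full(hold.shape[0], fit.params["mu"])   # constant-mean point forecast
    errors = hold.values - forecast

    rmse = np.sqrt(np.mean(errors ** 2))
    mae = np.mean(np.abs(errors))
    nonzero = hold.values != 0                          # MAPE is undefined for zero returns
    mape = 100 * np.mean(np.abs(errors[nonzero] / hold.values[nonzero]))
    print(f"RMSE {rmse:.3f}, MAE {mae:.3f}, MAPE {mape:.1f}%")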

Table 18.10 Estimation of EUA logreturns in Phase I

Variable Coefficient Std. error z-Statistic Probability

Variance equation
C(1) −1.040 0.234 −4.436 0.000
C(2) 0.530 0.094 5.640 0.000
C(3) −0.121 0.057 −2.141 0.032
C(4) 0.903 0.029 31.259 0.000
T-DIST.DOF 3.569 0.709 5.033 0.000
Akaike info criterion −4.290    Durbin–Watson stat 1.696
Schwarz criterion −4.243

[Three panels: innovations, conditional standard deviations, and returns over the Phase I sample.]

Figure 18.6 Residual, standard deviation and logreturns series in Phase I

Table 18.11 EUA Logreturns forecasting in Phase I

Model            Variable no.   Root mean squared error   Mean absolute error   Mean absolute percent error
EGARCH (1,1)-t   4              0.148                     0.061                 43.038

(2) Estimation and forecasting of logreturns in Phase II


In Phase II, the estimation period is from January 2008 to December 2009
and the forecasting period is from January to April 2010. Empirical results are
illustrated in Table 18.12 using the same method. All the coefficients are
significant at the α = 0.05 level. The T-DIST.DOF value of 8.423 means that the
EGARCH (1,1)-t model with a low degree of freedom can describe EUA pricing
very well. The AIC and SC values are −4.407 and −4.362, while the Durbin–Watson
statistic is 1.722, close to 2, exhibiting a low level of autocorrelation. An EUA
estimation model for Phase II is thus also obtained.
Figure 18.7 shows residual, standard deviation and logreturns series in Phase
II. The trend of logreturns and residual series match those of Phase I well.
On the other hand, volatility in Phase II is much smaller than in Phase I (see
Figures 18.6 and 18.7).

Table 18.12 Estimation of EUA logreturns in Phase II

Variable Coefficient Std. error z-Statistic Probability

Variance equation
C(1) −0.312 0.110 −2.840 0.005
C(2) 0.126 0.053 2.363 0.018
C(3) −0.120 0.031 −3.881 0.000
C(4) 0.971 0.012 81.385 0.000
T-DIST.DOF 8.423 4.023 2.093 0.036
Akaike info criterion −4.407    Durbin–Watson stat 1.722
Schwarz criterion −4.362

[Three panels: innovations, conditional standard deviations, and returns over the Phase II sample.]

Figure 18.7 Residual, standard deviation and logreturns series in Phase II

The Phase II estimation model is used to forecast returns from January to
April 2010, and the results are analyzed. Root mean squared error, mean absolute
error and mean absolute percent error values are 0.019, 0.015 and 100.000
respectively (see Table 18.13). This estimation model can predict the EUA price
trend very well, and can be used to forecast EUA price in Phase II.

Table 18.13 EUA Logreturn forecasting in Phase II

Model           Variable no.   Root mean squared error   Mean absolute error   Mean absolute percent error
EGARCH(1,1)-t   4              0.019                     0.0149                100.000

(3) Comparison of results


As shown in these results, the EGARCH (1,1)-t model estimates and forecasts
EUA price very well in both Phase I and Phase II, and is appropriate for predicting
EUA price. However, the two phases have distinctive price characteristics. First,
the difference in price volatility between the two phases is very large; the EUA
price descriptions in Figures 18.1 and 18.2, the logreturns of EUA in Figures 18.3
and 18.4, Tables 18.1 and 18.2, and the residual, standard deviation and logreturn
series in Figures 18.6 and 18.7 all show that large differences exist. The volatility
in Phase I is substantially greater than that in Phase II. Second, the price-driving
mechanisms in each phase are different. The estimation and forecasting results
in Tables 18.8, 18.9, 18.10 and 18.11 show that EGARCH (1,1)-t can be used to
model the EUA price mechanism, but the coefficients, standard errors and other
statistics reveal different price-driving mechanisms. When the data are modeled
with Phase I used for estimation and Phase II for forecasting, the results are not
significant. Therefore, all the results of our analysis confirm that the EUA price
mechanism diverges across the two phases.

Conclusions

The carbon emission exchange as a market-driven reduction approach is one


of the most interesting topics for academic research of the last several years. In
this study, the price mechanism of the EU ETS in its two phases was analyzed.
After a description of the EUA price movements in recent years, three typical
GARCH models for EUA pricing, EGARCH (1,1)-t, GARCH (1,1)-normal and
GARCH (1,1)-GED, were examined and compared using the AIC and SC criteria.
We found that EGARCH (1,1)-t is the best model among the three. Estimation
and forecasting in Phase I and Phase II were conducted. The results strongly
suggest that both the price mechanism and volatility are dramatically different
in each phase. With some learning obtained from Phase I operations, an
improved price mechanism formed in Phase II.
Modeling the price mechanism of EUA will not only be beneficial to traders,
brokers and risk managers in the carbon market, but may also enable com-
panies to monitor the costs of CO2 emission in their production processes.

The research on carbon emission exchange is at a beginning stage. More


organizations and individuals will participate in the market, which will bring
more liquidity, trading transparency, and trade volumes. In this study we
found the best model for EUA price estimation and forecasting and saw that
there are substantial differences between Phase I and Phase II in price volatility
and mechanism. Future research should be able to investigate the reasons for
the differences between the two phases in order to more completely identify
trading rules and mechanisms in the EU ETS.
19
Volatility Forecasting of the Crude
Oil Market

Introduction

Risk analysis of the crude oil market has always been a core research problem
important to both practitioners and academia. Risks arise primarily from
changes in oil prices. During the 1970s and 1980s there were a number of
steep increases in oil prices; these price fluctuations reached new peaks in 2007
when the price of crude oil doubled during the financial crisis, and double
digit fluctuations continued between 2007 and 2008 for short periods. These
fluctuations would not be worrisome if oil were not such an important
commodity in the world's economy. But when oil prices become too high and
their volatility increases, they have a direct impact on the economy in general,
and affect government decisions regarding market regulation, thus impacting
firm and individual consumer incomes.1
Price volatility analysis has been a hot research area for many years.
Commodity markets are characterized by extremely high levels of price vola-
tility. Understanding the volatility dynamics of oil prices is crucial for producers
and countries that need to hedge various risks and avoid excess risk exposure
(Hung et al., 2008).
To deal with different phases of volatility behavior and the dependence of
variability of time series on their cycles, models allowing for autocorrelation as
well as heteroskedasticity such as ARCH, GARCH or regime-switching models
have been suggested. The former two are very useful in modeling a unique
stochastic process with conditional variance; the latter has the advantage of
dividing the observed stochastic behavior of a time series into several separate
phases with different underlying stochastic processes. Both types of models are
widely used in practice.
Hung et al. (2008)2 employed three GARCH models (GARCH-N, GARCH-t
and GARCH-HT) to investigate the influence of fat-tailed innovation processes


on the performance of VaR estimates of energy commodities. Narayan et al.


(2008)3 used the exponential GARCH models to evaluate the impact of oil price
on the nominal exchange rate. Malik and Ewing (2009)4 employed bivariate
GARCH models to estimate the relations between five different US sector indexes
and oil prices in their validation of cross-market hedging and investor infor-
mation sharing. Regime-switching has also been used in modeling stochastic
processes with different regimes. Alizadeh et al. (2008)5 introduced a Markov
regime switching vector error correction model with GARCH error structure,
and demonstrated how portfolio risks are reduced using state dependent hedge
ratios. Aloui and Jammazi (2009)6 employed a two regime Markov-switching
EGARCH model for analysis of oil price changes, and calculated probabilities
of transition across regimes. Klaassen (2002)7 developed a regime-switching
GARCH model to account for the high persistence of shocks generated by
changes in variance processes. Oil shocks were found to better explain the
impact of oil on output growth.8 There is no clear evidence regarding which
approach outperforms the other.
Fan et al. (2008)9 argued that the GED-GARCH-based VaR approach is more
realistic and more effective than widely used historical simulation with ARMA
forecasts based on their empirical study. The FIAPARCH model is said to out-
perform the other models in VaR prediction.10 GARCH models also seem to
perform better than inversion of the Black equation in estimating implied
volatility. GARCH was believed to perform best when assuming GED distrib-
uted errors.11 Clear evidence of regime-switching has been discovered in the
oil market. Engel (1994)12 concluded that regime-switching models provide
a useful framework for predicting the evolution of volatility and forecasting
exchange rate volatility. The regime-switching stochastic volatility model
performs well in capturing major events affecting the oil market.13

Volatility models

Historical volatility
We assume εt to be the mean innovation for energy log price changes or price
returns. To estimate the volatility at time t over the last N days we have:

$$V_{H,t} = \left[ \frac{1}{N} \sum_{i=0}^{N-1} \varepsilon_{t-i}^{2} \right]^{1/2},$$

where N is the forecast period. This is an N-day simple moving average
volatility, in which the historical volatility is assumed to be constant over
the estimation period and the forecast period. To incorporate the long-run or
unconditional volatility using all previous returns available at time t, there
are many variations of the simple moving average volatility model.14
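As a small illustration, the N-day moving-average historical volatility above can be computed directly from the innovations. The following Python sketch is illustrative only (the chapter's own code was written in Matlab), and the simulated innovations stand in for actual price-return data.

# Hedged sketch: N-day moving-average historical volatility,
# V_{H,t} = sqrt( (1/N) * sum_{i=0}^{N-1} eps_{t-i}^2 ), as defined above.
import numpy as np

def historical_volatility(eps: np.ndarray, N: int) -> np.ndarray:
    eps2 = eps ** 2
    # rolling mean of squared innovations over the last N observations
    csum = np.concatenate(([0.0], np.cumsum(eps2)))
    rolling_mean = (csum[N:] - csum[:-N]) / N
    return np.sqrt(rolling_mean)

# Example with simulated innovations (illustrative data only)
rng = np.random.default_rng(0)
eps = rng.normal(scale=0.02, size=866)
vol_20d = historical_volatility(eps, N=20)   # one estimate per day from day 20 on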

ARMA(R,M)
Given a time series Xt, the autoregressive moving average (ARMA) model is
very useful for predicting future values in time series where both an autore-
gressive (AR) term and a moving average (MA) term are present. The model is
usually then referred to as the ARMA(R,M) model, where R is the order of the
first term and M is the order of the second term. The following ARMA(R,M)
model contains the AR(R) and MA(M) models:

$$X_t = c + \varepsilon_t + \sum_{i=1}^{R} \varphi_i X_{t-i} + \sum_{j=1}^{M} \theta_j \varepsilon_{t-j},$$

where φi and θj are parameters for AR and MA terms respectively.

ARMAX(R,M, b)
To include the AR(R) and MA(M) models and a linear combination of the last b
terms of a known and external time series dt, one can use an ARMAX(R,M, b)
model with R autoregressive terms, M moving average terms and b exogenous
inputs terms:

$$X_t = c + \varepsilon_t + \sum_{i=1}^{R} \varphi_i X_{t-i} + \sum_{j=1}^{M} \theta_j \varepsilon_{t-j} + \sum_{k=1}^{b} \eta_k d_{t-k},$$

where η1, ... ,ηb are the parameters of the exogenous input dt.
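To make the notation concrete, the following hedged Python sketch simulates an ARMAX(R, M, b) process exactly as written above; all parameter values and the exogenous series are illustrative assumptions, not estimates from this chapter.

# Hedged sketch: simulate X_t = c + eps_t + sum_i phi_i X_{t-i}
#                              + sum_j theta_j eps_{t-j} + sum_k eta_k d_{t-k}.
import numpy as np

def simulate_armax(c, phi, theta, eta, d, sigma, T, rng):
    R, M, b = len(phi), len(theta), len(eta)
    start = max(R, M, b)
    X = np.zeros(T)
    eps = rng.normal(scale=sigma, size=T)
    for t in range(start, T):
        ar = sum(phi[i] * X[t - 1 - i] for i in range(R))
        ma = sum(theta[j] * eps[t - 1 - j] for j in range(M))
        ex = sum(eta[k] * d[t - 1 - k] for k in range(b))
        X[t] = c + eps[t] + ar + ma + ex
    return X

rng = np.random.default_rng(1)
d = rng.normal(size=500)                      # assumed exogenous series d_t
X = simulate_armax(c=0.0, phi=[0.5], theta=[0.2], eta=[0.1],
                   d=d, sigma=1.0, T=500, rng=rng)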

ARCH(q)
Autoregressive Conditional Heteroskedasticity (ARCH) modeling is the
predominant statistical technique employed in the analysis of time-
varying volatility. In ARCH models, volatility is a deterministic function of
historical returns. The original ARCH(q) formulation proposed by Engle15
models conditional variance as a linear function of the first q past squared
innovations:

$$\sigma_t^2 = c + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^{2}.$$

This model allows today's conditional variance to be substantially affected by
the (large) squared error term associated with a major market move (in either
direction) in any of the previous q periods. It thus captures the conditional
heteroskedasticity of financial returns and offers an explanation of the
persistence in volatility. A practical difficulty with the ARCH(q) model is
that a long lag length q is often needed in applications.

GARCH(p,q)
Bollerslev’s Generalized Autoregressive Conditional Heteroskedasticity
[GARCH(p,q)] specification16 generalizes the model by allowing the current
conditional variance to depend on the first p past conditional variances as well
as the q past squared innovations. That is:
$$\sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^{2} + \sum_{j=1}^{q} \alpha_j \varepsilon_{t-j}^{2},$$

where L denotes the long-run volatility.
By accounting for the information in the lag(s) of the conditional variance
in addition to the lagged squared innovations, the GARCH model reduces the
number of parameters required. In most cases, one lag for each variable is
sufficient. The GARCH(1,1) model is given by:

$$\sigma_t^2 = L + \beta_1 \sigma_{t-1}^{2} + \alpha_1 \varepsilon_{t-1}^{2}.$$

GARCH can successfully capture thick-tailed returns and volatility clustering.
It can also be modified to allow for several other stylized facts of asset returns.
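The GARCH(1,1) recursion can be written out in a few lines. The sketch below uses illustrative parameter values only; it also shows the annualization step mentioned later in the chapter, using the common convention of 252 trading days per year as an assumption.

# Hedged sketch: sigma_t^2 = L + beta_1 * sigma_{t-1}^2 + alpha_1 * eps_{t-1}^2.
import numpy as np

def garch11_variance(eps, L, alpha1, beta1):
    sigma2 = np.empty_like(eps)
    sigma2[0] = np.var(eps)                  # a common initialization choice
    for t in range(1, len(eps)):
        sigma2[t] = L + beta1 * sigma2[t - 1] + alpha1 * eps[t - 1] ** 2
    return sigma2

rng = np.random.default_rng(2)
eps = rng.normal(scale=0.02, size=865)       # stand-in for return innovations
sigma2 = garch11_variance(eps, L=2e-6, alpha1=0.08, beta1=0.91)
annualized_vol = np.sqrt(252 * sigma2)       # time-varying annualized volatility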

EGARCH
The Exponential Generalized Autoregressive Conditional Heteroskedasticity
(EGARCH) model introduced by Nelson (1991)17 builds in a directional effect
of price moves on conditional variance. Large price declines, for instance,
may have a larger impact on volatility than large price increases. The general
EGARCH(p,q) model for the conditional variance of the innovations, with
leverage terms and an explicit probability distribution assumption, is:

$$\log\sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \log\sigma_{t-i}^{2} + \sum_{j=1}^{q} \alpha_j \left[ \frac{|\varepsilon_{t-j}|}{\sigma_{t-j}} - E\left\{ \frac{|\varepsilon_{t-j}|}{\sigma_{t-j}} \right\} \right] + \sum_{j=1}^{q} L_j \left( \frac{\varepsilon_{t-j}}{\sigma_{t-j}} \right),$$

where $E\{|z_{t-j}|\} = E\left\{\dfrac{|\varepsilon_{t-j}|}{\sigma_{t-j}}\right\} = \sqrt{\dfrac{2}{\pi}}$ for the normal distribution, and

$E\{|z_{t-j}|\} = E\left\{\dfrac{|\varepsilon_{t-j}|}{\sigma_{t-j}}\right\} = \sqrt{\dfrac{\nu-2}{\pi}} \, \dfrac{\Gamma\!\left(\frac{\nu-1}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)}$ for the Student's t distribution with degrees of freedom $\nu > 2$.

GJR(p,q)
The GJR(p,q) model is an extension of the GARCH(p,q) model: a GARCH(p,q)
model is equivalent to a GJR(p,q) model whose leverage terms are all zero.
Initial parameter estimates for GJR models can therefore be taken from the
corresponding GARCH models. The difference lies in the additional leverage
terms:

$$\sigma_t^2 = L + \sum_{i=1}^{p} \beta_i \sigma_{t-i}^{2} + \sum_{j=1}^{q} \alpha_j \varepsilon_{t-j}^{2} + \sum_{j=1}^{q} L_j S_{t-j} \varepsilon_{t-j}^{2},$$

where $S_{t-j} = 1$ if $\varepsilon_{t-j} < 0$ and $S_{t-j} = 0$ otherwise, with constraints

$$\sum_{i=1}^{p} \beta_i + \sum_{j=1}^{q} \alpha_j + \frac{1}{2}\sum_{j=1}^{q} L_j < 1,$$
$$L \ge 0, \quad \beta_i \ge 0, \quad \alpha_j \ge 0, \quad \alpha_j + L_j \ge 0.$$
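A minimal sketch of the GJR(1,1) recursion with the indicator defined above follows; the parameter values are placeholders, not the fitted model reported later in the chapter.

# Hedged sketch: GJR(1,1) with leverage indicator S_{t-1} = 1 when eps_{t-1} < 0.
import numpy as np

def gjr11_variance(eps, L, alpha1, beta1, leverage1):
    sigma2 = np.empty_like(eps)
    sigma2[0] = np.var(eps)
    for t in range(1, len(eps)):
        S = 1.0 if eps[t - 1] < 0 else 0.0   # leverage indicator
        sigma2[t] = (L + beta1 * sigma2[t - 1]
                     + alpha1 * eps[t - 1] ** 2
                     + leverage1 * S * eps[t - 1] ** 2)
    return sigma2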

Regime-switching models
Markov regime-switching models have been applied in various fields, such as
analysis of oil and the macroeconomy,18 analysis of business cycles (Hamilton,
198919) and modeling stock market and asset returns (Vo, 2009).
We now consider a dynamic volatility model with regime-switching. Suppose
a time series yt follows an AR ( p) model with AR coefficients, together with the
mean and variance, depending on the regime indicator st:
$$y_t = \mu_{s_t} + \sum_{j=1}^{p} \varphi_{j,s_t}\, y_{t-j} + \varepsilon_t, \qquad \varepsilon_t \sim \text{i.i.d. } N(0, \sigma_{s_t}^2).$$

The corresponding density function for $y_t$ is:

$$f(y_t \mid s_t, Y_{t-1}) = \frac{1}{\sqrt{2\pi\sigma_{s_t}^2}} \exp\left[ -\frac{\omega_t^2}{2\sigma_{s_t}^2} \right] = f(y_t \mid s_t, y_{t-1}, \ldots, y_{t-p}),$$

where $\omega_t = y_t - \mu_{s_t} - \sum_{j=1}^{p} \varphi_{j,s_t}\, y_{t-j}$.

The model can be estimated by maximum likelihood. A more practical approach
is to allow the density function of $y_t$ to depend not only on the current
value of the regime indicator $s_t$ but also on past values of the regime
indicator, which means the density function takes the form

$$f(y_t \mid s_t, S_{t-1}, Y_{t-1}),$$

where $S_{t-1} = \{s_{t-1}, s_{t-2}, \ldots\}$ is the set of all past information on $s_t$.
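As a hedged illustration, the regime-conditional density f(y_t | s_t, Y_{t-1}) defined above can be evaluated as follows. The two regime parameter sets are loosely based on the two-state example fitted later in this chapter and are otherwise assumptions.

# Hedged sketch: Gaussian density of y_t conditional on one regime of an AR(p)
# model whose mean, AR coefficients and variance all depend on the regime.
import numpy as np

def regime_density(y, t, params, p):
    """params: dict with 'mu', 'phi' (length p) and 'sigma' for one regime."""
    mu, phi, sigma = params["mu"], params["phi"], params["sigma"]
    omega = y[t] - mu - sum(phi[j] * y[t - 1 - j] for j in range(p))
    return np.exp(-omega**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

# Example: density of today's observation under each of two assumed regimes
y = np.array([0.01, -0.02, 0.005, -0.001])
calm    = {"mu": 0.0012,  "phi": [-0.0934], "sigma": 0.0115}
turmoil = {"mu": -0.0015, "phi": [-0.0667], "sigma": 0.0306}
densities = [regime_density(y, 3, s, p=1) for s in (calm, turmoil)]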

Data

The data span a continuous sequence of 866 trading days from February 2006 to
July 2009, covering the daily closing prices of the NYMEX Crude Oil index over
this period. Weekends and holidays are not included in the data, so those days
are treated as having no price movement. Working with the logarithm of price
changes gives a symmetric, continuously compounded return and avoids the
nonstationary oil price levels that would otherwise distort the return
volatility. Table 19.1 presents descriptive statistics of the daily crude oil
index. Figure 19.1 plots the crude oil daily price movement. To get a
preliminary view of volatility change, Table 19.2 shows descriptive statistics
for the logreturn of the daily crude oil index over the period February 2006
to July 2009. The corresponding plot is given in Figure 19.2.

Distribution analysis
Figure 19.3 displays a distribution analysis of our data ranging from February
2006 up to July 2009. The data is the log return of the daily crude oil price

Table 19.1 Statistics on the daily crude oil index changes, February 2006–July 2009

Statistic              Value
Sample size            866
Mean                   77.2329
Maximum                145.9600
Minimum                44.4100
Standard deviation     20.9270
Skewness               1.3949
Kurtosis               4.3800

[Line plot of the daily closing price, roughly between 40 and 146, over the 866 trading days.]
Figure 19.1 NYMEX crude oil daily price movements



Table 19.2 Daily crude oil index logreturn statistics, February 2006–July 2009

Statistic              Value
Sample size            865
Mean                   1.8293e−005
Maximum                0.1003
Minimum                −0.0874
Standard deviation     0.0218
Skewness               −0.0962
Kurtosis               6.1161

[Line plot of the daily logreturns, roughly between −0.10 and 0.10, over the 866 trading days.]
Figure 19.2 NYMEX crude oil daily logreturn

[Density plot of the daily logreturns with fitted normal and t location-scale distributions overlaid.]
Figure 19.3 Normal distribution vs. t-distribution



movements over the time period mentioned above. The best-fitting distribution
for the data is a t location-scale distribution, shown by the blue line in
Figure 19.3; the red line represents the fitted normal distribution. A
conditional t-distribution is therefore preferred to the normal distribution
in our research. An augmented Dickey–Fuller univariate unit root test yields
p-values of 1.0×10−3, 1.1×10−3 and 1.1×10−3 for lags of 0, 1 and 2
respectively. All p-values are smaller than 0.05, which indicates that the
time series is trend-stationary.
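The two diagnostics above can be reproduced along the following lines. This is a hedged Python sketch rather than the chapter's Matlab code, and the file name crude_oil_logreturns.txt is a placeholder for wherever the 865 daily log returns are stored.

# Hedged sketch: fit normal and Student's t distributions to the logreturns
# and run augmented Dickey-Fuller tests at lags 0, 1 and 2.
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

logret = np.loadtxt("crude_oil_logreturns.txt")   # assumed data file

df, t_loc, t_scale = stats.t.fit(logret)          # t location-scale fit
n_loc, n_scale = stats.norm.fit(logret)           # normal fit
print("t log-likelihood:     ", np.sum(stats.t.logpdf(logret, df, t_loc, t_scale)))
print("normal log-likelihood:", np.sum(stats.norm.logpdf(logret, n_loc, n_scale)))

for lag in (0, 1, 2):
    stat, pvalue, *_ = adfuller(logret, maxlag=lag, autolag=None)
    print(f"ADF lag {lag}: statistic = {stat:.3f}, p-value = {pvalue:.4f}")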

Results

GARCH modeling
We first estimated the parameters of the GARCH(1,1) model using 865
observations in Matlab, and then tried various GARCH models using different
probability distributions with the maximum likelihood estimation technique.
In many financial time series, the standardized residuals z_t = ε_t/σ_t
display high levels of kurtosis, which suggests a departure from conditional
normality. In such cases, the fat-tailed distribution of the innovations
driving a dynamic volatility process can be better modeled using the Student's
t or the Generalized Error Distribution (GED). Taking the square root of the
conditional variance and expressing it as an annualized percentage yields a
time-varying volatility estimate. A single estimated model can be used to
construct forecasts of volatility over any time horizon. Table 19.3 presents
the GARCH(1,1) estimation using the t-distribution. The conditional mean
process is modeled by an ARMAX(0,0,0) specification.
Substituting these estimated values into the model yields the explicit form:

$$y_t = 6.819\times10^{-4} + \varepsilon_t,$$
$$\sigma_t^2 = 2.216\times10^{-6} + 0.9146\,\sigma_{t-1}^{2} + 0.0815\,\varepsilon_{t-1}^{2}.$$

Table 19.3 GARCH(1,1) estimation using the t-distribution

Model: Mean ARMAX(0,0,0); Variance GARCH(1,1). AIC = −4559.9, BIC = −4536.1, lnL = 2284.97

Parameter   Value      Std error   t-Statistic
C           6.819e−4   5.045e−4    1.352
K           2.216e−6   1.306e−6    1.701
β1          0.915      0.017       52.651
α1          0.082      0.018       4.554
DoF         34.603     8.442e−7    4.099e+7
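For readers who prefer an open-source route, the following hedged Python sketch (not the authors' Matlab code) fits a constant-mean GARCH(1,1) model with Student's t innovations in the spirit of Table 19.3, using the arch package; logret is assumed to hold the 865 daily log returns from the earlier sketch.

# Hedged sketch: constant-mean GARCH(1,1) with Student's t innovations.
from arch import arch_model

am = arch_model(logret, mean="Constant", vol="GARCH", p=1, q=1, dist="t")
res = am.fit(disp="off")
print(res.params)                        # mean, omega, alpha[1], beta[1], nu
print(res.loglikelihood, res.aic, res.bic)
cond_vol = res.conditional_volatility    # daily conditional standard deviations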

[Three panels plotting the fitted innovations, the conditional standard deviations, and the returns over the 866-day sample.]
Figure 19.4 Innovation, standard deviation, return

Figure 19.4 depicts the dynamics of the innovations, standard deviations, and
returns from the estimated GARCH model, that is, the ARMAX(0,0,0) GARCH(1,1)
model with a log likelihood value of 2284.97. To see whether a higher log
likelihood can be obtained, we fit other GARCH specifications to the same
data, which also serves as a check on the robustness of the model.
We now try different combinations of ARMAX and GARCH, EGARCH and
GJR models. Computational results are presented in Table 19.4.
A general rule for model selection is to specify the smallest, simplest model
that adequately describes the data, because simple models are easier to
estimate, forecast, and analyze. Model selection criteria such as AIC and BIC
penalize models for their complexity when assessing which distributions best
fit the data. We can therefore use the log likelihood (LLC), Akaike (AIC) and
Bayesian (BIC) information criteria to compare alternative models. Differences
in LLC across distributions usually cannot be compared directly, since
distribution functions differ in their capacity to fit random data, but we can
use minimum AIC and BIC and maximum LLC values as model selection criteria.20
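As a hedged illustration of this comparison (not the Matlab code used in the chapter), the following Python sketch fits several candidate specifications with the arch package and prints their log likelihood, AIC and BIC. The variable logret is assumed as before; note that arch supports autoregressive (AR) but not full ARMA conditional means, so the AR(1) mean here only approximates the ARMAX(1,1,0) mean of Table 19.4, and the GJR leverage terms enter through the asymmetric order o.

# Hedged sketch: compare candidate volatility models by lnL, AIC and BIC.
from arch import arch_model

candidates = {
    "AR(1)-GARCH(1,1)":  dict(p=1, o=0, q=1),
    "AR(1)-EGARCH(1,1)": dict(vol="EGARCH", p=1, o=1, q=1),
    "AR(1)-GJR(1,1)":    dict(p=1, o=1, q=1),
    "AR(1)-GJR(2,1)":    dict(p=1, o=1, q=2),   # rough analogue of GJR(2,1)
}
for name, spec in candidates.items():
    vol = spec.pop("vol", "GARCH")
    res = arch_model(logret, mean="AR", lags=1, vol=vol, dist="t", **spec).fit(disp="off")
    print(f"{name:20s} lnL={res.loglikelihood:9.2f} AIC={res.aic:9.1f} BIC={res.bic:9.1f}")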
As can be seen from Table 19.4, the ARMAX(1,1,0) GJR(2,1) specification yields
the highest log likelihood value, 2292.32, and the lowest AIC value, −4566.6,
among all the models considered. We therefore select the GJR model. The
ARMAX(1,1,0) GJR(2,1) model was then used to simulate and forecast the
standard deviation over a 30-day period using 20,000 realizations.

Table 19.4 Various GARCH modeling characteristics

Mean: ARMAX(1,1,0); Variance: GARCH(1,1). AIC = −4561.0, BIC = −4527.7, lnL = 2287.5
  Parameter   Value       Std error   t-Statistic
  C           8.995e−4    6.685e−4    1.346
  φ1          −0.312      0.439       −0.711
  θ1          0.236       0.447       0.529
  K           2.056e−6    1.257e−6    1.636
  β1          0.9175      0.017       54.161
  α1          0.0790      0.017       4.544
  DoF         30.107      1.677e−4    1.795e+5

Mean: ARMAX(1,1,0); Variance: EGARCH(1,1). AIC = −4557.8, BIC = −4524.5, lnL = 2286.3
  C           6.656e−4    6.237e−4    1.067
  φ1          −0.307      0.390       −0.787
  θ1          0.223       0.397       0.561
  K           −0.040      0.030       −1.334
  β1          0.995       3.626e−3    274.455
  α1          0.146       0.028       5.198
  L1          −0.032      0.016       −2.034
  DoF         37.596      48.455      0.776

Mean: ARMAX(1,1,0); Variance: GJR(1,1). AIC = −4560.9, BIC = −4522.8, lnL = 2288.4
  C           6.912e−4    6.392e−4    1.081
  φ1          −0.297      0.450       −0.660
  θ1          0.222       0.457       0.485
  K           2.151e−6    1.268e−6    1.696
  β1          0.919       0.017       54.718
  α1          0.059       0.021       2.878
  L1          0.034       0.025       1.354
  DoF         38.36       1.197e−4    3.205e+4

Mean: ARMAX(1,1,0); Variance: GJR(2,1). AIC = −4566.6, BIC = −4523.8, lnL = 2292.3
  C           5.647e−4    6.464e−4    0.874
  φ1          −0.358      0.403       −0.889
  θ1          0.284       0.414       0.687
  K           3.504e−6    1.994e−6    1.757
  β1          0           0.026       0.000
  β2          0.868       0.029       29.559
  α1          0.091       0.026       3.571
  L1          0.068       0.035       1.955
  DoF         50.013      6.069e−6    8.241e+6

The forecasting horizon was defined to be 30 days (one month). The simulation
generated 20,000 outcomes over a 30-day period based on the fitted
ARMAX(1,1,0) GJR(2,1) model, with the same 30-day horizon used in
‘Forecasting.’ Figure 19.5 compares forecasts from ‘Forecasting’ with those
derived from ‘Simulation.’ The first four panels of Figure 19.5 directly
compare each of the forecasted outputs with the corresponding statistical
result obtained from simulation. The last two panels of Figure 19.5 show
histograms from which we can compute approximate probability density
functions and empirical confidence bounds.
[Six panels comparing the 'Forecasting' and 'Simulation' outputs: forecast of the standard deviation of cumulative holding-period returns; forecast of the standard deviation of residuals; standard error of the forecast of returns; forecast of returns; histogram of cumulative holding-period returns at the forecast horizon; histogram of simulated returns at the forecast horizon.]
Figure 19.5 Simulation and forecasting

When comparing the forecasts with their counterparts derived from the Monte
Carlo simulation, the first four panels of Figure 19.5 show four quantities:
the conditional standard deviations of future innovations, the MMSE forecasts
of the conditional mean of the return series, the cumulative holding period
returns, and the root mean square errors (RMSE) of the forecasted returns.
The fifth panel of Figure 19.5 uses a histogram to illustrate the distribution
of the cumulative holding period return obtained

if an asset was held for the full 30-day forecast horizon. In other words, we
plot the logreturn obtained by investing in the NYMEX Crude Oil index today
and selling after 30 days. The last panel of Figure 19.5 uses a histogram to
illustrate the distribution of the single-period return at the forecast
horizon, that is, the return on the same index on the 30th day from now.

Markov regime-switching modeling

We now use a Markov regime-switching model. The purpose is twofold: first, to
see whether Markov switching regressions can beat GARCH models in time series
modeling; second, to find turmoil regimes in the historical time series.
Table 19.5 illustrates the results. The model in Table 19.5 assumes a Normal
distribution and allows all parameters to switch. We used S = [1 1 1] to
control the switching dynamics, where the first elements of S control the
switching of the mean equation, while the last terms control the switching of
the residual vector, including the distribution parameters (mean and
variance). A value of 1 in S indicates that switching is allowed for that
parameter, while a value of 0 indicates that the parameter is not allowed to
change states. The model for the mean equation is then:

$$\text{State 1 } (s_t = 1):\quad y_t = -0.0015 - 0.0667\,y_{t-1} + \varepsilon_t,\quad \varepsilon_t \sim N(0, 0.0306^2),$$
$$\text{State 2 } (s_t = 2):\quad y_t = 0.0012 - 0.0934\,y_{t-1} + \varepsilon_t,\quad \varepsilon_t \sim N(0, 0.0115^2),$$

where $\varepsilon_t$ is a residual vector that follows the assumed
distribution. The transition matrix
$$P = \begin{bmatrix} 0.99 & 0.01 \\ 0.01 & 0.99 \end{bmatrix}$$
controls the probability of a regime switch from state 1 (2), given by column
1 (2), to state 2 (1), given by row 2 (1). Each column of P sums to 1, since
the columns represent the full set of transition probabilities out of each
state.
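To make the switching mechanism concrete, the following hedged sketch simulates the fitted two-state AR(1) model with this transition matrix; it is illustrative only and is not the estimation code.

# Hedged sketch: simulate the two-state switching AR(1) model above.
# Index 0 corresponds to state 1 and index 1 to state 2; P[new, old] uses the
# column convention described in the text.
import numpy as np

P = np.array([[0.99, 0.01],
              [0.01, 0.99]])
mu    = np.array([-0.0015, 0.0012])
phi   = np.array([-0.0667, -0.0934])
sigma = np.array([0.0306, 0.0115])

rng = np.random.default_rng(3)
T, s = 866, 1
y = np.zeros(T)
for t in range(1, T):
    s = rng.choice(2, p=P[:, s])             # draw next regime from column s
    y[t] = mu[s] + phi[s] * y[t - 1] + rng.normal(scale=sigma[s])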
In order to obtain the best fitted Markov regime-switching models, we tried
various parameter settings for the traditional Hamilton model as well as more
elaborate settings using the t-distribution and the Generalized Error
Distribution. Computational results are given in Tables 19.6, 19.7 and 19.8.
A comparison of log likelihood values indicates that the more elaborate
settings using the t-distribution and the Generalized Error Distribution are
usually preferred. The best fitted Markov regime-switching model assumes GED
errors and allows all parameters to change states (see Table 19.8).
We now focus on the analysis using this best fitted Markov regime-switching
model, that is, the ‘MS model, S = [1 1 1 1 1] (GED)’ in Table 19.8. Figure 19.6

Table 19.5 Markov regime-switching computation example

Model (distribution): MS model, S = [1 1 1] (Normal); log likelihood = 2257.36
Non-switching parameters: N/A
Switching parameters (state 1 / state 2):
  Model's STD: 0.0306 / 0.0115
  Indep column 1: −0.0015 / 0.0012
  Indep column 2: −0.0667 / −0.0934
Transition probability matrix: [0.99 0.01; 0.01 0.99]

Table 19.6 Markov regime-switching using Hamilton’s (1989) model

Model (distribution): The Hamilton (1989) model, S = [1 0 1] (t); log likelihood = 2212.38
Non-switching parameters: 0.0135
Switching parameters (state 1 / state 2):
  Degrees of freedom (t dist): 100.00 / 1.5463
  Indep column 1: 0.0008 / −0.0002
Transition probability matrix: [1.00 0.00; 0.00 1.00]

Model (distribution): The Hamilton (1989) model, S = [1 1 1] (t); log likelihood = 2257.34
Non-switching parameters: N/A
Switching parameters (state 1 / state 2):
  Model's STD: 0.0264 / 0.0113
  Degrees of freedom (t dist): 7.8238 / 112.3094
  Indep column 1: −0.0012 / 0.0010
Transition probability matrix: [0.99 0.01; 0.01 0.99]

presents the transitional probabilities in Markov regime-switching with GED:
the filtered state probabilities and the smoothed state probabilities. Based
on this transitional probability figure, we can classify the historical data
into two types according to their historical states.
Figure 19.7 depicts the logreturns of the two regimes in the historical time
series, and Figure 19.8 depicts the corresponding prices. As can be seen from
Figures 19.7 and 19.8, the total historical time series is divided into two
regimes: a normal regime with small changes (state 2) and a turmoil regime
with high risk (state 1). For each state, the regime-switching model
identifies two periods of data. The normal regime includes February 10, 2006
to December 11, 2006 and January 30, 2007 to October 14, 2007. The turmoil
regime includes December 12, 2006 to January 29, 2007 and October 15, 2007 to
July 7, 2009. The first turmoil period lasted only one and a half months, but
the second covered almost the entire financial crisis.

Table 19.7 Markov regime-switching using t-distribution

Model (distribution): MS model, S = [1 1 0 0] (t); log likelihood = 2172.41
Non-switching parameters: STD = 0.0128; Degrees of freedom (t dist) = 2.9506
Switching parameters (state 1 / state 2):
  Indep column 1: 0.0021 / −0.0010
  Indep column 2: −0.3925 / 0.2553
Transition probability matrix: [0.45 0.57; 0.55 0.43]

Model (distribution): MS model, S = [1 1 1 1] (t); log likelihood = 2174.86
Non-switching parameters: N/A
Switching parameters (state 1 / state 2):
  Model's STD: 0.0130 / 0.0117
  Degrees of freedom (t dist): 3.2408 / 2.3637
  Indep column 1: 0.0013 / −0.0034
  Indep column 2: −0.2015 / 0.9080
Transition probability matrix: [0.80 0.98; 0.20 0.02]

Model (distribution): MS model, S = [1 1 1 1 1] (t); log likelihood = 2260.95
Non-switching parameters: N/A
Switching parameters (state 1 / state 2):
  Model's STD: 0.0262 / 0.0113
  Degrees of freedom (t dist): 7.4904 / 100.000
  Indep column 1: −0.0012 / 0.0011
  Indep column 2: −0.0736 / −0.0915
  Indep column 3: −0.0121 / 0.0422
Transition probability matrix: [0.99 0.01; 0.01 0.99]

Table 19.8 Markov regime-switching using GED

Model (distribution): MS model, S = [1 1 1 1] (GED); log likelihood = 2172.16
Switching parameters (state 1 / state 2):
  Model's STD: 0.0029 / 0.0094
  Value of k (GED dist): 1.4987 / 0.8011
  Indep column 1: 0.0020 / 0.0013
  Indep column 2: 0.8905 / 0.2207
Transition probability matrix: [0.06 0.26; 0.94 0.74]

Model (distribution): MS model, S = [1 1 1 1 1] (GED); log likelihood = 2263.06
Switching parameters (state 1 / state 2):
  Model's standard deviation: 0.0203 / 0.0120
  Value of k (GED dist): 0.7122 / 0.4675
  Indep column 1: 0.0014 / 0.0010
  Indep column 2: 0.0706 / 0.0848
  Indep column 3: 0.0287 / 0.0384
Transition probability matrix: [0.99 0.01; 0.01 0.99]

[Two panels: (a) filtered state probabilities and (b) smoothed state probabilities for state 1 and state 2 over time.]
Figure 19.6 Transitional probabilities in Markov regime-switching with GED

[Two panels showing the logreturns assigned to state 1 (turmoil) and state 2 (normal) over the sample period.]
Figure 19.7 Returns of two regimes in historical time series



[Two panels showing the prices assigned to state 1 (turmoil) and state 2 (normal) over the sample period.]
Figure 19.8 Price of two regimes in historical time series

Conclusions

We examined crude oil price volatility dynamics using daily data for the
period February 13, 2006 to July 21, 2009. We employed GARCH, EGARCH and GJR
models and various Markov regime-switching models, estimated by maximum
likelihood, to model volatility. Codes were written in Matlab. We compared
several parameter settings for all models. Among the GARCH-type models, the
ARMAX(1,1,0)/GJR(2,1) specification yielded the best fit, with a maximum log
likelihood value of 2292.32 under the assumption that the data follow a
t-distribution. Markov regime-switching models generated a similar fit, but
with a slightly lower log likelihood value. Markov regime-switching modeling
gave interesting results in classifying the historical data into two states:
a normal period and a turmoil period. This classification accounts for some
of the market behavior observed during the financial crisis.
20
Confucius Three-stage Learning of Risk Management

Introduction

In my risk management classes, I teach my students a Confucian three-step
approach to learning risk management. Students seem well motivated to learn
this interdisciplinary course using this approach. Confucius, a philosopher
born in 551 BCE during the Chou dynasty, may well be one of the most
influential philosophers in history.1 He wandered all around China, trying to
serve as an adviser to various rulers in the Spring and Autumn period.
Confucius, his students and many other followers developed the 'Four Books' of
Confucianism. One of the most significant is 'The Great Learning', which
addresses classical themes of Chinese philosophy and political issues, and has
been extremely popular and very influential in both traditional and modern
Chinese thinking. 'Eight steps' were developed showing how to 'investigate
things', which is one of the first stages in understanding 'The Great
Learning'.2 Three famous steps are expressed:

Wishing to order well their States, they first regulated their families.
Wishing to regulate their families, they first cultivated their persons.
Their persons being cultivated, their families were regulated.
Their families being regulated, their States were rightly governed.

I borrow three terms to describe these three stages: self-cultivation, family
regulation, and state harmonization. I first taught this at the RiskLab
executive course on risk management in 2008, and later in courses offered at
various Chinese universities, such as the Management School at the University
of the Chinese Academy of Sciences, the School of Economics and Management at
Beihang University, and the School of Business at Central South University.


Self-cultivation

Let us start with an example, NETLIB problem PILOT4, to illustrate the use of
self-cultivation in learning sophisticated financial risk management tools.3
Netlib has been a repository of linear programming (LP) problems available for
testing new codes and comparing performance. This example is an LP with 1000
variables and 410 constraints, whose 372nd constraint is as follows:

$$
\begin{aligned}
a^{T}x \equiv{} & -15.79081x_{826} - 8.598819x_{827} - 1.88789x_{828} - 1.362417x_{829} - 1.526049x_{830} \\
& - 0.031883x_{849} - 28.725555x_{850} - 10.792065x_{851} - 0.19004x_{852} - 2.757176x_{853} \\
& - 12.290832x_{854} + 717.562256x_{855} - 0.057865x_{856} - 3.785417x_{857} - 78.30661x_{858} \\
& - 122.163055x_{859} - 6.46609x_{860} - 0.48371x_{861} - 0.615264x_{862} - 1.353783x_{863} \\
& - 84.644257x_{864} - 122.459045x_{865} - 43.15593x_{866} - 1.712592x_{870} - 0.401597x_{871} \\
& + x_{880} - 0.946049x_{898} - 0.946049x_{916} \\
\geq{} & b \equiv 23.387405
\end{aligned}
$$

Assuming that an investor employs a naïve investment strategy to maximize
portfolio return, the structure of this financial engineering problem turns
out to be very similar to a Netlib problem. We observe that most coefficients
in the above constraint look ugly, stated to an accuracy of five or six
digits. It is natural to believe that coefficients of this type reflect
certain market uncertainties and risks.
Beyond operations research, probability theory is fundamental to the
development of financial risk management. Probability theory was not
established until the 1600s (and became fashionable in the 1700s), and,
according to a lecture by Professor Robert Shiller, the word was probably
first used by William Shakespeare, who wrote of a young lady describing a man
she likes: 'I like him very much. I find him very probable'. Another
interesting concept related to Shakespeare is the Infinite Monkey Theorem,
which states that if enough monkeys hit keys at random on a typewriter
keyboard for an infinite amount of time, one will almost surely type the
complete works of William Shakespeare. The infinite monkey theorem is a
popular application of the law of very large numbers from probability theory.4
Here I would like to summarize some fundamental mathematical theories,
especially probability theory, that are useful in risk management. To do this,
I need a basic classification of risk properties. Risk, measured by volatility
in a security market, is quoted in unit measures in terms of basis points.
That explains to some extent the relevance of assigning properties to risk as
an object. I use four levels of classification: uncertainty of risks, dynamics
of risks, clustering-dependence-interconnection of risks, and complexity of
risks. Basic probability theory, including various distribution and density
functions, is used to characterize the first property of risks. This was
developed in the 1600s and popularized in the 1700s by representative scholars
such as Bernoulli, de Moivre, Laplace, Poisson, Gauss, and Pareto. The second
type of risk property relies on the stochastic modeling tools developed
intensively since the 1930s by well-known scholars such as Lévy, Khintchine,
Kolmogorov, and Doob. The third property of risks concerns research problems
in finance studied since the 1950s by well-known scholars such as Fréchet and
Sklar, who developed fundamental theories that are useful for derivative
pricing. Complexity of risks can be described by the complexity science
developed since the 1960s.5

Family regulation

I am going to tell a story extracted from a cartoon in the book The Cartoon
Introduction to Economics.6 I give the story a name: a story of family risk
management. This is a family of three people: the child, the mommy and the
daddy. On Monday morning, the family members all have questions about what
they are going to do.
The child is going to school and asks: is it going to rain today? The mom is
planning to buy a second-hand BMW and asks: is this used BMW a lemon or a
peach? The daddy is reading news about the stock market and asks: am I going
to buy stock in Facebook or the mutual fund Comfort?
The child's question reflects his attitude of risk aversion toward today's
weather. This is the core theme of expected utility theory, which originated
in classical economics in Adam Smith's time. The daddy's question indicates a
diversification strategy for investment in risky assets: a portfolio
optimization strategy might be preferred. There are milestone works in
financial risk theory: Harry Markowitz presented the mean-variance framework
in a 1952 paper and a 1959 book7 on how to find the best possible
diversification strategy, and William Sharpe's capital asset pricing model
(CAPM) of 19648 provided tools to determine a theoretically appropriate
required rate of return of an asset, if that asset is to be added to an
already well-diversified portfolio, given that asset's non-diversifiable risk.
James Tobin expanded on Markowitz's work by adding a risk-free asset to the
analysis in 1958,9 leading to the development of the super-efficient portfolio
and the capital market line. All these works won Nobel Prizes, although they
rest on strong assumptions that have been challenged, such as perfect capital
markets or log-normally distributed market data. The New York Times ran a
story when James Tobin died in 2002:

After he won the Nobel Prize, reporters asked him to explain the portfolio
theory. When he tried to do so, one journalist interrupted, ‘Oh, no, please
explain it in lay language.’ So he described the theory of diversification by
saying: ‘You know, don’t put all your eggs in one basket.’ Headline writers
around the world the next day created some version of ‘Economist Wins
Nobel for Saying, “Don’t Put Eggs in One Basket.”’

The mommy's question raises a key risk management problem in economics:
adverse selection. The mommy is examining a used car with a relatively high
price but perhaps reasonably good quality. Because of information asymmetry,
she is not sure of the quality, so she decides to look at another used car
with a lower price. This continues until a dealer with a low-priced used car
sells her a particular car. The result is that owners of good cars will not
place their cars on the used car market, and low-priced, low-quality used
cars prevail. Adverse selection thus works against social welfare; it refers
to a market process in which undesired results occur because market players
have asymmetric information. This insight helped the economist George Akerlof
win the Nobel Memorial Prize in Economic Sciences in 2001.10
The story of family risk management demonstrates important risk prob-
lems and theories from various disciplines. From a simple family activity, we
see that risk theory is embedded in many other theories including expected
utility theory in decision sciences and microeconomics, portfolio optimization
theory and CAPM theory in modern finance, adverse selection, asymmetric
information, and contract theory in economics, especially in industrial
organization theory.

State harmonization

After the self-cultivation and family-regulation steps of learning risk
management tools, you are probably armed with the fundamental knowledge
needed to treat real-world risk management problems at the
state-harmonization level. One typical example is the systemic risk that to
some extent caused the recent economic crisis originating in the secondary
mortgage market.
Systemic risk refers to the potential collapse of the entire financial system
as a result of risk associated with one individual entity, group or
component, leading to a cascading failure of the entire system. These are
risks imposed by inter-linkages and interdependencies in a system or market.
These interdependencies, and the potential 'clustering' of institutional
failures, are important state-harmonization issues that policy-makers have to
consider in order to protect a system, or a system of systems, against
systemic risk. Government agencies such as the US Fed and the SEC (Securities
and Exchange Commission), or central banks, usually have rules for
safeguarding the trading interests of the market as a whole, on the grounds
that investors in markets are exposed to dependent risks arising from their
inter-linkages.11

In Table 20.1 we map previous chapters to different levels of learning risk
management.

Table 20.1 Risk management links

Chapter 1 – Overview: Enterprise Risk Management (Family regulation)
Chapter 2 – Practice: Enron (State harmonization)
Chapter 3 – Overview: Financial Risk Management (Family regulation)
Chapter 4 – Practice: The Real Estate Crash of 2008 (State harmonization)
Chapter 5 – Theory: Financial Risk Forecast Using Machine Learning and Sentiment Analysis (Self-cultivation)
Chapter 6 – Theory: On-line Stock Forum Sentiment Analysis (Self-cultivation)
Chapter 7 – Theory: DEA Risk Scoring Model of Internet Stocks (Self-cultivation)
Chapter 8 – Theory: Bank Credit Scoring (Self-cultivation)
Chapter 9 – Theory: Credit Scoring using Multiobjective Data Mining (Self-cultivation)
Chapter 10 – Theory: Performance Evaluation and Risk Analysis of Online Banking (Self-cultivation)
Chapter 11 – Overview: Economic Perspective (Family regulation)
Chapter 12 – Practice: British Petroleum Deepwater Horizon (State harmonization)
Chapter 13 – Theory: Bank Efficiency Analysis (Self-cultivation)
Chapter 14 – Theory: Catastrophe Bond and Risk Modeling (Self-cultivation)
Chapter 15 – Theory: Bilevel Programming Merger Analysis in Banking (Self-cultivation)
Chapter 16 – Overview: Sustainability and Risk in Globalization (Family regulation)
Chapter 17 – Practice: Risk from Natural Disaster (State harmonization)
Chapter 18 – Theory: Pricing of Carbon Emission Exchange in the EU ETS (Self-cultivation)
Chapter 19 – Theory: Volatility Forecasting of the Crude Oil Market (Self-cultivation)

Conclusions

This chapter presents a Confucian three-step process of learning risk man-


agement: self-cultivation, family regulation, and state harmonization.

The self-cultivation perspective is the root and the first stage of learning
risk management. Fundamental mathematical theories useful in risk management
are summarized. A basic classification of risk properties is given by
uncertainty of risks, dynamics of risks, clustering/dependence/interconnection
of risks, and complexity of risks. For each property, I gave a related
mathematical theory that may be useful in treating risk problems.
A story of family risk management was presented to show the second stage of
learning risk management. This perspective demonstrates that risk theory is
embedded in many other theories, including expected utility theory in decision
sciences and microeconomics, portfolio optimization theory and CAPM theory in
modern finance, and adverse selection, asymmetric information, and contract
theory in economics, especially in industrial organization theory.
Armed with the self-cultivation stage tools and family-regulation stage
knowledge, one might be ready to handle real-world risk management problems at
the state-harmonization level. I gave systemic risk as a typical example,
treating the economic crisis that originated in the secondary mortgage market.
Notes

1 Enterprise Risk Management


1. D.W. Hubbard (2009) The Failure of Risk Management: Why It’s Broken and How to Fix
It. John Wiley & Sons.
2. H.N. Higgins (2012) ‘Learning internal controls from a fraud case at Bank of China,’
Issues in Accounting Education, 27(4): 1171–1192.
3. B. Ballou, D.L. Heitger (2005) ‘A building-block approach for implementing COSO’s
enterprise risk management–integrated framework,’ Management Accounting
Quarterly, 6(2): 1–10.
4. D. Williamson (2007) ‘The COSO ERM framework: A critique from systems theory
of management control,’ International Journal of Risk Assessment and Management,
7(8): 1089–1119.
5. F. Caron, J. Vanthienen, B. Baesens (2013) ‘A comprehensive investigation of the
applicability of process mining techniques for enterprise risk management,’
Computers in Industry, 64, 464–475.
6. L. Rittenberg, F. Martens (2012) Enterprise Risk Management: Understanding and
Communicating Risk Appetite. COSO.
7. The Association of Risk Managers (2010) A Structured Approach to Enterprise Risk
Management (ERM) and the Requirements of ISO 31000. COSO.
8. B.M. Bowling, L. Rieger (2005) ‘Success factors for implementing enterprise risk
management,’ Bank Accounting and Finance, 18(3): 21–26.

2 Enron
1. G. Ailon (2012) ‘The discursive management of financial risk scandals: the case of
Wall Street Journal commentaries on LTCM and Enron,’ Qualitative Sociology, 35:
251–270.
2. J.E. Stiglitz (2003) The Roaring Nineties: A New History of the World’s Most Prosperous
Decade. W.W. Norton & Co.
3. L. Fox (2003) Enron: The Rise and Fall. Wiley.
4. C. Hurt (2014) ‘The duty to manage risk,’ The Journal of Corporate Law, 39(2):
153–267.
5. Ailon (2012), op cit.
6. C. Hollingsworth (2012) ‘Risk management in the post-SOX era,’ International Journal
of Auditing, 16: 35–53.
7. H.N. Butler, L.E. Ribstein (2006) The Sarbanes–Oxley Debacle: What We’ve Learned;
How to Fix It, AEI.
8. A. Dey (2010) ‘The chilling effect of Sarbanes–Oxley: a discussion of Sarbanes–Oxley
and corporate risk-taking,’ Journal of Accounting and Economics, 49(1–2), 53–57.
9. J.D. Piotroski, S. Srinivasan (2008) ‘Regulation and bonding: the Sarbanes–Oxley Act
and the flow of international listings,’ Journal of Accounting Research, 46(2): 383–425.
10. L. Bargeron, K. Lehn, C. Zutter (2009) ‘Sarbanes–Oxley and corporate risk-taking,’
Journal of Accounting and Economics, 49(1–2): 34–52.


3 Financial Risk Management


1. R. Lowenstein (2000) When Genius Failed: The Rise and Fall of Long-Term Capital
Management, Random House.
2. H.M. Markowitz (1952) ‘Portfolio selection,’ The Journal of Finance, 17(1): 77–91.
3. W.F. Sharpe (1964) ‘Capital asset prices: a theory of market equilibrium under condi-
tions of risk,’ The Journal of Finance, 19(3): 425–442.
4. F. Black, M. Scholes (1972) ‘The valuation of option contracts and a test of market
efficiency,’ The Journal of Finance, 27(2): 399–417.
5. G.J. Alexander, A.M. Baptista (2004) ‘A comparison of VaR and CVaR constraints on
portfolio selection with the mean-variance model,’ Management Science, 50(9): 1261–
1273; V. Chavez-Demoulin, P. Embreechts, J. Nešlehová (2006) ‘Quantitative models
for operational risk: extremes, dependence and aggregation,’ Journal of Banking &
Finance, 30: 399–417.
6. J. Von Neumann, O. Morgenstern (1944) Theory of Games and Economic Behaviour,
2nd ed. Princeton University Press.
7. M. Friedman, L.J. Savage (1948) ‘The utility analysis of choices involving risk,’ The
Journal of Political Economy, 56(4): 279–304.
8. S.G. Mandis (2013) What Happened to Goldman Sachs: An Insider’s Story of Organizational
Drift and Its Unintended Consequences. Harvard Business Review Press.
9. N.N. Taleb (2012) Antifragile: Things That Gain from Disorder. Random House.
10. D.X. Li (2000) ‘On default correlation: a copula approach,’ Journal of Fixed Income,
9(4): 43–54.
11. F. Salmon (2009) ‘Recipe for disaster: the formula that killed Wall Street,’ Wired, 17(3).

4 The Real Estate Crash of 2008


1. C.M. Reinhart, K.S. Rogoff (2008) ‘Is the 2007 Subprime Crisis so different? An
international historical comparison,’ American Economic Review, 98(2), 339–344.
2. G. Cooper (2008) The Origin of Financial Crises: Central Banks, Credit Bubbles and the
Efficient Market Fallacy. Vintage Books.
3. N. Dunbar (1999) Investing Money: The Story of Long-Term Capital Management and the
Legends Behind It. Wiley; R. Lowenstein (2000) When Genius Failed: The Rise and Fall
of Long-Term Capital Management. Random House.
4. B. Cohen (1997) The Edge of Chaos: Financial Booms, Bubbles, Crashes and Chaos. John
Wiley & Sons, Ltd.
5. A.S. Blinder (2013) After the Music Stopped: The Financial Crisis, the Response, and the
Work Ahead. The Penguin Press.
6. L. Laeven, F. Valencia (2010) ‘Resolution of banking crises: the good, the bad, and
the ugly,’ IMF Working Paper WP/10/146.
7. J. Taylor (2009) Getting Off Track: How Government Actions and Interventions Caused,
Prolonged, and Worsened the Financial Crisis. Hoover Press; B. Keys, T. Mukherjee, A.
Seru, V. Vig (2010) ‘Did securitization lead to lax screening? Evidence from subprime
loans,’ Quarterly Journal of Economics, 125, 307–362.
8. G. Gorton (2008) ‘The panic of 2007,’ NBER Working Paper No. 14358; M.
Brunnermeier (2009) ‘Deciphering the liquidity and credit crunch 2007–2008,’
Journal of Economic Perspectives, 23, 77–100; G. Dell’Arriccia, L. Laeven, D. Igan
(2008) ‘Credit booms and lending standards: evidence from the subprime mortgage
market,’ IMF Working Paper 08/106.

9. F.B. Wiseman (2013) Some Financial History Worth Reading: A Look at Credit, Real Estate,
Investment Bubbles & Scams, and Global Economic Superpowers. Abcor Publishers.
10. R. Boyd (2011) Fatal Risk: A Cautionary Tale of AIG’s Corporate Suicide. Wiley.
11. S. Patterson (2010) The Quants: How a New Breed of Math Whizzes Conquered Wall
Street and Nearly Destroyed It. Crown Business.
12. V. Sampath (2009) ‘The need for greater focus on nontraditional risks: the case of
Northern Rock,’ Journal of Risk Management in Financial Institutions, 2(3): 301–305.
13. H.S. Shin (2009) ‘Reflections on Northern Rock: the bank run that heralded the
global financial crisis,’ Journal of Economic Perspectives, 23(1): 101–119.
14. P. Goldsmith-Pinkham, T. Yorulmazer (2010) ‘Liquidity, bank runs, and bailouts:
Spillover effects during the Northern Rock episode,’ Journal of Financial Service
Research, 37(2/3): 83–98.
15. R. Shelp, A. Ehrbar (2009) Fallen Giant: The Amazing Story of Hank Greenberg and the
History of AIG. Wiley.
16. Ibid.
17. Ibid.
18. J.F. Egginton, J.I. Hilliard, A.P. Liebenberg, I.A. Liebenberg (2010) ‘What effect did
AIG’s bailout, and the preceding events, have on its competitors?’ Risk Management
and Insurance Review, 13(2): 225–249.
19. J. Hobbs (2011) ‘Financial derivatives, the mismanagement of risk and the case of
AIG,’ CPCU eJournal, 64(7): 1–8.
20. P.M. Linsley, R.E. Slack (2013) ‘Crisis management and an ethic of care: The case of
Northern Rock Bank,’ Journal of Business Ethics, 113(2): 285–295.

5 Financial Risk Forecast Using Machine Learning and Sentiment Analysis
1. D. Dong, Q. Dong (2003) ‘HowNet – a hybrid language and knowledge resource,’
Proceedings of 2003 International Conference on Natural Language Processing and
Knowledge Engineering, 820–824, October 26–29.
2. T. Bollerslev (1986) ‘Generalized autoregressive conditional heteroskedasticity,’
Journal of Econometrics, 31(3): 307–327.
3. B. Freisleben, K. Ripper (1997) ‘Volatility estimation with a neural network,’
Proceedings of the IEEE/IAFE on Computational Intelligence for Financial Engineering,
177–181, March 24–25; C. Burges (1998) ‘A tutorial on support vector machines for
pattern recognition,’ Data Mining and Knowledge Discovery, 2(2): 121–167.
4. B. Freisleben and K. Ripper (1997) ‘Volatility estimation with a neural network,’
Proceedings of the IEEE/IAFE on Computational Intelligence for Financial Engineering,
177–181, March 24–25.
5. J.A.K. Suykens, T.V. Gestel, J.D. Brabanter et al. (2002) ‘Least squares support vector
machines,’ Singapore: World Scientific Press; H.F. Wang, D.J. Hu (2005) ‘Comparison
of SVM and LS-SVM for Regression,’ International Conference on Neural Networks and
Brain 2005, 1, 279–283, October 13–15, 2005.

6 Online Stock Forum Sentiment Analysis


1. B. Watkins (2003) ‘Riding the wave of sentiment: an analysis of return consistency
as a predictor of future returns,’ Journal of Behavioral Finance, 4(4): 191–200.

2. W. Antweiler, M. Frank (2004) ‘Is all that talk just noise? The information content
of internet stock message boards,’ Journal of Finance, 59(3): 1259–1295.
3. R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50: 987–1008.
4. C. Burges (1998) ‘A tutorial on support vector machines for pattern recognition,’
Data Mining and Knowledge Discovery, 2(2): 121–167.
5. W. Chan (2003) ‘Stock price reaction to news and no-news: drift and reversal after
headlines,’ Journal of Financial Economics, 70: 223–260.
6. M. Baker, J. Wurgler (2006) ‘Investor sentiment and the cross-section of stock
returns,’ Journal of Finance, 61(4): 1645–1680.
7. J. Seigel (2002) Stocks for the Long Run. 3rd ed. McGraw-Hill.
8. D. Bathia, D. Bredin (2012) ‘An examination of investor sentiment effect on G7 stock
market returns,’ European Journal of Finance, DOI:10.1080/1351847X.2011.636834.

7 DEA Risk Scoring Model of Internet Stocks


1. S.A. Ross, R.M. Westerfield, B.D. Jordan (2007) Corporate Finance Essentials. McGraw-
Hill/Irwin.
2. G.J. Fielding, T.T. Babitsky, M.E. Brenner (1985) ‘Performance evaluation for bus
transit,’ Transportation Research, 19A(1): 73–82; L.V. Utkin (2007) ‘Risk analysis
under partial prior information and nonmonotone utility functions,’ International
Journal of Information Technology and Decision Making, 6: 625–647.
3. C.-T. Ho and D.S. Zhu (2004) ‘Performance measurement of Taiwan’s commercial
banks,’ International Journal of Productivity and Performance Management, 53(5):
425–434; D. Wu (2009) ‘Performance evaluation: an integrated method using data
envelopment analysis and fuzzy preference relations,’ European Journal of Operational
Research, 194(1): 227–235.
4. K. Cengiz, C. Ufuk and U. Ziya (2003) ‘Multi-criteria supplier selection using fuzzy
AHP,’ Logistics Information Management, 16(6): 382–394; T.L. Saaty (2008) ‘Decision
making with the analytic hierarchy process,’ International Journal of Services Sciences,
1(1): 83–98.
5. T.S. Felix and H.J. Chan (2003) ‘An innovative performance measurement method
for supply chain management,’ Supply Chain Management: An International Journal,
8(3): 209–223.
6. C.-T. Ho (2006) ‘Measuring bank operations performance: an approach based on
grey relation analysis,’ Journal of the Operational Research Society, 57, 227–349.
7. R.S. Kaplan and D.P. Norton (2006) Alignment: Using the Balanced Scorecard to Create
Corporate Synergies. Harvard Business School Press Books.
8. P. Espahbodi (1991) ‘Identification of problem banks and binary choice models,’
Journal of Banking and Finance, 15, 53–71.
9. D. Wu (2006) ‘A note on DEA efficiency assessment using ideal point: an improvement
of Wang and Luo’s model,’ Applied Mathematics and Computation, 2: 819–830.
10. M.J. Farrell (1957) ‘The measurement of productive efficiency,’ Journal of the Royal
Statistical Society, 120, 253–281.
11. W. Charnes, A. Cooper, E. Rhodes (1978) ‘Measuring the efficiency of decision
making units,’ European Journal of Operational Research, 2: 429–444.
12. R.D. Banker, A. Charnes, W. Cooper (1984) ‘Some models for estimating technical
and scale inefficiencies in data envelopment analysis,’ Management Science, 30,
1078–1092.

13. P.S. Sudarsanam and R.J. Taffler (1995) ‘Financial ratio proportionality and inter-
temporal stability: an empirical analysis,’ Journal of Banking & Finance, 19(1):
45–60.
14. C.-T. Ho, D.S. Zhu (2004) ‘Performance measurement of Taiwan’s commercial banks,’
International Journal of Productivity and Performance Management, 53(5): 425–434.
15. S.N. Huang, T.L. Kao (2006) ‘Measuring managerial efficiency in non-life insurance
companies: an application of two-stage data envelopment analysis,’ International
Journal of Management, 23(3): 699–720.
16. L.M. Seiford, J. Zhu (1999) ‘Profitability and marketability of the top 55 U.S. com-
mercial banks,’ Management Science, 45(9): 1270–1288; M. Gulser, M. Ilhan (2001)
‘Risk and return in the world’s major stock markets,’ Journal of Investing, (Spring):
62–67; X. Luo (2003) ‘Evaluating the profitability and marketability efficiency
of large banks: an application of data envelopment analysis,’ Journal of Business
Research, 56: 627–635; A. Barua, P.L. Brockett, W.W. Cooper, H. Deng, B.R. Parker,
T.W. Ruefli, A. Whinston (2004) ‘Multi-factor performance measure model with
an application to fortune 500 companies,’ Socio-Economic Planning Sciences, 38:
233–253; C.S. Carlos, F.C. Yolanda, M.M. Cecilio (2005) ‘Measuring DEA efficiency
in internet companies,’ Decision Support Systems, 38: 557–573; D. Wu, Z. Yang,
L. Liang (2006) ‘Using DEA-neural network approach to evaluate branch effi-
ciency of a large Canadian bank,’ Expert Systems with Applications, 31(1): 108–115;
S.F. Lo, W.M. Lu (2006) ‘Does size matter? finding the profitability and market-
ability benchmark of financial holding companies,’ Journal of Operational Research,
23(2): 229–246; H.C. Tsai, C.M. Chen, G.H. Tzeng (2006) ‘The comparative prod-
uctivity efficiency for global telecoms,’ International Journal of Production Economics,
103: 509–526.
17. D. Wu (2006) ‘A note on DEA efficiency assessment using ideal point: an improvement
of Wang and Luo’s model,’ Applied Mathematics and Computation, 2: 819–830; N.
Ahmad, D. Berg, G.R. Simons (2006) ‘The integration of analytic hierarchy process
and data envelopment analysis in a multi-criteria decision-making problem,’
International Journal of Information Technology and Decision Making, 5: 263–276.
18. J. Yao, Z. Li, K.W. Ng (2006) ‘Model risk in VaR estimation: an empirical study,’
International Journal of Information Technology and Decision Making, 5: 503–512.
19. Y. Shi, Y. Peng, G. Kou, Z. Chen (2006) ‘Classifying credit card accounts for business
intelligence and decision making: a multiple-criteria quadratic programming
approach,’ International Journal of Information Technology and Decision Making, 4:
1–19; S. Deng, Z. Xia (2006) ‘A real options approach for pricing electricity tolling
agreements,’ International Journal of Information Technology and Decision Making,
5, 421–436.

8 Bank Credit Scoring


1. G. Dickinson (2001) ‘Enterprise risk management: its origins and conceptual foun-
dation,’ The Geneva Papers on Risk and Insurance, 26(3): 360–366.
2. E.G. Baranoff (2004) ‘Risk management: a focus on a more holistic approach three
years after September 11,’ Journal of Insurance Regulation, 22(4): 71–81.
3. M. Crouhy, D. Galai, R. Mark (1998) ‘Model risk,’ Journal of Financial Engineering,
7(3/4): 267–288, reprinted in Model Risk: Concepts, Calibration and Pricing (ed. R.
Gibson), Risk Book, 2000, 17–31; M. Crouhy, D. Galai, R. Mark (2000) ‘A comparative
analysis of current credit risk models,’ Journal of Banking & Finance, 24: 59–117.

4. G.J. Alexander, A.M. Baptista (2004) ‘A comparison of VaR and CVaR constraints
on portfolio selection with the mean-variance model,’ Management Science, 50(9):
1261–1273; V. Chavez-Demoulin, P. Embrechts, J. Nešlehová (2006) ‘Quantitative
models for operational risk: extremes, dependence and aggregation,’ Journal of
Banking & Finance, 30: 2635–2658; R. Garcia, É. Renault, G. Tsafack (2007) ‘Proper
conditioning for coherent VaR in portfolio management,’ Management Science, 53(3):
483–494; N. Taylor (2007) ‘A note on the importance of overnight information in
risk management models,’ Journal of Banking & Finance, 31: 161–180.
5. T. Jacobson, J. Lindé, K. Roszbach (2006) ‘Internal ratings systems, implied credit
risk and the consistency of banks’ risk classification policies,’ Journal of Banking &
Finance, 30, 1899–1926.
6. H. Elsinger, A. Lehar, M. Summer (2006) ‘Risk assessment for banking systems,’
Management Science, 52(9): 1301–1314.
7. R.S. Kaplan, D.P. Norton (1992) ‘The balanced scorecard – measures that drive per-
formance,’ Harvard Business Review, 70(1): 71–79; and R.S. Kaplan, D.P. Norton (2006)
Alignment: Using the Balanced Scorecard to Create Corporate Synergies. Harvard Business
School Press Books.
8. D. Bigio, R.L. Edgeman, T. Ferleman (2004) ‘Six sigma availability management of
information technology in the office of the chief technology officer of Washington,
DC.,’ Total Quality Management, 15(5–6): 679–687; S. Scandizzo (2005) ‘Risk mapping
and key risk indicators in operational risk management,’ Economic Notes by Banca
Monte dei Paschi di Siena SpA, 34(2): 231–256.
9. A. Papalexandris, G. Ioannou, G. Prastacos, K.E. Soderquist (2005) ‘An integrated
methodology for putting the balanced scorecard into action,’ European Management
Journal, 23(2): 214–227.
10. J. Calandro, Jr., S. Lane (2006) ‘An introduction to the enterprise risk scorecard,’
Measuring Business Excellence, 10(3): 31–40.
11. U. Anders, M. Sandstedt (2003) ‘An operational risk scorecard approach,’ Risk, 16(1):
47–50; H. Wagner (2004) ‘The use of credit scoring in the mortgage industry,’ Journal
of Financial Services Marketing, 9(2): 179–183.
12. S. Caudle (2005) ‘Homeland security,’ Public Performance & Management Review,
28(3): 352–375.
13. H.S.B. Herath, W.G. Bremser (2005) ‘Real-option valuation of research and devel-
opment investments: implications for performance measurement,’ Managerial
Auditing Journal, 20(1): 55–72.
14. F. Lhabitant (2000) ‘Coping with model risk,’ in The Professional Handbook of
Financial Risk Management, M. Lore, L. Borodovsky (eds), Butterworth-Heinemann.
15. J. Sobehart, S. Keenan (2001) ‘Measuring default accurately,’ Credit Risk Special Report,
Risk, 14: 31–33.

9 Credit Scoring using Multiobjective Data Mining


1. C.L. Hwang, K. Yoon (1981) Multiple Attribute Decision Making: Methods and
Applications. Springer-Verlag.
2. Y. Peng (2000) Management Decision Analysis. Science Publication; T.-C. Chu (2002)
‘Facility location selection using fuzzy TOPSIS under group decisions,’ International
Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, 10(6): 687–701.
3. D.L. Olson, D. Wu (2005) ‘Decision making with uncertainty and data mining,’
Advanced Data Mining and Applications: First International Conference, ADMA,

X. Li, S. Wang, Z.Y. Dong eds, Lecture Notes in Artificial Intelligence. Keynote paper.
Springer, 1–9.
4. D.L. Olson (2005) ‘Comparison of weights in TOPSIS models,’ Mathematical and
Computer Modelling, 40: 721–727.
5. M. Freimer, P.L. Yu (1976) ‘Some new results on compromise solutions for group
decision problems,’ Management Science, 22(6): 688–693; T.E. Dielman (2005) ‘Least
absolute value regression: recent contributions,’ Journal of Statistical Computation &
Simulation, 75(4): 263–286.
6. S.C. Caples, M.E. Hanna (1997) ‘Least squares versus least absolute value in real
estate appraisals,’ Appraisal Journal, 65(1): 18–24.
7. G.W. Bassett, Jr. (1997) ‘Robust sports rating based on least absolute errors,’ American
Statistician, 51(2): 99–105.
8. Olson (2005), ibid.
9. S.M. Lee, D.L. Olson (2004) ‘Goal programming formulations for a comparative ana-
lysis of scalar norms and ordinal vs. ratio data,’ Information Systems and Operational
Research, 42(3): 163–174.
10. A. Barnes (1987) ‘The analysis and use of financial ratios: a review article’, Journal
of Business and Finance Accounting, 14: 449–461; H. Deng, C.-H. Yeh, R.J. Willis
(2000) ‘Inter-company comparison using modified TOPSIS with objective weight’,
Computers & Operations Research, 27: 963–973.
11. D.L. Olson (2004) ‘Data set balancing’, Lecture Notes in Computer Science: Data Mining
and Knowledge Management, Y. Shi, W. Xu, Z. Chen, eds. Springer, 71–80.
12. J. Laurikkala (2002) ‘Instance-based data reduction for improved identification of
difficult small classes’, Intelligent Data Analysis, 6(4): 311–322.
13. B. Bull (2005) ‘Exemplar sampling: Nonrandom methods of selecting a sample
which characterizes a finite multivariate population,’ American Statistician, 59(2):
166–172.
14. D.L. Olson, D. Wu (2006) ‘Simulation of fuzzy multiattribute models for grey rela-
tionships,’ European Journal of Operational Research, 175(1): 111–120.

10 Online Banking Efficiency and Risk Evaluation with Principal Component Analysis

1. K. Furst, W.W. Lang, D. Nolle (2000) ‘Internet banking: developments and pros-
pects,’ Economic and Policy Analysis, Working Paper 2000–9.
2. Dominion (2001) ‘Internet banking struggles for profits,’ available at www.stuff.
co.nz/inl/index/0,1008,779016a28,FF.html.
3. B. Stafford (2001) ‘Risk management and internet banking: what every banker needs
to know,’ Community Banker, 10(2): 48–49.
4. C-T.B. Ho, D. Wu (2009) ‘Online banking performance evaluation using data
envelopment analysis and principal component analysis,’ Computers & Operations
Research, 36(6): 1835–1842.
5. Jupiter Research (2004) ‘FIND research, Institute for Information Industry,’ available
at http://www.find.org.tw.
6. C. Serrano-Cinca, Y. Fuertes-Callén, C. Mar-Molinero (2005) ‘Measuring DEA effi-
ciency in Internet companies,’ Decision Support Systems, 38: 557–573.
7. H.D. Sherman, F. Gold (1985) ‘Bank branch operating efficiency: evaluation with
data envelopment analysis,’ Journal of Banking and Finance, 9(2): 297–316.
8. A. Soteriou, S.A. Zenios, (1999) ‘Operations, quality, and profitability in the pro-
vision of banking services,’ Management Science, 45(9): 1221–1238.
9. H. Tulkens (1993) ‘On FDH efficiency analysis: some methodological issues and
applications to retail banking, courts and urban transit,’ Journal of Productivity
Analysis, 4(1–2): 183–210.
10. A.N. Berger, D.B. Humphrey (1992) ‘Measurement and efficiency issues in com-
mercial banking,’ in Z. Griliches, ed., Output Measurement in the Service Sectors, NBER
Studies in Income and Wealth, 245–300. The University of Chicago Press.
11. A.N. Berger, D.B. Humphrey (1997) ‘Efficiency of financial institutions: inter-
national survey and direction for future research,’ European Journal of Operational
Research, 98:175–212.
12. A.N. Berger, D. Hancock, D.B. Humphrey (1993) ‘Bank efficiency derived from the
profit function,’ Journal of Banking and Finance, 17(2–3): 317–348.
13. D.D. Wu (2009) ‘Performance evaluation: an integrated method using data
envelopment analysis and fuzzy preference relations,’ European Journal of Operational
Research, 194(1): 227–235.
14. K. Eriksson, K. Kerem, D. Nilsson (2008) ‘The adoption of commercial innovations
in the former Central and Eastern European markets: the case of internet banking
in Estonia,’ International Journal of Bank Marketing, 26(3): 154–169.
15. Bank of America (2007) Annual Report 2007, available at www.rbs.com/microsites/
gra2007/downloads/RBS_GRA_2007.pdf.
16. Citibank (2007) Annual Report 2007, available at www.citi.com/citi/fin/data/k07c.
pdf.
17. HSBC (2007) Annual Report 2007, available at www.investis.com/reports/hsbc_
ar_2007_En/report.php?type=1.
18. Barclays (2007) Annual Report 2007, available at www.barclaysannualreport.com/
index.html.
19. Chase (2007) Annual Report 2007, available at: http://investor.shareholder.com/
common/.
20. Wells Fargo (2007) Annual Report 2007, available at www.wellsfargo.com/downloads/
pdf/invest_relations/wf2007annualreport.pdf.
21. Lloyds (2007) Annual Report 2007, available at www.investorrelations.lloydstsb.com/
media/pdf_irmc/ir/2007/2007_LTSB_Group_R&A.pdf.
22. Royal Bank of Scotland (2007) Annual Report 2007, available at www.rbs.com/micro-
sites/gra2007/downloads/RBS_GRA_2007.pdf.
23. SunTrust (2007) Annual Report 2007, available at www.suntrustenespanol.com/
suntrust.
24. Wachovia (2007) Annual Report 2007, available at www.wachovia.com/file/2007_
Wachovia_Annual_Report.pdf.
25. Basel (2005) ‘Amendment to the Capital Accord to the Incorporate Market Risks,’ Basel
Committee on Banking Supervision, Basel.

11 Economic Perspective
1. F.H. Knight (1921) Risk, Uncertainty and Profit. Hart, Schaffner & Marx.
2. J.G. Courcelle-Seneuil (1852) ‘Profit,’ in Coquelin and Guillaumin, eds, Dictionnaire
de l’ėconomie politique, 2nd ed.
3. J.H. Von Thünen (1826) The Isolated State.
4. F.B. Hawley (1907) Enterprise and the Productive Process.


5. A. Ganegoda, J. Evans (2012) ‘A framework to manage the measurable, immeas-
urable and the unidentifiable financial risk,’ Australian Journal of Management, 39(5):
5–34.
6. D. Li (2000) ‘On default correlation: a copula approach,’ Journal of Fixed Income, 9(4):
43–54.
7. H.M. Markowitz (1952) ‘Portfolio selection,’ The Journal of Finance, 7(1): 77–91.
8. E.F. Fama (1965) ‘Random walks in stock market prices,’ Financial Analysts Journal,
51(1): 404–419.
9. K. Buehler, A. Freeman, R. Hulme (2008) ‘The new arsenal of risk management,’
Harvard Business Review, 86(9): 93–100.
10. W.F. Sharpe (1970) Portfolio Theory and Capital Markets. McGraw-Hill Book
Company.
11. D. Kahneman, A. Tversky (1972) ‘Subjective probability: a judgment of representa-
tiveness,’ Cognitive Psychology, 3: 430–454.
12. C. Mackay (1841) Extraordinary Popular Delusions and the Madness of Crowds.
13. S. Patterson (2010) The Quants: How a New Breed of Math Whizzes Conquered Wall
Street and Nearly Destroyed It. Crown Business.
14. J.N. Stanard, M.G. Wacek (1991) ‘The spiral in the catastrophe retrocessional market,’
Casualty Actuarial Society Discussion Paper, May, Arlington, VA.
15. R. Lowenstein (2000) When Genius Failed: The Rise and Fall of Long-Term Capital
Management. Random House.
16. N.N. Taleb (2007) The Black Swan: The Impact of the Highly Improbable. Penguin
Books.
17. D. Kahneman, A. Tversky (2000) Choices, Values, and Frames. Cambridge University Press.
18. G.A. Akerlof, R.J. Shiller (2009) Animal Spirits: How Human Psychology Drives the
Economy, and Why It Matters for Global Capitalism. Princeton University Press.
19. N. Doherty, J. Lamm-Tennant, L.T. Starks (2009) ‘Lessons from the financial crisis
on risk and capital management: the case of insurance companies,’ Journal of Applied
Corporate Finance, 21(4): 52–59.
20. M.R. Powers, T.Y. Powers, S. Gao (2012) ‘Risk finance for catastrophe losses with
Pareto-calibrated Lévy-stable severities,’ Risk Analysis, 32(11): 1967–1977.

12 British Petroleum Deepwater Horizon


1. R. Zolkos, M. Bradford (2011) ‘Risk management faulted in probe of BP disaster’,
Business Insurance, 45(36): 4–25.
2. R.T. Silves, L.K. Comfort (2012) ‘The Exxon Valdez and BP Deepwater Horizon oil
spills: Reducing risk in socio-technical systems’, American Behavioral Scientist, 56(1):
76–103.
3. C. Perrow (1984) Normal Accidents: Living with High-risk Technologies, Basic Books.
4. A. Borison, G. Hamm (2011) ‘Black swan or black sheep?’ Risk Management, 58(3):
48–53.
5. P. Eckle, P. Burgherr (2013) ‘Bayesian data analysis of severe fatal accident risk in the
oil chain’, Risk Analysis, 33(1): 146–160.
6. H. Abbasinejad, Y. Gudarzi Farahani, E. Ashari Ghara (2012) ‘Energy consumption
in Iran with Bayesian approach’, OPEC Energy Review, 36(4): 444–455.
13 Bank Efficiency Analysis


1. H.D. Sherman, F. Gold (1985) ‘Bank branch operating efficiency: evaluation with
data envelopment analysis,’ Journal of Banking and Finance, 9(2): 297–316; H. Tulkens
(1993) ‘On FDH efficiency analysis: some methodological issues and applica-
tions to retail banking, courts and Urban transit,’ Journal of Productivity Analysis,
4(1/2): 183–210; A.N. Berger, D.B. Humphrey (1997) ‘Efficiency of financial insti-
tutions: international survey and directions for future research,’ European Journal
of Operational Research, 98, 175–212; J.A. Clark (1996) ‘Economic cost, scale effi-
ciency, and competitive viability in banking,’ Journal of Money, Credit and Banking,
28(3): 342–364; R. Deyoung (1997) ‘A diagnostic test for the distribution-free effi-
ciency estimator: an example using US commercial bank data,’ European Journal of
Operational Research, 98(2): 243–249.
2. S.H. Wang (2003) ‘Adaptive non-parametric efficiency frontier analysis: a neural-
network-based model,’ Computers & Operations Research, 30: 279–295.
3. A.D. Athanassopoulos, S.P. Curram (1996) ‘A comparison of data envelopment ana-
lysis and artificial neural networks as tool for assessing the efficiency of decision
making units,’ Journal of the Operational Research Society, 47(8): 1000–1016.
4. A. Costa, R.N. Markellos (1997) ‘Evaluating public transport efficiency with neural
network models,’ Transportation Research C, 5(5): 301–312.
5. A.R. Fleissig, T. Kastens, D. Terrell (2000) ‘Evaluating the semi-nonparametric fourier,
aim, and neural networks cost functions,’ Economics Letters, 68(3): 235–244.
6. D. Santin, F.J. Delgado, A. Valino (2004) ‘The measurement of technical efficiency:
a neural network approach,’ Applied Economics, 36(6): 627–635.
7. P.C. Pendharkar, J.A. Rodger (2003) ‘Technical efficiency-based selection of learning
cases to improve forecasting accuracy of neural networks under monotonicity
assumption,’ Decision Support Systems, 36(1): 117–136.
8. Athanassopoulos and Curram (1996), op cit.
9. R.D. Banker, A. Charnes, W.W. Cooper (1984) ‘Some models for estimating tech-
nical and scale inefficiencies in data envelopment analysis,’ Management Science, 30:
1078–1092.
10. A. Charnes, W.W. Cooper, E. Rhodes (1978) ‘Measuring the efficiency of decision
making units,’ European Journal of Operational Research, 2(6): 429–444.
11. R. Hecht Nielsen (1990) ‘Neural computing,’ Addison Wesley, 124–133; J.W. Shavlik,
R.J. Mooney, G.G. Towell (1991) ‘Symbolic and neural learning algorithms: an
experimental comparison,’ Machine Learning, 6: 111–143.
12. Pendharkar and Rodger (2003), op. cit.
13. M.D. Troutt, A. Rai, A. Zhang (1995) ‘The potential use of DEA for credit applicant
acceptance systems,’ Computers and Operations Research, 4: 405–408.

14 Catastrophe Bond and Risk Modeling


1. A. Dassios, J. Jang (2003) ‘Pricing of catastrophe reinsurance and derivatives using
the cox process with shot noise intensity,’ Finance and Stochastics, 7: 73–95.
2. H. Geman, M. Yor (1997) ‘Stochastic time changes in catastrophe option pricing,’
Insurance: Mathematics and Economics, 21: 185–193.
3. R.T. Silves, L.K. Comfort (2012) ‘The Exxon Valdez and BP Deepwater Horizon oil
spills: reducing risk in socio-technical systems,’ American Behavioral Scientist, 56(1):
76–103.
4. Dassios and Jang (2003), op cit.


5. S. Jaimungal, T. Wang (2005) ‘Catastrophe options with stochastic interest rates and
compound Poisson losses,’ Insurance: Mathematics and Economics, 38: 469–483.
6. S. Cox, H. Pedersen (2000) ‘Catastrophe risk bonds,’ North American Actuarial Journal,
48: 56–82.
7. J. Lee, M. Yu (2007) ‘Variation of catastrophe reinsurance with catastrophe bonds,’
Insurance: Mathematics and Economics, 41: 264–278.
8. R.C. Merton (1974) ‘On the pricing of corporate debt: the risk structure of interest
rates,’ The Journal of Finance, 29(2): 449–470.
9. C. Perrow (1984) Normal Accidents: Living with High-risk Technologies, Basic Books.
10. V.E. Vaugirard (2003) ‘Pricing catastrophe bonds by an arbitrage approach,’ The
Quarterly Review of Economics and Finance, 43: 119–132.
11. L. Zhu (2008) ‘Double exponential jump diffusion model for catastrophe bonds
pricing,’ Journal of Fujian University of Technology, 6: 336–338 (in Chinese).
12. L.F. Chang, M.W. Hung (2009) ‘Analytical valuation of catastrophe equity options
with negative exponential jumps,’ Insurance: Mathematics and Economics, 44:
59–69.
13. Geman and Yor (1997), op cit.; Dassios and Jang (2003), op cit.
14. C. Andrieu, N. de Freitas, A. Doucet, M.I. Jordan (2003) ‘An introduction to MCMC for
machine learning,’ Machine Learning, 50: 5–43.
15. D. Wu, D.L. Olson (2010) ‘Enterprise risk management: coping with model risk in a
large bank,’ Journal of the Operational Research Society, 61(2): 179–190.

15 Bilevel Programming Merger Analysis in Banking


1. K. Aquino, A. Reed II (1998) ‘A social dilemma perspective on cooperative behavior
in organizations: the effects of scarcity, communication, and unequal access on the
use of a shared resource,’ Group & Organization Management, 23: 390–413.
2. P. Bonacich (1987) ‘Communication networks and collective action,’ Social Networks,
9: 389–396; J.A. Sniezek, D.R. May, J.E. Sawyer (1990) ‘Social uncertainty and inter-
dependence: a study of resource allocation decision in groups,’ Organizational
Behavior and Human Decision Processes, 46: 155–180.
3. E.A. Mannix (1993) ‘Organizations as resource dilemmas: the effects of power
balance on coalition formation in small groups,’ Organizational Behavior and Human
Decision Processes, 55: 1–22.
4. MEI Computer Technology Group Inc. (2011) ‘2011 Trade Promotion Management
Trends.’
5. D.B. Jemison, S.B. Sitkin (1986) ‘Corporate acquisitions: a process perspective,’
Academy of Management Review, 11(1): 145–163.
6. J. Paradi, S. Vela, H. Zhu (2010) ‘Adjusting for cultural differences, a new DEA model
applied to a merged bank,’ Journal of Productivity Analysis, 33: 109–123.
7. S. Kreipl, M. Pinedo (2004) ‘Planning and scheduling in supply Chain: an overview
of issues in practice,’ Production and Operations Management, 13(1): 29–77.
8. D.D. Wu, J.R. Birge (2012) ‘Serial chain merger evaluation model and application to
mortgage banking,’ Decision Sciences, 43(1): 5–36.
9. Mannix (1993), op cit.
10. J. Bard (1998) Practical Bilevel Optimization: Algorithms and Applications. Kluwer
Academic Publishers.
11. P. Hansen, B. Jaumard, G. Savard (1992) ‘New branch and bound rules for linear
bilevel programming,’ SIAM Journal on Scientific and Statistical Computing, 13(5):
1194–1217.
12. Bard (1998), op cit.
13. W.W. Cooper, L.M. Seiford, K. Tone (2000) Data Envelopment Analysis. Kluwer.
14. S.C. Ray (2004) Data Envelopment Analysis: Theory and Techniques for Economics and
Operations Research. Cambridge University Press, 189–208.
15. P. Bogetoft, D. Wang (2005) ‘Estimating the potential gains from mergers,’ Journal of
Productivity Analysis, 23: 145–171.
16. Wu and Birge (2012), op cit.
17. R. Maddigan, J. Zaima (1985) ‘The profitability of vertical integration,’ Managerial
and Decision Economics, 6(3): 178–179.
18. E.H. MacDonald (2001) ‘GIS in banking: evaluation of Canadian Bank mergers,’
Canadian Journal of Regional Science, 24(3): 419–442.
19. S. Finkelstein, H. Jerayr (2002) ‘Understanding acquisition performance: the role of
transfer effects,’ Organization Science, 13(1): 36–47.
20. Cooper et al. (2000), op cit.
21. C.H. Wang, R. Gopal, S. Zionts (1997) ‘Use of data envelopment analysis in assessing
information technology impact on firm performance,’ Annals of Operations Research,
73: 191–213.
22. Ibid.
23. J.D. Cummins, X. Xie (2008) ‘Mergers and acquisitions in the US property-liability
insurance industry: productivity and efficiency effects,’ Journal of Banking & Finance,
2(1): 30–55.

16 Sustainability and Risk in Globalization


1. E.G. Baranoff (2004) ‘Risk management: a focus on a more holistic approach three
years after September 11,’ Journal of Insurance Regulation, 22(4): 71–81.
2. D.B. McDonald (2011) ‘When risk management collides with enterprise sustain-
ability,’ Journal of Leadership, Accountability and Ethics, 8(3): 56–66.
3. I.I. Mitroff, M.C. Alpaslan (2003) ‘Preparing for evil,’ Harvard Business Review, 81(4):
109–115.
4. C. Perrow (1999) Normal Accidents: Living with High-Risk Technologies. Princeton
University Press.
5. M. Drew (2007) ‘Information risk management and compliance – expect the unex-
pected,’ BT Technology Journal, 25(1), 19–29.
6. T. Lambooy (2011) ‘Corporate social responsibility: sustainable water use,’ Journal of
Cleaner Production, 19(8): 852–866.
7. D. Ng, P.D. Goldsmith (2010) ‘Bio energy entry timing from a resource based
view and organizational ecology perspective,’ International Food & Agribusiness
Management Review, 13(2): 69–100.
8. D. Meyler, J.P. Stimpson, M.P. Cutghin (2007) ‘Landscapes of risk,’ Organization &
Environment, 20(2): 204–212.
9. M. Santiago (2011) ‘The Huasteca rain forest,’ Latin American Research Review, 46:
32–54.
10. T.K. Zhelev (2005) ‘On the integrated management of industrial resources incorpor-
ating finances,’ Journal of Cleaner Production, 13(5): 469–474.
11. T.M. Mata, R.L. Smith, D.M. Young, C.A.V. Costa (2005) ‘Environmental analysis of
gasoline blending components through their life cycle,’ Journal of Cleaner Production,
13(5): 517–523.
12. H. Von Blottnitz, M.A. Curran (2007) ‘A review of assessments conducted on bio-
ethanol as a transportation fuel from a net energy, greenhouse gas, and environ-
mental life cycle perspective,’ Journal of Cleaner Production, 15(7): 607–619.
13. A. Akcil (2006) ‘Managing cyanide: health, safety and risk management prac-
tices at Turkey’s Ovacik gold–silver mine,’ Journal of Cleaner Production, 14(8):
727–735.
14. N. Gülpinar, E. Canakoglu, D. Pachamanova (2014) ‘Robust investment decisions
under supply disruption in petroleum markets,’ Computers & Operations Research, 44:
75–91.
15. F. Cucchiella, M. Gastaldi (2006) ‘Risk management in supply chains: a real option
approach,’ Journal of Manufacturing Technology Management, 17(6): 700–720.
16. B. Ritchie, C. Brindley (2007) ‘An emergent framework for supply chain risk man-
agement and performance measurement,’ Journal of the Operational Research Society,
58: 1398–1411.
17. F.B. Hawley (1907) Enterprise and the Productive Process.
18. F.H. Knight (1921) Risk, Uncertainty, and Profit. Hart, Schaffner & Marx.
19. D. Kahneman, A. Tversky (2000) Choices, Values, and Frames. Cambridge University Press.

17 Risk from Natural Disasters


1. C. McDonald (2009) ‘New PRIMA president sees public RMs as masters of dis-
aster,’ National Underwriter/Property & Casualty Risk & Benefits Management, 113(21):
17–31.
2. W.-J. Tan, P. Enderwick (2006) ‘Managing threats in the global era: the impact and
response to SARS,’ Thunderbird International Business Review, 48(4): 515–536.
3. B. Lee, F. Preston (2012) Preparing for High-impact, Low-probability Events: Lessons from
Eyjafjallajökull. Chatham House Report.
4. N.N. Taleb, D.G. Goldstein, M.W. Spitznagel (2009) ‘The six mistakes executives
make in risk management,’ Harvard Business Review, 87(10): 78–81.
5. K. Hopkins (2003) ‘Value opportunity three: improving the ability to fulfill demand,’
Business Week, January 13.
6. A.S. Mukherjee (2008) The Spider’s Strategy: Creating Networks to Avert Crisis, Create
Change, and Really Get Ahead. FT Press.
7. N. Kapucu, M. Van Wart (2008) ‘Making matters worse: an anatomy of lead-
ership failures in managing catastrophic events,’ Administration & Society, 40(7):
711–740.
8. D. Alexander (2003) ‘Towards the development of standards in emergency
management training and education,’ Disaster Prevention and Management, 12:
113–123.
9. G. Suder, D.W. Gillingham (2007) ‘Paradigms and paradoxes of agricultural risk gov-
ernance,’ International Journal of Risk Assessment and Management, 7(3): 444–457.
10. L.A. Reilly, O. Courtenay (2007) ‘Husbandry practices, badger sett density and
habitat composition as risk factors for transient and persistent bovine tuberculosis
on UK cattle farms,’ Preventive Veterinary Medicine, 80(2–3): 129–142.
11. K.S. Markel, L.A. Barclay (2007) ‘The intersection of risk management and human
resources: an illustration using genetic mapping,’ International Journal of Risk
Assessment and Management, 7(3): 326–340.
12. D.H. Smaltz, R. Carpenter, J. Saltz (2007) ‘Effective IT governance in healthcare
organizations: a tale of two organizations,’ International Journal of Healthcare
Technology and Management, 8(1/2): 20–41.
13. D. Dalcher (2007) ‘Why the pilot cannot be blamed: a cautionary note about excessive
reliance on technology,’ International Journal of Risk Assessment and Management,
7(3): 350–366.
14. M. Baucells, F.H. Heukamp (2009) ‘Probability and time tradeoff,’ Working Paper,
http://ssrn.com/abstract=970570.
15. J. Pan, M. Wang, D. Li, J. Le (2009) ‘Automatic generation of seamline network
using area Voronoi diagrams with overlap,’ IEEE Transactions on Geoscience and
Remote Sensing, 47(6): 1737–1744.
16. D. Engel (2009) ‘Hi-tech solutions for crisis management,’ African Business,
352: 50.
17. J. Wei, D. Zhao, L. Liang (2009) ‘Estimating the growth models of news stories on
disasters,’ Journal of the American Society for Information Science and Technology, 60(9):
1741–1755.
18. M. Saadatseresht, A. Mansourian, M. Taleai (2009) ‘Evacuation planning using
multiobjective evolutionary optimization approach,’ European Journal of Operational
Research, 198(1): 305–314.
19. R. Morelli, A. Tucker, N. Danner, T.R. de Lanerolle, H.J.C. Ellis, O. Izmirli, D. Krizanc,
G. Parker (2009) ‘Revitalizing computing education through free and open source
software for humanity,’ Communications of the ACM, 52(8): 67–75.
20. N. Santella, L.J. Steinberg, K. Parks (2009) ‘Decision making for extreme events:
Modeling critical infrastructure interdependencies to aid mitigation and response
planning,’ Review of Policy Research, 26(4): 409–422.
21. F. Aleskerov, A.L. Say, A. Toker, H.O.L. Akin, G. Altay (2005) ‘A cluster-based decision
support system for estimating earthquake damage and casualties,’ Disasters, (3):
255–276.

18 Pricing of Carbon Emission Exchange in the EU ETS


1. M. Kainuma, Y. Matsuoka, T. Morita (1999) ‘Development of AIM (Asian-Pacific
Integrated Model) for coping with global warming,’ Proceedings of the IEEE
International Conference on System Man and Cybernetics, 6: 569–574.
2. W.D. Nordhaus, J.G. Boyer (1999) ‘Requiem for Kyoto: an economic analysis of the
Kyoto protocol’, The Energy Journal, 20: 93–130.
3. W.D. Nordhaus (2001) ‘Climate change: global warming economics’, Science,
294(5545): 1283–1284.
4. P. Capros, L. Mantzos (2000) ‘The economic effects of industry-level emission
trading to reduce greenhouse gases’, Report to DG environment, E3M-Laboratory 21
at ICCS/NTUA; P. Criqui, A. Kitous (2003) ‘Impacts of linking JI and CDM credits to
the European emission allowance trading scheme,’ KPI technical report; G. Klepper,
S. Peterson (2004) ‘The EU emissions trading scheme: allowance prices, trade
flows, competitiveness effects’, European Environment, 14(4): 201–218; G. Klepper, S.
Peterson (2006) ‘Emissions trading, CDM, JI and more – the climate strategy of the
EU’, Energy Journal, 27(2): 1–26.
5. G. Daskalakis, D. Psychoyios, R.N. Markellos (2009) ‘Modeling CO2 emission allowance prices and derivatives: evidence from the European trading’, Journal of Banking
& Finance, 33(7): 1230–1241.
6. M. Uhrig-Homburg, M. Wagner (2006) ‘Success chances and optimal design of
derivatives on CO2 emission certificates,’ Working Paper, University of Karlsruhe.
7. M.S. Paolella, L. Taschini (2006) ‘An econometric analysis of emission trading
allowances,’ Research Paper Series 06–26, FINRISK: National Center of Competence
in Research Financial Valuation and Risk Management.
8. E. Benz, S. Trück, (2006) ‘CO2 emission allowances trading in Europe – specifying a
new class of assets’, Problems and Perspectives in Management, 4(3): 30–40.
9. S. Borak, W. Härdle, S. Trück, R. Weron (2006) ‘Convenience yields for CO2 emission
allowance future contracts,’ SFB 649 discussion paper 2006–076, SFB Economic Risk
Berlin.
10. J. Seifert, M. Uhrig-Hombur, M. Wagner (2008) ‘Dynamic behavior of CO2 spot
prices,’ Journal of Environmental Economics and Management, 56(2): 180–194.
11. D. Burtraw (1996) ‘Cost savings sans allowance trades? Evaluating the SO2 emission
trading program to date,’ Discussion Paper 95–30-REV.
12. J.M. Burniaux, J.O. Martins (2000) ‘Carbon emission leakages: a general equilibrium
view,’ OECD Economics Department Working Papers No. 242.
13. T.J. Considine (2000) ‘The impacts of weather variations on energy demand and
carbon emissions,’ Resource and Energy Economics, 22: 295–314.
14. J. Sijm, S. Bakker, Y. Chen, H. Harmesen, W. Lise (2005) ‘CO2 price dynamics: the
implications of EU emissions trading on the price of electricity,’ Report ECNC-
05–81, Energy Research Center of the Netherlands (ECN).
15. U. Ciorba, A. Lanza, F. Pauli (2001) ‘Kyoto protocol and emission trading: does the
US make a difference?’ FEEM working paper 90.2001, Milan.
16. U. Springer (2003) ‘The market for tradable GHG permits under the Kyoto Protocol:
a survey of model studies’, Energy Economics, 25: 527–551.
17. U. Springer, M. Varilek (2004) ‘Estimating the price of tradable permits for green-
house gas emissions in 2008–2012’, Energy Policy, 32: 611–621.
18. M. Manasanet-Bataller, A. Pardo, E. Valor (2007) ‘CO2 prices, energy and weather’,
The Energy Journal, 28(3): 73–92.
19. D.B. Nelson (1991) ‘Conditional heteroskedasticity in asset returns: a new approach’,
Econometrica, 59, 347–370.

19 Volatility Forecasting of the Crude Oil Market


1. R. Bacon, M. Kojima (2008) ‘Coping with Oil Price Volatility,’ Energy sector man-
agement assistance program, Energy Security Special Report 005/08.
2. J.C. Hung, M.C. Lee, H.C. Liu (2008) ‘Estimation of value-at-risk for energy com-
modities via fat-tailed GARCH models,’ Energy Economics, 30(3):1173–1191.
3. P.K. Narayan, S. Narayan, A. Prasad (2008) ‘Understanding the oil price-exchange
rate nexus for the Fiji islands,’ Energy Economics, 30(5): 2686–2696.
4. F. Malik, B.T. Ewing (2009) ‘Volatility transmission between oil prices and equity
sector returns,’ International Review of Financial Analysis, 18(3): 95–100.
5. A.H. Alizadeh, N.K. Nomikos, P.K. Pouliasis (2008) ‘A Markov regime switching
approach for hedging energy commodities,’ Journal of Banking & Finance,
32(9):1970–1983.
6. C. Aloui, R. Jammazi (2009) ‘The effects of crude oil shocks on stock market shifts
behaviour: a regime switching approach,’ Energy Economics, 31(5): 789–799.
7. F. Klaassen (2002) ‘Improving GARCH volatility forecasts with regime-switching
GARCH,’ Empirical Economics, 27: 363–394.
8. A. Cologni, M. Manera (2009) ‘The asymmetric effects of oil shocks on output
growth: a Markov–Switching analysis for the G-7 countries,’ Economic Modelling,
26(1): 1–29.
9. Y. Fan, Y.J. Zhang, H.T. Tsaic, Y.M. Wei (2008) ‘Estimating ‘Value at Risk’ of crude oil
price and its spillover effect using the GED-GARCH approach,’ Technological Change
and the Environment, 30(6): 3156–3171.
10. C. Aloui, S. Mabrouk (2009) ‘Value-at-risk estimations of energy commodities via
long-memory, asymmetry and fat-tailed GARCH models,’ Energy Policy, 38(5):
2326–2339.
11. P. Agnolucci (2009) ‘Volatility in crude oil futures: a comparison of the pre-
dictive ability of GARCH and implied volatility models,’ Energy Economics, 31(2):
316–321.
12. C. Engel (1994) ‘Can the Markov switching model forecast exchange rates? ‘ Journal
of International Economics, 36(1): 151–165.
13. M.T. Vo (2009) ‘Regime-switching stochastic volatility: evidence from the crude oil
market,’ Energy Economics, 31(5): 779–788.
14. E. Fama (1970) ‘Efficient capital markets: a review of theory and empirical work,’
Journal of Finance, 25: 383–417.
15. R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50: 987–1008.
16. T. Bollerslev (1986) ‘Generalized autoregressive conditional heteroskedasticity,’
Journal of Econometrics, 31: 307–327.
17. D.B. Nelson (1991) ‘Conditional heteroskedasticity in asset returns: a new approach,’
Econometrica, 59: 347–370.
18. J.E. Raymond, R.W. Rich (1997) ‘Oil and the macroeconomy: a Markov state-
switching approach,’ Journal of Money, Credit and Banking, 29(2): 193–213.
19. J.D. Hamilton (1989) ‘A new approach to the economic analysis of nonstationary
time series and the business cycle,’ Econometrica, 57(2): 357–384.
20. D. Cousineau, S. Brown, A. Heathcote (2004) ‘Fitting distributions using maximum
likelihood: methods and packages,’ Behavior Research Methods, Instruments, &
Computers, 36: 742–756.

20 Confucius Three-stage Learning of Risk Management


1. D. Gardner (2007) The Four Books. The Teachings of the Later Confucian Tradition.
Hackett Publishing.
2. X. Yao, H. Yao (2000) An Introduction to Confucianism. Cambridge University Press.
3. A. Ben-Tal, A. Nemirovski (2000) ‘Robust solutions of linear programming problems
contaminated with uncertain data’, Mathematical Programming, 88, 411–424.
4. J.C. Smith (2009) Pseudoscience and Extraordinary Claims of the Paranormal: A Critical
Thinker. Wiley-Blackwell. ISBN 978–1405181228.
5. D. Wu, D.L. Olson (2009) ‘Introduction to the special section on optimizing risk
management. Methods and tools’, Human and Ecological Risk Assessment, 15(2):
220–226.
6. Y. Bauman, G. Klein (2010) The Cartoon Introduction to Economics: Volume One: Microeconomics. Hill and Wang.
7. H.M. Markowitz (1959) Portfolio Selection: Efficient Diversification of Investments. John
Wiley & Sons (reprinted by Yale University Press, 1970).
8. W.F. Sharpe (1964) ‘Capital asset prices: a theory of market equilibrium under condi-
tions of risk’, Journal of Finance, 19(3): 425–442.
9. J. Tobin (1958) ‘Liquidity preference as behavior towards risk’, The Review of Economic
Studies, 25: 65–86.
10. G.A. Akerlof (1970) ‘The market for “lemons”: quality uncertainty and the market
mechanism,’ Quarterly Journal of Economics, 84(3): 488–500.
11. Counterparty Risk Management Policy Group III (2008) Containing Systemic Risk,
August 6.
References

H. Abbasinejad, Y. Gudarzi Farahani, E. Ashari Ghara (2012) ‘Energy consumption in Iran with Bayesian approach’, OPEC Energy Review, 36(4): 444–455.
P. Agnolucci (2009) ‘Volatility in crude oil futures: a comparison of the predictive ability
of GARCH and implied volatility models’, Energy Economics, 31(2): 316–321.
N. Ahmad, D. Berg, G.R. Simons (2006) ‘The integration of analytic hierarchy process and
data envelopment analysis in a multi-criteria decision-making problem’, International
Journal of Information Technology and Decision Making, 5: 263–276.
G. Ailon (2012) ‘The discursive management of financial risk scandals: the case of Wall
Street Journal commentaries on LTCM and Enron’, Qualitative Sociology, 35: 251–270.
A. Akcil (2006) ‘Managing cyanide: health, safety and risk management practices at
Turkey’s Ovacik gold-silver mine’, Journal of Cleaner Production, 14(8): 727–735.
G.A. Akerlof (1970) ‘The Market for “Lemons”: Quality Uncertainty and the Market
Mechanism,’ Quarterly Journal of Economics, 84(3): 488–500.
G.A. Akerlof, R.J. Shiller (2009) Animal Spirits: How Human Psychology Drives the Economy,
and Why It Matters for Global Capitalism. Princeton University Press.
F. Aleskerov, A.L. Say, A. Toker, H.O.L. Akin, G. Altay (2005) ‘A cluster-based decision
support system for estimating earthquake damage and casualties’, Disasters, 3:
255–276.
D. Alexander (2003) ‘Towards the development of standards in emergency management
training and education’, Disaster Prevention and Management, 12, 113–123.
G.J. Alexander, A.M. Baptista (2004) ‘A comparison of VaR and CVaR constraints on port-
folio selection with the mean-variance model’, Management Science, 50(9): 1261–1273.
A.H. Alizadeh, N.K. Nomikos, P.K. Pouliasis (2008) ‘A Markov regime switching approach
for hedging energy commodities’, Journal of Banking & Finance, 32(9): 1970–1983.
C. Aloui, R. Jammazi (2009) ‘The effects of crude oil shocks on stock market shifts
behaviour: a regime switching approach’, Energy Economics, 31(5): 789–799.
C. Aloui, S. Mabrouk (2009) ‘Value-at-risk estimations of energy commodities via long-
memory, asymmetry and fat-tailed GARCH models’, Energy Policy, 38(5): 2326–2339.
U. Anders, M. Sandstedt (2003) ‘An operational risk scorecard approach’, Risk, 16(1):
47–50.
W. Antweiler, M. Frank (2004) ‘Is all that talk just noise? The information content of
internet stock message boards,’ Journal of Finance, 59(3): 1259–1295.
K. Aquino, A. Reed II (1998) ‘A social dilemma perspective on cooperative behavior in
organizations: the effects of scarcity, communication, and unequal access on the use
of a shared resource,’ Group & Organization Management, 23: 390–413.
The Association of Risk Managers (2010) A Structured Approach to Enterprise Risk
Management (ERM) and the Requirements of ISO 31000. COSO.
A.D. Athanassopoulos, S.P. Curram (1996) ‘A comparison of data envelopment analysis
and artificial neural networks as tool for assessing the efficiency of decision making
units’, Journal of the Operational Research Society, 47(8): 1000–1016.
R. Bacon, M. Kojima (2008) ‘Coping with Oil Price Volatility’, Energy Sector Management
Assistance Program, Energy Security Special Report 005/08.
M. Baker, J. Wurgler (2006) ‘Investor sentiment and the cross-section of stock returns,’
Journal of Finance, 61(4): 1645–1680.

B. Ballou, D.L. Heitger (2005) ‘A building-block approach for implementing COSO’s enterprise risk management-integrated framework’, Management Accounting Quarterly, 6(2): 1–10.
R.D. Banker, A. Charnes, W.W. Cooper (1984) ‘Some models for estimating technical
and scale inefficiencies in data envelopment analysis’, Management Science, 30:
1078–1092.
E.G. Baranoff (2004) ‘Risk management: a focus on a more holistic approach three years
after September 11’, Journal of Insurance Regulation, 22(4): 71–81.
J. Bard (1998) Practical Bilevel Optimization: Algorithms and Applications. Kluwer Academic
Publishers.
L. Bargeron, K. Lehn, C. Zutter (2009) ‘Sarbanes-Oxley and corporate risk-taking’, Journal
of Accounting and Economics, 49(1–2): 34–52.
A. Barnes (1987) ‘The analysis and use of financial ratios: a review article’, Journal of
Business and Finance Accounting, 14: 449–461.
A. Barua, P.L. Brockett, W.W. Cooper, H. Deng, B.R. Parker, T.W. Ruefli, A. Whinston
(2004) ‘Multi-factor performance measure model with an application to fortune 500
companies’, Socio-Economic Planning Sciences, 38: 233–253.
Basel Committee on Banking Supervision (2005) Amendment to the Capital Accord to the
Incorporate Market Risks. Basel.
G.W. Bassett, Jr. (1997) ‘Robust sports rating based on least absolute errors’, American
Statistician, 51(2): 99–105.
D. Bathia, D. Bredin (2012) ‘An examination of investor sentiment effect on G7 stock
market returns,’ European Journal of Finance, DOI: 10.1080/1351847X.2011.636834.
Y. Bauman, G. Klein (2010) The Cartoon Introduction to Economics: Volume One:
Microeconomics. Hill and Wang.
A. Ben-Tal, A. Nemirovski (2000) ‘Robust solutions of linear programming problems
contaminated with uncertain data’, Mathematical Programming, 88: 411–424.
E. Benz, S. Trück, (2006) ‘CO2 emission allowances trading in Europe – specifying a new
class of assets,’ Problems and Perspectives in Management, 4(3): 30–40.
A.N. Berger, D. Hancock, D.B. Humphrey (1993) ‘Bank efficiency derived from the profit
function,’ Journal of Banking and Finance, 17(2–3): 317–348.
A.N. Berger, D.B. Humphrey (1992) ‘Measurement and efficiency issues in commercial
banking,’ in Z. Griliches, ed., Output Measurement in the Service Sectors, NBER Studies in
Income and Wealth. The University of Chicago Press, pp. 245–300.
A.N. Berger, D.B. Humphrey (1997) ‘Efficiency of financial institutions: international
survey and direction for future research,’ European Journal of Operational Research, 98:
175–212.
D. Bigio, R.L. Edgeman, T. Ferleman (2004) ‘Six sigma availability management of infor-
mation technology in the office of the chief technology officer of Washington, DC,’
Total Quality Management, 15(5–6): 679–687.
F. Black, M. Scholes (1972) ‘The valuation of option contracts and a test of market effi-
ciency’, The Journal of Finance, 27(2): 399–417.
A.S. Blinder (2013) After the Music Stopped: The Financial Crisis, the Response, and the Work
Ahead. The Penguin Press.
P. Bogetoft, D. Wang (2005) ‘Estimating the potential gains from mergers’, Journal of
Productivity Analysis, 23: 145–171.
T. Bollerslev (1986) ‘Generalized autoregressive conditional heteroskedasticity’, Journal
of Econometrics, 31(3): 307–327.
P. Bonacich (1987) ‘Communication networks and collective action’, Social Networks, 9:
389–396.
S. Borak, W. Härdle, S. Trück, R. Weron (2006) ‘Convenience yields for CO2 emission
allowance future contracts,’ SFB 649 discussion paper 2006–076, SFB Economic Risk
Berlin.
A. Borison, G. Hamm (2011) ‘Black swan or black sheep?’ Risk Management, 58(3):
48–53.
B.M. Bowling, L. Rieger (2005) ‘Success factors for implementing enterprise risk man-
agement,’ Bank Accounting and Finance, 18(3): 21–26.
R. Boyd (2011) Fatal Risk: A Cautionary Tale of AIG’s Corporate Suicide. Wiley.
M. Brunnermeier (2009) ‘Deciphering the liquidity and credit crunch 2007–2008,’
Journal of Economic Perspectives, 23, 77–100.
K. Buehler, A. Freeman, R. Hulme (2008) ‘The new arsenal of risk management,’ Harvard
Business Review, 86(9): 93–100.
B. Bull (2005) ‘Exemplar sampling: nonrandom methods of selecting a sample which
characterizes a finite multivariate population,’ American Statistician, 59(2): 166–172.
C. Burges (1998) ‘A tutorial on support vector machines for pattern recognition,’ Data
Mining and Knowledge Discovery, 2(2): 121–167.
D. Burtraw (1996) ‘Cost savings sans allowance trades? Evaluating the SO2 emission
trading program to date,’ Discussion Paper 95–30-REV.
J.M. Burniaux, J.O. Martins (2000) ‘Carbon emission leakages: a general equilibrium
view,’ OECD Economics Department Working Papers No. 242.
H.N. Butler, L.E. Ribstein (2006) The Sarbanes-Oxley Debacle: What We’ve Learned; How
to Fix It, AEI.
J. Calandro, Jr., S. Lane (2006) ‘An introduction to the enterprise risk scorecard,’ Measuring
Business Excellence, 10(3), 31–40.
S.C. Caples, M.E. Hanna (1997) ‘Least squares versus least absolute value in real estate
appraisals,’ Appraisal Journal, 65(1): 18–24.
P. Capros, L. Mantzos (2000) ‘The economic effects of industry-level emission trading
to reduce greenhouse gases,’ Report to DG environment, E3M-Laboratory 21 at ICCS/
NTUA.
C. Serrano-Cinca, Y. Fuertes-Callén, C. Mar-Molinero (2005) ‘Measuring DEA efficiency in internet
companies,’ Decision Support Systems, 38: 557–573.
F. Caron, J. Vanthienen, B. Baesens (2013) ‘A comprehensive investigation of the applic-
ability of process mining techniques for enterprise risk management,’ Computers in
Industry, 64: 464–475.
S. Caudle (2005) ‘Homeland security,’ Public Performance & Management Review, 28(3):
352–375.
K. Cengiz, C. Ufuk, U. Ziya (2003) ‘Multi-criteria supplier selection using Fuzzy AHP,’
Logistics Information Management, 16(6): 382–394.
W. Chan (2003) ‘Stock price reaction to news and no-news: drift and reversal after head-
lines,’ Journal of Financial Economics, 70: 223–260.
L.F. Chang, M.W. Hung, (2009) ‘Analytical valuation of catastrophe equity options with
negative exponential jumps,’ Insurance: Mathematics and Economics, 44: 59–69.
A. Charnes, W.W. Cooper, E. Rhodes (1978) ‘Measuring the efficiency of decision making
units,’ European Journal of Operational Research, 2(6): 429–444.
V. Chavez-Demoulin, P. Embreechts, J. Nešlehová (2006) ‘Quantitative models for oper-
ational risk: extremes, dependence and aggregation,’ Journal of Banking & Finance, 30:
399–417.
X. Chen, Z. Wang, D.D. Wu (2013) ‘Modeling the price mechanism of carbon emission
exchange in the European Union Emission Trading System,’ Human and Ecological Risk
Assessment, 19(5): 1309–1323.
Chien-Ta Ho, D.D. Wu, D.L. Olson (2009) ‘A risk scoring model and application to meas-
uring internet stock performance,’ International Journal of Information Technology and
Decision Making, 8(1): 133–149.
T.-C. Chu (2002) ‘Facility location selection using fuzzy TOPSIS under group deci-
sions,’ International Journal of Uncertainty, Fuzziness & Knowledge-Based Systems, 10(6):
687–701.
U. Ciorba, A. Lanza, F. Pauli (2001) ‘Kyoto protocol and emission trading: does the US
make a difference?’ FEEM working paper 90.2001, Milan.
J.A. Clark (1996) ‘Economic cost, scale efficiency, and competitive viability in banking,’
Journal of Money, Credit and Banking, 28(3): 342–364.
B. Cohen (1997) The Edge of Chaos: Financial Booms, Bubbles, Crashes and Chaos. John
Wiley & Sons Ltd.
A. Cologni, M. Manera (2009) ‘The asymmetric effects of oil shocks on output growth: a
Markov–Switching analysis for the G-7 countries,’ Economic Modelling, 26(1): 1–29.
T.J. Considine (2000) ‘The impacts of weather variations on energy demand and carbon
emissions,’ Resource and Energy Economics, 22: 295–314.
G. Cooper (2008) The Origin of Financial Crises: Central Banks, Credit Bubbles and the
Efficient Market Fallacy. Vintage Books.
W.W. Cooper, L.M. Seiford, K. Tone (2000) Data Envelopment Analysis. Kluwer.
COSO (2004) Enterprise Risk Management – Integrated Framework: Executive Summary.
September.
A. Costa, R. N Markellos (1997) ‘Evaluating public transport efficiency with neural
network models,’ Transportation Research C, 5(5): 301–312.
Counterparty Risk Management Policy Group III (2008) Containing Systemic Risk,
August 6.
J.G. Courcelle-Seneuil (1852) ‘Profit,’ in Coquelin and Guillaumin, eds, Dictionnaire de
l’ėconomie politique, 2nd ed.
D. Cousineau, S. Brown, A. Heathcote (2004) ‘Fitting distributions using maximum like-
lihood: methods and packages,’ Behavior Research Methods, Instruments, & Computers,
36: 742–756.
S. Cox, H. Pedersen (2000) ‘Catastrophe risk bonds,’ North American Actuarial Journal,
48: 56–82.
P. Criqui, A. Kitous (2003) ‘Impacts of linking JI and CDM credits to the European emis-
sion allowance trading scheme,’ KPI technical report.
M. Crouhy, D. Galai, R. Mark (1998) ‘Model Risk,’ Journal of Financial Engineering, 7(3/4),
267–288; reprinted in Model Risk: Concepts, Calibration and Pricing, (ed. R. Gibson),
Risk Book, 2000, 17–31.
M. Crouhy, D. Galai, R. Mark (2000) ‘A comparative analysis of current credit risk
models,’ Journal of Banking & Finance, 24, 59–117.
F. Cucchiella, M. Gastaldi (2006) ‘Risk management in supply chains: a real option
approach,’ Journal of Manufacturing Technology Management, 17(6): 700–720.
J.D. Cummins, X. Xie (2008), ‘Mergers and acquisitions in the US property-liability
insurance industry: productivity and efficiency effects,’ Journal of Banking & Finance,
2(1): 30–55.
D. Dalcher (2007) ‘Why the pilot cannot be blamed: a cautionary note about excessive
reliance on technology,’ International Journal of Risk Assessment and Management, 7(3):
350–366.
G. Daskalakis, D. Psychoyios, R.N. Markellos (2009) ‘Modeling CO2 emission allow-
ance prices and derivatives: evidence from the European trading,’ Journal of Banking &
Finance, 33(7): 1230–1241.
A. Dassios, J. Jang (2003) ‘Pricing of catastrophe reinsurance and derivatives using the
cox process with shot noise intensity,’ Finance and Stochastics, 7: 73–95.
G. Dell’Arriccia, L. Laeven, D. Igan (2008) ‘Credit booms and lending standards: evi-
dence from the subprime mortgage market,’ IMF Working Paper 08/106.
H. Deng, C.-H., Yeh, R.J. Willis (2000) ‘Inter-company comparison using modified
TOPSIS with objective weight,’ Computers & Operations Research, 27: 963–973.
S. Deng, Z. Xia (2006) ‘A real options approach for pricing electricity tolling agreements,’
International Journal of Information Technology and Decision Making, 5: 421–436.
A. Dey (2010) ‘The chilling effect of Sarbanes-Oxley: a discussion of Sarbanes-Oxley and
corporate risk-taking,’ Journal of Accounting and Economics, 49(1–2): 53–57.
R. Deyoung (1997) ‘A diagnostic test for the distribution-free efficiency estimator: an
example using US commercial bank data,’ European Journal of Operational Research,
98(2): 243–249.
G. Dickinson (2001) ‘Enterprise risk management: its origins and conceptual foun-
dation,’ The Geneva Papers on Risk and Insurance, 26(3): 360–366.
T.E. Dielman (2005) ‘Least absolute value regression: recent contributions,’ Journal of
Statistical Computation & Simulation, 75(4): 263–286.
N. Doherty, J. Lamm-Tennant, L.T. Starks (2009) ‘Lessons from the financial crisis on
risk and capital management: the case of insurance companies,’ Journal of Applied
Corporate Finance, 21(4): 52–59.
D. Dong, Q. Dong (2003) ‘HowNet – a hybrid language and knowledge resource,’
Proceedings of 2003 International Conference on Natural Language Processing and Knowledge
Engineering, 820–824, October 26–29.
M. Drew (2007) ‘Information risk management and compliance – expect the unex-
pected,’ BT Technology Journal, 25(1): 19–29.
N. Dunbar (1999) Investing Money: The Story of Long-Term Capital Management and the
Legends Behind It. Wiley.
P. Eckle, P. Burgherr (2013) ‘Bayesian data analysis of severe fatal accident risk in the oil
chain,’ Risk Analysis, 33(1): 146–160.
J.F. Egginton, J.I. Hilliard, A.P. Liebenberg, I.A. Liebenberg (2010) ‘What effect did AIG’s
bailout, and the preceding events, have on its competitors?’ Risk Management and
Insurance Review, 13(2): 225–249.
H. Elsinger, A. Lehar, M. Summer (2006) ‘Risk assessment for banking systems,’
Management Science, 52(9), 1301–1314.
C. Engel (1994) ‘Can the Markov switching model forecast exchange rates?’ Journal of
International Economics, 36(1): 151–165.
D. Engel (2009) ‘Hi-tech solutions for crisis management,’ African Business, 352, 50.
R.F. Engle (1982) ‘Autoregressive conditional heteroscedasticity with estimates of
variance of United Kingdom inflation,’ Econometrica, 50, 987–1008.
K. Eriksson, K. Kerem, D. Nilsson (2008) ‘The adoption of commercial innovations in
the former Central and Eastern European markets: the case of internet banking in
Estonia,’ International Journal of Bank Marketing, 26(3): 154–169.
P. Espahbodi (1991) ‘Identification of problem banks and binary choice models,’ Journal
of Banking and Finance, 15, 53–71.
E.F. Fama (1965) ‘Random walks in stock market prices,’ Financial Analysts Journal, 51(1):
404–419.
E. Fama (1970) ‘Efficient capital markets: a review of theory and empirical work,’ Journal
of Finance, 25: 383–417.
Y. Fan, Y.J. Zhang, H.T. Tsaic, Y.M. Wei (2008) ‘Estimating ‘value at risk’ of crude oil price
and its spillover effect using the GED-GARCH approach,’ Technological Change and the
Environment, 30(6): 3156–3171.
M.J. Farrell (1957) ‘The measurement of productive efficiency,’ Journal of the Royal
Statistical Society 120: 253–281.
T.S. Felix, H.J. Chan (2003) ‘An innovative performance measurement method for
supply chain management,’ Supply Chain Management: An International Journal, 8(3):
209–223.
G.J. Fielding, T.T. Babitsky, M.E. Brenner (1985) ‘Performance evaluation for bus transit,’
Transportation Research, 19A(1): 73–82.
S. Finkelstein, H. Jerayr (2002) ‘Understanding acquisition performance: the role of
transfer effects,’ Organization Science, 13(1): 36–47.
A.R. Fleissig, T. Kastens, D. Terrell (2000) ‘Evaluating the semi-nonparametric fourier,
aim, and neural networks cost functions,’ Economics Letters, 68(3): 235–244.
L. Fox (2003) Enron: The Rise and Fall. Wiley.
M. Freimer, P.L. Yu (1976) ‘Some new results on compromise solutions for group decision
problems,’ Management Science, 22(6): 688–693.
B. Freisleben, K. Ripper (1997) ‘Volatility estimation with a neural network,’ Proceedings
of the IEEE/IAFE on Computational Intelligence for Financial Engineering, 177–181, March
24–25.
M. Friedman, L.J. Savage (1948) ‘The utility analysis of choices involving risk,’ The
Journal of Political Economy, 56(4): 279–304.
K. Furst, W.W. Lang, D. Nolle (2000) ‘Internet banking: developments and prospects,’
Economic and Policy Analysis, Working Paper 2000–9.
A. Ganegoda, J. Evans (2012) ‘A framework to manage the measurable, immeasurable
and the unidentifiable financial risk,’ Australian Journal of Management, 39(5): 5–34.
R. Garcia, É. Renault, G. Tsafack (2007) ‘Proper conditioning for coherent VaR in port-
folio management,’ Management Science, 53(3), 483–494.
D. Gardner (2007) The Four Books. The Teachings of the Later Confucian Tradition. Hackett
Publishing.
H. Geman, M. Yor (1997) ‘Stochastic time changes in catastrophe option pricing,’
Insurance: Mathematics and Economics, 21: 185–193.
P. Goldsmith-Pinkham, T. Yorulmazer (2010) ‘Liquidity, bank runs, and bailouts:
spillover effects during the Northern Rock episode,’ Journal of Financial Service Research,
37(2/3): 83–98.
G. Gorton (2008) ‘The panic of 2007,’ NBER Working Paper No. 14358.
N. Gülpinar, E. Canakoglu, D. Pachamanova (2014) ‘Robust investment decisions under
supply disruption in petroleum markets,’ Computers & Operations Research, 44: 75–91.
M. Gulser, M. Ilhan (2001) ‘Risk and return in the world’s major stock markets,’ Journal
of Investing, (Spring): 62–67.
J.D. Hamilton (1989) ‘A new approach to the economic analysis of nonstationary time
series and the business cycle,’ Econometrica, 57(2): 357–384.
P. Hansen, B. Jaumard, G. Savard (1992) ‘New branch and bound rules for linear bilevel
programming,’ SIAM Journal on Scientific and Statistical Computing, 13(5): 1194–1217.
F.B. Hawley (1907) Enterprise and the Productive Process.
R. Hecht Nielsen (1990) ‘Neural Computing,’ Addison Wesley: 124–133.
H.S.B. Herath, W.G. Bremser (2005) ‘Real-option valuation of research and development
investments: implications for performance measurement,’ Managerial Auditing Journal,
20(1): 55–72.
H.N. Higgins (2012) ‘Learning internal controls from a fraud case at Bank of China,’
Issues in Accounting Education, 27(4): 1171–1192.
C-T. Ho (2006) ‘Measuring bank operations performance: an approach based on grey
relation analysis,’ Journal of the Operational Research Society, 57: 227–349.
C-T. Ho, D.S. Zhu (2004) ‘Performance measurement of Taiwan’s commercial banks,’
International Journal of Productivity and Performance Management, 53(5): 425–434.
C-T. Ho, D. Wu (2009) ‘Online banking performance evaluation using data envelopment
analysis and principal component analysis,’ Computers & Operations Research, 36(6):
1835–1842.
J. Hobbs (2011) ‘Financial derivatives, the mismanagement of risk and the case of AIG,’
CPCU eJournal, 64(7): 1–8.
C. Hollingsworth (2012) ‘Risk management in the post-SOX era,’ International Journal of
Auditing, 16: 35–53.
K. Hopkins (2003) ‘Value opportunity three: improving the ability to fulfill demand,’
Business Week, January 13.
S.N. Huang, T.L. Kao (2006) ‘Measuring managerial efficiency in non-life insurance com-
panies: an application of two-stage data envelopment analysis,’ International Journal of
Management, 23(3): 699–720.
D.W. Hubbard (2009) The Failure of Risk Management: Why It’s Broken and How to Fix It.
John Wiley & Sons.
J.C. Hung, M.C. Lee, H.C. Liu (2008) ‘Estimation of value-at-risk for energy commodities
via fat-tailed GARCH models,’ Energy Economics, 30(3): 1173–1191.
C. Hurt (2014) ‘The duty to manage risk,’ The Journal of Corporate Law, 39(2): 153–267.
C.L. Hwang, K. Yoon (1981) Multiple Attribute Decision Making: Methods and Applications.
Springer-Verlag.
T. Jacobson, J. Lindé, K. Roszbach (2006) ‘Internal ratings systems, implied credit risk
and the consistency of banks’ risk classification policies,’ Journal of Banking & Finance,
30: 1899–1926.
S. Jaimungal, T. Wang (2005) ‘Catastrophe options with stochastic interest rates and
compound Poisson losses,’ Insurance: Mathematics and Economics, 38: 469–483.
D.B. Jemison, S.B. Sitkin (1986) ‘Corporate acquisitions: a process perspective,’
Academy of Management Review, 11(1): 145–163.
D. Kahneman, A. Tversky (1972) ‘Subjective probability: a judgment of representa-
tiveness,’ Cognitive Psychology, 3, 430–454.
D. Kahneman, A. Tversky (2000) Choices, Values, and Frames. Cambridge University Press.
M. Kainuma, Y. Matsuoka, T. Morita (1999) ‘Development of AIM (Asian-Pacific Integrated
Model) for coping with global warming,’ Proceedings of the IEEE International Conference
on System Man and Cybernetics, 6: 569–574.
R.S. Kaplan, D.P. Norton (1992) ‘The balanced scorecard – measures that drive per-
formance,’ Harvard Business Review, 70(1): 71–79.
R.S. Kaplan, D.P. Norton (2006) Alignment: Using the Balanced Scorecard to Create Corporate
Synergies. Harvard Business School Press Books.
N. Kapucu, M. Van Wart (2008) ‘Making matters worse: an anatomy of leadership fail-
ures in managing catastrophic events,’ Administration & Society, 40(7): 711–740.
B. Keys, T. Mukherjee, A. Seru, V. Vig (2010) ‘Did securitization lead to lax screening?
Evidence from subprime loans,’ Quarterly Journal of Economics, 125: 307–362.
F. Klaassen (2002) ‘Improving GARCH volatility forecasts with regime-switching
GARCH,’ Empirical Economics, 27: 363–394.
G. Klepper, S. Peterson (2004) ‘The EU emissions trading scheme: allowance prices, trade
flows, competitiveness effects,’ European Environment, 14(4): 201–218.
G. Klepper, S. Peterson (2006) ‘Emissions trading, CDM, JI and more – the climate
strategy of the EU,’ Energy Journal, 27(2): 1–26.
F.H. Knight (1921) Risk, Uncertainty and Profit. Hart, Schaffner & Marx.
S. Kreipl, M. Pinedo (2004) ‘Planning and scheduling in supply chain: an overview of issues in practice,’ Production and Operations Management, 13(1): 29–77.
L. Laeven, F. Valencia (2008) ‘Systemic banking crises: a new database,’ International
Monetary Fund Working Paper WP/08/224.
L. Laeven, F. Valencia (2010) ‘Resolution of banking crises: the good, the bad, and the
ugly,’ IMF Working Paper WP/10/146.
T. Lambooy (2011) ‘Corporate social responsibility: sustainable water use,’ Journal of
Cleaner Production, 19(8): 852–866.
J. Laurikkala (2002) ‘Instance-based data reduction for improved identification of dif-
ficult small classes,’ Intelligent Data Analysis, 6(4): 311–322.
B. Lee, F. Preston (2012) Preparing for High-impact, Low-probability Events: Lessons from
Eyjafjallajökull. Chatham House Report.
J. Lee, M. Yu (2007) ‘Variation of catastrophe reinsurance with catastrophe bonds,’
Insurance: Mathematics and Economics, 41: 264–278.
S.M. Lee, D.L. Olson (2004) ‘Goal programming formulations for a comparative ana-
lysis of scalar norms and ordinal vs. ratio data,’ Information Systems and Operational
Research, 42(3): 163–174.
F. Lhabitant (2000) ‘Coping with Model Risk,’ in The Professional Handbook of Financial
Risk Management, M. Lore, L. Borodovsky (eds). Butterworth-Heinemann.
D.X. Li (2000) ‘On default correlation: a copula approach,’ Journal of Fixed Income, 9(4):
43–54.
N. Li, X. Liang, X. Li, C. Wang, Desheng D. Wu (2009) ‘Network environment and
financial risk using machine learning and sentiment analysis,’ Human and Ecological
Risk Assessment, 15(2): 227–252.
P.M. Linsley, R.E. Slack (2013) ‘Crisis management and an ethic of care: the case of
Northern Rock Bank,’ Journal of Business Ethics, 113(2): 285–295.
S.F. Lo, W.M. Lu (2006) ‘Does size matter? Finding the profitability and marketability
benchmark of financial holding companies,’ Journal of Operational Research, 23(2),
229–246.
R. Lowenstein (2000) When Genius Failed: The Rise and Fall of Long-Term Capital
Management. Random House.
C. Luo, L.A. Seco, H. Wang, D.D. Wu (2010) ‘Risk modeling in crude oil market: a com-
parison of Markov switching and GARCH models,’ Kybernetics, 39(5): 750–769.
X. Luo (2003) ‘Evaluating the profitability and marketability efficiency of large banks – an
application of data envelopment analysis,’ Journal of Business Research, 56: 627–635.
E.H. MacDonald (2001) ‘GIS in banking: evaluation of Canadian bank mergers,’ Canadian
Journal of Regional Science, 24(3): 419–442.
C. Mackay (1841) Extraordinary Popular Delusions and the Madness of Crowds. Richard
Bentley.
R. Maddigan, J. Zaima (1985) ‘The Profitability of Vertical Integration,’ Managerial and
Decision Economics, 6(3): 178–179.
F. Malik, B.T. Ewing (2009) ‘Volatility transmission between oil prices and equity sector
returns,’ International Review of Financial Analysis, 18(3): 95–100.
M. Manasanet-Bataller, A. Pardo, E. Valor (2007) ‘CO2 prices, energy and weather,’ The
Energy Journal, 28(3): 73–92.
S.G. Mandis (2013) What Happened to Goldman Sachs: An Insider’s Story of Organizational
Drift and Its Unintended Consequences. Harvard Business Review Press.
E.A. Mannix (1993) ‘Organizations as resource dilemmas: the effects of power balance
on coalition formation in small groups,’ Organizational Behavior and Human Decision
Processes, 55: 1–22.
K.S. Markel, L.A. Barclay (2007) ‘The intersection of risk management and human
resources: an illustration using genetic mapping,’ International Journal of Risk Assessment
and Management, 7(3): 326–340.
H.M. Markowitz (1952) ‘Portfolio selection,’ The Journal of Finance, 7(1): 77–91.
H.M. Markowitz (1959) Portfolio Selection: Efficient Diversification of Investments. John
Wiley & Sons (reprinted by Yale University Press, 1970).
T.M. Mata, R.L. Smith, D.M. Young, C.A.V. Costa (2005) ‘Environmental analysis of
gasoline blending components through their life cycle,’ Journal of Cleaner Production,
13(5): 517–523.
C. McDonald (2009) ‘New PRIMA president sees public RMs as masters of disaster,’
National Underwriter/Property & Casualty Risk & Benefits Management, 113(21): 17–31.
D.B. McDonald (2011) ‘When risk management collides with enterprise sustainability,’
Journal of Leadership, Accountability and Ethics, 8(3): 56–66.
MEI Computer Technology Group Inc. (2011) ‘2011 Trade Promotion Management
Trends.’
R.C. Merton (1974) ‘On the pricing of corporate debt: the risk structure of interest rates,’
The Journal of Finance, 29(2): 449–470.
D. Meyler, J.P. Stimpson, M.P. Cutchin (2007) ‘Landscapes of risk,’ Organization &
Environment, 20(2): 204–212.
I.I. Mitroff, M.C. Alpaslan (2003) ‘Preparing for evil,’ Harvard Business Review, 81(4): 109–115.
R. Morelli, A. Tucker, N. Danner, T.R. de Lanerolle, H.J.C. Ellis, O. Izmirli, D. Krizanc,
G. Parker (2009) ‘Revitalizing computing education through free and open source
software for humanity,’ Communications of the ACM, 52(8): 67–75.
A.S. Mukherjee (2008) The Spider’s Strategy: Creating Networks to Avert Crisis, Create
Change, and Really Get Ahead. FT Press.
P.K. Narayan, S. Narayan, A. Prasad (2008) ‘Understanding the oil price-exchange rate
nexus for the Fiji islands,’ Energy Economics, 30(5): 2686–2696.
D.B. Nelson (1991) ‘Conditional heteroskedasticity in asset returns: a new approach,’
Econometrica, 59: 347–370.
D. Ng, P.D. Goldsmith (2010) ‘Bio energy entry timing from a resource based view and
organizational ecology perspective,’ International Food & Agribusiness Management
Review, 13(2): 69–100.
W.D. Nordhaus (2001) ‘Climate change: global warming economics,’ Science, 294(5545):
1283–1284.
W.D. Nordhaus, J.G. Boyer (1999) ‘Requiem for Kyoto: an economic analysis of the Kyoto
protocol,’ The Energy Journal, 20: 93–130.
D.L. Olson (2004) ‘Data set balancing,’ Lecture Notes in Computer Science: Data Mining and
Knowledge Management, Y. Shi, W. Xu, & Z. Chen, eds. Springer, 71–80.
D.L. Olson (2005) ‘Comparison of weights in TOPSIS models,’ Mathematical and Computer
Modelling, 40: 721–727.
D.L. Olson, D. Wu (2005) ‘Decision making with uncertainty and data mining,’ Advanced
Data Mining and Applications: First International Conference, ADMA, X. Li, S. Wang,
Z.Y. Dong eds, Lecture Notes in Artificial Intelligence. Keynote paper. Springer, 1–9.
D.L. Olson, D. Wu (2006) ‘Simulation of fuzzy multiattribute models for grey relation-
ships,’ European Journal of Operational Research, 175(1): 111–120.
D.L. Olson, D. Wu (2008) Enterprise Risk Management. World Scientific.
J. Pan, M. Wang, D. Li, J. Le (2009) ‘Automatic generation of seamline network using area
Voronoi diagrams with overlap,’ IEEE Transactions on Geoscience and Remote Sensing,
47(6): 1737–1744.
M.S. Paolella, L. Taschini (2006) ‘An econometric analysis of emission trading allow-
ances,’ Research Paper Series 06–26, FINRISK: National Center of Competence in
Research Financial Valuation and Risk Management.
A. Papalexandris, G. Ioannou, G. Prastacos, K.E. Soderquist (2005) ‘An integrated meth-
odology for putting the balanced scorecard into action,’ European Management Journal,
23(2): 214–227.
J. Paradi, S. Vela, H. Zhu (2010) ‘Adjusting for cultural differences, a new DEA model
applied to a merged bank,’ Journal of Productivity Analysis, 33: 109–123.
S. Patterson (2010) The Quants: How a New Breed of Math Whizzes Conquered Wall Street
and Nearly Destroyed It. Crown Business.
P.C. Pendharkar, J.A. Rodger (2003) ‘Technical efficiency-based selection of learning
cases to improve forecasting accuracy of neural networks under monotonicity assump-
tion,’ Decision Support Systems, 36(1): 117–136.
Y. Peng (2000) Management Decision Analysis. Science Publication.
C. Perrow (1984) Normal Accidents: Living with High-risk Technologies. Basic Books.
C. Perrow (1999) Normal Accidents: Living with High-Risk Technologies. Princeton University
Press.
J.D. Piotroski, S. Srinivasan (2008) ‘Regulation and bonding: the Sarbanes-Oxley Act
and the flow of international listings,’ Journal of Accounting Research, 46(2): 383–425.
M.R. Powers, T.Y. Powers, S. Gao (2012) ‘Risk finance for catastrophe losses with Pareto-
calibrated Lévy-stable severities,’ Risk Analysis, 32(11): 1967–1977.
S.C. Ray (2004) Data Envelopment Analysis: Theory and Techniques for Economics and
Operations Research. Cambridge University Press, 189–208.
J.E. Raymond, R.W. Rich (1997) ‘Oil and the macroeconomy: a Markov state-switching
approach,’ Journal of Money, Credit and Banking, 29(2): 193–213.
L.A. Reilly, O. Courtenay (2007) ‘Husbandry practices, badger sett density and habitat
composition as risk factors for transient and persistent bovine tuberculosis on UK
cattle farms,’ Preventive Veterinary Medicine, 80(2–3): 129–142.
C.M. Reinhart, K.S. Rogoff (2008) ‘Is the 2007 subprime crisis so different? An inter-
national historical comparison,’ American Economic Review, 98(2): 339–344.
B. Ritchie, C. Brindley (2007) ‘An emergent framework for supply chain risk man-
agement and performance measurement,’ Journal of the Operational Research Society,
58: 1398–1411.
L. Rittenberg, F. Martens (2012) Enterprise Risk Management: Understanding and
Communicating Risk Appetite. COSO.
S.A. Ross, R.M. Westerfield, B.D. Jordan (2007) Essentials of Corporate Finance. McGraw-
Hill/Irwin.
M. Saadatseresht, A. Mansourian, M. Taleai (2009) ‘Evacuation planning using multiob-
jective evolutionary optimization approach,’ European Journal of Operational Research,
198(1): 305–314.
T.L. Saaty (2008) ‘Decision making with the analytic hierarchy process,’ International
Journal of Services Sciences, 1(1): 83–98.
F. Salmon (2009) ‘Recipe for disaster: the formula that killed Wall Street,’ Wired, 17(3).
V. Sampath (2009) ‘The need for greater focus on nontraditional risks: the case of
Northern Rock,’ Journal of Risk Management in Financial Institutions, 2(3): 301–305.
N. Santella, L.J. Steinberg, K. Parks (2009) ‘Decision making for extreme events: mod-
eling critical infrastructure interdependencies to aid mitigation and response plan-
ning,’ Review of Policy Research, 26(4): 409–422.
M. Santiago (2011) ‘The Huasteca rain forest,’ Latin American Research Review, 46: 32–54.
D. Santin, F.J. Delgado, A. Valino (2004) ‘The measurement of technical efficiency: a
neural network approach,’ Applied Economics, 36(6): 627–635.
S. Scandizzo (2005) ‘Risk mapping and key risk indicators in operational risk man-
agement,’ Economic Notes by Banca Monte dei Paschi di Siena SpA, 34(2): 231–256.
J. Seifert, M. Uhrig-Homburg, M. Wagner (2008) ‘Dynamic behavior of CO2 spot prices,’
Journal of Environmental Economics and Management, 56(2): 180–194.
L.M. Seiford, J. Zhu (1999) ‘Profitability and marketability of the top 55 U.S. commercial
banks,’ Management Science, 45(9): 1270–1288.
J. Siegel (2002) Stocks for the Long Run, 3rd ed. McGraw-Hill.
C. Serrano-Cinca, Y. Fuertes-Callén, C. Mar-Molinero (2005) ‘Measuring DEA efficiency
in Internet companies,’ Decision Support Systems, 38: 557–573.
W.F. Sharpe (1964) ‘Capital asset prices: a theory of market equilibrium under condi-
tions of risk,’ The Journal of Finance, 19(3): 425–442.
W.F. Sharpe (1970) Portfolio Theory and Capital Markets. McGraw-Hill Book Company.
J.W. Shavlik, R.J. Mooney, G.G. Towell (1991) ‘Symbolic and neural learning algorithms:
an experimental comparison,’ Machine Learning, 6: 111–143.
R. Shelp, A. Ehrbar (2009) Fallen Giant: The Amazing Story of Hank Greenberg and the
History of AIG. Wiley.
H.D. Sherman, F. Gold (1985) ‘Bank branch operating efficiency: evaluation with data
envelopment analysis,’ Journal of Banking and Finance, 9(2): 297–316.
Y. Shi, Y. Peng, G. Kou, Z. Chen (2006) ‘Classifying credit card accounts for business intel-
ligence and decision making: a multiple-criteria quadratic programming approach,’
International Journal of Information Technology and Decision Making, 4: 1–19.
H.S. Shin (2009) ‘Reflections on Northern Rock: the bank run that heralded the global
financial crisis,’ Journal of Economic Perspectives, 23(1): 101–119.
J. Sijm, S. Bakker, Y. Chen, H. Harmesen, W. Lise (2005) ‘CO2 price dynamics: the impli-
cations of EU emissions trading on the price of electricity,’ Report ECNC-05–81,
Energy Research Center of the Netherlands (ECN).
R.T. Sylves, L.K. Comfort (2012) ‘The Exxon Valdez and BP Deepwater Horizon oil
spills: reducing risk in socio-technical systems,’ American Behavioral Scientist, 56(1):
76–103.
D.H. Smaltz, R. Carpenter, J. Saltz (2007) ‘Effective IT governance in healthcare organi-
zations: a tale of two organizations,’ International Journal of Healthcare Technology and
Management, 8(1/2): 20–41.
J.C. Smith (2009) Pseudoscience and Extraordinary Claims of the Paranormal: A Critical
Thinker’s Toolkit. Wiley-Blackwell. ISBN 978–1405181228.
J.A. Sniezek, D.R. May, J.E. Sawyer (1990) ‘Social uncertainty and interdependence: a
study of resource allocation decision in groups,’ Organizational Behavior and Human
Decision Processes, 46: 155–180.
J. Sobehart, S. Keenan (2001) ‘Measuring Default Accurately,’ Credit Risk Special Report,
Risk, 14: 31–33.
A. Soteriou, S.A. Zenios (1999) ‘Operations, quality, and profitability in the provision of
banking services,’ Management Science, 45(9): 1221–1238.
U. Springer (2003) ‘The market for tradable GHG permits under the Kyoto Protocol: a
survey of model studies,’ Energy Economics, 25: 527–551.
U. Springer, M. Varilek (2004) ‘Estimating the price of tradable permits for greenhouse
gas emissions in 2008–2012,’ Energy Policy, 32: 611–621.
B. Stafford (2001) ‘Risk management and internet banking: what every banker needs to
know,’ Community Banker, 10(2): 48–49.
J.N. Stanard, M.G. Wacek (1991) ‘The spiral in the catastrophe retrocessional market,’
Casualty Actuarial Society Discussion Paper, May, Arlington, VA.
J.E. Stiglitz (2003) The Roaring Nineties: A New History of the World’s Most Prosperous
Decade. W.W. Norton & Co.
P.S. Sudarsanam, R.J. Taffler (1995) ‘Financial ratio proportionality and inter-temporal
stability: an empirical analysis,’ Journal of Banking & Finance, 19(1): 45–60.
G. Suder, D.W. Gillingham (2007) ‘Paradigms and paradoxes of agricultural risk gov-
ernance,’ International Journal of Risk Assessment and Management, 7(3): 444–457.
J.A.K. Suykens, T.V. Gestel, J.D. Brabanter (2002) Least Squares Support Vector Machines.
World Scientific.
N. Taleb (2012) Antifragile: Things That Gain from Disorder. Random House.
N.N. Taleb (2007) The Black Swan: The Impact of the Highly Improbable. Penguin Books.
N.N. Taleb, D.G. Goldstein, M.W. Spitznagel (2009) ‘The six mistakes executives make in
risk management,’ Harvard Business Review, 87(10): 78–81.
W.-J. Tan, P. Enderwick (2006) ‘Managing threats in the global era: the impact and
response to SARS,’ Thunderbird International Business Review, 48(4): 515–536.
J. Taylor (2009) Getting Off Track: How Government Actions and Interventions Caused,
Prolonged, and Worsened the Financial Crisis. Hoover Press.
N. Taylor (2007) ‘A note on the importance of overnight information in risk management
models,’ Journal of Banking & Finance, 31: 161–180.
J. Tobin (1958) ‘Liquidity preference as behavior towards risk,’ The Review of Economic
Studies, 25: 65–86.
M.D. Troutt, A. Rai, A. Zhang (1995) ‘The potential use of DEA for credit applicant
acceptance systems,’ Computers and Operations Research, 4: 405–408.
H.C. Tsai, C.M. Chen, G.H. Tzeng (2006) ‘The comparative productivity efficiency for
global telecoms,’ International Journal of Production Economics, 103: 509–526.
H. Tulkens (1993) ‘On FDH efficiency analysis: some methodological issues and appli-
cations to retail banking, courts and urban transit,’ Journal of Productivity Analysis,
4(1–2): 183–210.
M. Uhrig-Homburg, M. Wagner (2006) ‘Success chances and optimal design of deriva-
tives on CO2 emission certificates,’ Working Paper, University of Karlsruhe.
L.V. Utkin (2007) ‘Risk analysis under partial prior information and nonmonotone
utility functions,’ International Journal of Information Technology and Decision Making,
6: 625–647.
V.E. Vaugirard (2003) ‘Pricing catastrophe bonds by an arbitrage approach,’ The Quarterly
Review of Economics and Finance, 43: 119–132.
M.T. Vo (2009) ‘Regime-switching stochastic volatility: evidence from the crude oil
market,’ Energy Economics, 31(5): 779–788.
H. Von Blottnitz, M.A. Curran (2007) ‘A review of assessments conducted on bio-ethanol
as a transportation fuel from a net energy, greenhouse gas, and environmental life
cycle perspective,’ Journal of Cleaner Production, 15(7): 607–619.
J. Von Neumann, O. Morgenstern (1944) Theory of Games and Economic Behavior, 2nd
ed. Princeton University Press.
J.H. Von Thünen (1826) The Isolated State.
H. Wagner (2004) ‘The use of credit scoring in the mortgage industry,’ Journal of Financial
Services Marketing, 9(2): 179–183.
C.H. Wang, R. Gopal, S. Zionts (1997) ‘Use of data envelopment analysis in assessing
information technology impact on firm performance,’ Annals of Operations Research,
73: 191–213.
H.F. Wang, D.J. Hu (2005) ‘Comparison of SVM and LS-SVM for regression,’ International
Conference on Neural Networks and Brain 2005, 1: 279–283, October 13–15, 2005.
S.H. Wang (2003) ‘Adaptive non-parametric efficiency frontier analysis: a neural-net-
work-based model,’ Computers & Operations Research, 30: 279–295.
B. Watkins (2003) ‘Riding the wave of sentiment: an analysis of return consistency as a
predictor of future returns,’ Journal of Behavioral Finance, 4(4): 191–200.
J. Wei, D. Zhao, L. Liang (2009) ‘Estimating the growth models of news stories on
disasters,’ Journal of the American Society for Information Science and Technology, 60(9):
1741–1755.
D. Williamson (2007) ‘The COSO ERM framework: a critique from systems theory of
management control,’ International Journal of Risk Assessment and Management, 7(8):
1089–1119.
F.B. Wiseman (2013) Some Financial History Worth Reading: A Look at Credit, Real Estate,
Investment Bubbles & Scams, and Global Economic Superpowers. Abcor Publishers.
D. Wu (2006) ‘A note on DEA efficiency assessment using ideal point: an improvement
of Wang and Luo’s model,’ Applied Mathematics and Computation, 183(2): 819–830.
D.D. Wu (2009) ‘Performance evaluation: an integrated method using data envelopment
analysis and fuzzy preference relations,’ European Journal of Operational Research,
194(1): 227–235.
D.D. Wu (2014) ‘An approach for learning risk management: confucianism system and
risk theory,’ International Journal of Financial Services Management. Accepted and in
press.
D.D. Wu, J.R. Birge (2012) ‘Serial chain merger evaluation model and application to
mortgage banking,’ Decision Sciences, 43(1): 5–36.
D. Wu, C. Luo, H. Wang, J.R. Birge (2014) ‘Bilevel programming merger evaluation
and application to banking operations,’ Production and Operations Management. DOI:
10.1111/poms.12205. Accepted and in press.
D. Wu, D.L. Olson (2006) ‘A TOPSIS data mining demonstration and application to
credit scoring,’ International Journal of Data Warehousing & Mining, 2(3): 1–10.
D. Wu, D.L. Olson (2009) ‘Introduction to the special section on optimizing risk man-
agement: methods and tools,’ Human and Ecological Risk Assessment, 15(2): 220–226.
D. Wu, D.L. Olson (2010) ‘Enterprise risk management: coping with model risk in a large
bank,’ Journal of the Operational Research Society, 61(2): 179–190.
D. Wu, D.D. Wu (2010) ‘Performance evaluation and risk analysis of online banking
service,’ Kybernetes, 39(5): 723–734.
D. Wu, Z. Yang, L. Liang (2006) ‘Using DEA-neural network approach to evaluate branch
efficiency of a large Canadian bank,’ Expert Systems with Applications, 31(1): 108–115.
D. Wu, L. Zheng, D.L. Olson (2014) ‘A decision support approach for online stock forum
sentiment analysis,’ IEEE Transactions on Systems, Man, and Cybernetics. Accepted
and in press. DOI: 10.1109/TSMC.2013.2295353.
D. Wu, Y. Zhou (2010) ‘Catastrophe bond and risk modeling: a review and calibration
using Chinese earthquake loss data,’ Human and Ecological Risk Assessment, 16(3):
510–523.
J. Yao, Z. Li, K.W. Ng (2006) ‘Model risk in VaR estimation: an empirical study,’
International Journal of Information Technology and Decision Making, 5: 503–512.
X. Yao, H. Yao (2000) An Introduction to Confucianism. Cambridge University Press.
T.K. Zhelev (2005) ‘On the integrated management of industrial resources incorporating
finances,’ Journal of Cleaner Production, 13(5): 469–474.
L. Zhu (2008) ‘Double exponential jump diffusion model for catastrophe bonds pricing,’
Journal of Fujian University of Technology, 6: 336–338 (in Chinese).
R. Zolkos, M. Bradford (2011) ‘Risk management faulted in probe of BP disaster,’ Business
Insurance, 45(36): 4–25.

Company websites
Bank of America (2007) Annual Report 2007, available at www.rbs.com/microsites/
gra2007/downloads/RBS_GRA_2007.pdf.
Barclays (2007) Annual Report 2007, available at www.barclaysannualreport.com/index.
html.
Chase (2007) Annual Report 2007, available at http://investor.shareholder.com/
common/.
Citibank (2007) Annual Report 2007, available at www.citi.com/citi/fin/data/k07c.pdf.
Dominion (2001) ‘Internet banking struggles for profits,’ available at www.stuff.co.nz/
inl/index/0,1008,779016a28,FF.html.
HSBC (2007) Annual Report 2007, available at www.investis.com/reports/hsbc_ar_2007_
En/report.php?type=1.
Jupiter Research (2004) ‘FIND research, Institute for Information Industry,’ available at
http://www.find.org.tw.
Lloyds (2007) Annual Report 2007, available at www.investorrelations.lloydstsb.com/
media/pdf_irmc/ir/2007/2007_LTSB_Group_R&A.pdf.
Royal Bank of Scotland (2007) Annual Report 2007, available at www.rbs.com/microsites/
gra2007/downloads/RBS_GRA_2007.pdf.
SunTrust (2007) Annual Report 2007, available at www.suntrustenespanol.com/suntrust.
Wachovia (2007) Annual Report 2007, available at www.wachovia.com/file/2007_
Wachovia_Annual_Report.pdf.
Wells Fargo (2007) Annual Report 2007, available at www.wellsfargo.com/downloads/pdf/
invest_relations/wf2007annualreport.pdf.
Index

accounting perspective, 3–7
Adelphia, 13
AIG, 2, 15, 29–30, 73
Air Canada, 147
allocative efficiency (AE), 61
Ameriquest, 26
Analytic Hierarchy Process (AHP), 58, 59
anchoring, 111
Arbitrage pricing theory, 110
Arthur Andersen, 13, 164
artificial neural networks (ANN), 32, 43, 48, 125, 127–134
asset price volatility, 32
autoregressive conditional heteroscedasticity (ARCH) model, 192, 199, 201
autoregressive moving average (ARMA) model, 201
autoregressive process, 121, 139
availability, 111

backpropagation neural networks, 127–128
bags-of-words, 33
balanced scorecard, 58, 60, 73–74
Bank Credit Scoring, 72–86
Bank Efficiency Analysis, 99–107, 124–135
Bank of America, 2, 100, 104
Banker, Charnes & Cooper (BCC) DEA model, 62, 67–70, 126, 156
Barclay’s, 100
Basel Accords, 16, 73, 106
Bayesian analysis, 121–122
Beta (book to market value), 66
bilevel programming, 145–162
binomial option pricing model, 110
Black, Scholes, and Merton, 24, 112
British Petroleum, 118–123
bubbles, 15, 24, 111–112

California electricity, 11, 12–13
Canadian Imperial Bank of Commerce (CIBS), 147
Capital Asset Pricing Model (CAPM), 16, 109, 110
Capital market instruments, 73
carbon emission pricing, 183–198
catastrophe bonds, 73, 136–144
catastrophe equity puts (cat-e-puts), 73
catastrophe risk instruments, 136–138
Charnes, Cooper & Rhodes DEA model, 62, 127, 156
Chase, 101
Chinese earthquakes, 1, 136, 137, 175
Citigroup, 100
closeness coefficient, 89
coincidence matrix, 94–95
collateralized debt obligations (CDOs), 16, 73, 108
collateralized mortgage obligations (CMOs), 21
Committee on Sponsoring Organizations (COSO), 3–4, 6, 7
Compound Poisson loss model, 138
Conditional-Value-at-Risk (CVaR), 18
Confucius three-stage learning, 215–220
contingent surplus notes, 73
copulas, 15, 20–21
COSO ERM Cube, 4
COSO framework, 3–4
COSO internal control process, 3
Countrywide, 26
credit default swaps (CDSs), 16, 21, 29, 30
credit rating, 74
credit scorecard, 75–84
credit scoring, 90–97
Critical Infrastructure Protection Decision Support System (CIPDSS), 180–181
crude oil, 199–214

daily volatility model, 38–39
Data Envelopment Analysis (DEA), 57–71, 100, 124, 126–127, 149–162
data mining, 87–90
decision making unit (DMU), 61, 66, 70, 126, 131, 149, 150, 151
decision support system (DSS), 180–181
decision tree, 93–96
Deep Water Horizon, 118–123
derivatives, 24, 73
Distribution Free Approach, 100, 124
double marginalization, 151
DuPont model, 57

economic perspective, 108–117
Efficient Market Hypothesis, 23
efficient market theory, 109
emergency management, 179–180
emergency management support systems (EMSS), 180–181
energy risk, 167
Enron, 11–14, 15, 72, 163, 164
enterprise resource planning (ERP) systems, 14
ERM process, 7–8
Ericsson, 176
European Climate Exchange (ECX), 184, 189
Expected Utility Theory, 16
Exponential Generalized Autoregressive Conditional Heteroscedasticity (EGARCH) model, 193–194, 197, 202
Eyjafjallajökull, 168, 175

family regulation, 217–218
FEMA, 179
financial risk forecasting, 32–48
financial risk management, 15–22
financial statement analysis, 58, 60–61
Florida hurricanes, 1
food risk, 166
Fourier transformations, 17
framing, 111
free-disposal hull, 100, 124
Fuzzy set theory, 58, 59–60

Gaussian copula, 20, 108
Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model, 33, 34–42, 48, 49, 55–56, 192–197, 199–200, 202, 206–210
globalization, 168–169, 171–173
Golden West, 26
Green Tree, 26
grey relation analysis, 58, 60

H1N1 virus, 2
hedging, 15, 27
herding, 111
HSBC, 100

Icelandic volcano, 1
ICTCLAS System, 51
implementation issues, 7–8
incentive incompatibility, 151
indifference theory, 110
Innovative Support To Emergencies, Diseases And Disasters (InSTEDD), 179
International Organization for Standardization (ISO), 14
investment collars, 17

Kolmogorov-Smirnov statistic, 75, 77, 142
Kyoto protocol, 183, 184, 187

Lehman Brothers, 15, 30
Levy process, 17
Lexicon approach, 50
Li, David X., 20, 108
Lloyd’s of London, 1, 72, 101, 104, 174
London Market Exchange (LMX), 112
Long Term Capital Management (LTCM), 15, 24, 112, 164
Lorenz curve, 75, 78

machine learning, 32–48, 50
Macondo, 118, 119, 120
malicious activities, 164
marginal abatement cost curve, 185
Markov chain Monte Carlo, 144
Markov process, 17
Markov regime switching model, 210–214
Markowitz, 16, 109
mean variance, 110
merger evaluation, 150–152
Merrill Lynch, 2, 15
Minerals Management Service (MMS), 120, 121
Monte Carlo simulation, 19, 96, 110, 121, 140, 144, 209
Moody’s, 72
moral hazard, 115
mortgage system, 26
multiattribute utility, 181
Multivariate Statistical Analysis, 58, 106
mutualization, 115

National Disaster Medical System, 181
natural disasters, 164, 175–182
neural networks, see artificial neural networks
Nokia, 176
Nord Pool, 184
Northern Rock, 25, 26–29

online banking, 99–107
options-pricing model, 110
overall efficiency (OE), 61
overconfidence, 111

Pacific Gas & Electric, 12
Pareto distribution, 17
part of speech (POS) tagging, 50
Peregrine Systems, 13
performance validation, 74–75
perturbation, 96
Philips Electronics, 176
polarity tagging, 49
PowerNext, 184
principal component analysis (PCA), 99–107

real estate crash of 2008, 23–31
real estate cycle, 25
regime switching models, 203
Regional Integrated Model of Climate and The Economy (RIMCE), 185
Reinhart and Rogoff, 23
return on assets (ROA), 63
return on equity (ROE), 63
risk analysis, 7, 165
risk appetite, 6
risk exchange swaps, 73
risk identification, 7
risk management definition, 2
risk management framework, 110
risk management modeling, 9
risk management responsibilities, 8
risk management theories, 111
risk mitigation, 114
risk scoring, 64
risk tolerance, 114
Royal Bank of Scotland, 101, 104

SAHANA, 179
San Diego Gas & Electric Company, 12
Sarbanes-Oxley Act, 3, 13–14
self cultivation, 216–217
semantic techniques, 32
sentiment analysis, 39–41, 44–48, 49–56
Sharpe, William, 109
squared correlation coefficient, 45
Stackelberg game, 148
Standard & Poor’s (S&P), 72
state harmonization, 218–219
state preference theory, 110
Stochastic frontier analysis, 100, 124, 156
Stock Forum, 49–51, 52, 56
stock price volatility, 54
subprime banking crisis, 2
SunTrust, 101
supply chains, 147–149, 169, 170–171
support vector machines (SVM), 32, 39, 41–42, 43, 48, 49, 55–56
sustainability, 163–174
sustainable risk, 166–168
Swine Flu epidemic, 136
systemic failures, 164

Taleb, N.N., 20, 113
technical efficiency (TE), 61
terrorism, 1
thick frontier approach, 100, 124
TOPSIS, 87–98
trading volume volatility, 48
tranches, 15, 21–22, 26
Treadway Commission, 3
tsunamis, 1, 164, 176
Tyco International, 13

underinvestment problem, 110
United Nations intergovernmental panel on climate change (UNIPCC), 183

Value-at-risk (VaR), 15, 17–20, 31, 73, 121, 200
variable selection, 64–66
volatility forecasting model, 33–39, 200–214
volatility trend forecast accuracy, 45
volcanoes, 1

Wachovia, 101
weather derivatives, 73
Wells Fargo, 101
Wenchuan earthquake, 136, 137, 175
word segmentation, 50, 51
WorldCom, 13, 15, 72, 163, 164
