Operational Risk Management: A Review
Dean Fantazzini
Moscow
Overview of the Presentation
• Introduction
• The Standard LDA Approach with Comonotonic Losses
• The Canonical Aggregation Model via Copulas
• The Poisson Shock Model
• Bayesian Approaches
• References
Dean Fantazzini 2
The term “operational risk” is used to define all financial risks that are
not classified as market or credit risks. They include all losses due to
human errors, technical or procedural problems, etc. A popular framework for
modelling such losses is the Loss Distribution Approach (LDA).
Introduction
In particular, following BIS (2003), banks are allowed to choose among three
different approaches:
• The Basic Indicator approach (BI),
• the Standardized Approach (SA),
• the Advanced Measurement Approach (AMA).
If the basic indicator approach is chosen, banks are required to hold a flat
percentage of positive gross income over the past three years.
The Basic Indicator Approach
Banks using the basic indicator (BI) approach are required to hold a
capital charge set equal to a fixed percentage (denoted by α) of the
positive annual gross income (GI).
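As a rough illustration, the BI charge can be computed in a few lines. The value α = 15% and the rule of averaging only over years with positive gross income are the Basel 2 settings; the income figures below are invented:

```python
def bia_capital_charge(gross_income, alpha=0.15):
    """Basic Indicator Approach: alpha times the average annual gross
    income over the years in which it was positive (Basel 2 sets
    alpha = 15%; years with negative or zero GI are excluded)."""
    positive = [gi for gi in gross_income if gi > 0]
    if not positive:
        return 0.0
    return alpha * sum(positive) / len(positive)

# three years of annual gross income (illustrative figures, in millions)
print(bia_capital_charge([120.0, -30.0, 150.0]))  # 0.15 * (120 + 150) / 2 = 20.25
```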
The Standardized Approach
The model specifies eight business lines: Corporate Finance, Trading and Sales,
Retail Banking, Commercial Banking, Payment and Settlement, Agency Services
and Custody, Asset Management and Retail Brokerage.
For each business line, the capital charge is calculated by multiplying the gross
income by a factor denoted by β assigned to that business line.
The total capital charge is then calculated as a three-year average over positive
gross incomes, resulting in the following capital charge formula:
RC_{SA}^{t} = \frac{1}{3}\sum_{i=1}^{3}\max\left(\sum_{j=1}^{8}\beta_{j}\,GI_{j,t-i},\; 0\right) \qquad (2)
We remark that in formula (2), in any given year t − i, negative capital charges
resulting from negative gross income in some business line j may offset positive
capital charges in other business lines (albeit at the discretion of the national
supervisor).
⇒ This kind of netting should induce banks to move from the basic indicator to
the standardized approach.
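A minimal sketch of formula (2), including the netting behaviour just described. The gross-income figures are hypothetical, and a single β is used for all business lines to keep the example short (the actual Basel 2 betas vary by line, from 12% to 18%):

```python
def sa_capital_charge(gi, betas):
    """Standardized Approach, formula (2): for each of the last three
    years, sum beta_j * GI_{j,t-i} over the 8 business lines (negative
    lines may net against positive ones), floor the yearly total at
    zero, and average the three yearly charges."""
    yearly = [max(sum(b * g for b, g in zip(betas, year)), 0.0) for year in gi]
    return sum(yearly) / 3.0

betas = [0.15] * 8                                 # one illustrative beta for all lines
gi = [[100.0] * 8, [-100.0] * 8, [100.0] * 8]      # hypothetical yearly GI per line
print(sa_capital_charge(gi, betas))                # (120 + 0 + 120) / 3 = 80.0
```

Note how the middle year, with negative gross income in every line, contributes zero rather than a negative charge.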
Table 1.3 gives the beta factors for each business line:
Advanced Measurement Approaches
In the 2001 version of the Basel 2 agreement, the Committee described three
specific methods within the AMA framework:
• Internal Measurement Approach (IMA): according to this method, the OR
capital charge depends on the sum of the unexpected and expected losses:
the expected losses are computed using the bank’s historical data, while the
unexpected losses are obtained by multiplying the expected losses by a factor γ
derived from industry-wide analysis.
These questions are selected to cover drivers of both the probability and
impact of operational events, and the actions that the bank has taken to
mitigate them. In parallel with the scorecard development and piloting, the
bank’s total economic capital for operational risk is calculated and then
allocated to risk categories.
In the final version of the Basel 2 agreement, these models are no longer
mentioned, in order to allow for more flexibility in the choice of internal
measurement methods.
Given its increasing importance (see e.g. Cruz, 2002) and the possibility of
applying econometric methods, we will focus here only on the LDA approach.
The Standard LDA Approach with Comonotonic Losses
Once the risk measures for each loss Si are estimated, the global VaR (or
ES) is usually computed as the simple sum of these individual measures:
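The comonotonic aggregation just described can be sketched as follows. The lognormal margins, the three-cell setup and the 99.9% confidence level are illustrative assumptions, not choices made in the text:

```python
import numpy as np

def var_(losses, alpha=0.999):
    """Empirical Value-at-Risk: the alpha-quantile of the loss sample."""
    return np.quantile(losses, alpha)

rng = np.random.default_rng(42)
# hypothetical aggregate losses for 3 ET/BL cells (lognormal, arbitrary means)
cells = [rng.lognormal(mean=m, sigma=1.0, size=100_000) for m in (1.0, 2.0, 3.0)]

# comonotonic (perfect positive dependence) aggregation:
# the global VaR is the simple sum of the individual VaRs
global_var = sum(var_(c) for c in cells)
```

Summing individual VaRs corresponds to assuming perfect positive dependence across cells, which is why it is usually considered a conservative upper bound.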
If all the margins are continuous, then the copula is unique; otherwise C is
uniquely determined on RanF1 × RanF2 × . . . × RanFn, where Ran denotes the
range of the marginals. Conversely, if C is a copula and F1, . . . , Fn are
distribution functions, then the function H defined in (2.2) is a joint
distribution function with margins F1, . . . , Fn.
The Canonical Aggregation Model via Copulas
f(x_1,\dots,x_n) = \frac{\partial^{n} C(F_1(x_1),\dots,F_n(x_n))}{\partial F_1(x_1)\cdots\partial F_n(x_n)} \cdot \prod_{i=1}^{n} f_i(x_i) = c(F_1(x_1),\dots,F_n(x_n)) \cdot \prod_{i=1}^{n} f_i(x_i)

where

c(F_1(x_1),\dots,F_n(x_n)) = \frac{f(x_1,\dots,x_n)}{\prod_{i=1}^{n} f_i(x_i)}, \qquad (6)
By using this procedure, we can derive the Normal and the T-copula...
1. Normal copula:

c(\Phi(x_1),\dots,\Phi(x_n)) = \frac{f^{Gaussian}(x_1,\dots,x_n)}{\prod_{i=1}^{n} f_i^{Gaussian}(x_i)} = \frac{\frac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}x'\Sigma^{-1}x\right)}{\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}x_i^2\right)} = \frac{1}{|\Sigma|^{1/2}}\exp\left(-\frac{1}{2}\zeta'(\Sigma^{-1}-I)\zeta\right)

where ζ = (Φ^{-1}(u_1), . . . , Φ^{-1}(u_n))′ is the vector of univariate Gaussian
inverse distribution functions, u_i = Φ(x_i), while Σ is the correlation matrix.

2. T-copula:

c(t_\nu(x_1),\dots,t_\nu(x_n)) = \frac{f^{Student}(x_1,\dots,x_n)}{\prod_{i=1}^{n} f_i^{Student}(x_i)} = |\Sigma|^{-1/2}\,\frac{\Gamma\left(\frac{\nu+n}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)}\left[\frac{\Gamma\left(\frac{\nu}{2}\right)}{\Gamma\left(\frac{\nu+1}{2}\right)}\right]^{n}\frac{\left(1+\frac{\zeta'\Sigma^{-1}\zeta}{\nu}\right)^{-\frac{\nu+n}{2}}}{\prod_{i=1}^{n}\left(1+\frac{\zeta_i^2}{\nu}\right)^{-\frac{\nu+1}{2}}}

where ζ = (t_ν^{-1}(u_1), . . . , t_ν^{-1}(u_n))′ is the vector of univariate Student’s
t inverse distribution functions, ν are the degrees of freedom, u_i = t_ν(x_i),
while Σ is the correlation matrix.
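A short sketch of how the Normal copula above can be used to simulate dependent losses. The correlation value, the lognormal margins and the 99.9% level are hypothetical choices for illustration:

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, n_sims, rng):
    """Draw uniforms whose dependence structure is a Normal copula
    with correlation matrix `corr`."""
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n_sims)
    return stats.norm.cdf(z)          # map each Gaussian margin to U(0,1)

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.5], [0.5, 1.0]])    # illustrative correlation
u = gaussian_copula_sample(corr, 100_000, rng)

# plug the uniforms into arbitrary (here lognormal) loss margins,
# then aggregate across cells and read off a risk measure
losses = stats.lognorm.ppf(u, s=1.0, scale=np.exp(2.0)).sum(axis=1)
var_999 = np.quantile(losses, 0.999)
```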
→ Simulation results:
• Empirical Analysis
→ The dataset contains 407 loss events overall, organized in 2 business
lines and 4 event types, so that we have 8 possible risky combinations (or
intersections) to deal with.
→ The overall average monthly loss was 202,158 euro; the minimum was 0 (in
September 2001), while the maximum was 4,570,852 euro (which took place in
July 2003).
The Poisson Shock Model
Suppose there are m different types of shock or event and, for e = 1, . . . , m,
let n^e_t be a Poisson process with intensity λ_e recording the number of events
of type e occurring in (0, t].
Assume further that these shock counting processes are independent. Consider
losses of R different types and, for i = 1, . . . , R, let n_{it} be a counting
process that records the frequency of losses of the ith type occurring in (0, t].
According to the Poisson Shock Model, the loss processes n_{it} are clearly
Poisson themselves, since they are obtained by the superposition of m
independent Poisson processes generated by the m underlying event processes.
These shocks cause a certain number of losses in the i-th ET/BL, whose
severities are (X^e_{ir}), r = 1, . . . , n^e_t, where the (X^e_{ir}) are i.i.d.
with distribution function F_{it} and independent of n^e_t.
⇒ As the previous discussion makes clear, the key point of this approach is to
identify the underlying m Poisson processes: unfortunately, this field of study
is quite recent and more research has to be done in this regard. Moreover, the
paucity of data limits any precise identification.
Embrechts and Puccetti (2008) and Rachedi and Fantazzini (2009) allow for
positive/negative dependence among the shocks (n_{it}) and also among the loss
severities (X_{ij}), but the numbers of shocks and the loss severities are
independent of each other:
H^f(n_{1t},\dots,n_{Rt}) = C^f(F(n_{1t}),\dots,F(n_{Rt}))
H^s(X_{1j},\dots,X_{Rj}) = C^s(F(X_{1j}),\dots,F(X_{Rj}))
H^f \perp H^s
Equivalently, if we use the mean loss for the period, i.e. s_{it}, we have:
1. Fit the frequency and severity distributions as in the standard LDA
approach, and compute the corresponding cumulative distribution functions.
2. Fit a copula C^f to the frequency c.d.f.’s (see the next subsection for an
important remark about this issue).
5. Invert each component u^f_{it} with the respective inverse distribution
function F^{-1}(u^f_{it}), to determine a random vector (n_{1t}, ..., n_{Rt})
describing the number of loss observations.
7. Invert each component u^s_i with the respective inverse distribution function
F^{-1}(u^s_i), to determine a random vector (X_{1j}, ..., X_{Rj}) describing the
loss severities.
8. Convolve the frequencies’ vector (n_{1t}, . . . , n_{Rt}) with the vector of
severities (X_{1j}, . . . , X_{Rj}).
9. Repeat the previous steps a great number of times, e.g. 10^6 times.
In this way it is possible to obtain a new matrix of aggregate losses, which can
then be used to compute the usual risk measures such as the VaR and ES.
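The steps above can be sketched as follows. For brevity the severity copula C^s is replaced by independent lognormal draws, the number of replications is reduced, and all parameter values (intensities, correlation) are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
R, n_sims = 2, 50_000
lambdas = np.array([3.0, 5.0])               # hypothetical Poisson intensities
corr = np.array([[1.0, 0.4], [0.4, 1.0]])    # hypothetical frequency correlation

# Steps 2/5: Normal copula for the frequencies, inverted through Poisson ppf's
z = rng.multivariate_normal(np.zeros(R), corr, size=n_sims)
u_f = stats.norm.cdf(z)
n = stats.poisson.ppf(u_f, lambdas).astype(int)     # (n_1t, ..., n_Rt)

# Steps 7/8: independent lognormal severities, convolved with the frequencies
agg = np.zeros(n_sims)
for i in range(R):
    agg += np.array([rng.lognormal(0.0, 1.0, size=k).sum() for k in n[:, i]])

# the usual risk measures on the simulated aggregate losses
var_999 = np.quantile(agg, 0.999)
```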
Note: copula modelling for discrete marginals is an open problem, see Genest and
Nešlehová (2007, “A primer on copulas for count data”, Astin Bulletin), for a
recent discussion. Therefore, some care has to be taken when considering the
estimated risk measures.
According to Sklar (1959), when some components of the joint distribution are
discrete (as in our case), the copula function is not uniquely defined on
[0,1]^n, but only on the Cartesian product of the ranges of the n marginal
distribution functions.
Two approaches have been proposed to overcome this problem. The first method
was proposed by Cameron et al. (2004) and is based on finite-difference
approximations of the derivatives of the copula function, where ∆_k, for
k = 1, . . . , n, denotes the k-th component first-order differencing operator.
The second method is the continuization method suggested by Stevens (1950) and
Denuit and Lambert (2005). It is based on generating artificially continued
variables x*_1, . . . , x*_n by adding independent random variables
u_1, . . . , u_n, each uniformly distributed on [0,1], to the discrete count
variables x_1, . . . , x_n; this transformation does not change the concordance
measures between the variables.
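A minimal sketch of the continuization step (the Poisson counts are illustrative):

```python
import numpy as np

def continuize(counts, rng):
    """Continuization (jittering): add an independent U(0,1) draw to each
    discrete count so that the margins become continuous. Rank-based
    dependence measures such as Kendall's tau are unaffected."""
    counts = np.asarray(counts, dtype=float)
    return counts + rng.uniform(0.0, 1.0, size=counts.shape)

rng = np.random.default_rng(7)
x = rng.poisson(3.0, size=(1000, 2))   # two discrete count margins
x_star = continuize(x, rng)            # continued versions x*_1, x*_2
```

Since the added uniform lies in [0, 1), the integer part of each continued value recovers the original count exactly.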
In short, EVT states that the losses exceeding a given high threshold u converge
asymptotically to the GPD, whose cumulative distribution function is usually
expressed as follows:

GPD_{\xi,\beta}(y) =
\begin{cases}
1-\left(1+\xi\,\dfrac{y}{\beta}\right)^{-1/\xi} & \xi \neq 0 \\
1-\exp\left(-\dfrac{y}{\beta}\right) & \xi = 0
\end{cases} \qquad (13)

where y = x − u, with y ≥ 0 if ξ ≥ 0 and 0 ≤ y ≤ −β/ξ if ξ < 0; the y are
called excesses, whereas the x are called exceedances.
Moreover, this parameter has a direct connection with the existence of finite
moments of the loss distributions: we have

E[x^k] = \infty \quad \text{if } k \geq 1/\xi

Di Clemente and Romano (2004) and Rachedi and Fantazzini (2009) suggest
modelling the mean loss severity s_{it} using the lognormal for the body of the
distribution and EVT for the tail, in the following way:
F_i(s_{it}) =
\begin{cases}
\Phi\left(\dfrac{\ln s_{it}-\mu_i}{\sigma_i}\right) & 0 < x < u_i \\[6pt]
1-\dfrac{N_{u,i}}{N_i}\left(1+\xi_i\,\dfrac{s_{it}-u_i}{\beta_i}\right)^{-1/\xi_i} & u_i \leq x
\end{cases} \qquad (15)
where Φ is the standardized normal cumulative distribution function, N_{u,i} is
the number of losses exceeding the threshold u_i, N_i is the number of loss data
observed in the ith ET, whereas β_i and ξ_i denote the scale and the shape
parameters of the GPD, respectively.
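Equation (15) can be evaluated as below. Every parameter value is illustrative, and note that with arbitrary parameters the lognormal body and the GPD tail need not join continuously at the threshold u_i:

```python
import math

def spliced_cdf(s, mu, sigma, u, xi, beta, n_u, n):
    """Spliced severity c.d.f. of eq. (15): lognormal body below the
    threshold u, GPD tail above it. All parameter values passed below
    are illustrative, not estimates from the paper."""
    if s < u:
        # lognormal c.d.f. via the error function
        return 0.5 * (1.0 + math.erf((math.log(s) - mu) / (sigma * math.sqrt(2.0))))
    # GPD tail, weighted by the empirical exceedance probability n_u / n
    return 1.0 - (n_u / n) * (1.0 + xi * (s - u) / beta) ** (-1.0 / xi)

# hypothetical parameters: 100 of 1000 losses exceed the threshold u = 10
F = spliced_cdf(5.0, mu=0.0, sigma=1.0, u=10.0, xi=0.5, beta=2.0, n_u=100, n=1000)
```

At s = u the tail branch gives exactly 1 − N_u/N, i.e. the empirical probability of not exceeding the threshold.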
For example, the graphical analysis for the ET3 in Rachedi and Fantazzini
(2009) reported in Figures 1-2 clearly shows that operational risk losses are
characterized by high frequency – low severity and low frequency – high
severity losses.
Figure 1: Scatter plot of ET3 losses. The dotted lines represent, respectively,
the mean and the 90%, 95% and 99.9% empirical quantiles.
The resulting total operational risk capital charge for the three models is
reported below:
Bayesian Approaches
This makes the process of data recovery generally more difficult, since
financial institutions only started to collect operational loss data a few
years ago.
References
⇛ ... the book I’m writing with prof. Aivazian (CEMI)... STAY TUNED!