
Financial Econometrics II

Credit Risk. Part 2

Zuzanna Wośko
SGH Warsaw School of Economics

Transition matrices

General information on transition matrices

• They serve as an input to many credit risk analyses, e.g., in the measurement of credit portfolio risk.
• Usually estimated from observed historical rating transitions.
• For agency ratings there is practically no alternative to using historical transitions, because agencies do not associate their grades with probabilities of default or transition.
• The most popular estimation procedures build on historical transitions:
1) The cohort approach
2) The hazard approach

General information on transition matrices

[Figure: historical rating transition matrices. Source: Moody's]
General idea of transition matrix

• Transition matrices measure the probability of moving from one credit state to another.
• The probability of transitioning from a non-defaulted category into the default category is the PD.
• Matrices can be constructed with a chosen periodicity – for example, a quarterly transition matrix can be created.
• The cumulative PD for a loan is obtained by matrix multiplication.
– For example, multiply a quarterly matrix 9 times to get a 9-quarter loss estimate.
– Several firms use dimensionality-reduction techniques for computational reasons.
• Transition rates are not constant over time. The direction and rate of change vary with economic cycles.
• With sufficient data, these changes in the transitions can be measured. Instead of having one generic transition matrix, an institution can create either a series of transition matrices or a conditional matrix in which the speed and direction of each cell in the matrix change with macro conditioning variables.
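The matrix-multiplication step above can be sketched as follows. This is a minimal Python sketch with a hypothetical 3-state quarterly matrix (the course exercises use R, but the algebra is identical):

```python
import numpy as np

# Hypothetical quarterly transition matrix for 3 states:
# Performing, Watch, Default. Rows sum to 1; the last row makes Default absorbing.
Q = np.array([
    [0.95, 0.04, 0.01],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],
])

# Raising the quarterly matrix to the 9th power gives the 9-quarter matrix;
# the Default column of the Performing row is the cumulative 9-quarter PD.
Q9 = np.linalg.matrix_power(Q, 9)
cum_pd_9q = Q9[0, 2]
print(round(cum_pd_9q, 4))
```

Note that the cumulative PD is well above the one-quarter PD of 1%, because defaults accumulate through the intermediate states.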
General idea of transition matrix

Absorbing states
• Default is an absorbing state, meaning that loans that transition into default do
not transition out.
• However, we have seen cases where a non-default cell has such a high transition percentage (e.g. 99.5%) that it essentially becomes another absorbing state; nothing then transitions to default, artificially reducing loss estimates.
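A simple diagnostic for this pathology — flagging non-default grades whose diagonal entry exceeds a chosen threshold — might look like this (Python sketch; the 0.995 threshold mirrors the 99.5% example above, and the matrix is illustrative):

```python
import numpy as np

# Flag near-absorbing non-default states: a diagonal entry close to 1 means
# essentially nothing ever leaves that grade, which artificially suppresses
# transitions to default.
def near_absorbing_states(P, threshold=0.995):
    P = np.asarray(P)
    # exclude the last row, which is the genuine absorbing Default state
    return [i for i in range(P.shape[0] - 1) if P[i, i] >= threshold]

P = np.array([
    [0.996, 0.003, 0.001],   # pathological: grade 1 is almost absorbing
    [0.050, 0.900, 0.050],
    [0.000, 0.000, 1.000],
])
print(near_absorbing_states(P))  # → [0]
```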

Cohort approach

A traditional technique that estimates transition probabilities through historical transition frequencies.
It does not make full use of the available data: the estimates are not affected by the timing and sequencing of transitions within a year.

Let:
N_{i,t} – the number of obligors in category i at the beginning of period t (the size of the cohort i,t)
N_{ij,t} – the number of obligors from cohort i,t that have obtained grade j by the end of period t
The transition frequencies in period t are computed as:

\hat{p}_{ij,t} = \frac{N_{ij,t}}{N_{i,t}}    (1)

Cohort approach

Usually, a transition matrix is estimated with data from several periods.
A common way of averaging the period transition frequencies is the obligor-weighted average, which uses the number of obligors in a cohort as weights:

\hat{p}_{ij} = \frac{\sum_t N_{i,t}\,\hat{p}_{ij,t}}{\sum_t N_{i,t}}    (2)

Inserting (1) into (2) leads to:

\hat{p}_{ij} = \frac{\sum_t N_{i,t}\,\frac{N_{ij,t}}{N_{i,t}}}{\sum_t N_{i,t}} = \frac{\sum_t N_{ij,t}}{\sum_t N_{i,t}}    (3)

Therefore, the obligor-weighted average can be obtained directly by dividing the overall sum of transitions from i to j by the overall number of obligors that were in grade i at the start of the considered periods.
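Formula (3) can be implemented directly by pooling all observed (start grade, end grade) pairs across periods. A minimal Python sketch with toy data:

```python
import numpy as np

# Cohort estimator: p_ij = sum_t N_ij,t / sum_t N_i,t, i.e. pooled transition
# counts divided by pooled cohort sizes. Input: one row per obligor-period
# with the grade at the start and at the end of the period.
def cohort_matrix(start_grades, end_grades, n_grades):
    counts = np.zeros((n_grades, n_grades))
    for i, j in zip(start_grades, end_grades):
        counts[i, j] += 1
    row_totals = counts.sum(axis=1, keepdims=True)
    # rows with no observations are left as zeros instead of dividing by 0
    return np.divide(counts, row_totals, out=np.zeros_like(counts), where=row_totals > 0)

# Toy data: grades coded 0..2, pooled over all periods.
start = [0, 0, 0, 0, 1, 1, 1, 2]
end   = [0, 0, 1, 0, 1, 2, 0, 2]
P = cohort_matrix(start, end, 3)
print(P[0].tolist())  # → [0.75, 0.25, 0.0]
```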

Cohort approach

Exercise:
02_transition_matrix.R

Assuming that "1" is the best rating and "6" is the worst,
describe the results and answer:
• Which states are absorbing?
• What do the values of the diagonal elements mean?
• How does the transition matrix change with the economic cycle?
• Calculate the transition matrix after 8 quarters. What assumption does such an approach make?
Calibrating to ratings

Ways of calibration to ratings

A rating calibration typically involves the mapping of a model output (usually a PD) to a rating class label.
Two approaches:
• Rating system as given (rating system-centric)
• PD as given (PD-centric)
Construction of cutoffs:
• Actual
• Arithmetic mean (midpoint)
• Geometric mean
Mappings:
• Through historical default rates
• To quantiles of rating distribution
• To rating class with closest average PD
Mapping through historical default rates

See example code: 02_mapping2ratings.R

1. Calculate historical default rates, DR_j, for each rating category, j = 1, 2, …, J, where 1 is the best credit quality and J is the worst.
2. For each exposure i:
a) Calculate PD_i based on the PD model.
b) If PD_i < DR_1, assign rating class 1.
c) If PD_i > DR_J, assign rating class J.
d) Else:
i. Find two rating classes such that DR_j < PD_i < DR_{j+1}.
ii. Calculate the cutoff c_{j,j+1} = C(DR_j, DR_{j+1}) with one of the 3 methods.
iii. If PD_i < c_{j,j+1}, assign rating j; otherwise assign rating j+1.
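The steps above can be sketched in Python as follows, using the arithmetic-mean (midpoint) cutoff; the default rates DR_1..DR_4 are hypothetical:

```python
import bisect

# Map a model PD to a rating class via historical default rates
# (DR_1 < ... < DR_J), using the arithmetic-mean (midpoint) cutoff.
def map_to_rating(pd_i, drs):
    J = len(drs)
    if pd_i < drs[0]:
        return 1
    if pd_i >= drs[-1]:
        return J
    j = bisect.bisect_right(drs, pd_i)   # so that drs[j-1] <= pd_i <= drs[j]
    cutoff = (drs[j - 1] + drs[j]) / 2   # arithmetic-mean cutoff c_{j,j+1}
    return j if pd_i < cutoff else j + 1

drs = [0.001, 0.005, 0.02, 0.10]         # hypothetical DR_1..DR_4
print(map_to_rating(0.003, drs))  # → 2
```

A geometric-mean cutoff would simply replace the midpoint with `(drs[j-1] * drs[j]) ** 0.5`, which sits lower between widely spaced default rates.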

LGD models

Prediction of LGD

If default:
Lender's claim: 100 (EAD)
Lender receives: 40
LGD = (100 − 40)/100 = 60%

Problem: bankruptcy procedures take years to resolve, so the lender may receive the claim with time lags.

LGD regressors' categories:
1. Instrument-related (for example seniority, liquidity of the market, S&P rating, etc.)
2. Firm-specific (financial ratios, SME or large, industry, etc.)
3. Macroeconomic (GDP growth, capacity utilisation, corporate bond spreads)
4. Industry-specific (ratios and indices which reflect a particular industry)

Example specification:

LGD_{ijt} = \beta_0 + \beta_1 \, mean\_LGD_{j,t-1} + \beta_2 \, def\_rate_{i,t-1} + \beta_3 \, lev_{i,t-1} + \varepsilon_{ijt}

Where:
LGD_{ijt} – LGD on instrument j of firm i, observed in year t
mean_LGD_{j,t−1} – average LGD of instruments of the same type as j, computed with data ending in t−1
def_rate_{i,t−1} – default rate observed in year t−1 for the industry to which borrower i belongs
lev_{i,t−1} – leverage of firm i in t−1

As the model deals with instrument-level data, one firm can contribute several observations to the data set. For example, a company can enter with a senior unsecured bond and a subordinated bond. This is likely to lead to correlations in the error terms. In the presence of such correlations, grouped in clusters, OLS coefficient estimates are still reliable, but the standard errors are not.
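One standard remedy is cluster-robust (Liang–Zeger) standard errors grouped by firm. A minimal numpy sketch on simulated instrument-level data with a common within-firm error component (all names and numbers below are illustrative, not the course data set):

```python
import numpy as np

# OLS with firm-clustered standard errors: point estimates are plain OLS,
# but the covariance uses the Liang-Zeger sandwich with firm-level scores.
def ols_clustered(X, y, groups):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        Xg, ug = X[groups == g], u[groups == g]
        sg = Xg.T @ ug                    # score summed within the cluster
        meat += np.outer(sg, sg)
    V = XtX_inv @ meat @ XtX_inv          # sandwich covariance
    return beta, np.sqrt(np.diag(V))

rng = np.random.default_rng(0)
firms = np.repeat(np.arange(50), 3)       # 50 firms, 3 instruments each
X = np.column_stack([np.ones(150), rng.normal(size=150)])
firm_shock = rng.normal(size=50)[firms]   # error component shared within a firm
y = 0.6 + 0.3 * X[:, 1] + firm_shock + rng.normal(scale=0.5, size=150)
beta, se = ols_clustered(X, y, firms)
```

In practice the same result comes from, e.g., `lm` plus a cluster-robust covariance in R or `cov_type='cluster'` in statsmodels; the sketch only makes the mechanics explicit.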
Portfolio approach

Use of VaR

Portfolio credit risk models produce a probability distribution for portfolio credit losses (and gains, if it is a mark-to-market model). To validate the quality of a given model, we can examine whether observed losses are consistent with the model's predictions.
Market risk models produce loss forecasts for a portfolio (for example, for the trading book of a bank), but the underlying horizon is much shorter – often restricted to a single day. A standard procedure is to check the frequency with which actual losses exceeded the VaR (usually the 99% VaR).
Such a test is not very useful for the validation of credit portfolio models, which mostly have a one-year horizon. We would have to wait 250 years until we gain as many observations as we do after one year of tracking a market risk model.
Way out → do not test a prediction of extreme events, but rather test the overall fit of the predicted loss distribution.

Berkowitz (2001) test

Input data (yearly):
• A loss figure (for example, 130 million EUR), L_t
• A forecast of the loss distribution made at the start of the period

The basic idea behind the Berkowitz test is to evaluate the entire distribution. The test involves a double transformation of observed losses, with the two transformations as follows:
1. Replace L_t by the predicted probability of observing this loss or a smaller one – insert the loss into the CDF:

p_t = F(L_t)

2. Transform p_t by applying \Phi^{-1}(\cdot), the inverse standard normal CDF:

z_t = \Phi^{-1}(p_t)

Berkowitz (2001) test

• The 1st transformation produces numbers between 0 and 1. If the predicted distribution is correct, the numbers should be uniformly distributed between 0 and 1 (check the median – it should equal 0.5).
• In principle, we could stop after the 1st transformation and test whether the p_t are uniformly distributed between 0 and 1. But tests based on normally distributed numbers are often more powerful, which is why the 2nd transformation is used.
• If the model summarized by F(L) is correct, the transformed losses z_t will be normally distributed with zero mean and unit variance.
• Berkowitz (2001) suggested restricting the test to the hypothesis that the z_t have zero mean and unit variance (joint test), using a likelihood ratio test.

Berkowitz (2001) test

L(\mu, \sigma^2) = \prod_{t=1}^{T} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(z_t - \mu)^2}{2\sigma^2}\right)

\ln L = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln\sigma^2 - \sum_{t=1}^{T} \frac{(z_t - \mu)^2}{2\sigma^2}

ML estimators:

\hat{\mu}_{ML} = \frac{1}{T}\sum_{t=1}^{T} z_t

\hat{\sigma}^2_{ML} = \frac{1}{T}\sum_{t=1}^{T} (z_t - \hat{\mu}_{ML})^2

Berkowitz (2001) test

Test statistic:

LR = 2\left[\ln L(\hat{\mu}_{ML}, \hat{\sigma}^2_{ML}) - \ln L(0, 1)\right]

which is asymptotically distributed as chi-squared with 2 degrees of freedom.
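The whole procedure — double transformation, ML estimates, LR statistic — can be sketched in Python with only the standard library. The loss model and data below are illustrative; note that for 2 degrees of freedom the chi-squared survival function is simply exp(−LR/2):

```python
import math
from statistics import NormalDist

std_normal = NormalDist()

def berkowitz_lr(losses, cdf):
    # Double transformation: p_t = F(L_t), then z_t = Phi^{-1}(p_t)
    z = [std_normal.inv_cdf(cdf(L)) for L in losses]
    T = len(z)
    mu = sum(z) / T                                   # ML mean
    var = sum((x - mu) ** 2 for x in z) / T           # ML variance (1/T denominator)
    def loglik(m, v):
        return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
                   for x in z)
    return 2 * (loglik(mu, var) - loglik(0.0, 1.0))   # LR statistic

# Illustrative check: losses actually drawn from the forecast distribution,
# so the test should usually not reject.
model = NormalDist(100, 20)
losses = model.samples(30, seed=1)
lr = berkowitz_lr(losses, model.cdf)
pval = math.exp(-lr / 2)                              # chi2(2) survival function
```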

See example in Excel file: Berkowitz.xls

Stress test models

General economic rules of stress test model
construction
• 2 types of stress test models (approaches):
- top-down (1)
- bottom-up (2)

• These are time series or panel data models.
• Dependent variable: a risk parameter (for example, approach 1 – PD, LGD; approach 2 – LLP, NIM)
• Regressors (macroeconomic variables):
- Income side of debtors (GDP, unemployment, industrial production)
- Cost of credit (interest rate, exchange rate)
- Leverage (debt ratios)
- Collateral prices (house prices, commercial real estate, ship and vehicle prices)
- Sometimes, if needed, additional variables → stock indices

Stress scenarios

Financial Supervisory Authority or internal need of stress testing: assumptions for a scenario, for example an oil price shock, FX shock, political shock.

FSA types:
• National: KNF, NBP (for Germany: BaFin, Deutsche Bundesbank)
• International: ECB, EBA, IMF (FSAP)

The economic team simulates the paths of the main macrovariables for a given scenario (structural models, (S)VAR models, expert judgement). Risk specialists then take the existing models, estimated on an updated sample, and input the stress paths of the regressors into them. An ex ante stress path for a given dependent variable (for example, PD) is generated.

Example of top-down stress testing model:
LLP (Loan Loss Provisions)
Panel model:

y_{it} = x_{it}'\beta + \alpha_i + \varepsilon_{it}

where t = 1, 2, ..., T and i = 1, 2, ..., N; [x_{it}]_{1 \times K}, [\beta]_{K \times 1}, \varepsilon_{it} \sim IID(0, \sigma_\varepsilon^2);
i – bank's number, t – quarter.

A dynamic version with an AR(1) term can also be used:

y_{it} = \delta y_{i,t-1} + x_{it}'\beta + \alpha_i + \varepsilon_{it}

Example: Głogowski 2008

Example of top-down stress testing model:
LLP (Loan Loss Provisions)
Example explanatory variables:
• gdp – GDP growth, y/y,
• real_wib3m – WIBOR 3M, deflated with CPI,
• D4_unempl – unemployment rate, y/y change,
• dm_car – CAR (capital adequacy ratio) – deviation from the sector median,
• b_hous_ln_share – share of housing loans in loans to households in a given bank,
• b_hh_ln_share – share of household loans in loans to the non-financial sector,
• dm_ln_nf_gr_qq_all – loan growth in a given bank, deviation from the sector average,
• dummy variables classifying a bank into strategic groups, interaction variables.

Example stress test result (Głogowski 2008)

Example of bottom-up stress testing model:
PD model (PD calculated from CDS data)
PD can be calculated not only from internal data on defaults or from auxiliary external databases with the default history of other institutions, but also from market CDS data.
We use the information on Credit Default Swap (CDS) quotations. The CDS spread s translates into a PD by applying the hazard rate model:

PD_t = 1 - e^{-\lambda t}, \qquad \lambda = \frac{s}{1 - RR}

where RR is the assumed recovery rate – for example, using the 5-year CDS spread for a bond of a given corporate.
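A direct implementation of this formula (Python sketch; the 150 bp spread and the 40% recovery rate are illustrative values, not market data):

```python
import math

# PD implied by a CDS spread via the constant-hazard-rate formula:
# lambda = s / (1 - RR), PD(t) = 1 - exp(-lambda * t).
def cds_implied_pd(spread_bps, recovery, years):
    s = spread_bps / 10_000.0          # convert basis points to a decimal spread
    lam = s / (1.0 - recovery)         # constant hazard rate
    return 1.0 - math.exp(-lam * years)

pd_5y = cds_implied_pd(spread_bps=150, recovery=0.40, years=5)
print(round(pd_5y, 4))  # → 0.1175
```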


Such probabilities should be aggregated monthly, quarterly or yearly –
depending on the frequency o macroeconomic data.

Let assume, that macrovariables.xls file includes aggregated values of PD


according to above given formula (let’s say, for large corporates portfolio).
Example of bottom-up stress testing model:
PD model (PD calculated from CDS data)
We will estimate a logit model (a generalized linear model approach), although the raw dependent variable, PD, is not binary. We want the forecasts to stay within the range [0, 1].

See: 02_stressmodel.R
(use the "quasibinomial" option)
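Outside R, the same quasi-binomial logit can be sketched as a fractional logit fitted by Newton–Raphson: the estimating equations are those of logistic regression, but the dependent variable is a fraction in [0, 1]. The data below are simulated, not the macrovariables.xls series:

```python
import numpy as np

# Fractional (quasi-binomial) logit via Newton-Raphson. The logit link keeps
# fitted values and forecasts inside [0, 1] even though y is not 0/1.
def fractional_logit(X, y, iters=25):
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))          # mean function
        W = mu * (1.0 - mu)                           # working weights
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(2)
x = rng.normal(size=200)
X = np.column_stack([np.ones(200), x])
true = 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * x)))        # true PD path
y = np.clip(true + rng.normal(scale=0.02, size=200), 1e-4, 1 - 1e-4)
beta = fractional_logit(X, y)
fitted = 1.0 / (1.0 + np.exp(-X @ beta))
```

This mirrors what `glm(..., family = quasibinomial)` does in the R exercise: identical point estimates to a binomial GLM, with a dispersion parameter estimated separately for the standard errors.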

Exercise:
Change the last quarters of the time series in macrovariables.xls according to your own stress scenario. Compare the results with the baseline scenario. How large are the deviations from the baseline PD?

Model validation

Model validation issues

1. Documentation check (model and codes)
2. Data check: up-to-dateness, quality
3. Portfolio analysis: model data should reflect the size and structure of the given bank's portfolio (for example, an SME portfolio)
4. Economic relations check – whether the economic assumptions are reasonable
5. Method and model assumptions check
6. Statistical backtesting with measures and statistics suited to the given model (are the parameters correct, or is recalibration needed?)
7. Check of past overrides by experts (were there many expert corrections in the past?)
8. Implementation checks

Model validation issues

In the case of validating a rating system, we ask:
1. How well does a rating system rank borrowers according to their true
probability of default (PD)?
2. How well do estimated PDs match the true PDs?

Example of backtesting procedures:
Bootstrapping confidence intervals
Steps of estimating a confidence interval for the AUC through bootstrapping:
1. From the N observations on ratings and defaults, draw N times with replacement (draw pairs of ratings and defaults, to be precise).
2. Compute the AUC with the data re-sampled in step 1.
3. Repeat steps 1 to 2 M times.
4. To construct a 1−α confidence interval for the AUC, determine the α/2 and the 1−α/2 percentiles of the bootstrapped AUCs.
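The four steps above can be sketched in Python as follows. The AUC is computed as the Mann–Whitney statistic (the probability that a defaulter's score exceeds a non-defaulter's, with ties counted as 1/2); scores and default flags are toy data:

```python
import random

# AUC as the Mann-Whitney probability over all (defaulter, non-defaulter) pairs.
def auc(scores, defaults):
    bad = [s for s, d in zip(scores, defaults) if d == 1]
    good = [s for s, d in zip(scores, defaults) if d == 0]
    wins = sum((b > g) + 0.5 * (b == g) for b in bad for g in good)
    return wins / (len(bad) * len(good))

# Percentile bootstrap CI: resample (score, default) pairs, recompute the AUC
# M times, and take the alpha/2 and 1-alpha/2 percentiles.
def bootstrap_auc_ci(scores, defaults, M=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(scores)
    stats = []
    for _ in range(M):
        idx = [rng.randrange(n) for _ in range(n)]   # draw N pairs with replacement
        s = [scores[i] for i in idx]
        d = [defaults[i] for i in idx]
        if 0 < sum(d) < n:                           # need both classes in a resample
            stats.append(auc(s, d))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi

scores   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2, 0.1]  # higher = riskier
defaults = [1,   1,   0,   1,   0,   0,   1,    0,   0,   0]
lo, hi = bootstrap_auc_ci(scores, defaults, M=500)
```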

See:
02_bootstrapping.R
