
2018ZA(1), 2017ZA(7), 2016ZA(2), 2015ZA(2), 2014ZA(2), 2013ZA(2)

1. Use the Diamond and Dybvig (1983) model to explain the liquidity insurance theory for the preference for
financial intermediation over direct financing. Discuss the implications of the equilibrium outcomes of the model.
SG CH. 1-2

The most important contribution of intermediaries is a steady flow of funds from surplus to deficit units. Financial
institutions fulfil the following main functions:

• The brokerage function: Financial intermediaries match transactors and provide transaction and other services. As a
result, they reduce transaction costs and remove information costs.

• The asset transformation function: Financial institutions issue claims that are far more attractive to savers (in terms
of lower monitoring costs, lower liquidity costs and lower price risk) than the claims issued directly by corporations.
Financial intermediaries hold the long-term, high-risk, large-denomination claims issued by borrowers and finance
this by issuing short-term, low-risk, small-denomination deposit claims. This process is often described in the
literature as qualitative asset transformation (QAT).

There are four further reasons for the dominance of intermediation over direct financing:

• transaction costs (e.g. Benston and Smith, 1976)

• liquidity insurance (e.g. Diamond and Dybvig, 1983)

• information-sharing coalitions (e.g. Leland and Pyle, 1977)

• delegated monitoring (e.g. Diamond, 1984, 1996).

This second reason, liquidity insurance, is the focus of this question.

Liquidity insurance relates to the fact that consumers are unsure of their future liquidity requirements in the face of
unanticipated events. In the absence of perfect information, consumers will maintain their own pool of liquidity.

Provided that shocks are not perfectly correlated across individuals, portfolio theory suggests that the total liquid
reserves needed by a bank will be less than the aggregation of the reserves required by individual consumers acting
independently. Diamond and Dybvig (1983) use this argument to account for the existence of banks. The view is that
banks enable consumers to alter their consumption patterns according to the influence of shocks, and the value of this
service permits a fee to be earned by the bank. In terms of their game theory model, there are two equilibria – the first
is the existence of a bank providing liquidity insurance and optimal risk-sharing among economic agents, whilst the
second is the situation of a bank run.

The second part of the answer should include a clear definition of a ‘bank run’.

Banks improve both the flow and quality of information. In addition, banks provide financial or secondary claims to
savers, which often have superior liquidity attributes compared with primary securities such as corporate equity and
bonds. Banks typically offer contracts that are highly liquid and low price-risk to savers on the liability side of the
balance sheet while holding relatively illiquid and higher price-risk assets. They achieve this through diversifying
some of their portfolio risks. Banks exploit the law of large numbers in this way, whereas savers are constrained to
holding relatively undiversified portfolios. The more diversification achieved by the bank, the lower the probability
that it will default on its liability obligations and the less risky, and more liquid, its claims.
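The diversification argument can be made concrete with a minimal sketch (figures are assumed for illustration): if loans default independently with probability p, the relative volatility of portfolio losses shrinks as the number of loans grows, which is the law of large numbers at work.

```python
import math

# Each loan defaults independently with probability p; the bank holds n loans.
# The ratio std/mean of total losses shrinks as 1/sqrt(n), illustrating why a
# diversified bank's liabilities are safer than any single undiversified claim.
p = 0.05  # assumed default probability per loan
for n in (1, 10, 100, 10_000):
    mean_losses = n * p
    std_losses = math.sqrt(n * p * (1 - p))  # binomial standard deviation
    print(f"n={n:>6}: std/mean of losses = {std_losses / mean_losses:.3f}")
```

The ratio falls by a factor of ten for every hundredfold increase in portfolio size, so the probability of losses large enough to threaten deposits becomes small for a well-diversified bank.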

Financing long-term assets through short-term deposits is a source of potential fragility for banks because they are
exposed to the possibility that a large number of depositors will decide to withdraw funds for reasons other than
liquidity needs. This results in a vulnerability to bank runs (i.e. the possibility that many depositors simultaneously
seek to redeem their claims out of concern that the bank will default if they wait). Bank runs would not be a problem if
they were confined to banks that were already (pre-run) insolvent.

In fact, the threat of a run would act as a discipline in giving banks an incentive to avoid insolvency or the appearance
of insolvency. However, bank runs clearly become problematic when depositors with imperfect information run on
banks that are not (pre-run) insolvent. In an important theoretical model, Diamond and Dybvig (1983) show that a run
can in itself cause a bank to default that would not otherwise have defaulted.

If enough other depositors are running, it becomes each depositor’s best strategy to run themselves. A bank run thus
becomes self-reinforcing, or a Nash equilibrium in game-theory terms. A bank attempting to meet demands by more
than a certain proportion of its depositors will incur losses so large that its default becomes inevitable.

In the model of Diamond and Dybvig, any event that causes depositors to anticipate a run also makes them anticipate
insolvency. It thus does in fact cause a run and so the outcome validates the anticipation. The possibility of a run
makes intermediation more costly in terms of depositors needing to monitor banks more closely and banks needing to
maintain more reserves. Whereas a bank run relates to an individual bank, a panic refers to a simultaneous run on
several banks. If runs are contagious, they will lead to a panic.

Diamond and Dybvig (1983) present a three-period model. Decisions are made in period 0 and play out over the next two periods (1 and 2). The production technology is assumed to require two periods to be productive, and any interruption of this process to
finance consumption provides a lower return. Consumers either consume ‘early’ in period 1 or ‘late’ in period 2. Early
consumption imposes a cost in the form of lower output, and hence consumption, in period 2.
One solution is to allow trade in claims to consumption in periods 1 and 2. The limitation is that neither type of agent knows ex ante whether they will need funds in period 1. However, agents can opt for a type of insurance contract, which could take the form of a demand deposit. This gives each agent the right either to withdraw funds in period 1 or to hold them to the end of period 2, and provides a superior outcome. The introduction of a bank offering fixed money claims overcomes the information problem by pooling resources and making larger payments to ‘early’ consumers and smaller payments to ‘late’ consumers than would be the case in the absence of a bank. Hence, the bank acts as an insurance agent through demand deposits.
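The mechanics of the deposit contract can be sketched numerically (the parameter values below are assumed for illustration, not taken from the original paper): each unit invested returns 1 if liquidated in period 1 and R > 1 if held to period 2, and a fraction t of depositors turn out to be early consumers.

```python
# Stylised Diamond-Dybvig deposit contract with assumed parameters.
R = 2.0      # long-run return on the illiquid technology (assumed)
t = 0.25     # fraction of depositors who turn out to be 'early' (assumed)
c1 = 1.2     # promised period-1 withdrawal per deposit, with 1 < c1 < R

# Good equilibrium: only early types withdraw in period 1.
invested_left = 1 - t * c1          # resources still invested, per depositor
c2 = R * invested_left / (1 - t)    # what each late consumer receives in period 2
print(f"late payoff c2 = {c2:.3f}")  # c2 > c1, so late types prefer to wait

# Run equilibrium: everyone demands c1 in period 1. Liquidating the whole
# portfolio yields only 1 per depositor, so the bank can pay c1 in full to
# at most a fraction 1/c1 of depositors before its assets are exhausted.
served_fraction = 1 / c1
print(f"in a run, only {served_fraction:.1%} of depositors are paid in full")
```

Because c1 > 1, depositors at the back of the queue get nothing once a run starts, which is exactly why running becomes each depositor's best response if enough others run.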

An alternative scenario would be that both types of agents withdraw funds in period 1, that is, there is a run on the
bank. Two policy initiatives can prevent this outcome:

• suspension of convertibility, which prevents the withdrawal of deposits

• provision by the regulatory authorities of a deposit insurance scheme, which removes the incentive to participate in a bank run because the deposits are ‘safe’. The regulator can finance the scheme by levying charges on the banks. Given that no bank runs occur, the charges will be minor after the initial levy to finance the required compensation fund.

Thus, the implication of the possibility of a bank run is that it provides a rationale for regulation. A key reason for
regulation is that uninsured depositors are likely to cause a bank run when faced with information of an adverse shock
to bank balance sheets. Diamond and Dybvig (1983) argue that deposit insurance could be introduced to prevent bank
runs, and there are many historical examples to support this theory.

The material in the subject guide provides an intuitive argument and some of the more formal theory; a good answer
should include both aspects. A very good answer would provide a full analysis of the implications of the theory, and
would include discussion of the relevance of the design of deposit contracts and critical evaluation of deposit
insurance.
Generally, there is much scope in this question for you to demonstrate rigorous analysis drawn from the readings
suggested above.

2018ZA(5), 2014ZA(5)
5. Discuss the motivations and techniques of Asset and Liability Management.

Banks nowadays deal in a very wide range of financial instruments both as assets and liabilities, and balance sheets
have become increasingly complex. In addition, off-balance-sheet (OBS) business, involving various contingent
liabilities (derivatives, letters of credit, guarantees), has grown significantly. However, the various demands of a
heterogeneous clientele bring with them many special and additional risks. As a result, banks have adopted an
increasingly systematic approach to managing their balance sheet and off-balance-sheet positions. 

This is widely known as asset and liability management (ALM), which covers the set of techniques used to
manage interest rate and liquidity risks, and deals with the structure of the balance sheet within the context of
funding and regulatory constraints and profitability targets. 

ALM involves the continual monitoring of the existing position of a bank, evaluating how this differs from the desired
position, and undertaking transactions (including hedging) to move the bank towards the desired position. The
objective is to enhance profitability, while controlling and limiting different risks, as well as complying with the
constraints of banking supervision. Therefore, a bank must assess the risks and benefits of all assets and liabilities in
the light of the contribution they make to the earnings and to the risks of its total portfolio. Banks have to continually
adjust assets and liabilities, both by varying the terms they offer for business with clients and by regular trading in
financial markets.

Issues associated with ALM

There are five major problem areas relating to asset and liability management.

1. Seasonal liquidity requirements tend to be repetitive in extent, duration and timing. Forecasts of seasonal
needs are usually based on past experience. Because seasonal requirements are generally predictable, only
moderate risk is associated with the use of bought-in forms of liquidity to cover seasonal liquidity
requirements.

2. On the other hand, liquidity requirements relating to cyclical needs are much more unpredictable.
Bought-in funds to provide liquidity needs during booming economic cycles tend to be costly. Credit demands
are high during such periods and liability sources tend to become expensive. They may be limited by the
money market’s lack of confidence in a bank’s ability to repay its obligations and the market may be restricted
to only the larger operators.

3. Large banks with broad access to money market sources have few problems during such periods,
whereas smaller banks tend to rely on their (less costly) non-bought-in liquid asset holdings.

4. The longer-term liquidity needs of banks are more complex than the aforementioned seasonal and cyclical
requirements. If loan growth exceeds deposit growth, banks must budget for longer-term liquidity. Such net
growth can be financed by selling liquid assets or purchasing funds. The major problem with fulfilling such
longer-term liquidity demands is that the supply of saleable assets and the amount of borrowing permissible
are limited. In addition, a bank should always limit its use of bought-in liquidity, so as to have enough
‘borrowing capacity’ if future unpredictable liquidity needs occur.

5. Liquidity risk is often an inevitable outcome of banking operations. Since a bank typically collects
deposits which are short term in nature and lends long term, the gap between maturities leads to liquidity risk
and a cost of liquidity. The bank’s liquidity situation can be captured by the time profiles of the projected
sources and uses of funds, and banks should manage liquidity gaps within acceptable limits.

Interest rate risk 

Besides liquidity risk, banks are subject to interest rate risk. Interest rate risk relates to the exposure of banks’ profits
to interest rate changes which affect assets and liabilities in different ways. Banks are exposed to interest rate risk
because they operate with unmatched balance sheets.
The impact of interest rate changes in the macro economy on the risk exposure of banks is a matter of significant
concern to both bankers and regulators. Because banks engage in maturity transformation, unexpected and significant
market rate changes may lead to an unacceptable number of banks and other financial institutions encountering
difficulties, or even failing. Full awareness of such costs is needed in order to evaluate policy alternatives. At the same
time, management needs to understand and manage its own exposure to interest rate risk, as banks’ costs and revenues are both significantly related to market interest rates.

If bankers believe strongly that interest rates are going to move in a certain direction in the future, they have a strong
incentive to position the bank accordingly: when an interest rate rise is expected, they will make assets more interest-
sensitive relative to liabilities, and do the opposite when a fall is expected. Assets and liabilities can obviously be
mixed to increase or decrease exposures, and techniques such as interest-margin variance analysis (IMVA) are used to
evaluate current and project future exposures.

Interest-margin variance analysis (IMVA)

IMVA is a technique used to analyse the interest income earned on a bank’s assets. Variance analysis involves breaking down the
changes in interest margin into three factors: changes in rates, volume and mix. Rate variances reflect movement in
asset yields and the cost of funds, and bank management has little direct influence over these rates. One aspect of
balance sheet management is the attempt to anticipate interest rate changes by adopting the appropriate volume and
mix strategies. This involves considering the implications of adding more assets or increasing balance sheet volume.
Mix concerns the effects of a change in the composition of a bank’s asset and liability structure.

One of the practical aims of IMVA is to help improve managerial understanding of the variables that affect an
individual bank’s net interest margin. The technique may help a bank to increase its margin over time, and it may be
an aid for controlling fluctuations in net interest margin over the business cycle.
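The decomposition can be sketched for a single earning-asset class (all figures assumed for illustration; a full IMVA would also decompose the mix effect across asset classes):

```python
# Stylised rate/volume variance decomposition for one earning-asset class.
v0, r0 = 1_000.0, 0.05   # last period: volume and yield (assumed)
v1, r1 = 1_200.0, 0.06   # this period: volume and yield (assumed)

income_change = v1 * r1 - v0 * r0
rate_variance = (r1 - r0) * v0          # effect of the yield moving, volume held fixed
volume_variance = (v1 - v0) * r0        # effect of the volume moving, yield held fixed
joint_variance = (r1 - r0) * (v1 - v0)  # interaction of the two changes

# The three components sum exactly to the total change in interest income.
assert abs(income_change - (rate_variance + volume_variance + joint_variance)) < 1e-9
print(rate_variance, volume_variance, joint_variance)  # 10.0 10.0 2.0
```

Management has little control over the rate variance but can steer the volume and mix components, which is the point of the technique.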

Liquidity gap analysis (+example)

Liquidity gap analysis is the most widely known ALM technique, and is used for managing both liquidity risk and
interest rate risk. Liquidity risk is generated in the balance sheet by the mismatch between the sizes and maturities of
assets and liabilities. This stems from one of the bank’s core functions: banks are in the business of maturity transformation; that is, they lend for longer periods than those for which they borrow. As a result, they expect to
have a mismatched balance sheet with short-term liabilities greater than short-term assets and with assets greater than
liabilities at medium and long term. The liquidity risk relates to the possibility of holding inadequate resources to
balance the assets.

The liquidity gap is typically defined as the difference between net liquid assets and volatile liabilities. If the bank’s
assets exceed liabilities, the gap should be funded in the market. In the reverse case, the excess resources must be
invested.
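A minimal gap profile by maturity bucket can be sketched as follows (amounts are assumed; following the convention above, a positive gap of assets over liabilities is a deficit to be funded in the market, a negative gap an excess to be invested):

```python
# Stylised liquidity gap profile by maturity bucket (amounts are assumed).
buckets = ["0-1m", "1-3m", "3-12m", "1-5y", ">5y"]
assets = [100, 150, 300, 450, 200]       # outstanding assets in each bucket
liabilities = [400, 250, 250, 200, 100]  # outstanding liabilities in each bucket

cumulative = 0
for b, a, l in zip(buckets, assets, liabilities):
    gap = a - l          # per-bucket (marginal) gap
    cumulative += gap    # cumulative gap: the time profile of funding needs
    print(f"{b:>6}: gap = {gap:>5}, cumulative = {cumulative:>5}")
```

The profile shows the typical shape for a maturity-transforming bank: short-term liabilities exceed short-term assets, with the position reversing at longer maturities, and management's task is to keep each gap within acceptable limits.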

The maintenance of adequate liquidity remains one of the most important features of banking. Banks can either store
liquidity in their assets or purchase it in money and deposit markets. Sources of liquidity are diverse. They may come
from the asset side of the balance sheet through the ability to sell, discount or pledge assets at short notice. Liquidity
may come from the liabilities side of the balance sheet through banks’ ability to raise new money at short notice via
the money market. Most commonly, however, liquidity can be generated from the maturity structure of the balance
sheet (where expected outflows of funds are matched by expected inflows).

Because liquid assets have lower returns, stored liquidity has an opportunity cost that results in a trade-off between
liquidity and profitability. The aim of ALM is to increase the earning capacity of the bank while at the same time
ensuring an adequate liquidity cushion.
Interest rate gap analysis (+example)

Interest rate risk policy is typically based on selected target variables, with net interest margin being a common target.
Banks will aim to optimise the risk–reward trade-off for these targets.

The first step in assessing interest rate risk is for the bank manager to decide which assets and liabilities are rate-
sensitive – in other words to identify those that have interest rates that will be reset (re-priced) within a given time
interval (e.g. six months). The gap for a given period is defined as the difference in value between interest-sensitive
assets and interest-sensitive liabilities.
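A worked repricing-gap calculation (figures assumed for illustration): the first-order change in net interest income for a parallel rate shift is simply the gap multiplied by the rate change.

```python
# Stylised repricing gap over a six-month horizon (amounts are assumed).
rate_sensitive_assets = 700.0        # assets repricing within the interval
rate_sensitive_liabilities = 900.0   # liabilities repricing within the interval

gap = rate_sensitive_assets - rate_sensitive_liabilities  # -200: liability-sensitive
for delta_r in (0.01, -0.01):        # parallel shift of +/- 1 percentage point
    delta_nii = gap * delta_r        # first-order change in net interest income
    print(f"rate shift {delta_r:+.0%}: change in NII = {delta_nii:+.1f}")
```

With a negative gap the bank loses income when rates rise and gains when they fall, which is why a bank expecting a rate rise would make assets more interest-sensitive relative to liabilities.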
2018ZA(8), 2015ZA(3), 2013ZA(3)
8. Analyse the roles of the gearing ratio and the risk-assets ratio in banking regulation.

1) The credit ‘crunch’ led to considerable turmoil for banks and capital markets from 2007 onwards. The
symptoms of the credit market turmoil started in late 2006 when the default rate on lower quality (termed ‘sub-prime’)
US mortgages increased. The substantial losses reported by two Bear Stearns hedge funds which had invested in
structured securities backed by sub-prime mortgages triggered the squeeze. Market participants drove up interbank
interest rates (e.g. Libor) drastically, and money market funds were unwilling to buy commercial paper backed by sub-prime mortgage-backed securities (MBS), due to the uncertainty in the sub-prime market. This led to a sudden drying
up in liquidity as banks found it difficult to make payments on sub-prime MBS by rolling over short-term borrowing
in the money market.

The credit market turbulence can be traced back to institutional changes in housing finance in industrial countries. The
deregulation of housing finance markets led to increased competition and to the integration of housing finance into capital markets as new agents entered the mortgage market. An extended global economic boom and worldwide
decreases in interest rates were macroeconomic factors behind the housing market boom in many countries.

The bank run on Northern Rock was a dramatic event in the context of this credit crunch. This was the first bank run
in the UK for 150 years, and naturally led to a highlighting of many questions about the efficacy of regulation. The
Bank of England was called on as lender of last resort.

Although Northern Rock depositors had some protection from deposit insurance, they still preferred to run on the bank
in the light of uncertainty regarding the bank’s future life because it had been forced to rely on the lender of last resort
facility. Depositors also lacked confidence that the bank was suffering merely from a liquidity problem rather than a solvency problem. Thus, elements of the theory of bank runs were evident in the behaviour of bank customers during the episode. The Northern Rock case prompted much discussion, including the argument that earlier regulatory intervention is needed.

2) One of the main arguments in favor of banking regulation comes from the fact that unregulated private actions
create outcomes whereby social marginal costs are greater than private marginal costs. The social marginal costs
occur because bank failure has a far greater effect throughout the economy than, for example, a manufacturing
concern because banks are used to make payments and as a store for savings. In contrast, the private marginal costs
are costs associated with the shareholders and the employees of the company. Marginal private costs are likely to be
much smaller than social marginal costs.

Another key argument for regulation is that uninsured depositors are likely to cause a bank run when faced with
information about an adverse shock to bank balance sheets. ‘Bank run’ occurs when a mass of depositors in a single
bank wish to exchange their deposits for money. This argument has support both in history (Northern Rock in
September 2007) and in theory (Diamond and Dybvig (1983) model).

3) However, there are also arguments against regulation. One such argument is the moral hazard argument: once a depositor is insured, he no longer has an incentive to monitor the bank, because he knows that he will not suffer any losses. In turn, riskier banks do not have to pay higher rates to their depositors to compensate them for riskier deposits.

Secondly, regulation raises banks’ costs, for example by requiring higher capital ratios. Moreover, the moral hazard created by deposit insurance will drive even conservative banks to take on extra risk when faced with competition from bad banks.

Thirdly, one of the hidden costs of excessive regulation is a potential loss of innovation dynamism.

4) As mentioned above, the existence of deposit insurance will cause banks to hold riskier assets and keep fewer reserves. This provides support for regulation based on reserve and capital adequacy ratios. There are two main
measures of capital adequacy: the gearing ratio and the risk–assets ratio.

The gearing ratio is based on bank deposits relative to bank capital. It is an indicator of how much of the deposit base
is covered if a given proportion of the bank’s borrowers default.
Let the balance sheet be described as:

A = D + E

where A is total assets, D is deposits and E is equity.

The gearing ratio is g = D / E, hence D = gE, and:

A = gE + E
A = E(g + 1)

If μ is the rate of default on assets, and the maximum loss that equity can absorb is max μA = E, then:

max μA = E = A / (1 + g)
max μ = 1 / (1 + g)

If μA of assets is lost due to default, it can be absorbed by bank capital (E) and deposits are covered. In other words,
the maximum default rate which can be absorbed without affecting deposits is equal to 1/(1 + gearing ratio).
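A quick numerical check of this result (the gearing figure is assumed for illustration):

```python
# With gearing g = D/E = 19 (deposits are 19x equity), the maximum asset
# default rate that equity can absorb is 1/(1 + g) = 5%.
g = 19.0
max_default_rate = 1 / (1 + g)
print(f"max absorbable default rate = {max_default_rate:.1%}")  # 5.0%

# Cross-check directly from the balance sheet A = D + E:
E = 100.0
D = g * E
A = D + E
# A loss of (max default rate) x A exactly exhausts equity, leaving deposits covered.
assert abs(max_default_rate * A - E) < 1e-9
```

The lower the gearing, the larger the default rate a bank can absorb before deposits are impaired, which is why regulators cap gearing.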

The other common measure used is the risk–assets ratio. Historically, various countries’ regulators had different
approaches to capital adequacy, some relying on quantitative (balance sheet ratio) approaches, with others adopting
qualitative procedures. This led to a situation in which banks in some countries faced more conservative capital requirements than banks in others. The increasing international integration of financial markets, together with the need for unified standards for banks in different countries, led to the creation of Basle I. The
agreement explicitly incorporated the different credit risks of assets (both on and off the balance sheet) into capital
adequacy measures.

Basle I was based on employing the risk–assets ratio to determine the adequacy of a bank’s capital. The risk–assets ratio is equal to capital divided by credit risk-adjusted assets, with two versions defined: the Tier 1 (core capital) ratio, which must be at least 4 per cent of risk-adjusted assets, and the total capital (Tier 1 plus Tier 2) ratio, which must be at least 8 per cent.

A credit risk-adjusted asset is calculated as the value of the asset multiplied by an appropriate credit risk weight. The
higher the default risk associated with an asset, the higher the risk weight assigned. Off-balance sheet items are treated
similarly by assigning a credit equivalent percentage that converts them to on-balance sheet items and then the
appropriate credit risk weight applies.
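The calculation can be sketched with the familiar Basle I credit-risk buckets (0% sovereign, 20% interbank, 50% residential mortgage, 100% corporate); the portfolio amounts and capital figure below are assumed for illustration.

```python
# Stylised Basle I risk-weighted asset (RWA) calculation with assumed amounts.
portfolio = [
    # (asset class, amount, Basle I credit risk weight)
    ("government bonds",      300.0, 0.00),
    ("interbank claims",      200.0, 0.20),
    ("residential mortgages", 400.0, 0.50),
    ("corporate loans",       500.0, 1.00),
]

rwa = sum(amount * weight for _, amount, weight in portfolio)
capital = 120.0  # assumed total capital
risk_assets_ratio = capital / rwa
print(f"RWA = {rwa}, risk-assets ratio = {risk_assets_ratio:.1%}")
```

Here RWA is 740 against total assets of 1,400, so the same capital supports a much higher risk-assets ratio than a crude unweighted ratio would suggest, illustrating how the weighting rewards holding lower-risk assets.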

Basle I attracted some criticism during the 1990s. Most importantly, it was argued that the risk weights were rather
unsophisticated and inadequately related to default risk. Another criticism is that the capital requirement is additive
across loans. This means that there is no reduction in capital requirements arising from greater diversification of a
bank’s loan portfolio. Further, differences in taxes and accounting rules could mean that measurement of capital varies
widely across countries.

A revised capital adequacy framework, Basle II, was then proposed. Basle II consists of three mutually reinforcing
‘pillars’ which contribute to the safety and soundness of the financial system.

Basle II proposes methods by which banks can calculate the capital required to protect:

- against operational risk:
• the Basic Indicator Approach
• the Standardised Approach
• the Advanced Measurement Approach

- against market risk:
• the Standardised Approach
• the Internal Models Approach

- against credit risk:
• the Standardised Approach
• the Foundation IRB Approach
• the Advanced IRB Approach.
As was mentioned above, two options are available to banks for the measurement of credit risk. First, the standardised
approach, which is conceptually the same as under Basle I, but more risk-sensitive. Secondly, the internal ratings-
based (IRB) approach, whereby banks are allowed to use their internal estimates of borrower creditworthiness, subject
to strict methodological and disclosure standards.

Later, Basle III was developed. It aimed to improve the banking sector’s ability to absorb shocks arising from financial and economic stress, and to strengthen risk management, transparency and disclosure.
2018ZA(4), 2013ZA(4)
4. Compare and contrast the nature and formulation of credit rating systems and credit scoring models as
measures of credit risk.

1) A credit rating is an evaluation of the credit risk of a prospective debtor (an individual, a business, company or a
government), predicting their ability to pay back the debt, and an implicit forecast of the likelihood of the debtor
defaulting.

Rating systems often serve as a tool for credit policy within banks. For example, some minimum rating, such as investment grade or AA– or above, might be required for granting a loan, and ratings also feed into banks’ capital adequacy regulation. Additionally, some countries’ regulators use ratings to restrict or prohibit banks’ purchases of low-grade securities. Also, banks that have an established credit reputation, low credit risk and good prospects for future profits enjoy a relatively low cost of capital when issuing securities.

2) Internal ratings refer to credit ratings assigned by banks to their borrowers, using proprietary scales that vary across
banks. External ratings are assigned by credit rating agencies using publicly disclosed scales that vary across agencies.
Both types of ratings serve as the foundation for new approaches to capital adequacy regulation.

Typical components of internal rating systems address the borrower’s risk, the role of a supporting entity (if any) and
facility (transaction) risk. The intrinsic rating arises from an assessment of the borrower’s strengths, weaknesses,
competitive advantage and financial data. If a supporting entity exists, combining assessment of that support with the
borrower’s intrinsic rating provides the overall rating.

There are three main categories of internal rating system:

1. Statistical-based processes (typically include both quantitative (e.g. financial ratios) and some qualitative but
standardised (e.g. industry, payment history) factors). Examples of such models include the use of credit scoring and
the KMV approach.

2. Constrained expert judgment-based processes. Under this approach, a quantitative tool determines the grade, but the
raters may adjust the final grade up or down based on their judgment.

3. Expert judgment-based processes. Statistical models may still have a role to play, though the institution has the
right to significantly deviate from statistical model indications in assigning a grade.

External ratings are the most widely known credit ratings. Their role is to evaluate and quantify credit risks, within a
context of effective benchmarking of risks across industries and countries. They aim to provide lenders and investors
with independent and objective credit opinions. There has been considerable recent growth in the market for external
ratings (particularly in Europe), and they are playing an increasing regulatory role.

External ratings depend on fundamental analysis, where ratings assess the borrower ‘through the economic cycle’. The
relevant rating criteria include consideration of the riskiness of activities, sources of funding, capital and reserves,
performance and earnings, and market environment.

External ratings exist only for large listed companies, which means that statistical analysis of default can be
misleading because the sample of borrowers rated by agencies is usually not representative of banks’ portfolios. Banks
must rely on internal rating for small and medium-sized corporate counterparties.

3) The ability to measure the probability of default is influenced by the level of information available on the borrower.
For retail customers, the bank needs to collect information internally or from specialist credit agencies. For larger
corporate borrowers, there are additional sources of publicly available information such as company accounts, stock
and bond prices and analysts’ reports. In the latter case, the availability of more information along with the lower
average cost of collecting such information, allows banks to use more sophisticated and usually more quantitative
methods in assessing default probabilities for large borrowers compared to small borrowers. However, quantitative
assessments of small borrowers are becoming increasingly feasible and less costly.
In principle, banks can use very similar models to assess the probabilities of default on both bonds and loans. Given
this, banks can use credit risk models either in making lending decisions or when considering investing in corporate
bonds offered either publicly or privately.

A range of different credit risk models have been applied in practice, among them credit scoring models.

Rather than evaluating borrower-specific and market-specific factors in a subjective or qualitative manner, banks will
commonly apply weights to different factors within a more objective or quantitative approach. The principle of all
statistical models of credit risk is to fit observable attributes (e.g. financial data on firms) to the variables to be
predicted (e.g. default or non-default, or the credit rating).

Credit scoring models use the borrower characteristics either to calculate a score that represents the applicant’s
probability of default or to sort borrowers into different default risk classes. To follow this approach, objective
economic and financial measures of risk for different classes of borrower must be identified. At the retail level,
commonly used factors are income, assets, age, occupation and location. Scoring applies well to individual borrowers
in retail banking, because of the large volume of data. Implementation of scoring for corporates is more challenging.
For commercial borrowers, cash flow information and financial ratios are most commonly employed.

Credit scoring uses techniques for discriminating between defaulters and non-defaulters. A ‘scoring function’ provides
a score (a number) from observable attributes. Scoring techniques can also provide ‘posterior default probabilities’
(i.e. probabilities of default given the score values). Once the appropriate factors have been identified, a statistical
technique is used to quantify or score the default risk probability or default risk classification. The most popular
techniques are linear probability models, logit models and linear discriminant analysis.

The linear probability model uses past data as inputs to a model to explain the repayment experience (whether or not
default occurred) on pre-existing loans. The relative importance of different factors used to explain past repayment
performance is then used to forecast repayment probabilities on new loans.

The logit model represents an adjustment to the linear probability model to ensure that the estimated range of default
probabilities lies between zero and one.
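A toy scoring function illustrates the adjustment (the coefficients and borrower attributes below are assumed for illustration, not estimated from data): the logistic transform guarantees the output lies in (0, 1), unlike a linear probability model.

```python
import math

# Toy logit scoring function with assumed (not estimated) coefficients.
def default_probability(income, debt_ratio, years_employed,
                        b0=-3.0, b_inc=-0.00003, b_debt=4.0, b_emp=-0.15):
    # Linear score from borrower attributes, as in the linear probability model...
    z = b0 + b_inc * income + b_debt * debt_ratio + b_emp * years_employed
    # ...then the logistic transform maps the score into (0, 1).
    return 1 / (1 + math.exp(-z))

p = default_probability(income=40_000, debt_ratio=0.6, years_employed=3)
print(f"estimated default probability: {p:.1%}")
```

Higher income and longer employment lower the score (negative coefficients), while a higher debt ratio raises it; in practice the coefficients would be estimated by maximum likelihood on past repayment data.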

Whereas the two models above generate an expected probability of default if a loan is made, linear discriminant
models classify borrowers into high or low default-risk classes contingent on their observed characteristics. One
problem with this approach is that the models usually discriminate only between two extreme cases of borrower
behaviour, that is, default or no default. In practice, various classes of default exist, from non-payment or delay of
interest payments to outright default on all promised interest and principal payments.

Discriminant analysis distinguishes statistically between two or more groups of observations. In the case of credit risk,
it typically serves to discriminate between borrowers likely to default and those not likely to default, up to a given
horizon. Credit scoring on this basis does not use a conceptual framework, but simply fits a function that best
discriminates between high-risk and low-risk populations.

Credit scoring models have been most widely used and most successful in the evaluation of smaller consumer loans,
such as credit card applications. More recent approaches to credit risk modelling have sought to apply financial theory
and to utilise financial market data to produce default probabilities. This is an area of considerable current research
effort by banks.
2018ZA(3), 2016ZA(7), 2015ZA(7), 2014ZA(7), 2013ZA(7)
3. Discuss the main sources of risk in commercial banking, and critically evaluate the approaches used to conduct
risk-adjusted performance measurement.

1) The main types of risk in commercial banking are credit risk, liquidity risk, interest rate risk, and market risk.

Credit risk is the most obvious risk in banking, and possibly the most important in terms of potential losses. This risk
relates to the possibility that loans will not be paid or that investments will deteriorate in quality or go into default
with consequent loss to the bank. This risk is one of the most important in banking because even a perfectly matched
balance sheet will remain subject to credit risk and because the default of a small number of key customers could
generate very large losses and in an extreme case could lead to a bank becoming insolvent.

Liquidity risk is the risk that a company or bank may be unable to meet short-term financial demands. This usually occurs because of an inability to convert a security or hard asset into cash without a loss of capital and/or income in the process. Liquidity risk is often an inevitable outcome of banking operations: a bank typically collects short-term deposits and lends long term, and this gap between maturities leads to liquidity risk and a cost of liquidity.

Interest rate risk relates to the exposure of banks’ profits to interest rate changes which affect assets and liabilities in
different ways. Banks are exposed to interest rate risk because they operate with unmatched balance sheets. If bankers
believe strongly that interest rates are going to move in a certain direction in the future, they have a strong incentive to
position the bank accordingly: when an interest rate rise is expected, they will make assets more interest-sensitive
relative to liabilities, and do the opposite when a fall is expected. Assets and liabilities can obviously be mixed to
increase or decrease exposures, and techniques such as interest-margin variance analysis (IMVA) are used to evaluate
current and project future exposures.
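The direction of this exposure can be sketched with a simple repricing-gap relation, where the change in net interest income is the gap between rate-sensitive assets and liabilities multiplied by the rate change. This is a standard illustration rather than IMVA itself, and all figures are hypothetical:

```python
def nii_change(rate_sensitive_assets, rate_sensitive_liabilities, rate_change):
    """Change in net interest income from a parallel rate move:
    (RSA - RSL) * delta_r. A negative gap loses when rates rise."""
    return (rate_sensitive_assets - rate_sensitive_liabilities) * rate_change

# A liability-sensitive bank (hypothetical, in millions): rates rise
# by 1%, so net interest income falls
delta_nii = nii_change(500.0, 600.0, 0.01)
```

A banker expecting a rate rise would therefore try to make the gap positive by increasing asset rate-sensitivity relative to liabilities, as described above.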

The last is market risk, which relates to the risk of loss associated with adverse deviations in the value of the
trading portfolio, which arises through fluctuations in, for example, interest rates, equity prices, foreign exchange rates
or commodity prices. It arises where banks hold financial instruments on the trading book, or where banks hold equity
as some form of collateral. Many large banks have dramatically increased the size and activity of their trading
portfolios, resulting in greater exposure to market risk. The industry standard for dealing with market risk is the
Value-at-Risk (VaR) model.

2) The problem of credit risk can easily be seen in the context of the credit market turmoil that started in late 2006,
when the default rate on lower quality (termed 'sub-prime') US mortgages increased. The substantial losses reported
by two Bear Stearns hedge funds which had invested in structured securities backed by sub-prime mortgages triggered
the squeeze. Market participants drove up interbank interest rates (e.g. Libor) drastically, and money market funds
were unwilling to buy commercial paper backed by sub-prime mortgage-backed securities due to the uncertainty in
the sub-prime market. This led to a sudden drying up in liquidity as banks, subject to liquidity risk, found it difficult to
make payments on sub-prime MBS by rolling over short-term borrowing in the money market.

There was also the case of Northern Rock, which relied on liquidity from wholesale sources and pursued an unusual business model for a retail bank in its heavy reliance on money-market funding. The run on Northern Rock was the first bank run in the UK for 150 years, and naturally highlighted many questions about the efficacy of
regulation. The Bank of England was called on as lender of last resort. Consumers demonstrated a lack of awareness
and/or confidence in deposit insurance arrangements, whose provisions were subsequently extended in the UK.

Although Northern Rock depositors had some protection from deposit insurance, they still preferred to run on the bank given the uncertainty over its future, because it had been forced to rely on the lender of last resort facility. Depositors also lacked confidence that the bank was suffering only from a liquidity problem rather than a solvency problem. Thus, elements of the theory of bank runs were evident in the behaviour of bank customers during the episode. The Northern Rock case prompted much discussion, including the point that risk analysis is essential even for well-performing banks.

3) Most traditional measures of bank performance fail to adequately capture the risk associated with achieving a given level of performance. Banks have sought to address this through the increasing adoption of risk-adjusted measures of performance.

The main functions of risk-adjusted profitability measures are:


 reporting and comparing profitability and risk across business units, products and customers, and comparisons
with profitability targets
 aiding the pricing of risks
 allocating risks and capital across business units, products and customers, based on the risk–return profile.

The risk-adjusted return on capital (RAROC) is an important approach to define risk-adjusted profitability.
The RAROC ratio is applied in different ways across banks.

RAROC = risk-adjusted earnings / capital

A related ratio is termed return on risk-adjusted capital (RORAC) and is defined as:

RORAC = earnings / risk-adjusted capital

With RAROC, the earnings measure is adjusted for risk, which typically involves subtracting expected losses. With RORAC, the capital measure is adjusted for risk, which typically means that it represents a maximum potential loss based on the probability distribution of future returns. The minimum required value of the RAROC ratio (hurdle rate) should be set equal to the minimum return required by shareholders.
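The two ratios can be sketched with hypothetical figures (all in millions). The expected-loss deduction in RAROC and the VaR-style capital in RORAC follow the definitions above; every number below is illustrative only:

```python
revenues = 12.0               # hypothetical annual revenues of a business unit
operating_costs = 4.0
expected_losses = 2.0         # expected credit losses, deducted from earnings
allocated_capital = 40.0      # capital allocated to the unit
risk_adjusted_capital = 30.0  # e.g. a VaR-based maximum potential loss

# RAROC: earnings adjusted for risk, divided by (unadjusted) capital
raroc = (revenues - operating_costs - expected_losses) / allocated_capital

# RORAC: unadjusted earnings divided by risk-adjusted capital
rorac = (revenues - operating_costs) / risk_adjusted_capital

hurdle_rate = 0.12            # assumed minimum return required by shareholders
unit_adds_value = raroc > hurdle_rate
```

Here the unit's RAROC of 15% clears the assumed 12% hurdle, so on this measure the unit is adding shareholder value.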

A related risk-adjusted performance measure is the concept of Economic Value Added (EVA). The accounting
approaches of ROE and ROA are often criticised because they fail to provide information on how a bank’s
management is enhancing shareholder value.

EVA is a joint measure of profitability and high-quality growth. It measures how much value is created over a given
period. The approach encourages management to aim for profitable growth and discourages excessive attention on
current earnings.

EVA = (r – k) K

where r is the return on capital, k is the cost of capital and K is the capital invested at the margin.
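A minimal sketch of the EVA formula, using hypothetical figures for the return on capital, the cost of capital and the capital invested:

```python
def eva(r, k, capital_invested):
    """EVA = (r - k) * K: value is created over the period only when
    the return on capital exceeds its cost."""
    return (r - k) * capital_invested

# Hypothetical: 15% return on capital, 10% cost of capital,
# 200 (say, millions) of capital invested at the margin
value_added = eva(0.15, 0.10, 200.0)
```

A positive EVA indicates that management is generating returns above the cost of the capital employed, i.e. profitable growth rather than growth for its own sake.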

4) There are some disadvantages of these methods, however. One problem in estimating RAROC is the measurement
of loan risk.

Also, there are some issues when trying to measure EVA for the entire bank.
- First, it is often difficult to obtain an accurate measure of a firm's cost of capital. While there are many models
to estimate the cost of capital, ranging from the capital asset pricing model (CAPM) to the dividend growth
model, these procedures can produce substantially different estimates.
- Second, the measurement of bank capital is itself ambiguous: it may include stockholders' equity, loan loss
reserves, deferred (net) tax credits, nonrecurring items such as restructuring charges, and unamortised securities gains.
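The first point can be illustrated numerically: even with plausible inputs, the CAPM and the dividend growth model give noticeably different cost-of-equity estimates. All inputs below are hypothetical:

```python
def capm_cost_of_equity(risk_free, beta, expected_market_return):
    """CAPM: k = rf + beta * (E[rm] - rf)."""
    return risk_free + beta * (expected_market_return - risk_free)

def dividend_growth_cost_of_equity(next_dividend, price, growth):
    """Gordon dividend growth model: k = D1 / P0 + g."""
    return next_dividend / price + growth

k_capm = capm_cost_of_equity(0.03, 1.2, 0.09)            # hypothetical inputs
k_ddm = dividend_growth_cost_of_equity(2.0, 25.0, 0.04)  # hypothetical inputs
estimate_gap = abs(k_capm - k_ddm)
```

Nearly two percentage points separate the two estimates here, which directly affects any EVA figure computed from them.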
2017ZA(6)
6. Compare and contrast internal and external credit rating systems, and critically evaluate their roles in capital
adequacy regulation.

1) A credit rating is an evaluation of the credit risk of a prospective debtor (an individual, a business, company or a
government), predicting their ability to pay back the debt, and an implicit forecast of the likelihood of the debtor
defaulting.

Rating systems often serve as a tool for credit policy within banks. For example, some minimum rating, such as investment grade or AA- or above, might be required before a loan is granted, and such rating thresholds also feed into banks' capital adequacy regulation. Additionally, some countries' regulators use ratings to restrict or prohibit banks' purchases of low-grade securities. Banks that have an established credit reputation, low credit risk and good prospects for future profits also benefit from a relatively low cost of capital when issuing securities. Risk quality covers both the probability of default and the recoveries in the event of default, and is commonly captured through credit ratings.

2) Internal ratings refer to credit ratings assigned by banks to their borrowers, using proprietary scales that vary across
banks. External ratings are assigned by credit rating agencies using publicly disclosed scales that vary across agencies.
Both types of ratings serve as the foundation for new approaches to capital adequacy regulation.

Typical components of internal rating systems address the borrower’s risk, the role of a supporting entity (if any) and
facility (transaction) risk. The intrinsic rating arises from an assessment of the borrower’s strengths, weaknesses,
competitive advantage and financial data. If a supporting entity exists, combining assessment of that support with the
borrower’s intrinsic rating provides the overall rating.

There are three main categories of internal rating system:

1. Statistical-based processes (typically include both quantitative (e.g. financial ratios) and some qualitative but
standardised (e.g. industry, payment history) factors). Examples of such models include the use of credit scoring and
the KMV approach.

2. Constrained expert judgment-based processes. Under this approach, a quantitative tool determines the grade, but the
raters may adjust the final grade up or down based on their judgment.

3. Expert judgment-based processes. Statistical models may still have a role to play, though the institution has the
right to significantly deviate from statistical model indications in assigning a grade.

External ratings are the most widely known credit ratings. Their role is to evaluate and quantify credit risks, within a
context of effective benchmarking of risks across industries and countries. They aim to provide lenders and investors
with independent and objective credit opinions. There has been considerable recent growth in the market for external
ratings (particularly in Europe), and they are playing an increasing regulatory role.

External ratings depend on fundamental analysis, where ratings assess the borrower ‘through the economic cycle’. The
relevant rating criteria include consideration of the riskiness of activities, sources of funding, capital and reserves,
performance and earnings, and market environment. External ratings exist only for large listed companies, which
means that statistical analysis of default can be misleading because the sample of borrowers rated by agencies is
usually not representative of banks’ portfolios. Banks must rely on internal rating for small and medium-sized
corporate counterparties.

3) It is also necessary, however, to highlight the failures of ratings in the context of structured finance (as identified
during the credit crunch). The subprime crisis that began in the summer of 2007 may rank as one of the most traumatic
global developments of the last one hundred years. It caused dismay and panic throughout developed countries.

Over the period prior to 2007 investors showed a high-risk appetite that stimulated further development by financial
institutions of techniques for unbundling and distributing risks through financial markets. This led to a marked
expansion of the so-called sub-prime mortgage market. In 2005 and 2006, the competition among the sub-prime
originators intensified. To maintain volumes and/or increase market share, originators introduced product innovations
such as teaser rates. At the same time, there was an apparent weakening of lending standards. Loans were made with
increasingly high loan-to-value ratios and often without full documentation. Most originators sold the loans to larger
banks, who in turn securitized them and sold them to end-investors.

As interest rates increased and house prices fell rapidly in the U.S. in 2006 there was an increase in mortgage defaults,
particularly among sub-prime borrowers (especially for those who moved off teaser interest rates to higher rates).
These mortgage defaults led to a decline in the quality, and hence the value, of securitised debt, spreading the
problem beyond the sub-prime originating banks.

The main specific criticism of the agencies that followed the subprime crisis is that ratings were defective and did not
warn of trouble ahead because of the perverse incentives created by a conflict of interest at the heart of the business
model adopted by the agencies. The idea is that because rating agencies are funded by fees paid by issuers or sellers of
securities they have strong incentives to inflate ratings to please their customers.

4) Both internal and external credit rating systems play a significant role in capital adequacy regulation. From a
regulatory viewpoint, the issue of adequate capital is critically important for the stability of the banking system. Even
for the best-managed bank which has effective risk management procedures, there always remains the possibility of
risks materializing that produce losses. Therefore, it is essential for banks to have adequate capital backing. As
banking risks have grown, supervisory authorities have demanded tough requirements for capital adequacy.

5) Historically, various countries' regulators had different approaches to capital adequacy, some relying on quantitative (balance sheet ratio) approaches, with others adopting qualitative procedures. This meant that banks in some countries faced more conservative capital requirements than banks in others. The increased integration of financial markets internationally, along with the need for unified standards for banks in different countries, led to the creation of Basle I. The agreement explicitly incorporated the different credit risks of assets (both on and off the balance sheet) into capital adequacy measures.

Basle I was based on employing the risk–assets ratio to determine the adequacy of a bank's capital. The risk–assets ratio is equal to capital divided by credit risk-adjusted assets, with two versions defined as follows:

Tier 1 (core capital) ratio = Tier 1 capital / risk-adjusted assets (minimum 4%)
Total risk-based capital ratio = (Tier 1 + Tier 2 capital) / risk-adjusted assets (minimum 8%)
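As an illustrative sketch, the ratio can be computed from a set of exposures and risk weights. The weights below (0% sovereign, 20% interbank, 50% residential mortgages, 100% corporate) and the 8% minimum are standard Basle I figures; the exposure amounts and capital level are hypothetical:

```python
def risk_assets_ratio(capital, exposures):
    """Risk-assets ratio = capital / sum(exposure * risk weight)."""
    risk_adjusted_assets = sum(amount * weight for amount, weight in exposures)
    return capital / risk_adjusted_assets

# (amount, risk weight) pairs: sovereign, interbank, mortgages, corporates
exposures = [(100.0, 0.0), (50.0, 0.2), (200.0, 0.5), (150.0, 1.0)]
ratio = risk_assets_ratio(capital=24.0, exposures=exposures)
meets_minimum = ratio >= 0.08  # the Basle I total-capital minimum
```

Note how 450 of total assets shrink to 260 of risk-adjusted assets, so the same capital supports a higher measured ratio than a simple leverage test would show.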

Basle I attracted some criticism during the 1990s. Most importantly, it was argued that the risk weights were rather
unsophisticated and inadequately related to default risk. Another criticism is that the capital requirement is additive
across loans. This means that there is no reduction in capital requirements arising from greater diversification of a
bank’s loan portfolio. Further, differences in taxes and accounting rules could mean that measurement of capital varies
widely across countries.

A revised capital adequacy framework, Basle II, was then proposed. Basle II consists of three mutually reinforcing
‘pillars’ which contribute to the safety and soundness of the financial system.

Basle II proposes methods by which banks can calculate the capital required to protect
- against operational risk:
 the Basic Indicator Approach
 the Standardised Approach
 the Advanced Measurement Approach

- against market risk:


 the Standardised Approach
 the Internal model Approach

- against credit risk:


 the Standardised Approach
 internal ratings-based (IRB) approach
 the Advanced IRB Approach.

The key difference between the Standardised Approach and Basle I lies in the greater sensitivity of the risk categories. Risk weights are refined by reference to ratings by External Credit Assessment Institutions (ECAIs) that meet strict standards set by the BIS. The term ECAI mainly refers to credit rating agencies.

Thus, in estimating credit risk, banks can use either the Standardised Approach, which assigns risk weights
according to ratings, or an IRB approach:

Banks may also adopt one of two IRB approaches to calculating credit risk-adjusted assets for capital requirements.
Under both approaches, benchmark risk weights (BRW) are calculated for different loans. Under the Foundations
Approach, a bank estimates the one-year probability of default (PD) associated with each of its internal rating grades,
while relying on supervisory rules for the estimation of other risk components. The methods for mapping internal
ratings and PD include the use of the bank’s own default experience, the use of external data from credit rating
agencies and the use of statistical default models.

Under the Advanced Approach, a bank may use its own estimates of three additional risk components: loss given
default (LGD), exposure at default (EAD) and maturity. The bank calculates the expected (mean) PD for each of its
rating classes based on historical experience in order to generate the BRW.

6) Basle III was later developed. It aims to improve the banking sector's ability to absorb shocks arising from
financial and economic stress, and to improve risk management and banks' transparency and disclosure.
2017ZA(8), 2014ZA(4)
8. Explain the risk management process in banks, and critically evaluate the downside risk measurement
techniques.

1) The risk management process is both a set of techniques and tools and the process required to optimise the risk–return trade-off. The aim of the process is to measure risks in order to monitor and control them.

There are four stages in the risk management process:
1) identify the areas where risks can arise
2) measure the degree of risk (whether for an individual customer or for a particular sector or country)
3) balance risk–return trade-offs and determine a reasonable level of total risk exposure by individual, firm, country, etc.
4) establish appropriate monitoring and control procedures within the bank.

2) The risk management process can be viewed from both top-down and bottom-up perspectives. On a top-down
basis, target earnings and risk limits are translated into signals to business units, and then to managers dealing with
customers. On a bottom-up basis, monitoring and reporting of risks rises from the transaction level through to
aggregate risks. This process facilitates the diversification of risks and aims to ensure consistency with available
capital.

In recent years, considerable advances have been made in the quantitative techniques applied to risk management
within banks. The most commonly used quantitative risk measures fall into three categories:

• sensitivity of target variables (e.g. earnings or interest margin) to changes in market parameters (e.g. an interest rate
change)
• volatility of target variables, which captures deviations around their mean (both upside and downside)
• downside risk, which focuses on adverse deviations only. This type of measure is expressed as a worst-case value of
a target variable and the probability of it occurring.

These different measures address different dimensions of risk. The first category is the simplest measure and the third is the most elaborate; the third category integrates the previous two. Quantitative risk measures have developed significantly with the advent of the Value at Risk (VaR) and Earnings at Risk (EaR) techniques, which belong to the third category above.

3) Regulators are increasingly focusing on requiring banks to measure their market risk using an internally generated
risk measurement model. Market risk relates to the risk of loss associated with adverse deviations in the value of the
trading portfolio, which arises through fluctuations in, for example, interest rates, equity prices, foreign exchange rates
or commodity prices. It arises where banks hold financial instruments on the trading book, or where banks hold equity
as some form of collateral.

This risk has two components, related to volatility and liquidity. First, even though the liquidation period is very short, deviations in value can be very large in volatile markets. Secondly, instruments traded in low volumes may be difficult to sell without offering large discounts.

4) The industry standard for measuring their market risk is the Value-at-Risk (VaR) model. The aim of VaR is to
calculate the likely loss a bank might experience on its whole trading book.

The Value at Risk of a portfolio is defined as the maximum loss on a portfolio occurring within a given length of time with a given small probability. Here we plot the probability distribution of the change in the value of a given portfolio. Assuming the portfolio to be well diversified, this distribution should reflect aggregate or market risk only. A bank official wishes to know the maximum fall in the value of the institution's portfolio that occurs no more than five per cent of the time. We assume that the distribution plotted is that of six-monthly portfolio returns.

The fact that the official is interested in losses which occur very infrequently implies that we should concentrate on
the left tail of the distribution. Further, we note that the official specified losses occurring no more frequently than five
per cent of the time. Hence, the VaR of the portfolio is defined as the return which has precisely five per cent of the
probability mass to its left. In the figure referred to (not reproduced here), the VaR is shown to be a loss of two per cent of the portfolio's value.
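This calculation can be sketched under a normality assumption. The mean and standard deviation below are chosen purely for illustration, so that the five per cent VaR comes out near the two per cent figure used in the example:

```python
from statistics import NormalDist

def parametric_var(mean_return, stdev_return, confidence=0.95):
    """VaR under normality: the return with (1 - confidence) probability
    mass to its left, reported as a positive loss figure."""
    worst_return = NormalDist(mean_return, stdev_return).inv_cdf(1 - confidence)
    return -worst_return

# Illustrative six-monthly return distribution (hypothetical parameters)
var_95 = parametric_var(mean_return=0.01, stdev_return=0.0182)
```

With these assumed parameters the 95% VaR is roughly a two per cent loss; a longer horizon or a smaller tail probability would both enlarge it, as the next paragraphs explain.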

5) What we have calculated is the worst event that is likely to happen under unexceptional market conditions. There
are, however, two parameters which are user-defined. The first is the horizon over which portfolio returns are
calculated. In the above example, the horizon was six months. Clearly, increasing this horizon will increase the
probability of a disastrous (or fantastic) return. Hence, the VaR of the portfolio will become larger in absolute terms.

The second parameter is the percentage specified by the bank official. If he had originally desired a calculation based
on losses occurring no more than one per cent of the time, it is obvious that the VaR would again increase in absolute
terms. Hence our perceptions of portfolio risk are greatly affected by these two parameters.

To sum up, calculating VaR presents some problems. One is that the quantile of interest is composed of very extreme events and, as such, is likely to be the least accurately estimated. Hence there are difficulties in attaining good VaR estimates.

6) There are three major approaches that are also being used to measure market risk: RiskMetrics, historic or back
simulation, and Monte Carlo simulation.

RiskMetrics concentrates on calculating daily earnings at risk (DEAR) using the variance–covariance approach, in which volatilities are estimated. This approach assumes normally distributed returns and linear dependence on risk factors. In practice these assumptions do not always hold, and for this reason FIs have developed the historic or back simulation approach.

Advantages of the historic or back simulation approach are that (1) it is simple, (2) it does not require that asset returns be normally distributed, and (3) it does not require that the correlations or standard deviations of asset returns be calculated. Instead, it uses past returns, each with equal probability, and can be applied to both linear and non-linear dependence on risk factors. Its drawbacks are that it is not useful for scenario analysis and that, because it relies on past data, those data may not be relevant to current conditions.
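A minimal sketch of the back simulation approach: sort the past returns and read off the observation at the chosen tail probability. The twenty returns below are hypothetical:

```python
def historical_var(returns, confidence=0.95):
    """Back-simulation VaR: sort past returns and take the observation
    with (1 - confidence) of the sample below it. No distributional
    assumption; every past return is weighted equally."""
    ordered = sorted(returns)
    cutoff = int((1 - confidence) * len(ordered))
    return -ordered[cutoff]  # reported as a positive loss

# Twenty hypothetical past portfolio returns
past_returns = [-0.05, -0.02, 0.01, 0.00, 0.03, -0.01, 0.02, 0.04,
                -0.03, 0.01, 0.02, -0.02, 0.05, 0.00, 0.01, -0.04,
                0.03, 0.02, -0.01, 0.06]
var_95 = historical_var(past_returns)
```

Note that the estimate is driven entirely by the sample: if the historical window misses a volatile episode, the VaR will understate current risk, which is exactly the drawback described above.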

The last approach is the Monte Carlo simulation approach, which simulates portfolio values using a random number generator. Monte Carlo simulation is the most accurate technique, but it is very time consuming for large trading portfolios.

7) The last approach that we will examine here is the Earnings at Risk (EaR) approach, which is based on economic capital. Economic capital, or 'risk-based capital', represents the capital necessary to absorb potential unexpected losses at a preset confidence level. Economic capital is a quantitative assessment of potential losses for the entire portfolio of a bank, and generally differs from regulatory capital or available capital in that it measures actual risks. For the purpose of producing simple estimates of economic capital, EaR is a practical version of VaR. EaR will be based on a higher tolerance level than VaR, owing to the essential need to maintain the solvency of the bank.

EaR uses the observed volatility (standard deviation) of earnings values as the basis for calculating potential losses,
and thus for estimating the amount of capital capable of absorbing such potential losses.
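A sketch of such an estimate from observed earnings volatility, assuming normally distributed earnings; the earnings history and confidence level are hypothetical:

```python
from statistics import stdev, NormalDist

def earnings_at_risk(earnings_history, confidence=0.95):
    """EaR sketch: the potential downward deviation of earnings from
    their mean at the given confidence level, based on the observed
    standard deviation of earnings (normality assumed)."""
    sigma = stdev(earnings_history)
    z = NormalDist().inv_cdf(confidence)
    return z * sigma

# Hypothetical quarterly earnings (e.g. in millions)
history = [10.2, 9.5, 11.0, 8.8, 10.5, 9.9, 10.8, 9.1]
ear_95 = earnings_at_risk(history)
```

The resulting figure can serve as a rough estimate of the capital needed to absorb an earnings shortfall at that confidence level, but, as noted below, it aggregates all risks and cannot attribute the shortfall to its sources.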

One of the main drawbacks of EaR is that it does not relate the adverse deviations of earnings to the underlying risks,
because EaR aggregates the effects of all risks. In contrast, VaR captures risks at their source, and requires the linking
of losses to each risk. This is critical from the risk management perspective. Thus, EaR must be viewed as an
additional tool for risk management, rather than as a replacement for the more comprehensive (and sophisticated)
alternatives.
2017ZA(2), 2016ZA(6), 2015ZA(6), 2014ZA(6), 2013ZA(6)
2. Explain the mechanics, costs and benefits of different forms of securitisation.

1) Structured finance is based on capital market-based risk transfer. The two major classes are securitisation and credit
derivatives. Securitisation is mostly used for funding purposes whereas credit derivative transactions have hedging (or
trading) motivations. These instruments permit the transfer of asset risk between banks, insurance companies, and
non-financial institutions in order to achieve greater transformation and diversification of risk. These financial
innovations have changed the landscape of risk by enabling market participants to trade risk (credit risk in particular)
across financial and non-financial sectors, but they have also created significant risks that need to be addressed.

2) The main options available to banks to increase the flexibility of operations while adhering to the regulatory capital
requirements are to liquidate assets or to reduce risks.

Liquidation of assets can be achieved through direct sales or through securitisation. The latter is an increasingly
common means of transforming illiquid assets into negotiable assets that are attractive to investors. It is one of the
most important financial innovations of the past two decades.

Securitisation is an efficient means of redistributing the credit risks held by a bank to other banks or non-bank investors. During the period prior to the U.S. sub-prime crisis, liquidation of assets through securitisation became an increasingly widespread means used by banks to transform illiquid assets like loans into securities that are attractive to investors. In principle, securitisation offers a vehicle to transform illiquid financial assets into tradable capital market instruments, and therefore offers potential for enhanced diversification of risks.

3) Other important potential benefits of securitisation include improving the risk–return trade-off, hedging interest
rate gaps, improving the liquidity of asset portfolios and providing an important source of fee income.

4) Securitisation, through the repackaging of securities, is a technique used by banks to sell balance sheet assets to
outside investors, and it is the primary focus of this aspect of our course. It is a vehicle for transforming illiquid
financial assets into tradeable capital market instruments, and thus provides enhanced risk diversification and financial
stability. This is considered to be an efficient means of redistributing the credit risks of a bank to other banks or non-
bank investors.

'Pass-through' securitisation works by selling assets to investors through an intermediate structure. The mechanics are as follows: the originating bank removes assets from its balance sheet by selecting homogeneous pools of assets and transferring them to a third-party trust, a special purpose vehicle (SPV), in exchange for cash. The SPV raises the cash to buy the securitised assets by issuing bonds (asset-backed securities, ABS), which are sold to investors. Once the ABS are issued, the SPV receives cash from investors and passes the proceeds to the originating bank. The cash flows generated by the underlying assets sold are then used to provide a return to the investors holding the ABS.

The proceeds from the underlying assets are, however, uncertain (owing to default, delay, etc.). Diversification is achieved by combining the underlying assets into large pools of transactions.

5) The identification of appropriate packages of assets also has an important influence on the viability of a securitisation transaction. For any given set of benefits from securitisation, the costlier and more difficult it is to find asset packages of sufficient size and homogeneity, the more expensive it is to securitise them. The potential boundary to securitisation may be defined by the relative degree of heterogeneity and credit quality of an asset type or group.

US 30-year fixed-rate residential mortgages are the most homogeneous of all assets on banks' balance sheets. In
contrast, commercial loans have a wide range of maturities, varying interest terms (e.g. fixed, floating) and fees, and
differing covenants, and are made to firms in a wide variety of industries. Despite this, banks have issued
securitisation packages termed ‘collateralised loan obligations’ (CLOs) containing high-quality loans and
‘collateralised debt obligations’ (CDOs) containing a diversified collection of junk bonds or risky bank loans. The
riskiest of CDOs (termed 'toxic waste' in the market) pay out to investors only if everything goes right (in terms of
interest and principal payments to the originating bank). The best CDOs will pay out to investors unless the entire
portfolio defaults.

In addition to the pass-through structure of securitisation outlined above, other common forms of securitisation
include the collateralised mortgage obligation (CMO) and mortgage-backed bonds (MBB).

The CMO is a second and increasingly used approach. CMOs can be created either by packaging and securitising
mortgage loans or, more commonly, by placing existing pass-through securities in a trust off the balance sheet. Issuing
CMOs is thus often equivalent to double securitisation. Unlike a pass-through structure, each bondholder class has a
different guaranteed coupon. Cash flows are prioritised to the lower-risk securities within the hierarchy of assets.
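The prioritisation of cash flows can be sketched as a sequential-pay waterfall: each tranche is paid only after all more senior tranches have been paid in full. The tranche sizes and the collection figure below are hypothetical:

```python
def sequential_waterfall(collections, tranche_dues):
    """Distribute collected cash to tranches in order of seniority."""
    payments = []
    remaining = collections
    for due in tranche_dues:
        paid = min(remaining, due)
        payments.append(paid)
        remaining -= paid
    return payments

# 70 of cash collected against tranches owed 50 (senior), 30 (mezzanine)
# and 20 (junior): the junior class absorbs the shortfall
paid = sequential_waterfall(70.0, [50.0, 30.0, 20.0])
```

The senior tranche is paid in full, the mezzanine is paid partially and the junior tranche receives nothing, which is why each bondholder class in a CMO carries a different risk and coupon.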

MBBs differ from pass-through securities and CMOs in two significant ways. First, MBBs normally remain on the
bank’s balance sheet, whereas the previous two are forms of off-balance sheet securitisation. Secondly, for MBBs,
there is no direct link between the cash flows on the mortgages backing the bond and the interest and principal
payments on the MBB. In an MBB, the bank segregates a group of mortgage assets on its balance sheet, and pledges
this as collateral against the bond issue. A trustee normally monitors the segregation of assets and ensures that the
market value of this collateral exceeds the principal owed to the bondholders.

6) Securitisation provides benefits to banks in terms of both capital and funding costs. To achieve funding cost reductions from the securitisation process, the reduction in funding costs must more than offset the additional costs incurred by the structure required. The condition which needs to be satisfied is:

CH + CR < CP × min(rD, rB)

where CP are the cash proceeds from the sale of the asset-backed securities, CH are the credit enhancement costs, CR are the credit rating agency fees, rD is the cost of attracting deposits and rB is the cost of raising finance through bond issues. In words, the structuring costs must be smaller than the funding cost avoided on the proceeds, taking the cheaper of deposits and bonds as the alternative source.
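One plausible reading of this condition, assuming the structuring costs (CH + CR) must fall short of the funding cost avoided on the proceeds, CP × min(rD, rB), can be checked numerically. All figures below are hypothetical, chosen only for illustration:

```python
# Hypothetical figures -- purely illustrative, not taken from the text.
CP = 100_000_000   # cash proceeds from the sale of the asset-backed securities
CH = 400_000       # credit enhancement costs
CR = 150_000       # credit rating agency fees
rD = 0.035         # cost of attracting deposits
rB = 0.030         # cost of raising finance through bond issues

saving = CP * min(rD, rB)    # funding cost avoided on the proceeds
extra_costs = CH + CR        # structuring costs of the securitisation
print(extra_costs < saving)  # True: securitised funding is cheaper here
```

With these figures the structuring costs (£550,000) are well below the £3m of interest avoided, so the funding cost condition is satisfied.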
2016ZA(3)
1. Discuss the main sources of risk in commercial banking, and critically discuss the Value-at-Risk (VaR)
approach to risk measurement.

1) Financial institutions fulfil the following main functions: the brokerage function (financial intermediaries match transactors and provide transaction and other services and, as a result, reduce transaction costs and remove information costs) and the asset transformation function. The asset transformation function lies in the fact that financial intermediaries hold the long-term, high-risk claims issued by borrowers and finance this by issuing short-term, low-risk, small-denomination deposit claims. This process is often described in the literature as qualitative asset transformation (QAT). The resulting mismatch in maturities helps banks to earn profits, but at the same time gives rise to different types of risk.

2) Main types of risk in commercial banking are credit risk, liquidity risk, interest rate risk, and market risk.

Credit risk is the most obvious risk in banking, and possibly the most important in terms of potential losses. This risk
relates to the possibility that loans will not be paid or that investments will deteriorate in quality or go into default
with consequent loss to the bank. This risk is one of the most important in banking because even a perfectly matched
balance sheet will remain subject to credit risk and because the default of a small number of key customers could
generate very large losses and in an extreme case could lead to a bank becoming insolvent.

Liquidity risk is the risk that a company or bank may be unable to meet short-term financial demands. This usually occurs due to the inability to convert a security or hard asset to cash without a loss of capital and/or income in the process. Liquidity risk is often an inevitable outcome of banking operations: a bank typically collects deposits which are short term and lends long term, and this gap between maturities gives rise to liquidity risk and a cost of liquidity.

Interest rate risk relates to the exposure of banks’ profits to interest rate changes which affect assets and liabilities in
different ways. Banks are exposed to interest rate risk because they operate with unmatched balance sheets. If bankers
believe strongly that interest rates are going to move in a certain direction in the future, they have a strong incentive to
position the bank accordingly: when an interest rate rise is expected, they will make assets more interest-sensitive
relative to liabilities, and do the opposite when a fall is expected. Assets and liabilities can obviously be mixed to
increase or decrease exposures, and techniques such as interest-margin variance analysis (IMVA) are used to evaluate
current and project future exposures.

The last one is the market risk, which relates to the risk of loss associated with adverse deviations in the value of the
trading portfolio, which arises through fluctuations in, for example, interest rates, equity prices, foreign exchange rates
or commodity prices. It arises where banks hold financial instruments on the trading book, or where banks hold equity
as some form of collateral. Many large banks have dramatically increased the size and activity of their trading
portfolios, resulting in greater exposure to market risk. The industry standard for dealing with market risk is the
Value-at-Risk (VaR) model.

3) The problems of credit and liquidity risk can easily be seen in the context of the credit market turmoil that started in
late 2006 when the default rate on lower quality (termed ‘sub-prime’) US mortgages increased. The substantial losses
reported by two Bear Stearns hedge funds which had invested in structured securities backed by sub-prime mortgages
triggered the squeeze. Market participants drove up interbank interest rates (e.g. Libor) drastically, and money market
funds were unwilling to buy commercial paper backed by sub-prime mortgage-backed securities due to the
uncertainty in the sub-prime market. This led to a sudden drying up in liquidity as banks, subject to liquidity risk,
found it difficult to make payments on sub-prime MBS by rolling over short-term borrowing in the money market.

The Northern Rock case is also illustrative. The bank relied on liquidity from wholesale sources, pursuing a somewhat unique business model for a retail bank in its heavy reliance on money-market funding. The resulting run was the first bank run in the UK for 150 years, and naturally raised many questions about the efficacy of regulation. The Bank of England was called on as lender of last resort. Consumers demonstrated a lack of awareness and/or confidence in deposit insurance arrangements, whose provisions were subsequently extended in the UK.

4) Market risk can be defined more narrowly as the risk of loss during the time required to effect a transaction (the liquidation period). This risk has two components, relating to volatility and liquidity. First, even though the liquidation period is relatively short, deviations can be large in a volatile market. Secondly, for instruments traded in markets with a low volume of transactions, it may be difficult to sell without suffering large discounts.

5) Regulators are increasingly focusing on requiring banks to measure their market risk using an internally generated
risk measurement model. The industry standard is the Value-at-Risk (VaR) model. The aim of VaR is to calculate the
likely loss a bank might experience on its whole trading book.

The Value at Risk of a portfolio is defined as the maximum loss on a portfolio occurring within a given length of time with a given small probability. Consider the probability distribution of the change in the value of a given
portfolio. Assuming the portfolio to be well diversified, this distribution should reflect aggregate or market risk only.
A bank official wishes to know what the maximum fall is in the value of the institution’s portfolio that occurs no more
than five per cent of the time. We assume the distribution which is plotted is the distribution of six-monthly portfolio
returns.

The fact that the official is interested in losses which occur very infrequently implies that we should concentrate on the left tail of the distribution. Further, the official specified losses occurring no more frequently than five per cent of the time. Hence, the VaR of the portfolio is defined as the return which has precisely five per cent of the probability mass to its left. In this example, suppose the VaR turns out to be a loss of two per cent of the portfolio's value.
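The tail-percentile calculation can be sketched as follows. The mean and volatility of the simulated six-monthly returns are hypothetical, chosen only so the 5% VaR lands near the two per cent loss discussed above:

```python
import random

random.seed(0)
# 100,000 simulated six-monthly portfolio returns (illustrative figures).
returns = sorted(random.gauss(0.03, 0.03) for _ in range(100_000))

# 5% VaR: the return with five per cent of the probability mass to its left.
var_5 = returns[int(0.05 * len(returns))]
print(round(var_5, 3))  # roughly -0.02, i.e. a loss near 2% of value
```

The same code applied to a 1% tail (replacing 0.05 with 0.01) produces a larger loss figure, illustrating how the chosen percentage drives the reported VaR.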

6) What we have calculated is the worst event that is likely to happen under unexceptional market conditions. There are, however, two parameters which are user-defined. The first is the horizon over which portfolio returns are calculated. In the above example, the horizon was six months. Clearly, increasing this horizon will increase the probability of an extreme loss. Hence, the VaR of the portfolio will become larger in absolute terms.

The second parameter is the percentage specified by the bank official. If he had originally desired a calculation based
on losses occurring no more than one per cent of the time, it is obvious that the VaR would again increase in absolute
terms. Hence our perceptions of portfolio risk are greatly affected by these two parameters.

7) There are three major approaches that are also being used to measure market risk: RiskMetrics, historic or back
simulation, and Monte Carlo simulation.

RiskMetrics concentrates on calculating DEAR (daily earnings at risk) using a variance-covariance approach, in which the volatilities of and correlations between risk factors are estimated. This approach assumes normally distributed returns and linear dependence on risk factors. In real life these assumptions do not always hold, and because of this FIs have also developed the historic or back simulation approach.
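A minimal sketch of the variance-covariance DEAR calculation for a two-position book. The positions, volatilities, correlation and the 99% confidence multiplier (z = 2.33) are all hypothetical assumptions, not figures from the text:

```python
import math

# Hypothetical two-position trading book; z = 2.33 corresponds to a
# one-tailed 99% confidence level under normality (an assumption).
z = 2.33
positions = [1_000_000, 2_000_000]   # market values of the positions
sigmas = [0.010, 0.006]              # daily return volatilities
rho = 0.4                            # correlation between the positions

dear = [v * s * z for v, s in zip(positions, sigmas)]  # per-position DEAR
portfolio_dear = math.sqrt(dear[0] ** 2 + dear[1] ** 2
                           + 2 * rho * dear[0] * dear[1])
print(round(dear[0]), round(dear[1]), round(portfolio_dear))
```

Because the correlation is below one, the portfolio DEAR is less than the sum of the individual DEARs, which is the diversification effect the variance-covariance approach captures.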

Advantages of the historic or back simulation approach are that (1) it is simple, (2) it does not require that asset returns be normally distributed, and (3) it does not require that the correlations or standard deviations of asset returns be calculated. Instead, it uses past returns, weighting each observation equally, and can be applied to both linear and non-linear dependence on risk factors. Its drawbacks are that it is not well suited to scenario analysis and, because it relies on past data, the sample period may not be representative of current conditions.
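The back simulation idea can be sketched with a short, hypothetical P&L history (the figures are invented, and the exact index convention for reading off the tail observation varies in practice):

```python
# Hypothetical daily P&L history for the current portfolio (in £000s) --
# each past outcome is weighted equally, with no distributional assumption.
past_pnl = [12, -8, 5, -20, 3, 7, -15, 9, -2, 4,
            -30, 6, 1, -5, 8, -12, 2, 10, -25, 0]

sorted_pnl = sorted(past_pnl)    # worst outcomes first
k = int(0.05 * len(sorted_pnl))  # index of the 5% tail observation
print(sorted_pnl[k])             # -25: the back-simulated 5% VaR (£25,000 loss)
```

No volatility or correlation estimates are needed; the historical outcomes themselves embody whatever dependence existed between risk factors over the sample period.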

The last approach is the Monte Carlo simulation approach, which simulates portfolio values using a random number generator. Monte Carlo simulation is the most accurate technique, but it is very time consuming for a large trading portfolio.
2016ZA(4)
4. Discuss the methods used by banks to model and manage credit risk.

1) Credit risk is the most obvious risk in banking, and possibly the most important in terms of potential losses. The
default of a small number of key customers could generate very large losses and in an extreme case could lead to a
bank becoming insolvent. This risk relates to the possibility that loans will not be paid or that investments will
deteriorate in quality or go into default with consequent loss to the bank.

As a result of these risks, bankers must exercise discretion in maintaining a sensible distribution of liquidity in assets,
and also conduct a proper evaluation of the default risks associated with borrowers. In general, protection against
credit risks involves maintaining high credit standards, appropriate diversification, good knowledge of the borrower’s
affairs and accurate monitoring and collection procedures.

In general, credit risk management for loans involves three main principles:
• selection
• limitation
• diversification.

First of all, selection means banks have to choose carefully those to whom they will lend money. The processing of
credit applications is conducted by credit officers or credit committees, and a bank’s delegation rules specify
responsibility for credit decisions.

Limitation refers to the way that banks set credit limits at various levels. Limit systems clearly establish maximum amounts that can be lent to specific individuals or groups. Loans are also classified by size and limitations are put on the proportion of large loans to total lending. Banks also have to observe a maximum ratio of risk assets to total assets, and should hold a minimum proportion of assets, such as cash and government securities, whose credit risk is negligible.

Diversification. Credit management has to be diversified. Banks must spread their business over different types of
borrower, different economic sectors and geographical regions, in order to avoid excessive concentration of credit risk
problems. Large banks therefore have an advantage in this respect.

The long-standing existence of the above procedures within banks is insufficient to address all credit risk problems.
For example, the amount of a potential loss is uncertain since outstanding balances at the time of default are not
known in advance. The size of the commitment is not sufficient to measure the risk, since there are both quantity and
quality dimensions to consider.

2) The problem of credit risk can easily be seen in the context of the credit market turmoil that started in late 2006, when the
default rate on lower quality (termed ‘sub-prime’) US mortgages increased. The substantial losses reported by two
Bear Stearns hedge funds which had invested in structured securities backed by sub-prime mortgages triggered the
squeeze. Market participants drove up interbank interest rates (e.g. Libor) drastically, and money market funds were
unwilling to buy commercial paper backed by sub-prime mortgage-backed securities due to the uncertainty in the
sub-prime market. This led to a sudden drying up in liquidity as banks, subject to liquidity risk, found it difficult to
make payments on sub-prime MBS by rolling over short-term borrowing in the money market.

The Northern Rock case is also instructive. The bank relied on liquidity from wholesale sources, pursuing a somewhat unique business model for a retail bank in its heavy reliance on money-market funding. The resulting run was the first bank run in the UK for 150 years, and naturally raised many questions about the efficacy of regulation. The Bank of England was called on as lender of last resort. Consumers demonstrated a lack of awareness and/or confidence in deposit insurance arrangements, whose provisions were subsequently extended in the UK.

3) There are several constituents of credit risk. Among them are default risk, exposure risk and recovery risk.

Default risk. The adequate quantitative measure of default risk is the probability of a default occurring. Default can
be defined in several ways: missing a payment obligation, breaking a covenant, entering a legal procedure or
economic default. The last occurs when the economic value of assets falls below the value of outstanding debts.
Credit rating agencies typically consider that default has occurred when a contractual payment has been missed for at
least three months. Default does not necessarily lead to immediate losses, but may increase the likelihood of
bankruptcy.

The probability of default depends on factors such as market outlook, company size, competitive factors, quality of
management, and shareholders. Common measures of the probability of default are either credit ratings or historical
statistics on defaults, which can be used as proxies for default risk. The Basle II Accord views the assigning of default
probabilities to borrowers as a requirement for implementing the Foundation and Advanced approaches, but not for
the Standardised approach.

Exposure risk. Exposure is the amount at risk in the event of default excluding recoveries. Since default occurs at an
unknown future date, the risk is generated by the uncertainty regarding future amounts at risk. The type of
commitment given by the bank to the borrower sets an upper limit on possible future exposures. For lines of credit
with a repayment schedule, exposure risk can be considered small. This is not true for all other lines of credit. The
borrower may draw on these lines of credit within a limit set by the bank, as borrowing needs arise.

Recovery risk. Recoveries in the event of default are unpredictable and depend on the type of default and the
guarantees received from the borrower. Recoveries require legal procedures, expenses and a significant lapse of time.
Typical sources of recoveries are through collateral, guarantees and covenants.

The existence of collateral reduces credit risk if it can be taken over and sold at significant value. Collateral can take
the form of cash, financial assets or fixed assets. The use of collateral to mitigate credit risk transforms this risk into a
recovery risk plus an asset value risk. Guarantees are contingencies given by third parties to banks. For example, the
parent within a group of companies may guarantee to honour the obligations of one of its subsidiaries in the event of
default. With a readily enforceable third-party guarantee, there must be consideration of the joint probability of default
by both the guarantor and the borrower.

Recovery risk depends on the type of default. If no corrective action can be considered, legal procedures take over and
all the borrower’s commitments will be suspended until these reach a conclusion. Recoveries will be delayed until the
end of this procedure and, in the extreme case, there may be no recoveries because the company is re-sold or
liquidated and no excess funds are available to repay an unsecured debt.

The Basle II Accord on banks’ capital requirements (see Chapter 2) provides capital reductions for various forms of
transactions that reduce risk. Specifically, the Accord grants recognition of credit risk mitigation techniques, including
collateral and guarantees.

Expected loss. The expected loss is the product of the loss given default and the probability of default. The expected
loss (L) given default can be viewed as the product of a random variable characterising default (D, a percentage), an
uncertain exposure (X, a value), and an uncertain recovery rate (R, a percentage), where:
L = D × X × (1 – R)
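The expected loss decomposition can be illustrated numerically; the default probability, exposure and recovery rate below are hypothetical inputs:

```python
# Expected loss L = D × X × (1 - R), with hypothetical inputs.
D = 0.02        # probability of default over the horizon
X = 500_000     # exposure at the (unknown) default date
R = 0.40        # expected recovery rate

L = D * X * (1 - R)
print(L)        # about 6000: the expected loss in currency units
```

The formula makes clear that credit risk management can attack any of the three terms: selection and scoring reduce D, limit systems cap X, and collateral, guarantees and covenants raise R.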

4) There are several models that are intended to measure credit risk.

The ability to measure the probability of default is influenced by the level of information available on the borrower.
For retail customers, the bank needs to collect information internally or from specialist credit agencies. For larger
corporate borrowers, there are additional sources of publicly available information such as company accounts, stock
and bond prices and analysts’ reports. In the latter case, the availability of more information along with the lower
average cost of collecting such information, allows banks to use more sophisticated and usually more quantitative
methods in assessing default probabilities for large borrowers compared to small borrowers. However, quantitative
assessments of small borrowers are becoming increasingly feasible and less costly. In principle, banks can use very
similar models to assess the probabilities of default on both bonds and loans.

Qualitative models

If there is limited publicly available information on a borrower, the bank has to draw on internal sources, such as
information on the borrower’s credit and deposit accounts with the bank, and/or purchase information from external
sources such as credit agencies. In general, the amount of information collected will depend on the size of the debt
exposure being considered and the cost of information collection. The factors influencing the decision can be
categorised as (i) borrower-specific (unique to the individual borrower), and (ii) market-specific (i.e. factors that have
an impact on all borrowers). This process relies heavily on qualitative analysis and hence the subjective judgment of
bank employees.

The borrower-specific factors include reputation, leverage, volatility of earnings and collateral. The market-specific factors include the business cycle and the level of interest rates.

Credit scoring models

Rather than evaluating borrower-specific and market-specific factors in a subjective or qualitative manner, banks will
commonly apply weights to different factors within a more objective or quantitative approach. The principle of all
statistical models of credit risk is to fit observable attributes (e.g. financial data on firms) to the variables to be
predicted (e.g. default or non-default, or the credit rating).

Credit scoring models use the borrower characteristics either to calculate a score that represents the applicant’s
probability of default or to sort borrowers into different default risk classes. To follow this approach, objective
economic and financial measures of risk for different classes of borrower must be identified. At the retail level,
commonly used factors are income, assets, age, occupation and location. Scoring applies well to individual borrowers
in retail banking, because of the large volume of data. Implementation of scoring for corporates is more challenging.
For commercial borrowers, cash flow information and financial ratios are most commonly employed.

Credit scoring uses techniques for discriminating between defaulters and non-defaulters. A ‘scoring function’ provides
a score (a number) from observable attributes. Scoring techniques can also provide ‘posterior default probabilities’
(i.e. probabilities of default given the score values). Once the appropriate factors have been identified, a statistical
technique is used to quantify or score the default risk probability or default risk classification. The most popular
techniques are linear probability models, logit models and linear discriminant analysis.

The linear probability model uses past data as inputs to a model to explain the repayment experience (whether or not
default occurred) on pre-existing loans. The relative importance of different factors used to explain past repayment
performance is then used to forecast repayment probabilities on new loans.

The logit model represents an adjustment to the linear probability model to ensure that the estimated range of default
probabilities lies between zero and one.
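A toy logit scoring function illustrates how the logistic transformation confines the estimated default probability to (0, 1). The explanatory variables and coefficients are hypothetical; in practice they would be estimated from past repayment data:

```python
import math

# Toy logit scoring function. The variables (leverage, interest cover)
# and coefficients are hypothetical, not estimated from real loan data.
def default_probability(leverage, interest_cover):
    z = -3.0 + 4.0 * leverage - 0.5 * interest_cover  # linear score
    return 1.0 / (1.0 + math.exp(-z))  # logistic link keeps p in (0, 1)

p = default_probability(leverage=0.6, interest_cover=2.0)
print(round(p, 3))  # about 0.168
```

Unlike the linear probability model, no combination of inputs can push the fitted probability below zero or above one.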

Whereas the two models above generate an expected probability of default if a loan is made, linear discriminant
models classify borrowers into high or low default-risk classes contingent on their observed characteristics. One
problem with this approach is that the models usually discriminate only between two extreme cases of borrower
behaviour, that is, default or no default. In practice, various classes of default exist, from non-payment or delay of
interest payments to outright default on all promised interest and principal payments.

Option-based models. There are two main insights:
(i) holding equity is analogous to buying a call option on the value of the firm's assets, and
(ii) the payoff for debt holders resembles that of writing a put option on the value of the firm's (borrower's) assets.

Continuing to repay debt is not rational if liabilities exceed assets, thus the borrower may relinquish assets instead.
Lenders should adjust the risk premium as a borrower’s leverage and asset risk change. Market value of assets and
asset risk are a key focus in estimating default probabilities under this approach. The value and volatility of assets are
not directly observable. To address this, the KMV method relies mostly on equity market information, and its key
output is the probability (over a one-year horizon) that the market value of assets will fall below promised repayments
on short-term liabilities.
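The option-based view can be sketched in the Merton style. The asset value, default point, drift and volatility below are hypothetical, and KMV's actual mapping from distance to default to a default frequency uses a proprietary historical database rather than the normal distribution assumed here:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Merton-style sketch with hypothetical inputs.
V, F = 120.0, 100.0           # market value of assets; default point
mu, sigma, T = 0.08, 0.25, 1.0  # asset drift, asset volatility, horizon

# Distance to default: how many standard deviations the (log) asset
# value sits above the default point at the one-year horizon.
dd = (math.log(V / F) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
pd = norm_cdf(-dd)  # probability assets fall below liabilities in a year
print(round(dd, 2), round(pd, 3))
```

Because V and sigma are not directly observable, implementations back them out from equity prices and equity volatility, which is the step where the equity-as-call-option insight is used.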

5) There are several methods available to banks for managing these elements of credit risk.

Credit allocation decision. One aspect of credit risk management consists of the credit allocation decision, the
follow-up of credit commitments, monitoring and reporting processes. A bank’s limit system sets the maximum
amount at risk with borrowers, with the aim of limiting losses in the event of default. The bank’s capital sets a limit to
lending, given diversification considerations and/or credit policy guidelines. In a sophisticated risk management
system, the bank’s capital could be allocated to all credit lines.

Credit enhancement. Another aspect of credit risk management involves credit enhancement, which seeks to reduce
the amount of loss in the event of default through increasing the recovery rate (see Equation 4.1). The main tools of
credit enhancement are covenants and guarantees.

Loan sales. A bank loan sale occurs when a bank originates a loan and then sells it either with or without recourse to
an outside buyer. If a loan is sold without recourse, not only is it removed from the bank’s balance sheet but the bank
has no explicit liability if the loan eventually becomes bad (i.e. default occurs). Thus the buyer of the loan bears all the
credit risk.
2016ZA(5)
5. Explain the general risk measurement and risk management functions of banks. Discuss how these functions
are applied by banks when they use Asset and Liability Management and gap analysis to manage liquidity risk and
interest rate risk.

1) The risk management process is both a set of techniques and tools and the process required to optimise the risk-return trade-off. The aim of the process is to measure risks in order to monitor and control them.

There are four stages in the risk management process:
1) Identify the areas where risks can arise.
2) Measure the degree of risk (whether individual customer risk or risk in a particular sector or country).
3) Balance risk and return trade-offs and determine a reasonable level of total risk exposure by individual, firm, country, etc.
4) Establish appropriate monitoring and control procedures within the bank.

2) The risk management process can be viewed from both top-down and bottom-up perspectives. On a top-down
basis, target earnings and risk limits are translated into signals to business units, and then to managers dealing with
customers. On a bottom-up basis, monitoring and reporting of risks rises from the transaction level through to
aggregate risks. This process facilitates the diversification of risks and aims to ensure consistency with available
capital.

In recent years considerable advances have been made in the quantitative techniques applied to risk management within banks. The most commonly used quantitative risk measures fall into three categories:

• sensitivity of target variables (e.g. earnings or interest margin) to changes in market parameters (e.g. an interest rate
change)
• volatility of target variables, which captures deviations around their mean (both upside and downside)
• downside risk, which focuses on adverse deviations only. This type of measure is expressed as a worst-case value of
a target variable and the probability of it occurring.

These different measures address different dimensions of risk. The first category is the simplest measure and the third
is the most elaborate. The third category integrates the previous two.

3) Banks nowadays deal in a very wide range of financial instruments both as assets and liabilities, and balance sheets
have become increasingly complex. In addition, off-balance-sheet (OBS) business, involving various contingent
liabilities (derivatives, letters of credit, guarantees), has grown significantly. The various demands of a heterogeneous clientele bring with them many special and additional risks. As a result, banks have adopted an increasingly systematic approach to managing their balance sheet and off-balance-sheet positions. The focus is on liquidity risk and interest rate risk (identification of risks).

This is widely known as asset and liability management (ALM), which covers the set of techniques used to
manage interest rate and liquidity risks, and deals with the structure of the balance sheet within the context of
funding and regulatory constraints and profitability targets. 

ALM involves the continual monitoring of the existing position of a bank, evaluating how this differs from the desired
position, and undertaking transactions (including hedging) to move the bank towards the desired position. The
objective is to enhance profitability, while controlling and limiting different risks, as well as complying with the
constraints of banking supervision. Therefore, a bank must assess the risks and benefits of all assets and liabilities in
the light of the contribution they make to the earnings and to the risks of its total portfolio. Banks have to continually
adjust assets and liabilities, both by varying the terms they offer for business with clients and by regular trading in
financial markets.

ALM focuses on liquidity risk and interest rate risk at the balance sheet level. It can thus be viewed as a subset of the
bank’s overall risk management process, which also addresses other forms of risk such as credit risk and market risk,
together with other aspects of risk measurement and control. ALM techniques are most applicable to commercial
banks involved in deposit collection and lending businesses. Liquidity and interest rate policies are interdependent,
since any projected liquidity gap will be funded at an unknown rate unless a hedging transaction is initiated
immediately.

The net interest margin is the target of ALM policies. The net interest margin (NIM) is defined as the net interest income (NII, equal to interest revenues minus interest expenses) as a proportion of the bank's total assets. In this context, the ALM objective is the minimisation of the variability of NIM or NII for a target level of NIM or NII. Alternatively, the objective can be viewed as maximising NIM or NII for a given level of risk.

4) The problem of liquidity risk can easily be seen in the case of Northern Rock, which relied on liquidity from wholesale sources. It was pursuing a somewhat unique business model for a retail bank in its heavy reliance on money-market funding. The resulting run was the first bank run in the UK for 150 years, and naturally raised many questions about the efficacy of regulation. The Bank of England was called on as lender of last resort. Consumers demonstrated a lack of awareness and/or confidence in deposit insurance arrangements, whose provisions were subsequently extended in the UK.

Although Northern Rock depositors had some protection from deposit insurance, they still preferred to run on the bank in the light of uncertainty regarding the bank's future once it had been forced to rely on the lender of last resort facility. Depositors also lacked confidence that the bank was initially suffering from a liquidity problem rather than a solvency problem. Thus, elements of the theory of bank runs were evident in the behaviour of bank customers during the episode. The Northern Rock case prompted much discussion, one conclusion being that risk analysis is essential even for well-performing banks.

5) Liquidity gap analysis (+example)

Liquidity gap analysis is the most widely known ALM technique, and is used for managing both liquidity risk and interest rate risk. Liquidity risk is generated in the balance sheet by the mismatch between the sizes and maturities of assets and liabilities. This stems from one of the bank's core functions: banks are in the business of maturity transformation; that is, they lend for longer periods than those for which they borrow. As a result, they expect to have a mismatched balance sheet, with short-term liabilities greater than short-term assets and with assets greater than liabilities at medium and long term. Liquidity risk relates to the possibility of holding inadequate resources to balance the assets.

The liquidity gap is typically defined as the difference between net liquid assets and volatile liabilities. If the bank’s
assets exceed liabilities, the gap should be funded in the market. In the reverse case, the excess resources must be
invested.
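A liquidity gap calculation can be sketched over a maturity ladder. The buckets and cash flow figures below are hypothetical:

```python
# Hypothetical maturity ladder (figures in £m).
buckets = ["0-1m", "1-3m", "3-6m", "6-12m"]
assets = [40, 60, 80, 120]      # assets maturing in each bucket (inflows)
liabilities = [90, 70, 60, 40]  # liabilities maturing (outflows)

gaps = [a - l for a, l in zip(assets, liabilities)]
cumulative = [sum(gaps[:i + 1]) for i in range(len(gaps))]
print(gaps)        # a negative gap means a funding need in that bucket
print(cumulative)  # the cumulative gap to be funded or invested
```

In this illustration the bank faces funding needs in the short buckets and an investable surplus beyond six months, the typical profile of a maturity-transforming bank.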

The maintenance of adequate liquidity remains one of the most important features of banking. Banks can either store
liquidity in their assets or purchase it in money and deposit markets. Sources of liquidity are diverse. They may come
from the asset side of the balance sheet through the ability to sell, discount or pledge assets at short notice. Liquidity
may come from the liabilities side of the balance sheet through banks’ ability to raise new money at short notice via
the money market. Most commonly, however, liquidity can be generated from the maturity structure of the balance
sheet (where expected outflows of funds are matched by expected inflows).

Because liquid assets have lower returns, stored liquidity has an opportunity cost that results in a trade-off between
liquidity and profitability. The aim of ALM is to increase the earning capacity of the bank while at the same time
ensuring an adequate liquidity cushion.
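As a worked example (hypothetical balance-sheet figures, following the convention above that a positive gap must be funded in the market and a negative gap leaves excess resources to invest):

```python
# Liquidity gap per maturity bucket (hypothetical figures, in millions).
# gap = projected assets - projected liabilities for each bucket; a positive
# gap must be funded in the market, a negative gap is excess to invest.
buckets = ["0-3m", "3-12m", "1-5y", "5y+"]
assets = [40.0, 60.0, 120.0, 80.0]       # projected asset balances
liabilities = [110.0, 90.0, 70.0, 30.0]  # projected liability balances

for bucket, a, l in zip(buckets, assets, liabilities):
    gap = a - l
    action = "fund in the market" if gap > 0 else "invest excess resources"
    print(f"{bucket}: gap = {gap:+.1f} -> {action}")
```

Note that the figures deliberately show the mismatched profile described above: short-term liabilities exceed short-term assets, while assets exceed liabilities at medium and long maturities.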

Interest rate gap analysis (+example)

Interest rate risk policy is typically based on selected target variables, with net interest margin being a common target.
Banks will aim to optimise the risk–reward trade-off for these targets.

The first step in assessing interest rate risk is for the bank manager to decide which assets and liabilities are rate-
sensitive – in other words to identify those that have interest rates that will be reset (re-priced) within a given time
interval (e.g. six months). The gap for a given period is defined as the difference in value between interest-sensitive
assets and interest-sensitive liabilities.
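A minimal numerical sketch of the repricing gap and its effect on net interest income, using hypothetical figures:

```python
# Repricing gap for a six-month interval (hypothetical figures, in millions).
rate_sensitive_assets = 120.0       # assets re-priced within six months
rate_sensitive_liabilities = 150.0  # liabilities re-priced within six months

gap = rate_sensitive_assets - rate_sensitive_liabilities  # -30.0 (negative gap)

# Approximate change in net interest income for a parallel rate move:
for delta_rate in (0.01, -0.01):
    delta_nii = gap * delta_rate
    print(f"rates move {delta_rate:+.0%}: NII changes by {delta_nii:+.2f}")
```

With a negative gap, a rate rise reduces NII and a rate fall increases it; a positive gap works the other way round.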

2014ZA(1), 2013ZA(1)
1. Explain how the theories of information sharing coalitions and delegated monitoring resolve the problems of
information asymmetry in direct financing and lead to the dominance of financial intermediation over direct
financing.

- Why is it not easy to explain the existence of indirect finance in the economy?

Money flows from surplus to deficit units within a financial system, which helps to allocate financial resources
in an economy. The financial system channels savings in two ways: direct and indirect. Direct finance (financial
markets) improves the welfare of agents in the economy; the effect can be demonstrated with the Fisher
separation theorem. The model highlights the need for a smooth flow of funds from surplus to deficit units.
Explaining the existence of indirect finance (financial intermediaries) is much more difficult. The question is
why deficit and surplus units should use financial intermediaries when they can transact directly and save
the money paid for intermediation services: at first sight, the shorter chain of transactions involved in
direct financing should be less costly than intermediated financing.

All functions of FIs can be split into two categories: brokerage and QAT (‘qualitative asset transformation’).
Brokerage is about:
• transaction services (e.g. check-writing; buying/selling securities and safekeeping)
• financial advice (e.g. advice on where to invest; portfolio management)
• screening & certification (e.g. bond ratings)
• origination (e.g. bank initiating a loan to a borrower)
• issuance (e.g. taking a security offering to market)
• miscellaneous (e.g. trust activities)

FIs act as agents for savers, providing information and transaction services, for which they are usually
compensated with a fee. The efficiency of FIs results from reduced costs and economies of scale, since FIs have
special skills and technologies to interpret subtle signals and benefit from the reusability of information.

The other category of bank functions is QAT:


• maturity transformation (e.g. bank financing assets with longer maturity than liabilities)
• size transformation (e.g. a mutual fund holding assets with larger unit size than liabilities)
• liquidity transformation (e.g. a bank funding illiquid loans with liquid liabilities)
• credit risk transformation (e.g. a bank monitoring a borrower to reduce default risk)

In an economy without FIs the level of fund flows between households and corporates would be very low
for a number of reasons:
• need to monitor the borrower (moral hazard problem), which includes substantial costs and time
• liquidity reasons: borrowers are interested in long-term investments and cannot satisfy the liquidity
needs of the lenders
• price risk

By performing QAT, FIs offer claims that are more attractive than those offered by corporates. This is
achieved through asset diversification and asset evaluation. Asset diversification: most household
savers are unable to diversify, whereas banks can. This enables them to predict more accurately the expected
return on the asset. Asset evaluation: FIs reduce the adverse outcomes of information problems, since they have
a comparative advantage in evaluating credit risk. In a situation of perfect knowledge, no transaction costs
and no indivisibility, FIs would not be needed, but reality is not like this.
Therefore we can conclude that FIs facilitate a smooth flow of money from surplus to deficit units,
contributing to the efficiency of the economy. They allow intertemporal smoothing of consumption by
households and of expenditure by firms, enabling households and firms to share risks.

Transaction costs

- List and explain the four types of transaction costs (search, verification, monitoring and enforcement).

To analyse why FIs help to reduce the transaction costs associated with the crediting process, let us first list
the main types of transaction costs:
• Search costs. These involve transactors searching out agents willing to take an opposite position;
e.g., a borrower seeking out a lender(s) willing to provide the sums required. It is also
necessary for the agents to obtain information about the counterparty to the transaction and to
negotiate and finalize the relevant contract.
• Verification costs. These arise from the necessity of the lender to evaluate the proposal for which the
funds are required.
• Monitoring costs. Once a loan is made the lender will wish to monitor the progress of the borrower
and ensure that the funds are used in accordance with the purpose agreed. There is a moral hazard
aspect here as the borrower may be tempted to use the funds for purposes other than those specified
in the loan contract.
• Enforcement costs. Such costs will be incurred by the lender should the borrower violate any of the
contract conditions.

- Represent the algebraic analysis and graphical illustration of transaction costs (the condition for preference
of intermediation).
- Why can banks reduce transaction costs?

Why can banks reduce transaction costs?


• reduction in search costs (branch networks, telephone banking, internet banking, standardised forms
of contracts)
• reduction in verification costs (big data, sophisticated models of evaluating credit risk)
• economies of scale (size and maturity transformation)
• economies of scope (diversification of business: loan portfolios and deposit activities)
• portfolio framework (Pyle, 1971)
• information advantage

Nevertheless, a word of caution is appropriate here for two reasons.

• Economies of scale seem to be exhausted relatively early; only small banks have potential for scale
economies. The optimum scale varies between studies, but usually lies between:
• $100m and $10bn in bank assets (Hunter and Timme (1986), Berger et al. (1987))
• $10bn and $25bn (Berger and Mester (1997)).

• Disintermediation: large firms with good credit ratings find it cheaper to obtain direct finance through
markets for equity, bonds and commercial paper. The increasingly widespread availability of information on
borrowers has somewhat eroded banks’ informational advantages that enable them to function as
intermediaries.

Despite the clear evidence that banks do generally lower the aggregate cost of financial intermediation, this
appears to be an incomplete story of why FI occurs.

Delegated monitoring

In order to minimize the moral hazard problem, lenders must impose restrictions (restrictive covenants) on
borrowers’ actions or require collateral. Then lenders must monitor the borrowers' activities and enforce the
restrictive covenants if the borrower violates them. But, monitoring is hampered with the free-rider problem.

Financial intermediaries play a key role in reducing these adverse selection and moral hazard problems
because they avoid the free-rider problem by originating private loans that cannot be bought by outside
investors. The theory of financial intermediation suggests that banks specialize in information production.
Diamond (1984) finds a special feature of banks acting as delegated monitors on behalf of the depositors, in
the presence of costly monitoring.

Banks exploit comparative advantage (comparative to individual lenders or specialized firms: rating
agencies, securities analysts, or auditors) in information production because of economies of scale and
scope, which reduce the cost of informational asymmetries and its extent. The more information the banks
obtain about borrowers, the less they have to improve borrower incentives by setting loan contract terms
(interest rates, collateral requirements). Since information gathering is costly, banks will expand their search
for information until the expected marginal benefit of search equals its marginal cost. The degree of information
asymmetry depends on borrower characteristics such as firm size, firm age and governance, or legal form.

Banks' specialization in monitoring credits improves efficiency and social welfare by exploiting scale
economies in processing the information involved in monitoring and enforcing contracts with borrowers.

Monitoring activity can be described as:


- Screening projects to reduce the adverse selection problem.
- Preventing opportunistic behaviour during the realisation of the project (choosing a project that increases
the risk to the lender).
- Punishing a borrower who does not meet his contractual obligations.

- Equity Contract

There is information asymmetry between the inside investor and outside investors.

An equity contract is a profit-sharing agreement, where the profit paid to the outsider depends on the profits
reported by the inside investor.

The inside investor will report the smallest possible value, V = 0, if the outsider cannot observe the true profit
or there is no penalty for misreporting. When outside investors cannot observe the cash flows and
cannot monitor the business, equity contracts do not work.

In practice, the net cash flows of the firm are unobservable for many firms. An incentive for payment can be
provided by a debt contract with a fixed face value and a liquidation right in the case of default.

- Debt contract types

In situations involving a single lender and a single borrower, one compares the cost of monitoring (K) with
the resultant savings in contracting costs. When there are multiple lenders involved, either each must be able
to monitor the additional information directly, at a total cost of m x K, where m is the number of lenders per
borrower, or the monitoring must be delegated to one party.

Delegating the monitoring gives rise to a new private information problem: the party conducting the
monitoring as agent now has private information. It is not even verifiable whether the monitoring has been
undertaken. Delegated monitoring can thus lead to delegation costs.

Delegated monitoring pays off when:

K + D ≤ min [S, m x K]
where D is the delegation cost per borrower, K + D is the cost of monitoring via an intermediary, S is the
contracting cost without monitoring and m x K is the cost of direct monitoring.

In other words, the cost of delegated monitoring must be less than the minimum of
(a) costs without monitoring; and (b) total costs of direct monitoring.
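The condition can be checked with a small sketch (all cost figures below are hypothetical):

```python
# Delegated monitoring condition (Diamond, 1984): K + D <= min(S, m * K).
def delegation_pays(K, D, S, m):
    """K: monitoring cost per borrower, D: delegation cost per borrower,
    S: contracting cost without monitoring, m: lenders per borrower."""
    return K + D <= min(S, m * K)

print(delegation_pays(K=10, D=2, S=25, m=5))  # True: 12 <= min(25, 50)
print(delegation_pays(K=10, D=2, S=11, m=5))  # False: 12 > min(11, 50)
```

Intuitively, intermediation wins when the delegation cost D is small relative to the duplication of monitoring across m lenders, and monitoring itself is cheaper than unmonitored contracting.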

2017ZA(3), 2015ZA(5) AND 2013ZA(5) WITHOUT credit risk management


3. Explain and discuss the purpose and implementation of (i) gap analysis for liquidity risk and interest rate risk,
and (ii) credit risk management.

1) The main types of risk in commercial banking are credit risk, liquidity risk, interest rate risk and market risk.

Credit risk is the most obvious risk in banking, and possibly the most important in terms of potential losses. This risk
relates to the possibility that loans will not be paid or that investments will deteriorate in quality or go into default
with consequent loss to the bank. This risk is one of the most important in banking because even a perfectly matched
balance sheet will remain subject to credit risk and because the default of a small number of key customers could
generate very large losses and in an extreme case could lead to a bank becoming insolvent.

Liquidity risk is the risk that a company or bank may be unable to meet short-term financial demands. This usually
occurs due to the inability to convert a security or hard asset to cash without a loss of capital and/or income in the
process. Liquidity risk is often an inevitable outcome of banking operations: a bank typically collects deposits
which are short term and lends long term, and this gap between maturities leads to liquidity risk and a cost of
liquidity.

Interest rate risk relates to the exposure of banks’ profits to interest rate changes which affect assets and liabilities in
different ways. Banks are exposed to interest rate risk because they operate with unmatched balance sheets. If bankers
believe strongly that interest rates are going to move in a certain direction in the future, they have a strong incentive to
position the bank accordingly: when an interest rate rise is expected, they will make assets more interest-sensitive
relative to liabilities, and do the opposite when a fall is expected. Assets and liabilities can obviously be mixed to
increase or decrease exposures, and techniques such as interest-margin variance analysis (IMVA) are used to evaluate
current and project future exposures.

The last is market risk, which relates to the risk of loss associated with adverse deviations in the value of the
trading portfolio, which arises through fluctuations in, for example, interest rates, equity prices, foreign exchange rates
or commodity prices. It arises where banks hold financial instruments on the trading book, or where banks hold equity
as some form of collateral. Many large banks have dramatically increased the size and activity of their trading
portfolios, resulting in greater exposure to market risk. The industry standard for dealing with market risk is the
Value-at-Risk (VaR) model.

2) Banks nowadays deal in a very wide range of financial instruments both as assets and liabilities, and balance sheets
have become increasingly complex. In addition, off-balance-sheet (OBS) business, involving various contingent
liabilities (derivatives, letters of credit, guarantees), has grown significantly. However, the various demands of a
heterogeneous clientele bring with them many special and additional risks. As a result, banks have adopted an
increasingly systematic approach to managing their balance sheet and off-balance-sheet positions. 
This is widely known as asset and liability management (ALM), which covers the set of techniques used to
manage interest rate and liquidity risks, and deals with the structure of the balance sheet within the context of
funding and regulatory constraints and profitability targets. 

ALM involves the continual monitoring of the existing position of a bank, evaluating how this differs from the desired
position, and undertaking transactions (including hedging) to move the bank towards the desired position. The
objective is to enhance profitability, while controlling and limiting different risks, as well as complying with the
constraints of banking supervision. Therefore, a bank must assess the risks and benefits of all assets and liabilities in
the light of the contribution they make to the earnings and to the risks of its total portfolio. Banks have to continually
adjust assets and liabilities, both by varying the terms they offer for business with clients and by regular trading in
financial markets.

ALM focuses on liquidity risk and interest rate risk at the balance sheet level. It can thus be viewed as a subset of the
bank’s overall risk management process, which also addresses other forms of risk such as credit risk and market risk,
together with other aspects of risk measurement and control. ALM techniques are most applicable to commercial
banks involved in deposit collection and lending businesses. Liquidity and interest rate policies are interdependent,
since any projected liquidity gap will be funded at an unknown rate unless a hedging transaction is initiated
immediately.

The net interest margin is the target of ALM policies. The net interest margin (NIM) is defined as the net interest
income (NII, equal to interest revenues minus interest expenses) as a proportion of the bank's total assets. In this
context, the ALM objective is the minimisation of the variability of NIM or NII for a target level of NIM or NII.
Alternatively, the objective can be viewed as maximising NIM or NII for a given level of risk.
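A worked illustration of the NII and NIM definitions, with hypothetical figures:

```python
# NIM = (interest revenues - interest expenses) / total assets
interest_revenues = 55.0  # hypothetical figures, in millions
interest_expenses = 35.0
total_assets = 800.0

nii = interest_revenues - interest_expenses  # net interest income = 20.0
nim = nii / total_assets                     # 0.025
print(f"NII = {nii:.1f}m, NIM = {nim:.2%}")  # NII = 20.0m, NIM = 2.50%
```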

3) Liquidity gap analysis (+example)

Liquidity gap analysis is the most widely known ALM technique, and is used for managing both liquidity risk and
interest rate risk. Liquidity risk is generated in the balance sheet by the mismatch between the sizes and maturities of
assets and liabilities. This stems from one of the bank's core functions: banks are in the business of maturity
transformation; that is, they lend for longer periods than those for which they borrow. As a result, they expect to
have a mismatched balance sheet with short-term liabilities greater than short-term assets and with assets greater than
liabilities at medium and long term. The liquidity risk relates to the possibility of holding inadequate resources to
balance the assets.

The liquidity gap is typically defined as the difference between net liquid assets and volatile liabilities. If the bank’s
assets exceed liabilities, the gap should be funded in the market. In the reverse case, the excess resources must be
invested.

The maintenance of adequate liquidity remains one of the most important features of banking. Banks can either store
liquidity in their assets or purchase it in money and deposit markets. Sources of liquidity are diverse. They may come
from the asset side of the balance sheet through the ability to sell, discount or pledge assets at short notice. Liquidity
may come from the liabilities side of the balance sheet through banks’ ability to raise new money at short notice via
the money market. Most commonly, however, liquidity can be generated from the maturity structure of the balance
sheet (where expected outflows of funds are matched by expected inflows).

Because liquid assets have lower returns, stored liquidity has an opportunity cost that results in a trade-off between
liquidity and profitability. The aim of ALM is to increase the earning capacity of the bank while at the same time
ensuring an adequate liquidity cushion.
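A minimal worked example of the gap definition above (hypothetical figures, using the text's convention that a positive gap is funded in the market and a negative gap is invested):

```python
# Single-period liquidity gap (hypothetical figures, in millions).
assets = 95.0
liabilities = 60.0

gap = assets - liabilities  # +35.0
if gap > 0:
    print(f"Gap of {gap}: fund in the market")
else:
    print(f"Gap of {gap}: invest the excess resources")
```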
Interest rate gap analysis (+example)

Interest rate risk policy is typically based on selected target variables, with net interest margin being a common target.
Banks will aim to optimise the risk–reward trade-off for these targets.

The first step in assessing interest rate risk is for the bank manager to decide which assets and liabilities are rate-
sensitive – in other words to identify those that have interest rates that will be reset (re-priced) within a given time
interval (e.g. six months). The gap for a given period is defined as the difference in value between interest-sensitive
assets and interest-sensitive liabilities.
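The classification step can be sketched as follows; the balance-sheet items and amounts are hypothetical:

```python
# Classify balance-sheet items as rate-sensitive within a six-month
# interval, then compute the gap (hypothetical items, in millions).
assets = {"floating-rate loans": 200.0, "fixed 5y bonds": 300.0}
liabilities = {"3m deposits": 160.0, "5y subordinated debt": 100.0}
rate_sensitive = {"floating-rate loans", "3m deposits"}  # re-price within 6m

rsa = sum(v for k, v in assets.items() if k in rate_sensitive)       # 200.0
rsl = sum(v for k, v in liabilities.items() if k in rate_sensitive)  # 160.0
gap = rsa - rsl  # +40.0: NII rises if rates rise, falls if rates fall
print(gap)
```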

4) There are several constituents of credit risk. Among them are default risk, exposure risk and recovery risk.

Default risk. The adequate quantitative measure of default risk is the probability of a default occurring. Default can
be defined in several ways: missing a payment obligation, breaking a covenant, entering a legal procedure or
economic default. The last occurs when the economic value of assets falls below the value of outstanding debts.
Credit rating agencies typically consider that default has occurred when a contractual payment has been missed for at
least three months. Default does not necessarily lead to immediate losses, but may increase the likelihood of
bankruptcy.

The probability of default depends on factors such as market outlook, company size, competitive factors, quality of
management, and shareholders. Common measures of the probability of default are either credit ratings or historical
statistics on defaults, which can be used as proxies for default risk. The Basle II Accord views the assigning of default
probabilities to borrowers as a requirement for implementing the Foundation and Advanced approaches, but not for
the Standardised approach.

Exposure risk. Exposure is the amount at risk in the event of default excluding recoveries. Since default occurs at an
unknown future date, the risk is generated by the uncertainty regarding future amounts at risk. The type of
commitment given by the bank to the borrower sets an upper limit on possible future exposures. For lines of credit
with a repayment schedule, exposure risk can be considered small. This is not true for all other lines of credit. The
borrower may draw on these lines of credit within a limit set by the bank, as borrowing needs arise.

Recovery risk. Recoveries in the event of default are unpredictable and depend on the type of default and the
guarantees received from the borrower. Recoveries require legal procedures, expenses and a significant lapse of time.
Typical sources of recoveries are through collateral, guarantees and covenants.

The existence of collateral reduces credit risk if it can be taken over and sold at significant value. Collateral can take
the form of cash, financial assets or fixed assets. The use of collateral to mitigate credit risk transforms this risk into a
recovery risk plus an asset value risk. Guarantees are contingencies given by third parties to banks. For example, the
parent within a group of companies may guarantee to honour the obligations of one of its subsidiaries in the event of
default.

Recovery risk depends on the type of default. If no corrective action can be considered, legal procedures take over and
all the borrower’s commitments will be suspended until these reach a conclusion. Recoveries will be delayed until the
end of this procedure and, in the extreme case, there may be no recoveries because the company is re-sold or
liquidated and no excess funds are available to repay an unsecured debt.
The Basle II Accord on banks’ capital requirements provides capital reductions for various forms of transactions that
reduce risk. Specifically, the Accord grants recognition of credit risk mitigation techniques, including collateral and
guarantees.

Expected loss. The expected loss is the product of the loss given default and the probability of default. The expected
loss (L) can thus be viewed as the product of a random variable characterising default (D, a percentage), an
uncertain exposure (X, a value), and an uncertain recovery rate (R, a percentage), where:
L = D × X × (1 – R)
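A worked example of the formula (all inputs hypothetical):

```python
# Expected loss L = D x X x (1 - R), with hypothetical inputs.
default_probability = 0.02  # D: probability of default over the horizon
exposure = 1_000_000        # X: amount at risk in the event of default
recovery_rate = 0.40        # R: fraction recovered (collateral, guarantees)

expected_loss = default_probability * exposure * (1 - recovery_rate)
print(expected_loss)  # 12000.0
```

A higher recovery rate (e.g. better collateral) directly lowers the expected loss, which is why the Accord grants capital relief for credit risk mitigation.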

5) There are several methods available to banks for managing these elements of credit risk.
Credit allocation decision. One aspect of credit risk management consists of the credit allocation decision, the
follow-up of credit commitments, monitoring and reporting processes. A bank’s limit system sets the maximum
amount at risk with borrowers, with the aim of limiting losses in the event of default. The bank’s capital sets a limit to
lending, given diversification considerations and/or credit policy guidelines. In a sophisticated risk management
system, the bank’s capital could be allocated to all credit lines.

Credit enhancement. Another aspect of credit risk management involves credit enhancement, which seeks to reduce
the amount of loss in the event of default through increasing the recovery rate (see Equation 4.1). The main tools of
credit enhancement are covenants and guarantees.

Loan sales. A bank loan sale occurs when a bank originates a loan and then sells it either with or without recourse to
an outside buyer. If a loan is sold without recourse, not only is it removed from the bank’s balance sheet but the bank
has no explicit liability if the loan eventually becomes bad (i.e. default occurs). Thus the buyer of the loan bears all the
credit risk.
