Risk Assessment and Pooling - Book 2


Chapter 4
Risk Assessment and Pooling

Insurable Loss Exposures

Definition: Risk assessment, also called underwriting, is the
methodology insurers use to evaluate and assess the risks associated
with an insurance policy. It helps in calculating the correct premium
for an insured.

Description: There are different kinds of risks associated with
insurance, such as changes in mortality rates, morbidity rates,
catastrophic risk, etc.

Insurable Loss Exposures
• Risk Assessment is the estimation of the financial
impact of each risk identified previously.
• Two key statistical measures:
– The frequency with which losses occur.
– Their severity.

Frequency-severity method: an actuarial method for determining the
expected number of claims an insurer will receive during a given
time period and how much the average claim will cost. The method
uses historical data to estimate the average number of claims and
the average cost of each claim, then multiplies the average number
of claims by the average cost of a claim to estimate expected losses.
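A minimal sketch of this calculation in Python; the historical claim
counts and average claim costs used here are illustrative assumptions,
not figures from the text:

```python
# Sketch of the frequency-severity method with made-up historical data.
historical_claim_counts = [110, 95, 120, 105, 100]       # claims per year
historical_avg_costs    = [1200, 950, 1100, 1000, 1050]  # average cost per claim ($)

avg_frequency = sum(historical_claim_counts) / len(historical_claim_counts)
avg_severity  = sum(historical_avg_costs) / len(historical_avg_costs)

# Expected losses = average number of claims x average cost per claim
expected_losses = avg_frequency * avg_severity

print(f"Average number of claims: {avg_frequency:.1f}")
print(f"Average cost per claim:   ${avg_severity:,.2f}")
print(f"Expected losses:          ${expected_losses:,.2f}")
```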

Basic Statistical Concepts - 1

As we all know, the way statistics works is that we use a sample to
learn about the population from which it was drawn. Ideally, the
sample should be random so that it represents the population well.
A probability distribution is a mathematical function that provides
the probabilities of occurrence of different possible outcomes in
an experiment. In more technical terms, the probability distribution
is a description of a random phenomenon in terms of the
probabilities of events.
For instance, if the random variable X is used to denote the outcome
of a coin toss ("the experiment"), then the probability distribution
of X would take the value 0.5 for X = heads and 0.5 for X = tails
(assuming the coin is fair). Examples of random phenomena can
include the results of an experiment or survey.

Basic Statistical Concepts - 1
The normal distribution, also known as the
Gaussian distribution, is a probability
distribution that is symmetric about the mean,
showing that data near the mean are more
frequent in occurrence than data far from the
mean.

The arithmetic mean is the average of the numbers: a calculated
"central" value of a set of numbers.
Basic Statistical Concepts - 1

The standard deviation is a statistic that measures the dispersion of a
dataset relative to its mean and is calculated as the square root of the
variance. Because it captures how far each data point lies from the
mean, it serves as a quantified measure of risk.
In simple terms, the standard deviation is an indicator of how widely
values in a group differ from the mean.
Variance (σ²) is a measurement of the spread between numbers in a
data set. It measures how far each number in the set is from the mean
and is calculated by taking the differences between each number in the
set and the mean, squaring the differences (to make them positive), and
dividing the sum of the squares by the number of values in the set.
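A minimal sketch of these calculations; the loss amounts are
illustrative, not data from the text:

```python
# Sketch: mean, population variance, and standard deviation of a small
# illustrative set of loss amounts.
import math

losses = [200, 350, 500, 650, 800]   # loss amounts in dollars (assumed)

mean = sum(losses) / len(losses)
# Variance: average of the squared deviations from the mean
variance = sum((x - mean) ** 2 for x in losses) / len(losses)
std_dev = math.sqrt(variance)

print(f"Mean: {mean}, Variance: {variance}, Standard deviation: {std_dev:.2f}")
```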

Basic Statistical Concepts - 1

• Random Variable: its future value is not known with certainty.
• Probability Distribution: shows all possible outcomes for a
Random Variable.

The pooled standard deviation is a method for estimating a single
standard deviation to represent all independent samples or groups
in your study when they are assumed to come from populations
with a common standard deviation. The pooled standard deviation
is the average spread of all data points about their group mean
(not the overall mean).
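A minimal sketch of the pooled standard deviation, weighting each
group's sample variance by its degrees of freedom; the group data are
illustrative assumptions:

```python
# Sketch: pooled standard deviation across independent groups assumed to
# share a common population standard deviation.
import math
from statistics import variance   # sample variance (n - 1 in the denominator)

groups = [                          # made-up sample data
    [10, 12, 9, 11, 13],
    [8, 9, 10, 11],
    [12, 14, 13, 15, 14, 12],
]

numerator   = sum((len(g) - 1) * variance(g) for g in groups)
denominator = sum(len(g) - 1 for g in groups)
pooled_sd = math.sqrt(numerator / denominator)

print(f"Pooled standard deviation: {pooled_sd:.3f}")
```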
Basic Statistical Concepts - 2

• Expected Value: the sum of the multiplication of each possible
outcome of the variable with its probability.
E[R] = Σ Ri * Pi
• Variance and Standard Deviation:
σ² = Σ (Ri - E[R])² * Pi  (sum over i = 1 to N)
σ = √σ²

The Expected Value

• Can be calculated by multiplying each possible loss by its
probability and summing over all outcomes.
• Is a starting point for calculating an insurance premium or how
much a firm should set aside each year to cover losses.

The Expected Value
Calculate the Expected Value of the following
Probability Distribution:
Loss Outcome Probability
-3 10%
-5 35%
-6 55%
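
Worked solution (applying E[R] = Σ Ri * Pi):
E[Loss] = (-3)(0.10) + (-5)(0.35) + (-6)(0.55) = -0.30 - 1.75 - 3.30 = -5.35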

Average Loss

• Estimating Loss Frequency (= Total Number of Accidents divided
by Total Number of Units Analyzed) and Loss Severity (= Total
Amount of Losses divided by Total Number of Accidents).
• Average Loss = Average Loss Frequency multiplied by Average
Loss Severity.

Average Loss

If the Average Loss Severity is $1,150 and the Average Loss
Frequency is 0.12, what is the Average Loss?
A) $9,583.33
B) $104.35
C) $138
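
Worked solution: Average Loss = 0.12 × $1,150 = $138, so the answer is C.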

Convolution
• Calculates all possible combinations of losses indicated by the
frequency and severity loss distributions, as well as their
corresponding probabilities of occurring. It uses joint
probabilities, since it calculates the likelihood of two events
occurring together at the same point in time.
• The total probability of all loss combinations has to add up to 1.
• The resulting loss combinations are expressed in dollars.
• Often done by computer simulation due to the complexity of the
calculations; a small sketch follows below.
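A minimal sketch of this convolution for small discrete distributions;
the frequency and severity probabilities below are illustrative
assumptions, not figures from the text:

```python
# Sketch: convolving an illustrative claim-count (frequency) distribution
# with an illustrative claim-size (severity) distribution to obtain the
# aggregate dollar-loss distribution.
from collections import defaultdict

frequency = {0: 0.60, 1: 0.30, 2: 0.10}   # P(number of claims = n), assumed
severity  = {1000: 0.70, 2000: 0.30}      # P(a single claim costs x dollars), assumed

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent discrete variables."""
    out = defaultdict(float)
    for a, pa in dist_a.items():
        for b, pb in dist_b.items():
            out[a + b] += pa * pb
    return dict(out)

aggregate = defaultdict(float)
n_fold = {0: 1.0}                          # with zero claims, total loss is $0
for n in range(max(frequency) + 1):        # claim counts 0, 1, 2, ...
    if n > 0:
        n_fold = convolve(n_fold, severity)   # severity convolved n times
    for total, p in n_fold.items():
        aggregate[total] += frequency.get(n, 0.0) * p

print(dict(aggregate))                     # dollar losses and their probabilities
print(sum(aggregate.values()))             # total probability = 1.0
```

As the bullet above requires, the probabilities of all loss combinations
printed by this sketch sum to 1.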

Risk Pooling

• Risk can be reduced through diversification.
• The creation of a pool of many (exposure) units helps the insurer
to better predict the pool's average loss per unit, i.e., it reduces
the standard deviation of the distribution of average losses (see
the sketch below).
• The Probability Distribution matters!
• Pooling works only when certain assumptions are met (e.g.,
independent, similar exposure units).
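A minimal sketch of the pooling effect, assuming independent,
identically distributed exposure units and an illustrative per-unit
standard deviation:

```python
# Sketch: for n independent, identically distributed exposure units, the
# standard deviation of the pool's average loss is sigma / sqrt(n).
import math

sigma_per_unit = 1000   # standard deviation of a single unit's loss ($), assumed

for n in (1, 10, 100, 10000):
    sd_of_average = sigma_per_unit / math.sqrt(n)
    print(f"pool size {n:>6}: std dev of average loss = ${sd_of_average:,.2f}")
```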

Normal Probability Distribution
“Bell Curve”:

A bell curve is another name for a normal distribution curve
(sometimes just shortened to "normal curve") or Gaussian
distribution. The name comes from the fact that it looks bell-shaped.
Characteristics of Bell Curves / Normal Curves:
- The mean (average) is always in the center of a bell curve or
normal curve.
- A bell curve / normal curve has only one mode, or peak. Mode here
means "peak"; a curve with one peak is uni-modal, two peaks is
bimodal, and so on.
- A bell curve / normal curve has predictable standard deviations that
follow the 68-95-99.7 rule: about 68% of the data lie within one
standard deviation of the mean, 95% within two, and 99.7% within
three (see the sketch below).
- A bell curve / normal curve is symmetric. Exactly half of the data
points are to the left of the mean and exactly half are to the right
of the mean.
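A small check of the 68-95-99.7 rule, computed from the normal CDF via
the error function:

```python
# Sketch: probability that a normally distributed value falls within
# k standard deviations of the mean.
import math

def prob_within(k):
    """P(|X - mean| <= k * sigma) for a normal distribution."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"within {k} standard deviation(s): {prob_within(k):.4f}")
# Prints roughly 0.6827, 0.9545, 0.9973 (the 68-95-99.7 rule).
```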
Normal Probability Distribution
• "Bell Curve": [figure of the bell-shaped normal curve omitted]

Normal Probability Distribution
• “Bell Curve”: Example

Example: in a group of 100 individuals, 10 may be below 5 feet
tall, 65 may stand between 5 and 5.5 feet, and 25 may be above
5.5 feet. This range-bound distribution plots as a roughly
bell-shaped curve (chart omitted).

Confidence Interval - 1
• A confidence interval gives an estimated range of
values which is likely to include an unknown
population parameter, the estimated range being
calculated from a given set of sample data. 
• Assuming Normal Distribution:
Estimated Mean Loss ± (k) * Estimated σ
• Where:
– (k) = the specified number of standard deviations, which
reflects the uncertainty.
– σ = the Standard Deviation calculated from past loss data.
• This is the Confidence Interval.

Confidence Interval - 2

• (k) * Estimated σ is also called the Risk Charge.
• It represents the margin of error (see the sketch below).
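A minimal numeric sketch of the confidence interval and risk charge
above, assuming a normal loss distribution and illustrative figures:

```python
# Sketch: confidence interval = estimated mean loss +/- k * estimated sigma,
# with k * sigma as the risk charge (margin of error).
expected_loss = 50000    # estimated mean loss ($), an assumption
sigma = 8000             # estimated standard deviation of losses ($), an assumption
k = 1.96                 # about 95% confidence under a normal distribution

risk_charge = k * sigma
low, high = expected_loss - risk_charge, expected_loss + risk_charge

print(f"Risk charge (margin of error): ${risk_charge:,.0f}")
print(f"~95% confidence interval: ${low:,.0f} to ${high:,.0f}")
```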

Practical Considerations

• Insurers sort consumers into homogeneous categories:
– Age.
– Gender.
– Etc.
• Exposure units must still be independent of each other, i.e., the
occurrence of one loss makes it neither more nor less probable
that another occurs.
• Insurers will not insure when these assumptions are violated.

The pooling of risk is fundamental to the concept of insurance. A
health insurance risk pool is a group of individuals whose medical
costs are combined to calculate premiums. Pooling risks together
allows the higher costs of the less healthy to be offset by the
relatively lower costs of the healthy, either in a plan overall or
within a premium rating category. In general, the larger the risk
pool, the more predictable and stable the premiums can be.
