
RISK TIP #2 – HOW DO WE MEASURE CONTROL EFFECTIVENESS?
Measuring control effectiveness is difficult for many organisations (if not most). What worries me is how often I come across the ‘guesswork’ that goes into measuring control effectiveness when what’s actually needed is evidence to prove the controls in place are right for the resources, budget and risk.

What I find fascinating is that the majority of risk management databases on the market that I have reviewed provide a free text field to list the controls and, usually, a free text field to provide an assessment of effectiveness. So, for instance, a company may list six or more controls but have no meaningful way of assessing the effectiveness of each of those controls individually, preferring instead to provide one effectiveness rating that covers all of them. Given that the assessments of likelihood and consequence are made with due regard to the effectiveness of the controls, this blanket approach to assessing effectiveness may lead to flawed assessments of the risk level.

In order to answer the question of how we measure control effectiveness, it is worthwhile to go back to the start and define what a control is.

The humble control


Essentially, a control is something that is currently in place to reduce risk within an organisation and/or an industry. Controls have often been introduced as a result of a previous situation or incident. Note that, in many cases, these situations or incidents arise not because of a lack of controls, but because of a failure of existing controls. So the real key to managing risk effectively is to ensure that our controls are effective.

There are three key categories for controls:

Preventative – controls that aim to reduce the likelihood of a situation occurring, for example,
policies and procedures, approvals, authorisations, police checks and training;
Detective – controls that aim to identify failures in the current control environment, for example,
reviews of performance, reconciliations, audits and investigations; and
Corrective – controls that aim to reduce the consequence and/or rectify a failure after it has been
discovered, for example, crisis management plans, business continuity plans, insurance and disaster
recovery plans.

As you can see, controls are absolutely crucial in the management of risk.

So, are all controls made equal?


The answer to this question is, obviously, no. What separates the controls that are critical from those that are far less important is the consequence of the risk should it occur.

There are risks within any organisation that, if they were to materialise as an incident, would have serious implications for the ongoing viability and survivability of the organisation (the severe/catastrophic consequences in the consequence matrix). These must be prevented.

The way we do this is through the controls we have in place.

Let’s say we are working at a major hardware chain in the plant import and distribution part of the business. The identified risk is:

release of a plant-borne biohazard (including fauna) into the community

There are a range of controls including (but not limited to):

fumigation at the country of origin and the country of receipt;
handling protocols;
inspection protocols;
training of staff;
supervision of staff;
sufficient breaks to maintain concentration;
… and the list goes on

The key here is that we want to make sure these controls are as effective as possible, as any failure of one or more of them could lead to the risk materialising. These are critical controls and, therefore, must be the subject of assurance.

The relationship is shown in the diagram below:


The next step – control criticality
It needs to be recognised that not all of the controls associated with a Severe consequence risk will have the same impact in reducing or maintaining the level of the risk. If all of the controls associated with high consequence risks are treated the same, we may commit more resources than are necessary to the assurance function. To that end, assigning a criticality to each of the controls will assist in prioritising our audit program.

Here’s a way it can be done:

Criticality 5 – The control is absolutely critical to the management and reduction of the risk. If this control is ineffective or partially effective, the likelihood and/or consequence of the risk will increase significantly (i.e. increases likelihood or consequence by 3 or more levels).

Criticality 4 – The control is very important to the management and reduction of the risk. If this control is ineffective or partially effective, the likelihood and/or consequence of the risk will increase (i.e. increases likelihood or consequence by 2 levels).

Criticality 3 – The control is important to the management and reduction of the risk. If this control is ineffective or partially effective, the likelihood and/or consequence of the risk will increase (i.e. increases likelihood or consequence by 1 level).

Criticality 2 – The control has some impact on the management and reduction of the risk. Depending on the criticality of the other controls, an analysis should be undertaken to determine the necessity of this control.

Criticality 1 – The control has little to no impact on the management and reduction of the risk. It is unlikely this control is required.
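
If you keep your risk register in a database or spreadsheet, the scale above is easy to make mechanical. The following is a minimal sketch only (the function name, thresholds and example ratings are my own assumptions, not something prescribed in this article) that assigns a criticality rating from an estimate of how many levels the likelihood or consequence of the risk would rise if the control failed:

```python
def control_criticality(levels_increase_if_failed: int, some_impact: bool = True) -> int:
    """Map the estimated rise in likelihood/consequence (in matrix levels)
    to the 1-5 criticality scale described in the table above."""
    if levels_increase_if_failed >= 3:
        return 5  # absolutely critical
    if levels_increase_if_failed == 2:
        return 4  # very important
    if levels_increase_if_failed == 1:
        return 3  # important
    # No shift in level: 2 if the control still has some impact, otherwise 1
    return 2 if some_impact else 1


# Hypothetical ratings for the plant-import controls listed earlier.
controls = {
    "fumigation at origin and receipt": control_criticality(3),
    "inspection protocols": control_criticality(2),
    "sufficient breaks to maintain concentration": control_criticality(0),
}
# Sorting by criticality gives a first cut at the audit priority list.
print(sorted(controls.items(), key=lambda item: item[1], reverse=True))
```

The sorted list is, of course, only an input to the prioritisation matrix discussed later in this tip.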

Having identified the controls against the risks with the highest level of consequence, and then assessed them for their criticality, we now have a list of controls associated with those risks that not only need to be effective, but require evidence of effectiveness.

But the biggest question – what does effective look like?
Organisations continue to find it difficult to assess the true effectiveness of their controls. Some will use Control Self Assessments (CSAs); however, it is very rare that control effectiveness is measured against performance measures developed specifically for the control.

Let’s use an example of a wharf where the risk has been identified as: Catastrophic material failure of infrastructure. We have identified multiple causes, the first of which is: Lack of/ineffective maintenance. Against this cause, we identify the following controls and associated performance measures:

Preventative/routine maintenance program
– % of routine maintenance tasks carried out in accordance with designated timeframes

Inspections
– % of maintenance inspections carried out in accordance with designated timeframes
– % of issues identified during inspections that are rectified within specified timeframes

For the first control – the preventative/routine maintenance program – the following performance measures are developed:

Effective – 100% of routine maintenance tasks conducted within designated timeframes
Mostly Effective – 80-99% of routine maintenance tasks conducted within designated timeframes
Partially Effective – 50-79% of routine maintenance tasks conducted within designated timeframes
Not Effective – <50% of routine maintenance tasks conducted within designated timeframes
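
As a quick illustration of how these bands could be applied (a minimal sketch, not part of the original article; the function name and the sample figure are assumptions):

```python
def rate_single_measure(pct_on_time: float) -> str:
    """Map '% of routine maintenance tasks conducted within designated
    timeframes' to the effectiveness bands defined above."""
    if pct_on_time >= 100:
        return "Effective"
    if pct_on_time >= 80:
        return "Mostly Effective"
    if pct_on_time >= 50:
        return "Partially Effective"
    return "Not Effective"


# e.g. 87% of scheduled maintenance tasks completed on time last quarter (hypothetical)
print(rate_single_measure(87))  # -> Mostly Effective
```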

For the second control – inspections – the following performance measures are developed:

Effective – 100% of maintenance inspections conducted within designated timeframes and 100% of issues identified during inspections are rectified within specified timeframes
Mostly Effective – 80-99% of maintenance inspections conducted within designated timeframes and 80-99% of issues identified during inspections are rectified within specified timeframes
Partially Effective – 50-79% of maintenance inspections conducted within designated timeframes and anything less than 79% of issues identified during inspections are rectified within specified timeframes
Not Effective – <50% of maintenance inspections conducted within designated timeframes and anything less than 70% of issues identified during inspections are rectified within specified timeframes
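
One way to read the combined bands for inspections is ‘rate each measure separately and report the weaker of the two’. That reading is my own, not a rule stated above, but a sketch of it (reusing rate_single_measure from the previous example) looks like this:

```python
RATING_ORDER = ["Not Effective", "Partially Effective", "Mostly Effective", "Effective"]


def rate_inspections(pct_inspections_on_time: float, pct_issues_rectified: float) -> str:
    """Rate each measure on the single-measure scale, then report the weaker result."""
    ratings = [rate_single_measure(pct_inspections_on_time),
               rate_single_measure(pct_issues_rectified)]
    return min(ratings, key=RATING_ORDER.index)


# Hypothetical quarter: 95% of inspections on time, but only 72% of issues rectified on time
print(rate_inspections(95, 72))  # -> Partially Effective
```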

Provided we actually undertake the measurement of the controls, we can now provide evidence of effectiveness to management. In doing so, we are providing them with assurance that the risks with the most significant consequences to the organisation, should they materialise, are being effectively controlled. This level of assurance cannot be provided when control effectiveness is guessed rather than assessed.

The three lines of defence of control assurance


Obviously, we don’t need the same level of assurance for all controls – the resource impost would be significant, for questionable value. To that end, we need a methodology that separates those controls that require further assurance from those where Control Self Assessments are sufficient.

The first step in this process is to prioritise the controls. The following matrix can assist:

Severe consequence risk, control criticality 3, 4 or 5 – Assurance Priority 1. It is critical that these controls are effective; therefore, they are the number one priority for the organisation’s audit program. Where possible, external auditing should be utilised to provide further assurance.

Major consequence risk, control criticality 4 or 5 – Assurance Priority 2. It is important that these controls are effective; therefore, they are a significant priority for the organisation’s audit program. Where possible, external auditing should be utilised to provide further assurance for the criticality 5 controls.

Moderate consequence risk, control criticality 5 – Assurance Priority 3. It is relatively important that these controls are effective. There is no requirement for external auditing.
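
Continuing the same sketch (again an assumption about how you might encode it, using the consequence labels and the 1–5 criticality scale from this article), the matrix reduces to a few conditions:

```python
def assurance_priority(consequence: str, criticality: int):
    """Apply the prioritisation matrix above. Returns 1, 2 or 3, or None
    where the control does not attract an assurance priority."""
    if consequence == "Severe" and criticality >= 3:
        return 1
    if consequence == "Major" and criticality >= 4:
        return 2
    if consequence == "Moderate" and criticality == 5:
        return 3
    return None


print(assurance_priority("Severe", 4))    # -> 1
print(assurance_priority("Moderate", 3))  # -> None (self-assessment only)
```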
Once prioritised, we can use the following three lines of defence model to assist in developing our audit program:

1st Line of Defence – Control self-assessment. This involves the control owner making a judgement in relation to the effectiveness of the control. It involves documenting the organisation’s control processes with the aim of identifying suitable ways of measuring or testing each control. The actual testing of the controls is performed by staff whose day-to-day role is within the area of the organisation that is being examined, as they have the greatest knowledge of how the processes operate.

2nd Line of Defence – Line management review of specific controls, or internal audit of controls against designated performance measures and key performance indicators. The primary focus is on controls against risks with the highest level of consequence. Outcomes of control audits are provided to the control owner. If there is any change to effectiveness, the risk owner needs to be informed, as this may change the level of the risk. Internal audit provides follow up to ensure that control improvements have been implemented within specified timeframes.

3rd Line of Defence – External audit conducted on specific controls. Once again, the focus needs to be on controls against risks with the highest level of consequence. Outcomes of audit findings are reported to the internal audit function for inclusion in reports to appropriate governance committees.

Lines of defence to be engaged for each assurance priority:

Priority 1 – control self-assessment, internal audit and external audit.
Priority 2 – control self-assessment, internal audit and external audit.
Priority 3 – control self-assessment, line management review and internal audit.
All other controls – control self-assessment only.
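
Put together with the assurance_priority sketch above (and again only as an illustration, not a prescribed implementation), the engagement table becomes a simple lookup:

```python
ENGAGEMENT = {
    1: ["control self-assessment", "internal audit", "external audit"],
    2: ["control self-assessment", "internal audit", "external audit"],
    3: ["control self-assessment", "line management review", "internal audit"],
    None: ["control self-assessment"],  # all other controls
}

priority = assurance_priority("Major", 5)  # from the earlier sketch -> 2
print(ENGAGEMENT[priority])                # -> the lines of defence to engage
```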


At the completion of this process we have a prioritised list of controls that informs both our internal and external audit programs – a true risk-based auditing approach.

Conclusion
To truly manage risk, rather than ‘doing risk management’, it is essential that the control environment
within the organisation is effective. We can ill afford, however, to simply estimate (or guess) whether
our controls are effective.

So, what do we need to have in place in order to provide the level of assurance to management that the
controls relating to their highest consequence risks are effective? For each control we need to identify:

A control owner
Performance measures and KPIs
What effective looks like
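
As a closing illustration, those three elements can be captured as one record per critical control. This is a minimal sketch with field names and an example owner of my own choosing, not a structure prescribed in this article:

```python
from dataclasses import dataclass


@dataclass
class ControlRecord:
    """One record per critical control, capturing the three elements above."""
    name: str
    owner: str                       # the control owner
    performance_measures: list[str]  # the measures/KPIs the control is tested against
    effective_looks_like: str        # the threshold that counts as 'Effective'


wharf_maintenance = ControlRecord(
    name="Preventative/routine maintenance program",
    owner="Asset Maintenance Manager",  # hypothetical owner
    performance_measures=[
        "% of routine maintenance tasks carried out within designated timeframes",
    ],
    effective_looks_like="100% of routine maintenance tasks conducted within designated timeframes",
)
print(wharf_maintenance.owner)
```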

Put simply, if you can’t prove your controls are effective, you are not managing your risks.

