RISK ANALYSIS AND STATISTICAL SAMPLING IN AUDIT

A brief overview of theory, assumptions and existing methodology

1. The risk model


Making an audit assertion with absolute certainty would be vastly expensive. There would
always be some risk that audit fails to discover all material errors, even when 100% of the
transactions are audited. Recognizing this, the auditor defines an audit risk that he is willing
to accept or, conversely, the assurance that he desires to provide that his audit assertions/
opinions are correct. This risk (or assurance) is usually defined as a matter of Departmental
policy. Using this assurance as input, it is possible to define a sample, using statistical
sampling methods, on which audit tests are carried out; the results of those tests can then be
projected to the entire population. This approach prescribes a uniform audit scrutiny for all
transactions in the population. However, all transactions are not equally risky, and treating
them as such will mean higher costs of audit in less risky transactions on the one hand and
the threat that risky transactions will not be detected on the other.
The risk model is an analytical tool for planning and execution. This approach detects high-
risk areas where audit effort can be concentrated. Audit can thus focus on areas which are
likely to generate better assurance instead of sampling and testing of larger but low risk areas.
It structures the audit procedures and reorganises the audit work in terms of risk perception.
Theory and Assumptions:
In any risk assessment the auditor has to consider three kinds of risks:
1. Control Risk (CR) which is determined by the efficacy of internal control environment in
the auditee organization;
2. Inherent Risk (IR) which is determined by the susceptibility of the classes of
transactions to be audited to material misstatement, irrespective of the related internal
controls in the organization; and
3. Detection Risk (DR) which is the risk that auditor’s substantive tests do not detect a
material misstatement in the transactions audited by him. This is again independent of the
other two risks.
Since all the three risks are independent, Overall Audit Risk (OAR) is defined as:
OAR = IR X CR X DR……………………………..(1)
In other words, OAR, the overall audit risk acceptable to the auditor, depends on: IR, the
inherent risk, i.e. the risk that an error will occur in the first place; CR, the control risk, i.e.
the risk that internal controls will fail to detect the error; and DR, the detection risk, i.e. the
risk that the audit procedures will fail to detect the error. The underlying assumption is that
the individual risks are independent of each other.
The overall audit risk is defined by the audit institution and hence is a constant pre-
determined quantity. The objective for the auditor is to first assess inherent and control risks
in the entity, and then to design and perform appropriate compliance and substantive
procedures that provide sufficient assurance that the product of the risks identified is less than
or equal to the overall audit risk that the auditor is willing to accept. If the inherent risk and
control risk are low, the auditor will be required to provide less assurance from substantive
tests, while if the inherent risk and control risk are high, the amount of assurance required
from substantive audit tests will also be high.
In the risk model, thus, the auditor assesses the inherent risk and control risk and solves the
equation for detection risk. The detection risk (DR) is actually a combination of two risks;
analytical procedures risk (AP) which is the risk that analytical procedures will fail to detect
material errors and tests of detail risk (TD) which is the risk that detailed test procedures will
fail to detect the material errors. These two risks are again considered independent and thus a
multiplicative model is suggested as follows:
DR = AP X TD…………………………………………..(2)
OAR = IR X CR X AP X TD………………………… (3)
The auditors exercise professional judgment in assessing the IR, CR and AP, and then solve
the model to arrive at the test of details risk (TD).
Detection Risk thus is closely related to the confidence that the auditor wishes to obtain from
his substantive testing procedures. If he wants to increase his level of confidence, he has to
make his detection risk low, and then more transactions and balances need to be tested
substantively than where the detection risk can be allowed to be high (because control risk is
low, for instance). Thus Confidence Level can be defined as (100% - Detection Risk). Thus if
detection risk is 10% and transactions and balances are selected on that basis, the confidence
level will be 90% which means that the auditor wishes to be 90% confident that the sample of
transactions or balances used in substantive testing will be representative of the total of such
transactions and balances. The only risk that the auditor has under his control is the Detection
Risk and he will have to ensure that it is kept low.
2. Materiality and Audit Risk
While risk is concerned with the likelihood of error, materiality deals with the extent to which
we can tolerate error. Materiality relates to the maximum possible misstatements/ error. The
auditor needs to do just enough work to conclude that the maximum possible misstatement/
error at the desired level of assurance is less than the materiality. Materiality is determined
from the user's point of view, and is independent of the overall audit assurance (risk). While
making materiality judgements three main factors are considered: the value of the error, the
nature of the error and the context in which the transaction has occurred. It is normally sufficient
to determine a single materiality level for different components/ areas of audit, but one can
determine different materiality levels for different sets of transactions also. The auditor is
concerned only with material errors. Risk assessment will thus focus on the likelihood of
material error. To use the risk model, the auditor has thus to specify the materiality level
along with the overall assurance required from the audit. While the IR and CR together will
tell the auditor about the expected error rate in the population, materiality will tell him about
the tolerable error rate in the population from his point of view.
3. Assessment of Different Kinds of Risks
3.1 Assessment of Inherent Risk
Inherent risk assesses the nature, complexity, and volume of the activities and transactions
that give rise to the possibility of error occurring in the first place. Such risks are generally
inherent to these activities or sets of transactions. For example, suspense transactions will
naturally have a higher inherent risk than the transactions under the heads Salary and
Allowances. The assessment of inherent risk factors would, to a large extent, be based on the
knowledge and understanding of the business of the auditee based on our experience from
previous audits and identification of events as well as from transactions and practices which
may have a significant impact on the audit areas. The major factors that can be considered
for assessment of inherent risk in a financial (certification) audit are listed in Annexure
A. Different audits will have a different set of risk parameters for assessment of inherent
risk.
Inherent risk has to be assessed for each audit assertion/ opinion. Inherent risk factors
impacting the audit assertion need to be documented. The risk associated with each
individual factor is then assessed as high, moderate or low. The assessment is then
consolidated for an overall assessment of inherent risk. It is possible to assign numerical values
to the risks assessed, or the assessment can be done qualitatively in terms of high, moderate
and low.
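One possible way of consolidating per-factor ratings may be sketched as follows; the 1/2/3 scores and the cut-offs are purely illustrative assumptions, not a prescribed methodology:

```python
# Hypothetical scheme: map qualitative per-factor inherent risk ratings
# to scores, average them, and map back to an overall rating.
# The scores and cut-offs are illustrative assumptions.

SCORES = {"low": 1, "moderate": 2, "high": 3}

def overall_inherent_risk(ratings):
    avg = sum(SCORES[r] for r in ratings) / len(ratings)
    if avg < 1.5:
        return "low"
    if avg < 2.5:
        return "moderate"
    return "high"

# per-factor assessments documented for one audit assertion
factors = ["high", "moderate", "high", "low", "high"]
print(overall_inherent_risk(factors))  # prints "moderate" (average score 2.4)
```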
3.2 Assessment of Control Risk
Control risk assesses the adequacy of the policies and procedures in the auditee organization
for detecting material error for identified functions or activities. For assessing the control
risk, the auditor considers both the control environment and control systems together.
Techniques used to evaluate internal control are narrative descriptions, questionnaires, check
lists, flow charts, inspection, inquiries, observation and re-performance of internal controls.
The factors that can be considered for assessment of control environment and control
systems in a financial (certification) audit are listed in Annexure B. Different kinds of
audit will have a different set of control factors to be considered.
The auditor evaluates the control environment and systems (both manual and computerised)
and places appropriate reliance on them. This evaluation is based on the preliminary systems
examination and is designed to assess whether the activities undertaken by the audited body
are in accordance with the statutory or other authorities, whether the audited body's structure
is likely to ensure adequate internal control, the adequacy of general financial controls,
whether the employees in areas critical to internal controls are competent and whether there
are adequate other general controls in areas relevant to audit. The control risk is then
assessed and expressed either in numerical (percentage terms) or qualitative (high, medium,
low) terms.
3.3 Assessment of Detection Risk
Having assessed the inherent and control risks, the risk equation can be solved for detection
risk, i.e. the assurance required from audit procedures. An assurance about the transactions
being audited is required from the audit procedures. An assurance guide is placed at
annexure C where the required assurance from substantive audit tests can be found.
This assurance level will be used as input in determining the sample size on which the
audit tests need to be performed to arrive at the required overall assurance.
4 Risk Assessment and Stratification of Population
Given the level of assurance required from audit testing of an area and the materiality of
errors associated, audit processes are well-defined. A high likelihood of error in an audit
area which requires a high level of assurance from the audit test, along with a high significance,
would, for example, make the area a critical concern for audit, and one may decide to conduct
a 100% check on these kinds of areas. Based on the perception of risk and the materiality
along with the value of the set of transactions, the population is stratified. Each stratum of the
population will require a different level of substantive audit checks. The high risk, high
materiality items will be subjected to a higher level of substantive audit test, while an area
with lower materiality may be tested through analytical methods or test of controls and lesser
substantive tests.
As a rule it is prudent to examine all transactions that are individually material. The
conclusions which can be drawn from a test of items selected on a high value basis will only
relate to these items and provide better assurance to the auditor. Similarly, there could be key
items which are especially prone to error or other risks, or those which merit special
attention. The auditor may wish to examine these items 100% when forming an audit
opinion.
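The stratification described above may be sketched as follows; the materiality threshold and transaction amounts are illustrative assumptions:

```python
# Sketch: stratify a population of transaction amounts. Items at or above
# the materiality threshold are examined 100%; the remainder forms the
# stratum for sample-based substantive testing. Figures are illustrative.
import random

materiality = 10_000
amounts = [250, 14_500, 980, 22_000, 4_300, 7_900, 11_050, 640]

certain_stratum = [a for a in amounts if a >= materiality]   # 100% check
sampling_stratum = [a for a in amounts if a < materiality]   # sample these

random.seed(1)
sample = random.sample(sampling_stratum, k=3)  # simple random sample of 3
print(certain_stratum, sample)
```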
5 Statistical sampling
Sampling means testing less than 100% of the items in the population for some characteristic
and then drawing a conclusion about that characteristic for the entire population.
Traditionally, auditors use 'test check' (judgmental sampling or non-statistical sampling)
approach. This means checking a pre-determined proportion of the transactions on the basis
of the auditor's judgment. This sampling technique can be effective if properly designed.
Hitherto audit has been based almost entirely on such judgmental sampling, which, as has
been proved time and again, has been extremely effective in detecting errors. However, it
does not have the ability to measure and integrate risk model in the audit procedure and the
audit observations have to be limited only to the transactions actually checked in audit; audit
conclusions so arrived cannot be projected over the entire population. For example, while
certifying accounts balances or receipts of the Government, we cannot state the extent to
which the accounts balances as depicted in the financial statements are reliable; all we can
say is that they are subject to the audit observations made on the basis of test check of
transactions. Thus the possibility remains that in his judgment the auditor may err and fail to
check transactions containing substantial errors or misstatements.
For statistical sampling techniques, there is a measurable and quantifiable relationship
between the size of sample and the degree of risk. Statistical sampling procedure uses the
laws of probability and provides a measurable degree of sampling risk. Accepting this level
of risk (or conversely at a definite assurance or ‘confidence’ level) the auditor can state his
conclusions for the entire population. In sum, statistical sampling may provide much greater
objectivity in sample selection as well as in the audit conclusion, besides enabling the auditor
to express an opinion concerning the entire population.
The basic hypotheses of statistical sampling theory are:
(a) The population is a homogeneous group.
(b) There is no bias in the selection of items of the sample. All items of the population have
equal chance of being selected in the sample, in other words, the sample is representative
of the population and retains all its characteristics.
6 Attribute Sampling, Variable Sampling and Monetary Unit Sampling
Different statistical sampling methods may be used in different auditing situations. The
auditor may wish to estimate how many departures have occurred from the prescribed
procedures; or estimate a quantity, e.g. the value (amount) of errors in the population. Based
on whether the audit objective is to determine a qualitative characteristic or a quantitative
estimate of the population, the sampling is called an attribute or variable sampling.
Attribute sampling estimates the proportion of items in a population having a certain
attribute or characteristic. In an audit situation, attribute sampling would estimate the
existence or otherwise of an error. Attribute sampling would be used when drawing
assurance that prescribed procedures are being followed properly. For example, attribute
sampling may be used to derive assurance that procedures for classification of vouchers have
been followed properly. Here, the auditor estimates through attribute sampling the
percentage of error (vouchers that have been misclassified) and sets an upper limit of error
that he is willing to accept and still be assured that the systems are in place. It is thus
obvious that attribute sampling can only be used in assessment of control risk, where
the attribute is whether a specific control has been applied and there can only be two
answers to this: Yes or No.
Variables sampling estimates a quantity, e.g. amount of sundry debtors shown in the balance
sheet or the underassessment in a tax circle. Variables sampling involves complex procedures
and has certain drawbacks.
Monetary Unit Sampling provides quantitative results and is suited to most audit situations,
but theoretically it can be used in low-level error situations with a relatively small population,
where there are no negative or zero balances. In this sampling method, which is a variation of
‘PPS’ or ‘Probability Proportional to Size’ sampling methods, the sampling units are defined
not as individual transactions but as the progressive cumulative totals of all transactions
which constitute the population. Since it takes monetary values as sampling units, the high
value items tend to get more weight and therefore greater probability of getting picked up in
any random selection, since the probability of selection becomes proportional to the values of
transactions.
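The selection mechanics of Monetary Unit Sampling may be sketched as follows; the transaction values and sample size are illustrative assumptions:

```python
# Sketch of Monetary Unit Sampling: sampling units are the cumulative
# monetary totals, so a transaction's chance of selection is proportional
# to its value. Amounts and sample size are illustrative.
import random

amounts = [1200, 50, 8000, 300, 2450, 90, 5600, 700]   # transaction values
n = 3                                                  # sample size
total = sum(amounts)
interval = total // n                                  # sampling interval

random.seed(7)
start = random.randrange(1, interval + 1)              # random start
hits = [start + i * interval for i in range(n)]        # monetary units hit

selected, cumulative = [], 0
for idx, amt in enumerate(amounts):
    lo, cumulative = cumulative, cumulative + amt
    # a transaction is selected if any hit falls within its cumulative span
    if any(lo < h <= cumulative for h in hits):
        selected.append(idx)
print(selected)
```

Note that any item larger than the sampling interval (here the 8000 transaction) is certain to be selected, which is precisely the weighting towards high-value items described above.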
7 Sampling Methods
There are different ways in which a statistical sampling can be selected. A simple random
sampling ensures that every member of the population has an equal chance of selection.
Though simple to administer, the underlying assumption is that the population is
homogeneous. Instead of a simple random sampling, the auditor can choose to have a
systematic random sampling in which the first member of the sample is determined by
random sampling, while the others are picked up at fixed intervals from that first member. A
similar procedure was earlier used in central auditing of accounts vouchers. In cases where
the population is non-homogeneous, a stratified sampling would be a better option. Here the
population is sub-divided into homogeneous groups and then a random sampling is done on
the groups, ensuring a better representative sample. In the cell sampling method, the
population is divided into a number of cells and one item is selected from each cell randomly.
This method overcomes the drawback of systematic sampling when fixed numbers are given
to various categories, but retains the advantage of systematic sampling of automatically
selecting items bigger than the average sampling interval.
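The selection procedures described above may be sketched side by side; the population of 100 item indices and the sample size are illustrative assumptions:

```python
# Sketch contrasting three selection procedures from section 7;
# the population and sample size are illustrative.
import random

population = list(range(100))
n = 5
interval = len(population) // n   # = 20

random.seed(3)

# simple random sampling: every item has an equal chance of selection
simple = random.sample(population, n)

# systematic sampling: random first member, then fixed intervals
first = random.randrange(interval)
systematic = [population[first + i * interval] for i in range(n)]

# cell sampling: divide the population into cells, pick one item per cell
cells = [population[i * interval:(i + 1) * interval] for i in range(n)]
cell = [random.choice(c) for c in cells]

print(simple, systematic, cell)
```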
Auditing software, e.g. Computer Assisted Audit Techniques (CAATs) like IDEA, is an
efficient tool for sample selection. Once the sample is selected, identified audit tests can
directly be applied on the sample.
Each sampling method has its practical advantages and limitations. The auditor uses his
judgment in determining which kind of sampling is best suited to his audit job.
8. Audit Assumptions
Audit works on the principle that the higher the risk involved in the transactions, the higher
the need for more extensive checks. Audit through statistical sampling is a systematic procedure
based on the following stages:
1. Assessment of Inherent Risk through auditor’s knowledge, judgment and application of
specific auditing procedures like analytical reviews etc.
2. Assessment of Control Risk through Compliance Testing, i.e. testing whether the
prescribed/ required controls are operating or being complied with in the auditee
organization. This can be done through attribute sampling, analytical reviews etc.
3. To Design the Sampling Frame for Substantive Testing based on the Inherent Risk
Evaluation and Assessment of Control Risk through Compliance Testing. In this, the auditor
has primarily to determine the sampling method (Attribute/ Variable/ Monetary Unit
Sampling) as well as the sample size.
4. Evaluation of results of Substantive Tests and expression of audit opinion.
Since for a particular class of transactions the Inherent Risk is constant, basically the auditor
has to determine the extent and method of substantive testing based on his evaluation of the
internal controls through Compliance Testing. Needless to say, the above steps follow each
other and are closely interlinked and hence cannot be segregated into independent streams of
audit. While Compliance Testing is needed to review and evaluate the effectiveness of the
internal control systems in an organization, Substantive Testing procedures are applied
to gather audit evidence regarding completeness, accuracy and validity of data.
9. Sampling Risks of an Auditor
While suggesting an appropriate theoretical framework for the application of
statistical sampling in audit, we need to consider the sampling risk from the point of view of
an auditor and in designing the sampling frame, must provide for this in a way that minimizes
or eliminates this risk. This risk is not to be confused with any of the risks involved in the risk
model discussed earlier.
Obviously sampling involves certain risk if the auditor has to express an opinion on the entire
population based on his sample checks, as he checks only a few items in order to express an
opinion on the whole population. The possibility remains that his conclusions could be
different if he had checked all transactions instead of a few selected items. This risk is
inherent in both compliance as well as substantive testing which is decided on the basis of
compliance testing only.
(a) Sampling Risk in Compliance Testing
Here the auditor undertakes the risk of over-reliance as well the risk of under-reliance on
the controls operating in the auditee organization. The former may lead to the auditor
reducing the extent of his substantive testing since he has placed reliance on a control that
was otherwise weak, while the latter would lead to the auditor wasting his resources on
additional substantive testing which may not be necessary as the controls in reality were
adequate. Obviously the former is the more dangerous of the two as it would lead to erroneous
audit conclusions and therefore needs to be minimized.
(b) Sampling Risk in Substantive Testing
Similarly, in substantive testing also, there are two kinds of risks involved. We may call these
as the risk of incorrect rejection and the risk of incorrect acceptance. Risk of incorrect
rejection is the risk that the sample checks suggest that the recorded account balances are
materially misstated while they are actually correct. This again makes the auditor
redefine his sampling frame so as to increase the extent of his substantive tests, leading to
waste of resources. Risk of incorrect acceptance, which is the more serious of the two, is the
risk that the auditor’s sample checks lead to the conclusion that the account balances are
correct while in reality they are incorrect, thereby leading to erroneous audit conclusion and
expression of incorrect audit opinion.
Selection of appropriate sample size, both in compliance as well as in substantive testing,
thus becomes imperative for the auditor to arrive at the correct opinion and to minimise his
sampling risks.
10 Designing a Sample
In designing sampling frame, the auditor has to follow the stages as described below:
(a) Define the population and select an appropriate sampling method: attribute, variable,
monetary unit etc.
(b) Determination of the sample size which is a crucial step to minimize the auditor’s risks;
(c) Selection of the sample elements following an appropriate random sampling procedure;
(d) Performance of substantive audit tests on the sample elements and
(e) Projecting the sample results into the population and expressing an audit opinion on the
population.
11. Determinants of Sample Size in Attribute Sampling
In case of attribute sampling, the first step is to clearly define the target population and the
errors/ exceptions (attributes) the audit wishes to test. The following quantities estimated by
the auditor could be the determinants of his sample size:
1. In order to determine the sample size, the auditor first defines the expected error rate or
amount in the population, which is an important determinant of the sample size. In
compliance testing, what the auditor seeks to determine is the rate of non-application of the
internal controls, e.g. mistakes in vouchers, wrong entries in cash books or stores
ledgers, unauthorized payments, cash books not being checked daily or physical
verifications not being made. These techniques can be applied in sanctions audit,
propriety audit or regularity audit or in financial audit, where, while testing an account
balance, the auditor only wants to confirm if the balance is correctly stated or not, without
going into estimating the correct balance (for which variable sampling would be
necessary). The greater the expected error rate or amount, the greater the sample
size must be for the auditor to conclude that the actual error rate or amount is less
than the tolerable error rate or amount.
2. Tolerable error rate or amount is the maximum error rate the auditor is prepared to
accept when deciding whether his initial evaluation of the control risk is valid or whether
the total recorded transactions or balances may be regarded as accurate and complete. It is
the maximum error rate the auditor is willing to accept and still conclude that the auditee
is following the procedures properly. When testing for amounts, the tolerable error is
limited by the level of materiality set by the auditor. The lower the tolerable error,
the larger would be the sample size.
3. Audit tests on the sample will throw up an estimate of error for the population. The true
error of the population could be more than this estimate. The difference between the
sample estimate and the actual population value is the precision level. The auditor has to
decide the precision he desires to provide in his estimates. Tolerable Error, being the
maximum error that the auditor is willing to accept, = Maximum (sample estimate +
precision level).
4. The confidence level or the level of assurance that audit needs to provide is to be
predefined by the auditor. When a risk assessment has preceded the sampling process, the
confidence level would be (1- detection risk). Confidence level states how certain the
auditor is that the actual population measure is within the sample estimates and its
associated precision level.
5. The occurrence rate or population proportion which is the proportion of items in the
population having the error/exception that audit wishes to test.
6. Acceptable risk of over-reliance: As we have seen, the risk of under-reliance does not
affect the correctness of the auditor’s opinion; it only results in increasing his workload.
When the degree of reliance in controls is high, the acceptable risk of over reliance is low
and vice versa. The auditor may try to quantify the risk (5%, 10%, 15% etc.) depending
on whether the degree of reliance is high, moderate or low.
An appropriate quantitative framework needs to be prepared based on the above factors.
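As one illustration of such a framework, the sample size for attribute sampling may be derived from the normal approximation to the binomial, treating the gap between the tolerable and expected error rates as the precision. The formula choice and all figures here are assumptions for illustration, not a prescribed Departmental method:

```python
# Sketch: attribute-sampling sample size via the normal approximation
# to the binomial. Inputs are illustrative assumptions.
import math
from statistics import NormalDist

def attribute_sample_size(confidence, expected_rate, tolerable_rate):
    z = NormalDist().inv_cdf((1 + confidence) / 2)     # two-sided z score
    precision = tolerable_rate - expected_rate
    n = (z ** 2) * expected_rate * (1 - expected_rate) / precision ** 2
    return math.ceil(n)

# 95% confidence, 2% expected error rate, 5% tolerable error rate
print(attribute_sample_size(0.95, 0.02, 0.05))  # -> 84
```

Consistently with the determinants listed above, the sample size grows as the expected error rate rises or as the tolerable error (and hence the precision) shrinks.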
12. Nature of Population
Statistical sampling is based on the assumption of homogeneity of population. In reality this
assumption may not always be true. Another problem is the nature of distribution in the
population, which may not always be known. However, in general the nature of population
distribution is immaterial when the sample size is large, as, according to the Central Limit
Theorem, for a large sample, the sample means would have an approximately normal
distribution irrespective of the distribution of the underlying population. When the sample
size is small, one may have to use the t-distribution.
While testing the internal controls of the auditee organisation, we are interested in finding the
proportion of a population having a single attribute (say a specific control parameter like a
classification mistake). In this case since there are only two possible outcomes of each
individual testing, the Binomial/ Poisson distribution could be more appropriate. We need to
determine whether we should take into account the nature of population distribution of the
parameter to be audited and if so, how best we can develop a framework simple enough for
the average auditor to follow and apply in actual practice.
13 Projecting the Results into the Population and Estimating the
Population Value
Once the audit tests are performed on the sample, the test results need to be projected to the
population. Following this, a conclusion has to be reached. In compliance testing this would
concern whether the auditor can place an assurance on the systems, while in substantive
testing this might be to estimate the account balance from the sample results and thereby
express an opinion about the correctness or otherwise of the reported balance.
From the actual number/amount of errors observed in the sample, the sample size and the
confidence level desired by the auditor, the precision p can be determined. The maximum
error estimate of the population would then be obtained after loading the sample estimate
with the precision. This is the computed tolerable error. Instead of solving the mathematical
formula, it is possible to locate the 'computed tolerable error' straightaway from the statistical
tables for the desired confidence (assurance) levels. A sample of such a statistical table is
placed at annexure D.
In a case when the computed tolerable error is less than the tolerable error, the auditor can
place the desired assurance on the systems. When the computed tolerable error is higher than
the tolerable error, the auditor cannot derive assurance from the systems. The auditor may, in
such situations, reduce the assurance he derives from the controls and increase the assurance
required from substantive tests. A similar, though more complex, process can be prescribed
for estimating the population value of a parameter through variable sampling also.
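The projection step described above may be sketched as follows, using a normal-approximation precision in place of the statistical tables; the sample results and tolerable rate are illustrative assumptions:

```python
# Sketch: project sample results to the population and compute the upper
# error limit ('computed tolerable error') as sample estimate + precision.
# Figures are illustrative; precision uses the normal approximation.
import math
from statistics import NormalDist

n, errors = 200, 6                 # sample size and errors found
confidence = 0.95
tolerable_rate = 0.06              # auditor's tolerable error rate

p_hat = errors / n                                  # sample error rate (3%)
z = NormalDist().inv_cdf((1 + confidence) / 2)
precision = z * math.sqrt(p_hat * (1 - p_hat) / n)
upper_limit = p_hat + precision                     # computed tolerable error

# assurance can be placed on the systems only if the computed
# tolerable error is below the auditor's tolerable error rate
print(f"upper limit = {upper_limit:.3f}", upper_limit <= tolerable_rate)
```

Here the computed tolerable error (about 5.4%) falls below the 6% tolerable rate, so the auditor can place the desired assurance on the systems.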
14. Problems and Decision Areas
There can be no doubt that statistical sampling cannot be a substitute for judgmental
sampling, and audit is primarily a judgmental process. At best, statistical sampling may
supplement judgmental sampling.
A review of the existing methods available for audit through sampling is given in Appendix I.
There are many gray areas and we are in the process of evolving techniques and methods, after
which they can be modified based on our experiences of actual application of the techniques
and evaluation of the results thereof.
The following are some of the areas that merit attention in this regard:
A. To identify the areas of applicability of sampling methodology within the
Department. For example, whether each of the following areas (only an illustrative list)
can be audited through sampling techniques with equal effectiveness, or if there are areas
which will be more prone to statistical analyses than others:
 Checking the correct accountal of expenditure or receipts;
 Checking calculations of payment or receipts;
 Checking propriety and regularity of expenditure;
 Checking interpretation or application of rules /contract clauses /provisions of tax acts;
 Checking achievement of objective of expenditure / exemption of receipts.
B. To evolve a framework for the application of sampling techniques in the identified
areas. This will involve the following issues:
i. To integrate the risk model of audit with the assumptions and requirements of
sampling theory;
ii. To suggest a way to identify the distribution of audit parameter in the population of
transactions being audited and to suggest the corresponding sampling frame for
auditing purposes;
iii. To suggest an appropriate sampling method for selection of the sample elements so
as to optimally achieve the audit objectives, including identification of areas for
application of attribute/ variable/ monetary unit sampling;
iv. To suggest an appropriate formula for determination of sample size;
v. To evolve an appropriate methodology as well as theoretical framework for
projecting the sample results into the entire population and for estimating the
population value of the parameter;
vi. To suggest ways in which audit risks can be minimized through sampling,
especially the risk of over reliance and the risk of incorrect acceptance;
vii. To suggest a way to incorporate the concepts of Materiality and Detection Risk in
Audit in a more rigorous objective manner in the sampling frame to be evolved;
viii. To suggest a practical way to apply the theoretical frame so suggested in practical
auditing situations which is simple enough for an average auditor to understand and
apply without mistakes.
Annexure – A

Factors to consider the assessment of inherent risk in financial audit

 The number and significance of audit adjustments and differences waived during the
audits of previous years.
 Complexity of underlying calculations or accounting principles
 The susceptibility of the asset to material fraud or misappropriation
 Experience and competence of accounting personnel responsible for the component
 Judgment involved in determining amount
 Mix and size of items subject to the audit test
 The degree to which the financial circumstances of the entity may motivate its
management to misstate the component in regard to this assertion
 Integrity and behaviour of the management.
 Management turnover and reputation

Annexure – B

Factors to consider for the assessment of control risk in financial audit

Evaluate the control environment

• Management philosophy and operating style
• The functioning of the board of directors and its committees, particularly the audit
committee
• Organizational structure
• Methods of assigning authority and responsibility
• Systems development methodology
• Personnel policies and practices
• Management reaction to external influences
• Internal audit

Evaluate the control systems

• Segregation of incompatible functions
• Controls to ensure completeness of transactions being recorded
• Controls to ensure that transactions are authorized
• Third party controls (e.g. confirmation of events)
• Controls over accounting systems
• Controls over computer processing
• Restricted access to assets (only allow access to authorized personnel)
• Periodic count and comparison (ensure book amounts reconcile with actual inventory
counts)
• Controls over computer operations
Annexure D:
Annexure E:

Annexure F: Z score values

Confidence Level (%)    Z score value
75                      1.15
80                      1.28
85                      1.44
90                      1.65
92                      1.751
94                      1.881
95                      1.96
96                      2.054
97                      2.17
99                      2.58
Annexure G
Existing Audit Sampling Methods
The existing literature in audit prescribes certain procedures for the application of sampling
in audit, especially concerning the most crucial stage of selection of sample size. Some of
these are listed below. All these methods are followed in different situations and by different
SAIs, but there has been no attempt so far to compare the results obtained by the
application of these different techniques.
A. When testing for amounts, the tolerable error is limited by the level of materiality set by
the auditor. The lower the tolerable error, the larger the sample size. The population size is
immaterial for the sample size when the auditor is working with large populations. Sample
size is determined as
Sample size = Reliability Factor / Tolerable Error Rate …………………….(1)
This approach is based on the Risk Model and takes into account factors like the
Expected Errors in the Population, the Tolerable Error Rate (which should be greater than the
expected error rate) and the Confidence Level. We give below an example of the
Reliability Factors based on the Poisson distribution:
No. of Sample            Confidence Level
Errors              80%      90%      95%      99%
0                   1.61     2.31     3.00     4.61
1                   3.00     3.89     4.75     6.64
2                   4.28     5.33     6.30     8.41
3                   5.52     6.69     7.76    10.05
Thus if the maximum risk the auditor is willing to accept is 20%, the expected number of
errors = 1 and the tolerable error rate = 4%, the sample size = 3.00/0.04 = 75.
In case the auditor finds 2 errors in the sample of 75, the Upper Error Rate = Reliability
Factor/Sample Size = 4.28/75 = 5.7% > Tolerable Error Rate. Thus the auditor has to adjust
his confidence level, which will then become 75%, commensurate with 2 errors at the
tolerable error rate of 4%. He will then place less reliance on controls and may increase
the scope of his substantive tests/procedures. Alternatively, the sample size will go up
if he is to retain the 80% level of confidence.

B. Another approach, which is very similar to the first approach, takes into account the
following 3 factors
1 Acceptable Risk of Over- Reliance (ARO) / Acceptable Risk of Incorrect
Acceptance (ARIA), as explained earlier.
2 Tolerable Deviation Rate (TDR): This is the maximum rate of deviation the auditor
is willing to accept without altering his planned reliance on the controls and is the
same thing as the Tolerable Error Rate. For example, he may decide that in cash book
checking, if the deviation rate is not more than 1 in 30 days, i.e. if the cash book is
checked in at least 29 out of 30 days in a month, he will accept that the internal
control system in this regard is working satisfactorily. The TDR in this case is 3.3%.
Establishing the TDR is a matter of the auditor’s professional judgment and will relate
to the significance of the transactions and related balances affected by the relevant
internal controls. The higher the significance, the lower the TDR. As an illustration,
we may decide that if the internal controls affect highly significant balances, the TDR
of the auditor would be 4%; if they affect moderately significant balances, TDR of the
auditor would be 8% and in case these affect balances of low significance, the TDR of
the auditor would be reckoned as 12%.
3 Estimated Population Deviation Rate (EPDR): This is the expected error in the
population and can be determined by selecting a preliminary sample and using the
sample results to estimate the population deviation rate by applying either the Chi-
Square or the t-distribution as described above. Point estimate can also be used.
Once these three quantities are known, the sample size is automatically determined and
can be found from the statistical tables. A sample of such a table is placed at
Annexure E. After determining the size of the sample, the auditor's task is reduced to
checking the sample elements and finding out the actual deviation rate in the sample.
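Where a printed table is not at hand, a sample size consistent with these three factors can be approximated directly from the binomial distribution: find the smallest n for which, allowing a fixed number of deviations in the sample, the chance of passing the test when the true deviation rate equals the TDR is no more than the ARO. This binomial search is a standard technique but is not prescribed by the text; the function name and the fixed allowable-deviation parameter are assumptions:

```python
from math import comb

def attribute_sample_size(aro, tdr, allowable_deviations, n_max=10_000):
    """Smallest n such that P(deviations <= allowable | true rate = tdr) <= aro."""
    for n in range(allowable_deviations + 1, n_max + 1):
        # cumulative binomial probability of passing the test at the TDR
        p_accept = sum(comb(n, k) * tdr**k * (1 - tdr)**(n - k)
                       for k in range(allowable_deviations + 1))
        if p_accept <= aro:
            return n
    raise ValueError("no sample size found within n_max")
```

With ARO = 20%, TDR = 4% and one allowable deviation this gives n = 74, close to the 75 obtained from the Poisson reliability factors in method A.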

C. A third approach suggests the following formula for determination of the sample size n:

n = Z² × p × (1 − p) / E² …………………………………………….(2)

where Z = the Z score associated with the confidence level,
E = the desired precision, and
p = the expected occurrence rate.
The Z scores use the Normal Distribution and are shown in Annexure F.
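Formula (2) can be applied directly; a minimal sketch, with the Z value taken from a standard normal table (the function name and illustrative parameter values are assumptions):

```python
import math

def sample_size_z(z, p, e):
    # n = Z^2 * p * (1 - p) / E^2  ... equation (2), rounded up
    return math.ceil(z**2 * p * (1 - p) / e**2)

# e.g. 95% confidence (Z = 1.96), expected occurrence rate 5%, precision 2%
n = sample_size_z(1.96, 0.05, 0.02)
```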

D. Monetary Unit Sampling: In this method the sample size is determined on the basis of the
following four quantities:
1 Acceptable Risk of Incorrect Acceptance (ARIA): This corresponds to the ARO
used in attributes sampling for compliance tests. In substantive testing, where the
auditor tests transactions and balances, the same risk is expressed as ARIA. It is the
risk of accepting a transaction or an account balance as correct on the basis of the
results of the sample whereas, in reality, the auditor should not have accepted it.
2 Tolerable Error (TE): In trying to verify the balances under an investment account
recorded at Rs.32.0 lakh, the auditor may state that he will accept the recorded
amount as correct if the amount estimated on the basis of his sample lies between
Rs.31.0 lakh and the recorded amount. Here the TE has been defined as Rs.1.0 lakh
(= 32 − 31).
3 Expected Error (EE): This is analogous to the EPDR in attribute sampling. Here
the auditor has to estimate the expected error in the population while designing the
sample.
4 Recorded Population Value: This is the rupee value (or other quantitative value) of
the population taken from the records of the enterprise under audit.

Sample size is then calculated based on above four quantities as follows.


Step-I: Given the level of ARIA, the Expected Error (EE) is multiplied by the expansion
factor at the given level of ARIA found from the following table:
Expansion Factors for Expected Errors:
ARIA               1%    5%    10%   15%   20%   25%   30%   37%   50%
Expansion Factor   1.9   1.6   1.5   1.4   1.3   1.25  1.2   1.15  1.0
Step II: The product of the EE and the Expansion Factor is then subtracted from the tolerable
error to arrive at the tolerable error adjusted for expanded error.
Step III: Tolerable Error adjusted for expanded expected error is then divided by the
reliability factor for the actual number of expected errors at the given level of ARIA from the
following table to arrive at the sampling interval.
Reliability Factors for Errors of Overstatement:

Number of           Risk of Incorrect          Number of           Risk of Incorrect
Overstatement       Acceptance                 Overstatement       Acceptance
Errors              5%      10%     30%        Errors              5%      10%     30%
0                   3.00    2.31    1.21
1                   4.75    3.89    2.44       11                  18.21   16.60   13.55
2                   6.30    5.33    3.62       12                  19.45   17.79   14.63
3                   7.76    6.69    4.77       13                  20.67   18.96   15.70
4                   9.16    8.00    5.90       14                  21.89   20.13   16.77
5                   10.52   9.28    7.01       15                  23.10   21.30   17.84
6                   11.85   10.54   8.12       16                  24.31   22.46   18.90
7                   13.15   11.78   9.21       17                  25.50   23.61   19.97
8                   14.44   13.00   10.31      18                  26.70   24.76   21.03
9                   15.71   14.21   11.39      19                  27.88   25.91   22.09
10                  16.97   15.41   12.47      20                  29.07   27.05   23.15

Step IV: The sample size can then be determined by using the formula:

Sample size = Recorded Population Value / Sampling Interval (as determined in Step III) ………(3)
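Steps I through IV above can be sketched as follows; the factor values used in the example are taken from the two tables above for ARIA = 5% and zero expected overstatement errors, while the function name and the illustrative rupee amounts are assumptions:

```python
import math

def mus_sample_size(population_value, tolerable_error, expected_error,
                    expansion_factor, reliability_factor):
    # Steps I & II: expand the expected error and subtract it from the tolerable error
    adjusted_te = tolerable_error - expected_error * expansion_factor
    # Step III: divide by the reliability factor to get the sampling interval
    interval = adjusted_te / reliability_factor
    # Step IV: sample size = recorded population value / sampling interval
    return math.ceil(population_value / interval), interval

# ARIA = 5%: expansion factor 1.6, reliability factor 3.00 (zero errors)
n, interval = mus_sample_size(10_000_000, 100_000, 20_000, 1.6, 3.00)
```

Here the adjusted tolerable error is 100,000 − 20,000 × 1.6 = 68,000, so the sampling interval is 68,000/3.00 and the sample size works out to 442 monetary units.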

Evaluation of the results:

The risk inherent in sampling prohibits the auditor from concluding that the sample
results can be applied to the population per se. In case there are no errors in the sample,
the sampling risk is calculated as the sampling interval multiplied by the reliability
factor from the above table. This gives the basic precision limit, or the projected error. If this
projected error is less than or equal to the tolerable error limit, the auditor can accept the
population as correct. It should be noted that this method is basically designed to detect
overstatements; in case understatements need to be detected, the method has to be suitably
modified.
Where errors are detected in the sample items, the upper limit errors in the sample can be
estimated by adding three quantities:
1. Basic precision limit of the sample i.e. sampling interval multiplied by the reliability
factor.
2. Errors found in all items which are greater than or equal to the sampling interval (In
monetary unit sampling, all large items which are greater than or equal to the sampling
interval are automatically included in the sample).
3. Errors found in items which are smaller than the sampling interval.
If the upper limit of errors in the population as estimated by the auditor is lower than the
TE, the auditor can conclude that the recorded values are fairly stated. In case the upper
error limit exceeds the TE, the population cannot be accepted; in such cases, the sample
size has to be increased and additional audit procedures have to be devised.
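The three-part upper limit described above can be written down directly. In the sketch below the errors found in items smaller than the interval are simply summed, which is what the text states; in standard monetary unit sampling practice such errors would instead be projected through their tainting percentages. Function and parameter names are illustrative:

```python
def mus_upper_error_limit(sampling_interval, reliability_factor,
                          large_item_errors, small_item_errors):
    # 1. Basic precision limit: sampling interval x reliability factor
    basic_precision = sampling_interval * reliability_factor
    # 2. Errors in items >= the interval (all such items are in the sample)
    # 3. Errors in items < the interval
    return basic_precision + sum(large_item_errors) + sum(small_item_errors)
```

For example, with an interval of 10,000, a zero-error reliability factor of 3.00, one large-item error of 5,000 and small-item errors of 1,200 and 800, the upper error limit is 30,000 + 5,000 + 2,000 = 37,000, which would then be compared with the TE.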

E. Variables Sampling: It is relatively more complex and is useful in performing substantive
tests, especially where errors are expected to be large or where there are zero or negative
balances. It requires an estimate of the standard deviation. There are a number of techniques
of variables sampling, e.g. mean per unit estimation, estimation of differences, estimation
of ratios etc. We shall briefly outline only the first method, i.e. the Mean Per Unit
method. Here the auditor calculates the average audited amount for the sample and
projects it onto the population. The steps are:

1. Define the population.


2. Determine a sample of a size corresponding to the sampling risk the auditor is willing to
undertake: The sample size is calculated from the desired precision, the desired
confidence level and the standard deviation of the population.
3. From the sample tests performed, find out the Average Sample Value defined as
Average Sample Value=Total value of all sample elements/Number of sample
elements.
4. Multiply the total number of elements in the population by the average sample
value. The resultant figure is the estimated value of the whole population: X = x̄N,
where x̄ is the average sample value and N the population size.

Determining the appropriate sample size is the most crucial part of the entire procedure.
The sample size can be determined, once the confidence level and desired precision are
known and the value of the population standard deviation is estimated from a study of a
small pilot sample, by following the procedures described earlier. Once these three
quantities are known, the sample size can be calculated using the formula

n = {S × UR × (N/A)}² …………………………………(4)

where n is the sample size, S is the estimated standard deviation of the population, UR is the
coefficient of reliability (e.g. 1.645 is the coefficient for a 90% level of confidence; the
coefficients can be found directly from the normal distribution table annexed at Annexure
– D), N is the population size and A is the accuracy or precision desired in the estimate. As a
measure of safety, the sample size may be increased by 10% or so.
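The Mean Per Unit calculation might be sketched as follows, assuming the population standard deviation has already been estimated from a pilot sample; function names and the 10% safety margin default are illustrative:

```python
import math
import statistics

def mpu_sample_size(pop_size, est_std_dev, ur, precision, safety=0.10):
    # n = {S x UR x (N/A)}^2, increased by ~10% as a measure of safety
    n = (est_std_dev * ur * pop_size / precision) ** 2
    return math.ceil(n * (1 + safety))

def mpu_estimate(sample_values, pop_size):
    # Estimated population value X = x-bar x N
    return statistics.mean(sample_values) * pop_size
```

For instance, with N = 1,000 elements, an estimated standard deviation of 50, UR = 1.645 (90% confidence) and a desired precision of 10,000, the padded sample size is 75.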

In the estimation of differences, the auditor calculates the differences between the audited
values found for the sample items and the corresponding values recorded in the books of the
auditee, and the total difference is then projected for the whole population. If the resultant
figure is within the desired precision limits, the auditor accepts the recorded figure as correct.

In the estimation of ratios, the auditor calculates the ratio between the sum of the audited
amounts and the sum of the recorded amounts of the sample items and projects the same to
the population.
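The ratio estimate amounts to scaling the recorded population value by the sample's audited-to-recorded ratio; a minimal sketch (names and the illustrative amounts are assumptions):

```python
def ratio_estimate(audited_amounts, recorded_amounts, recorded_population_value):
    # project the sample ratio (audited / recorded) onto the whole population
    ratio = sum(audited_amounts) / sum(recorded_amounts)
    return ratio * recorded_population_value
```

For example, if three sample items recorded at 100, 200 and 300 audit out at 95, 190 and 285, the sample ratio is 0.95 and a recorded population value of 10,000 is estimated at 9,500.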

F. Stratified Sampling Plan:


The following steps are usually required in a sampling plan for stratified random sampling.
1. Establish the desired precision level and confidence level for the estimate for the
overall population value (this step is the same as in variables sampling).
2. Define the strata clearly so that each element in the population is known in advance to
belong to one and only one stratum. It is important that there is no confusion
regarding whether an element falls into one stratum or another. The exact number of
elements falling in each stratum should be known.
3. Using the regular random number table (not the illustrative table given in this
chapter), a preliminary sample of at least 30 items for each stratum may be selected.
It may be recalled that this sample is required to calculate the standard deviation of
each stratum. Thus, if the population is divided into three strata, 30 items from each
stratum will separately be selected and examined. In selecting a sample for stratified
random sampling, the items are usually selected without replacement. This means
that once an item has been selected, it will not be included again even though its
number may appear again in the random number table.
4. The estimated standard deviation of each stratum is then to be computed. The steps
for calculating the same are given in the section on variables sampling.
5. The next step is to determine the basis of allocating the total sample size among the
strata, i.e. to determine the percentage of the total sample size which each stratum will
contribute. There are two methods of determining the percentage of the total sample
size which each stratum will contribute (this figure can be represented by the symbol
Pi):
(a) Proportional allocation: Under this method the percentage of the total sample
size allocated to each stratum is in proportion to the number of elements in
each stratum. Suppose we have sub-divided a population into two strata, A
and B. Thus, if stratum A has 1,500 elements and stratum B has 500 elements,
the percentage of the sample allocated to each stratum will be in the ratio of
3:1. The value of Pi for the first stratum would be 0.75 and for the second
0.25.
(b) Optimal allocation: This method takes two factors into account: the number
of elements in each stratum and the homogeneity of elements in each stratum.
The percentage of the total sample size to be contributed by each stratum (Pi)
is calculated by taking a weighted average of both of these factors.

The following illustrates this.

Stratum                   Ni       Si      NiSi       Pi
1. High value items       1,500    36      54,000     0.35
2. Low value items        500      205     1,02,500   0.65
Total                     2,000            1,56,500   1.00

where Ni represents the number of elements in each stratum, Si represents the
estimated standard deviation and thus the dispersion of elements in each stratum,
NiSi is the product of Ni and Si, and Pi is the proportion that the NiSi of one stratum
bears to the total NiSi. Thus, Pi for Stratum 1 is:

Pi = NiSi / ΣNiSi = 54,000 / 1,56,500 = 0.35

Similarly, Pi for Stratum 2 is 0.65.

It can be seen that the method of optimal allocation takes into account the standard
deviation as well as the number of elements in each stratum. It is a better method than
the proportional allocation method.
6. The next step is to compute the total sample size through the equation given below:

n = (UR² / A²) × Σ [ NiSi² (Ni − Pi·n) / Pi ] …………………………………………….(5)

It would be noted that except for n, which represents the sample size and which we
have to find out, all other quantities in this formula are known to us. UR² is the square
of the coefficient of reliability; A² is the square of the precision limit required; Ni is the
number of elements in each stratum; Si² is the square of the estimated standard
deviation of each stratum; and Pi is the percentage of the total sample size that each
stratum will contribute. Since n appears on both sides, the equation can be solved for
n; with Pi set by optimal allocation it reduces to n = UR²(ΣNiSi)² / (A² + UR²ΣNiSi²).
7. From the solution of the sample size equation, the total sample size is known. The
contribution of each stratum to the total can be calculated by multiplying the total
sample size by Pi. Thus, if the total sample size is 300, Stratum 1 will contribute 105
(300 × 0.35) items. To this, a 10 per cent addition may be made as a matter of
precaution and 116 items (105 + 11) will, therefore, be selected from Stratum 1.
Similarly, Stratum 2 will contribute 215 (195 + 20) items.
8. From each stratum, select additional items for examination and record the individual
values.
9. Compute the new sample mean for each stratum.
10. Estimate the total population value. The estimated value of each stratum can be
reached by multiplying the stratum sample mean by the number of elements in the
stratum (x̄iNi, where x̄i is the sample mean of a stratum and Ni is the number of
elements in the stratum). The values of the various strata can then be added to reach
an estimate of the value of the population.
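The allocation and sample size steps above can be sketched as follows. The explicit formula for n assumes optimal allocation, under which the sample size equation reduces to n = UR²(ΣNiSi)² / (A² + UR²ΣNiSi²); function names and the illustrative precision figure are assumptions, while the strata come from the table above:

```python
import math

def optimal_allocation(strata):
    """strata: list of (N_i, S_i) pairs; returns proportions P_i = NiSi / sum(NiSi)."""
    weights = [n_i * s_i for n_i, s_i in strata]
    total = sum(weights)
    return [w / total for w in weights]

def stratified_sample_size(strata, ur, precision):
    # n = UR^2 (sum NiSi)^2 / (A^2 + UR^2 * sum NiSi^2), under optimal allocation
    sum_ns = sum(n_i * s_i for n_i, s_i in strata)
    sum_ns2 = sum(n_i * s_i**2 for n_i, s_i in strata)
    return math.ceil(ur**2 * sum_ns**2 / (precision**2 + ur**2 * sum_ns2))

strata = [(1500, 36), (500, 205)]   # the two strata from the table above
p = optimal_allocation(strata)      # roughly [0.35, 0.65], matching the table
```

With UR = 1.645 (90% confidence) and an assumed precision of 50,000, the total sample size works out to 26, which would then be split among the strata in the proportions Pi.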

There are other techniques available for sampling, with the help of which the auditor looks
for a characteristic which, if discovered in his sample, might be indicative of more
widespread irregularities. Discovery of one such error will be indicative of the need to apply
more extensive checks. This technique, called discovery sampling, is especially useful in
unearthing frauds.
