
International Journal of Auditing

Int. J. Audit. 7: 21–35 (2003)

Modelling Audit Risk Assessments: Exploration of an Alternative to the Use of Knowledge-based Systems

Paul Lloyd* and Peter Goldschmidt
University of Western Australia

*Correspondence to: Department of Accounting and Finance, The University of Western Australia, 35 Mounts Bay Road, Crawley, Western Australia 6009. Email: plloyd@ecel.uwa.edu.au

This paper compares decision-modelling approaches. The decision modelled is the assessment of inherent and control risk in the purchases, accounts payable and inventory cycle. It is modelled using two different approaches. Firstly, knowledge-based models are constructed using established development shells. Secondly, a model is constructed using a conventional procedural programming language. Both modelling approaches are tested against the output of human practitioners and compared to each other to determine if the more restrictive, assumption-laden approach offered by the procedural model is adequate to deal with the decision problem under examination, or if the greater flexibility offered by the knowledge-based approach is required. This comparison yields positive results. The procedural model is able to reproduce satisfactorily the output of the human decision makers and the knowledge-based models for the chosen decision problem. This result emphasises the importance of devoting time to the selection of the most appropriate modelling approach for a given decision problem.

Key words: audit risk, audit decision tools, decision modelling, decision aids, practice aids, audit planning, internal control risk, inherent risk, audit technology, client risk assessment.

SUMMARY

Artificial intelligence-based decision models are employed on an extensive and growing basis in audit practice and research. They facilitate the training of new practitioners, enable the more efficient use of skilled practitioners’ time and provide documented consistency in decision-making. For these reasons the use of knowledge-based models to support decisions such as audit risk assessment has grown.


This paper investigates one risk assessment decision, audit risk assessment in the purchases, inventory and accounts payable transaction cycle, which has previously been the subject of knowledge-based modelling, to see if it can be adequately addressed by a more restricted decision model, embodied in a purpose written procedural application. This study is a comparison of decision output of decision models constructed using these two approaches. Such a comparison can act as a trigger, signalling the need to further evaluate decision-modelling techniques before a final choice is made.

The knowledge base required to construct these decision models is obtained from a case study based instrument. Two knowledge-based models were constructed to provide a comparator for the output of the procedural model. These use a rule-based knowledge representation technique, and embody the appropriate meta-knowledge to apply those rules. The procedural model incorporates the contents of these rules as a set of data items that are applied on the basis of a generalised approximation of that meta-knowledge. This approximation is inherently less flexible and, if it produces an adequate model, must thus result in a more parsimonious model of the risk assessment process.

The success of the procedural model is judged by comparison of its decision output with that of the two knowledge-based models. To ensure that this comparison is meaningful, the output of all three models is also compared to that of human decision makers. Decision output for these comparisons is generated from a series of five test cases, which were also presented to a group of human respondents via a second case study instrument.

The results presented show that it was possible to satisfactorily replicate the decision output of the two knowledge-based models, which were themselves validated against human decision output, using the more restricted techniques available within the procedural model. This conclusion is based on results derived using measures of correlation and an adaptation of an output classification method presented in Fleiss (1981).

As suggested by Weber (1997), the most appropriate decision modelling choice need not be the most advanced technology available. Whilst this project examined one decision problem it provides an important signal of the need to consider the availability of alternative approaches before commencing a modelling project using any one technique, and to adopt the approach that is the most efficient, as well as effective.

INTRODUCTION AND MOTIVATION

This paper presents a comparison of different decision modelling approaches applied to the same audit decision problem, the assessment of inherent and control risk in the purchases, inventory and accounts payable transaction cycle. Two approaches, knowledge-based (or expert systems) and procedural decision models, are applied to the problem and the results they generate are compared.

Peters (1989), Bharadwaj et al. (1994) and Delisio et al. (1994) have selected a knowledge-based approach as the appropriate tool to model audit risk assessments of this type. An overview of these papers follows.

• Peters (1989) constructed an expert system to model the assessment of inherent risk during audit planning. He developed an initial model derived from several knowledge sources. These comprised: a review of relevant professional and academic literature; relevant practice manuals of two international accounting firms; and unstructured interviews of staff members of an international auditing firm. The background information collected from these sources was used as the basis for more structured interviews with staff from a single international accounting firm. Actual audit planning meetings were also observed. Peters found his initial model to be deficient in that it did not assess firm-wide risk factors and it did not appear to place sufficient emphasis on the possibility of unintentional errors arising. A series of case studies was employed to collect data from practising auditors in order to refine the initial model to the point where it performed satisfactorily.

• Bharadwaj et al. (1994) developed a knowledge-based model, APX, to generate assessments of acceptable audit risk, inherent risk, control risk and detection risk, on an account-by-account basis. They described risk assessment as a highly judgemental task and sought to acquire the knowledge for their modelling project using protocol analysis. An academic experienced in auditing was required to solve a series of comprehensive case studies whilst orally explaining his reasoning process. These explanations were broken down into their component concepts to incorporate them in a series of ‘dependency diagrams’, designed to show the relationships between concepts that
formed the basis of the completed model’s architecture. A prototype of APX was constructed using KEE, a LISP-based knowledge-based systems development shell. This model was evaluated using a series of test cases that were also completed by the domain expert employed in the knowledge acquisition phase of the project.

• Delisio et al. (1994) discussed the development of a knowledge-based model, PLANET, which performed audit risk assessments. PLANET was developed through a process of gradual refinement of prototype models. The knowledge acquisition process commenced with the extraction of lists of potential risk indicators from the professional auditing literature. The services of experienced practising auditors were employed to link these risk indicators to potential error types. The linkages identified in this manner were incorporated into a prototype model that was then refined in a laboratory setting, where the model was applied to actual audit cases to identify missing elements and validate its contents. The authors did not report any validation results for their model beyond this ‘interactive’ development.

The foregoing studies all demonstrate the applicability of expert systems to audit risk assessment problems. However, they are all alike in that they do not consider the applicability of alternative modelling approaches to this class of problem. Thus, they have all assumed that the greater flexibility (and model complexity) brought to the modelling task by expert systems-based techniques is warranted in this case. Weber (1997) highlights the tendency for information systems researchers to bias their efforts towards applying the latest available technology, to the detriment of the theoretical underpinnings of their work, hence the need to ask questions of the type addressed in this paper.

Another important question is the need to model a given decision problem. Decision problems can be modelled to achieve improvements in accuracy, consistency or consensus amongst decision makers, as well as to achieve more efficient use of scarce decision maker resources, to provide validity checking tools and training aids. As with the choice of modelling approach, the need to model a decision is a substantial issue that must be addressed on a case-by-case basis. It is also an issue that lies beyond the scope of this paper. However, the existing body of research modelling audit risk assessments suggests that such modelling is worthwhile.

The use of knowledge-based systems in the audit process is now commonplace. Aside from specific research in the area, evidence of this can be found by reference to almost any recent auditing text. White (1995) provided clear evidence of the ubiquity of these techniques. He surveyed 105 U.S. accounting academics. Of these, 96% covered the role of knowledge-based systems in their accounting and auditing courses and 46% included material dealing directly with their application.

As characterised in Waterman (1986), knowledge-based systems have a number of potential advantages over human decision makers (see Table 1).

Table 1: Advantages of expert systems relative to human decision makers

Human expertise           Artificial expertise
Perishable                Permanent
Difficult to transfer     Easy to transfer
Difficult to document     Easy to document
Unpredictable             Consistent
Expensive                 Affordable

Source: Waterman (1986, p. 24).

It is not clear that these advantages are specific to the use of expert systems. To varying degrees they are applicable to any type of computer-based decision aid, though the extent to which they are encountered may vary. For instance, it has been suggested that knowledge-based models are better able to cope with uncertainty in decision problems.¹ However, the presence of uncertainty (or lack of structure) in a decision problem is a question of degree rather than an absolute one. It may be that alternative, more simplistic, approaches are adequate to address some decision problems that embody a degree of uncertainty.

Simon (1973) defined a well-structured problem as one that could be addressed by an algorithm. Applying this definition, the class of well-structured problems will expand over time as our understanding of the required algorithms increases. For all of these reasons this paper presents a comparison of procedural modelling (denoting a decision support system or decision model constructed using a conventional programming language to produce an algorithm) with knowledge-based modelling.


Turban and Watkins (1986, p. 122) describe a knowledge-based system as: ‘A computer program that includes a knowledge-base containing an expert’s knowledge for a particular problem domain, and a reasoning mechanism for propagating inferences over the knowledge-base’. Thus, we see that a defining characteristic of a knowledge-based model is the possession of a reasoning mechanism. Waterman (1986, p. 391) defines this mechanism (the inference engine) as: ‘That part of a knowledge-based system or expert system that contains the general problem-solving knowledge. The inference engine processes the domain knowledge (located in the knowledge-base) to reach new conclusions’.

The inference engine will require instructions to guide it in the application of domain specific knowledge. Waterman (1986, p. 392) defines meta-knowledge as: ‘Knowledge [in a knowledge-based system] about how the system operates or reasons, such as knowledge about the use and control of domain knowledge. More generally, knowledge about knowledge’. Whilst a procedural model might embody some form of reasoning mechanism, it will not be one that possesses the flexibility engendered by the ability to apply meta-knowledge.

Thus, a knowledge-based system is not only a piece of computer software which has the potential to apply heuristic rules to a given problem situation, it also embodies principles governing how those rules are applied. Since these principles can differ in respect of different heuristics, different rules can be applied to the decision problem differently. It is this facility, to explicitly incorporate principles governing how the specific contents of the knowledge-base are applied to problems, that provides the key distinction between the knowledge-based decision modelling approach and the procedural approach. Procedural models must either mimic this meta-knowledge facility by applying the knowledge-base to the decision problem according to highly generalised principles, or avoid the need for meta-knowledge by applying the contents of their knowledge-base to the decision problem on a non-selective basis. Either of these approaches renders the procedural model inherently less flexible than the knowledge-based one. If such a less flexible model is able to address the decision problem adequately, it must be a more parsimonious model since it will not employ specific meta-knowledge to govern the application of each heuristic. The lack of a need to collect this meta-knowledge will render the modelling exercise more efficient.

This paper does not set out to develop a set of principles of the type developed by Michaelsen et al. (1992), who attempted to develop a set of global criteria to select modelling techniques within the executive compensation problem domain. Rather, it looks at a single decision problem in the auditing domain, which previous research has addressed using a knowledge-based approach, to signal the, so far neglected, need for such a set of principles.

THE DECISION PROBLEM

The audit decision modelled in this study is the assessment of inherent and control risk made at the account balance level within the purchases, accounts payable and inventory transaction cycle. This decision was selected for two reasons. First, the decision arises in a wide range of different business activities, so that the resulting decision models will be widely generalisable. Second, most auditors have experience of auditing this transaction cycle, rendering the required knowledge readily accessible.

Audit risk assessments of this type commonly occur within the framework of the audit risk model. This framework is extensively explored in professional literature and auditing texts, as well as auditing standards. The use of this framework lends audit risk assessments a certain degree of structure and renders them more suitable for decision modelling. The imposition of this structure may lessen the need for the meta-knowledge facility offered by knowledge-based modelling. Since past research has applied knowledge-based modelling to this problem domain, the importance of the question posed in this paper is heightened. The aim of this paper is to determine whether these past applications were justified or whether the simpler set of tools offered by the procedural model is adequate.
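For orientation, the audit risk model referred to above is conventionally expressed in multiplicative form (this is the standard textbook rendering rather than a formula taken from the study itself):

    AR = IR × CR × DR

where AR is overall audit risk, IR is inherent risk, CR is control risk and DR is detection risk. For a chosen target level of AR, the inherent and control risk assessments modelled in this paper determine the detection risk, DR = AR / (IR × CR), that the remaining audit procedures must achieve.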
MODEL CONSTRUCTION

Data collection

The data requirements to model audit risk assessments are twofold: the identification of the required risk indicators, and the assessment of their impact upon the assessed risk. In this instance, a case study based approach to data collection was
adopted. A case study instrument was adopted as the most suitable vehicle to acquire and integrate knowledge from a group of respondents. A respondent group, rather than an individual, was employed as the basis of modelling because the problem being modelled is not one with definitive correct answers, thus group consensus seemed a useful means of avoiding the bias that an individual expert’s viewpoint might introduce. It will be noted that whilst the study of Bharadwaj et al. (1994) relied on a single expert, those of Peters (1989) and Delisio et al. (1994) chose to rely on a group of experts, and to incorporate existing published literature into their knowledge-bases.

This case study instrument, handed to participants at the conclusion of a professional development seminar for audit practitioners, was completed in their own time and returned by mail, by a sample of 79 auditors (representing a response rate of 22.6%). Whilst this rate is relatively low, it is a reflection of the time that a respondent was required to invest to review and complete the instrument, the nature of which is explained below. Since Abdolmohammadi and Wright (1987) suggested that students and inexperienced staff provided poor surrogates for the judgement of experienced auditors, the survey was administered to auditors having at least three years’ experience, the minimum requirement for full professional membership. Thus, a group of experienced practitioners, active in the audit area, provided the expertise for this modelling project.

The instrument took the form of an audit planning memorandum and accompanying working papers. The information presented therein covered the purchases, inventory and accounts payable areas of a manufacturing company. The material presented comprised: a description of the company’s operations; a completed internal control questionnaire; the most recent period’s unaudited financial statements; audited statements for the two periods prior to that (along with common size equivalents); industry comparative data; and an audit programme with initial estimates of hours required to complete each of the programme steps.

Eight variations of the basic survey instrument were prepared. Each respondent received a single variation at random. Between nine and eleven responses to each variation were received. These eight differing versions were derived by varying the facts presented in the material of the case from one extreme to the other of the following three continua: the presence of high or low inherent risk indicators, the presence of high or low control risk indicators, and indications of error suggested by analytical review results or of freedom from such indications. These permutations ensure that the knowledge captured covers the full range of conditions under which auditors operate.

The instrument was pre-tested by a sample of ten auditors. The members of the pre-test sample were asked to assess inherent and control risk and the likelihood of a material mis-statement at the account and transaction cycle level on a ten-point Likert scale, and to comment on any deficiencies or ambiguities in the instrument or any difficulties they found in its completion. The results of this pre-test confirmed that the manipulations to the case data acted in the manner desired, in that the pre-test respondents were able to identify correctly the low and high situations for each category of risk.

The responses required of participants, all measured on a ten-point Likert scale, comprised:

• their assessment of inherent risk associated with each transaction cycle account;

• their assessments of control risk in respect of each account;

• the attributes of the case, or risk indicators, relied on in making their inherent risk assessments, along with a weighting out of ten as a measure of the impact that each indicator had on their risk assessment;

• the attributes of the case they used to make their assessment of control risk, along with a weighting out of ten as a measure of the impact that each indicator had on their risk assessment;

• the additional factors they would have used to expand their assessments of inherent risk, if information regarding them had been available; and

• the additional factors they would have used to expand their assessment of control risk, if information regarding them had been available.

Whilst auditors do not ordinarily use a ten-point scale as a basis to express risk assessments, such a scale was employed here to enable the observation of more precise linkages between risk indicators and their impact on risk assessments. The pre-test sample of ten auditors indicated no difficulty in working in terms of this scale.

The latter two items were included to gather data on the full set of risk indicators relied on by the respondents, rather than just those presented
by the researchers. The importance of the requirement not to impose the modeller’s beliefs on the expert was emphasised in Reitman-Olson and Rueter (1987) in their survey of knowledge acquisition techniques.

A number of these items necessarily rely on the auditor’s self-insight. Past research has shown limitations in this regard; however, this is an inherent limitation present in all knowledge-based models, which necessarily rely to some extent on the decision maker’s ability to expound how they make their decisions. Ultimately, the extent to which modelling via this approach succeeds will be determined by comparison with human decision output, a comparison which is presented below.

Whilst the number of responses to each of the eight variations of the instrument is relatively low (an average of ten responses) when compared to samples generally employed in statistical studies, the level of response received compares favourably with sample sizes in studies such as Peters (1989), and seems more than adequate for a modelling project of this type.

The results from this instrument provide a knowledge-base consisting of a set of inherent and control risk indicators that have been selected by practising, experienced auditors, as well as a measure of the impact of those risk indicators upon their risk assessments. The computer models discussed below all employ this knowledge-base to the extent made possible by the modelling techniques and tools employed in each case.

Model construction

The discussion below presents an overview of the two knowledge-based models and the procedural model constructed. Further details of the construction of two of these models, the procedural model and the CLIPS-based knowledge-based model, can be obtained in Goldschmidt et al. (1995).

The two packages used to build the knowledge-based models were chosen after considering a number of factors. A wide and growing range of knowledge-based modelling tools was available, all with their own strengths and weaknesses to bring to the problem modelled. These packages also vary widely in their accessibility and cost. A decision was taken to use non-proprietary packages, running on Windows-based personal computers, which were readily available to researchers and decision makers, on a worldwide basis.

Bharadwaj et al. (1994) employed a LISP language tool to construct their model and found this able to handle risk assessment problems successfully. A public domain implementation of LISP, X-LISP (version 2.1f), was recommended by colleagues working in the artificial intelligence field. Given this choice, another knowledge-based development shell (CLIPS version 4.2) was selected as embodying a modelling approach that was clearly different to that inherent in X-LISP and therefore offering a contrasting set of strengths and weaknesses.

Whilst differing in their approaches, both knowledge-based models employ rule-based knowledge representation, consisting of a series of conditional tests where alternative actions are taken on the basis of the outcome of previous rules. This broad approach seems appropriate to the risk assessment task, which professional literature suggests is based on the evaluation of a series of risk indicators, which carry consequential implications for the resulting risk assessment.

First knowledge-based model (CLIPS version 4.2)

This first knowledge-based model was constructed using CLIPS version 4.2. Separate modules were constructed to deal with inherent and control risk assessment. These modules utilise user responses to computer generated queries to produce a risk assessment. Responses to the survey instrument, described above, are used to derive the risk indicators underlying the rules comprising the knowledge-base. In order to construct a valid rule a respondent’s observation must have the following attributes: a clear indication of the aspect of the client’s operations that the auditor observed, a clear indication of the auditor’s assessment of the strength of that attribute, and a weighting out of ten to provide a measure of the importance of that observation in assessing the client’s risk levels.

The first step in knowledge-base construction was to classify the observations thus obtained into those that were positive indicators (reducing the assessment of risk) and those which were negative indicators. Weightings out of ten, provided by the respondents, for positive risk assessment factors become positive contributions to the risk assessment score calculated by the module, those for negative factors becoming negative
contributions. Any observations that could not be clearly identified as positive or negative were disregarded. Where more than one respondent identified a risk indicator, the risk score was the arithmetic mean of the weightings they supplied.
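As an illustration of this step, the following sketch shows one way the signed, averaged weightings could be derived from respondent observations. It is written in Python for readability rather than in the CLIPS, X-LISP or QuickBasic environments actually used in the study, and the indicator texts, classifications and weightings shown are invented.

    from collections import defaultdict

    # Each observation: (risk indicator, classification, weighting out of ten).
    # Following the paper's convention, 'positive' factors contribute their
    # weighting positively to the module's risk assessment score, 'negative'
    # factors contribute negatively, and unclassifiable observations are
    # disregarded.
    observations = [
        ("The client had good segregation of duties", "positive", 8),
        ("The client had good segregation of duties", "positive", 6),
        ("The client has a past history of inventory errors", "negative", 7),
        ("Management attitude was unclear", "unclassified", 5),
    ]

    def build_knowledge_base(observations):
        """Average the weightings supplied by respondents for each indicator."""
        signed = defaultdict(list)
        for indicator, classification, weighting in observations:
            if classification == "positive":
                signed[indicator].append(+weighting)
            elif classification == "negative":
                signed[indicator].append(-weighting)
            # anything else is disregarded
        return {ind: sum(ws) / len(ws) for ind, ws in signed.items()}

    print(build_knowledge_base(observations))
    # {'The client had good segregation of duties': 7.0,
    #  'The client has a past history of inventory errors': -7.0}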
The observations made by respondents are converted into questions that could be presented to the user of the model. This is achieved by a minimal process of re-phrasing, intended to make the least possible change in meaning, for instance, ‘The client had good segregation of duties’, becomes, ‘Did the client have good segregation of duties?’ As well as generating risk assessment scores, the user’s affirmative or negative response to each rule will determine which further rules are applied to the case being examined.

The CLIPS shell’s paradigm of operation is based around a schedule of facts established at the start of analysis. Additional facts can subsequently be added or deleted by the operation of the model’s rules. Further rules are deemed relevant and thus applied on the basis of the facts identified to date. When inherent and control risk assessments have been completed, the risk assessment scores generated are scaled from the maximum possible range to a range of zero to ten, for each transaction cycle major account (purchases, inventory and accounts payable). Separate scores are reported for inherent and control risk.
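The exact scaling routine is not reproduced in this paper; the sketch below assumes a simple linear mapping from the module’s maximum possible score range onto the zero-to-ten reporting scale, using invented weightings.

    def scaled_score(confirmed, all_weightings):
        """Rescale a raw signed score onto the 0-10 reporting range.

        confirmed: signed weightings of the indicators the user confirmed.
        all_weightings: signed weightings of every indicator in the module,
        which fix the maximum possible range of the raw score.
        """
        raw = sum(confirmed)
        low = sum(w for w in all_weightings if w < 0)    # lowest possible raw score
        high = sum(w for w in all_weightings if w > 0)   # highest possible raw score
        if high == low:                                  # degenerate knowledge-base
            return 5.0
        return 10 * (raw - low) / (high - low)           # linear mapping, assumed

    # Invented weightings for one account's inherent risk module:
    module = [7.0, -7.0, 4.5, -3.0]
    print(round(scaled_score([7.0, -3.0], module), 2))   # 6.51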
Second knowledge-based model (X-LISP version 2.1f)

The second knowledge-based model was constructed using the X-LISP package, version 2.1f. The use of the X-LISP environment enabled a more flexible modelling approach to be adopted than was the case with the CLIPS environment. Utilising X-LISP it is possible to establish a series of arrays that can be used to store the knowledge-base. It is then possible to construct the model by coding general instructions that will draw data from these arrays to apply specific rules, and their related meta-knowledge.

The arrays hold a query to be presented to the user of the model to gather information on the risk indicator that is the object of each rule. They also define the interactions between rules, specifically the implication that a positive or negative response to the query has for the overall risk assessment, and which other risk indicators (and hence rules) are relevant to the case under examination. As with the CLIPS model, the impact of each risk indicator on the overall risk assessment is expressed as a weighting out of ten.

In operation, the X-LISP model behaves as follows. The user is presented with the first query (derived from the survey instrument responses in the same manner as the CLIPS model), to which they will make an affirmative or negative response. On the basis of this response a revision will be made to the model’s risk assessment scores, and a further rule or rules will be chosen as relevant to the problem under examination. Each of these rules will in turn generate a query, which is presented to the user, the positive or negative responses to the queries being the basis of further risk assessment revisions and the selection of further rules. This process continues until no further rules are indicated as relevant by the user’s responses.

One key refinement was rendered possible by the flexibility of the X-LISP environment. It was possible to reduce user input where a user’s response to one rule was perfectly predicted by some previous response. Within the X-LISP environment it was possible to record such perfectly ‘implied’ responses without recourse to user input. Thus, we see another way in which rules can interact with each other, in that a user’s response to a query generated by one rule can not only indicate the relevance of another rule but also the response that that rule will receive.

Once the analysis of these implied responses has been completed, the reporting of the risk assessments mirrors the steps described in regard to the CLIPS model.

Procedural model

The procedural model was developed using the Microsoft Quick Basic language (version 4.5). The data, comprising the risk indicators used by the knowledge-based models, is accessed by the procedural model from a set of files. The programme draws on this set of data files to obtain the ‘rules’ it requires for its risk analysis, in a manner which mimics, so far as possible within the context of its procedural capabilities, the operation of the CLIPS and X-LISP models discussed above. These are sequential access files and are read into data arrays to enable the programme to access the data more quickly.


The programme commences its analysis by printing out the query form of the first risk indicator in the inherent risk assessment data. The programme will then pause and await a user response, which will be either Y (yes) or N (no) as appropriate. According to this response the programme will perform the following operations:

• it will record the positive or negative (if any) statement form of the appropriate risk indicator in an explanation file;

• it will increase or decrease the relevant influence category scores by the positive or negative risk assessment score of that risk indicator;

• it will move to the next risk indicator from the array indicated by the positive or negative ‘goto’ values (thus approximating the meta-knowledge functions of a knowledge-based environment), where it will repeat this process; and

• if any other positive or negative responses (to other rules) are implied by the user’s response, the associated risk assessment scores are also recorded along with the entry of the statement forms of these rules in the explanations file.

The programme continues this process until it encounters a ‘goto’ value of zero, which denotes that the end of that stage of analysis, whether it be inherent or control risk assessment, has been reached. When this process is complete for inherent risk assessment it is repeated for control risk assessment.

These ‘goto’ values function as the procedural model’s approximation of meta-knowledge. They support the chaining of rules sequentially, based on user responses. This differs from the more flexible meta-knowledge that the two knowledge-based environments support. In the case of the CLIPS model a schedule of facts is constructed based on user responses, and these facts can deem a range of further rules relevant. The X-LISP model’s meta-knowledge enables the direct linking of multiple rules together based on user responses.
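The sketch below gives a rough Python rendering of the chaining mechanism just described. The record layout, with per-answer scores, ‘goto’ indices and implied responses, is an illustrative reconstruction rather than the actual QuickBasic data file format, and the questions and weightings are invented.

    # Illustrative rule records: the query text, the signed score applied for a
    # 'Y' or 'N' answer, the index of the next rule for each answer (a 'goto'
    # value of 0 ends the stage), and any responses implied for other rules.
    RULES = {
        1: {"query": "Did the client have good segregation of duties?",
            "score": {"Y": -7.0, "N": 7.0},
            "goto": {"Y": 2, "N": 3},
            "implies": {"Y": {3: "N"}}},     # a 'Y' here also answers rule 3
        2: {"query": "Has the client a past history of inventory errors?",
            "score": {"Y": 6.0, "N": -2.0},
            "goto": {"Y": 0, "N": 0},
            "implies": {}},
        3: {"query": "Were control weaknesses reported in the prior year?",
            "score": {"Y": 8.0, "N": -3.0},
            "goto": {"Y": 0, "N": 0},
            "implies": {}},
    }

    def run_stage(rules, start=1, ask=input):
        """Chain through the rules, accumulating a score and an explanation trail."""
        total, explanations, recorded = 0.0, [], {}
        current = start
        while current != 0:
            rule = rules[current]
            if current in recorded:          # response implied earlier; already scored
                answer = recorded[current]
            else:
                reply = ask(rule["query"] + " (Y/N) ").strip().upper()
                answer = "Y" if reply.startswith("Y") else "N"
                recorded[current] = answer
                total += rule["score"][answer]
                explanations.append((rule["query"], answer))
            for other, implied in rule["implies"].get(answer, {}).items():
                if other not in recorded:    # score implied responses immediately
                    recorded[other] = implied
                    total += rules[other]["score"][implied]
                    explanations.append((rules[other]["query"], implied))
            current = rule["goto"][answer]   # a 'goto' of zero ends this stage
        return total, explanations

    # Example run with canned answers in place of interactive input:
    canned = iter(["Y", "N"])
    score, trail = run_stage(RULES, ask=lambda prompt: next(canned))
    print(round(score, 1), trail)            # -12.0 plus three explanation entries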
The use of a procedural model and its restricted approximation for meta-knowledge must necessarily impose certain limits upon the assessment of audit risk. Such a model is a fixed algorithm embodying fixed assumptions about the problem that it addresses. If these simplifying assumptions are not excessively far removed from reality, however, this simpler approach to modelling will be able to reproduce the results of the knowledge-based models developed in this paper reliably, and will also be able to reproduce, on a consistent basis, the results of human practitioners in the auditing field.

Such a comparison, which is made solely on the basis of decision output and not on the basis of decision approach, is addressed in the next section of this paper. McCarthy et al. (1992) draw a distinction between empirical artificial intelligence research, which attempts to expound human decision-making processes, and applied artificial intelligence research, which attempts to reproduce human output. As with most work addressing audit decision problems, this paper concentrates on the latter objective. Thus, the comparisons presented herein are restricted to the output level.

COMPARISON OF MODELS

Approach to comparison

If such a comparison is to be meaningful, the decision models compared must also be able to produce useful results. To provide assurance of this, the output of each model is also compared to the decision output of human decision makers.

O’Keefe and O’Leary (1993), whose paper directly examines quality assurance for knowledge-based systems, described the traditional approach to software quality assurance as consisting of two phases of activity: verification and validation. In their terms verification provides assurance that a system, or, in this case, decision model, meets the specifications established for it. Validation provides assurance regarding the output of the model.

O’Keefe and O’Leary suggested that, in addition to these two phases, three further phases should be added to the evaluation process. The first of these additional phases they labelled credibility assessment, a measure of the degree to which the finished model can be relied upon by its users. O’Keefe and O’Leary presented the stages of the evaluation process as stages in a hierarchy used to make a complete assessment of system quality. The stages in the hierarchy build upon one another, in the sense that if a model is to achieve credibility it must have been successfully verified and validated. Each stage in the hierarchy depends upon the successful completion of those below it. Thus systems must be verified to ensure that they meet their specifications before validation can sensibly begin, and so forth.


The final two stages in O’Keefe and O’Leary’s hierarchy (evaluation and assessment) address, respectively, the correspondence of the model to the needs of its working environment and its success in day-to-day operation. These latter two phases lie beyond the scope of the comparison undertaken in this paper as they rely upon the model’s use in practice over an extended period of time.

Thus, the research project documented here addresses three stages in the quality assurance process: verification, validation and credibility. The first stage, verification, was addressed by detailed examination of the model code, in order to obtain assurance that the models as constructed were error-free and met the specifications set for them.

Validation assesses the ‘correctness’ of decision output or, in the case of a decision problem of the type examined here, to which there is no identifiable correct answer, the degree to which it corresponds to the output of human practitioners. If a model is to be credible it must, obviously, provide valid decision output. However, credibility is also concerned with the usability of the output presented to the user. Is the model’s output in a format that the user can interpret and apply to real-world situations? Validity and credibility, closely related issues in the context of this model, are the two phases of quality assurance that are addressed below.

In the case of this research project the central application of the validation process will be the comparison between decision models, documented below, whilst the question of credibility will be addressed in the next section, where collection of comparative human decision data is discussed.

Collection of comparative data

As described above, the modelling undertaken here yields quantitative, though none the less judgemental, risk assessment scores. This quantitative approach is imposed by adherence to the audit risk model as it is conventionally described. If the model is to be credible in O’Keefe and O’Leary’s terms, output in this format must be acceptable to the end users of the model. The testing process outlined below provides assurance that this is so.

The data used to test and compare the completed decision models was collected using a second survey instrument, completed by a sample of 27 auditors, all having three or more years’ experience. The second survey instrument was relatively time consuming to complete (approximately one hour), and it was felt necessary to recompense respondents with a sum of $A100. This test sample was intentionally composed of auditors having a level of experience (3 years or more) comparable to those who completed the first instrument used for model construction. Since these respondents were to be paid for their efforts, an organised session was arranged for them to complete the instrument, which was handed to them upon their arrival. Thirty auditors had indicated their intention to attend this session; in the event, three of these failed to do so.

The survey instrument presented a series of five hypothetical cases to each participant. The information given to a participant in each case comprised the following items:

• an ‘Inherent Risk Assessment Dialogue’, which presents the series of questions that the user of one of the computer models was asked by that model when performing a risk assessment and the original user’s responses to those questions;

• a ‘Control Risk Assessment Dialogue’, which is presented in the same format as the inherent risk assessment dialogue referred to above.

This was the only information presented to the participants to describe the subject of the audit engagement. This approach ensures that the human auditors have exactly the same data set available to them as the computer models. This is not the form in which the respondents are used to seeing audit decision data presented. Whilst it might be argued that this is a potential source of bias, this is true to some extent of all survey instruments, since they cannot hope to present the data in a form that is familiar to each of the respondents. Also, were the data to be presented to a control group in a ‘more familiar’ format, there is no way to guarantee that the same content is conveyed. On the basis of this data the respondent was required to perform the following tasks:

• to provide inherent and control risk assessment scores (over a possible range of 1 to 10, where 1 denotes low risk and 10 denotes high risk) for each of the Purchases and Disbursements, Inventory and Accounts Payable transaction cycles;
• as a basis for future extensions to this work, to indicate the adjustment rate, expressed as a percentage, they would apply to the audit hours that were initially allocated to a range of audit programme steps.

Thus, the case study respondent was provided with the same data set that was employed by the computer models, and was required to perform the same tasks. This should ensure that the respondents to the second instrument performed a task that was exactly comparable with that to which the decision models constructed in this paper were applied.

An outline of each of the five hypothetical cases employed in the instrument is presented below. The authors will be pleased to supply copies of either survey instrument upon request.

1. Case 1 represents a publicly listed, but not publicly prominent, audit client. This company does not carry out any overseas operations. No significant changes in its operations have taken place recently, nor are any expected. The client is well established, has an extensive customer base and its internal controls are generally good.

2. Case 2 represents a private company subject to significant day-to-day control from its ownership and having a ‘Small Management’ structure. This company has recently experienced significant management turnover and is currently working with new accounting staff. This client has a past history of inventory errors and is currently facing new competitive threats. The client has a past history of material audit adjustments and internal controls are generally deficient.

3. Case 3 is a company having a similar management structure to that in case 2 above. However, this company differs from the foregoing one in a number of respects. Firstly, it enjoys significantly better relations with its management; secondly, it does not have a past history of inventory errors; and it has significantly better internal control.

4. Case 4 deals with a publicly listed company, which has a volatile share price. The company is currently facing a new competitive threat in its operating market place. The company has a corporate structure based around a ‘Management by Objective’ approach and has an established audit committee. The company’s operations have exposed it to changing technology and it is currently facing a new competitive threat, although it is well positioned within its market place.

5. Case 5 deals with a private company subject, once again, to significant influence from its ownership. This company has one major supplier with whom it has a history of disputes, a past history of material audit adjustments, a history of both material and immaterial fraud and weak internal controls. Additionally this company appears to be subject to some ‘going concern’ problems.

As suggested above, these scenarios were described using queries generated by one of the computer risk assessment models constructed in this paper, and a user of that model’s responses to those queries. The responses that were appropriate to describe each of these scenarios were determined in consultation with academic colleagues in the auditing area, and by reviewing a range of undergraduate auditing texts.

O’Keefe and O’Leary (1993) suggested that a statistical approach to validation was appropriate where decision models produce output that is quantitative in nature and where that output is to be compared to decisions made by one or more human experts. To this end the next section presents an examination of the correlation between the output of the different models and the human practitioners. In particular, O’Keefe and O’Leary advocated a technique based on the work of Fleiss (1981) for use where the decision output is numerical in form but not continuous in nature. An evaluation using this approach is presented in the following section of this paper.

Tests of correlation

The risk assessment scores used in this comparison exercise are integers over a range of zero to ten. Six scores (being two risk assessments for each of three transaction cycles) are generated in respect of each of the five case scenarios. As such each data sample consists of 30 integers over a range of 0 to 10. Spearman rank correlations are reported in Table 2.

Table 2: Spearman rank correlation coefficients

Correlations     CLIPS     X-LISP     Procedural
X-LISP           0.961
Procedural       0.957     0.993
Auditors         0.818     0.841      0.840


It is clear from these results that all four variables are highly correlated with each other. Since all the coefficients reported here are relatively large and the samples involved are relatively small, it is difficult to draw any specific inferences about relationships between particular variables. However, it is encouraging to note that the risk scores generated by all the models and the mean score of the human decision makers are highly correlated.
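Coefficients of this kind can be reproduced with standard statistical tooling; the fragment below shows the calculation in Python with SciPy for one pair of 30-observation samples, using invented scores rather than the study’s data.

    from scipy.stats import spearmanr

    # Thirty matched integer risk scores (two risk assessments for each of
    # three accounts across five cases); the values are invented.
    procedural = [3, 4, 7, 8, 2, 6, 5, 5, 9, 1, 4, 6, 7, 3, 8,
                  2, 6, 5, 7, 4, 9, 3, 5, 6, 8, 2, 7, 4, 6, 5]
    auditors   = [3, 5, 7, 9, 2, 5, 5, 6, 8, 2, 4, 7, 6, 3, 8,
                  1, 6, 4, 7, 5, 9, 3, 4, 6, 7, 2, 8, 4, 5, 5]

    rho, p_value = spearmanr(procedural, auditors)
    print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")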
Tests of classification

Whilst the descriptive statistics presented above serve to provide confidence in the comparability of the decision models, they do not provide rigorous tests of that comparability. As outlined earlier, each variable consists of a sample of 30 observations. Each of those observations corresponds to a similar observation in respect of the other variables. For instance, each variable contains an observation that relates to inherent risk in the purchases cycle in respect of the first case, an observation that relates to control risk in respect of purchases, and so forth. As such it is possible to derive matched pairs of observations between any two variables.

Fleiss’ (1981) model compares the classifications of categorical decision output applied by two different ‘raters’ (in this case audit risk assessors) to see how often they are consistent and how often they differ. His technique is based around the measure k (kappa). k assumes a value of 0 when the proportion of agreement matches that which we would expect to arise by chance, and 1 when perfect agreement exists between the two raters. Fleiss’ work also provides a measure to evaluate the significance of k.

Fleiss’ model assumes that any difference in classification implies a misclassification. This need not always be the case. Where categories are identified numerically, their identification forms an ordinal scale. In this situation, a small difference in classification may equate to the difference in judgement that will naturally arise between practitioners regarding subjective decisions, whilst a larger difference is an example of clear inconsistency.

This section presents two analyses based on Fleiss’ technique. The first of these applies his approach strictly, whilst the second is a modified approach designed to take the subjective attributes of the decision problem modelled here into account. Given that the risk assessment scores here are integers, the smallest possible widening of the matching interval between raters is to accept risk assessment scores from two sources as matching when they are within one point on the ten-point scale.

Table 3 presents a comparison of the three computer models and the mean auditor judgement, applying Fleiss’ approach strictly.

Table 3: Comparisons of all pairs of variables using k

                 CLIPS                X-LISP               Procedural
X-LISP           k = 0.667
                 s.e.0(k) = 0.051
                 Sig = 0.000
Procedural       k = 0.703            k = 0.889
                 s.e.0(k) = 0.049     s.e.0(k) = 0.051
                 Sig = 0.000          Sig = 0.000
Auditors         k = 0.222            k = 0.074            k = 0.074
                 s.e.0(k) = 0.054     s.e.0(k) = 0.057     s.e.0(k) = 0.058
                 Sig = 0.001          Sig = 0.103          Sig = 0.107

The results presented here confirm our earlier results to a large extent by showing strong and highly significant links between the two knowledge-based models, between the knowledge-based models and the procedural model, and between the CLIPS-based model and the human decision makers’ mean response. This suggests that, in respect of the chosen decision problem, models constructed using a procedural approach are able to satisfactorily replicate those constructed using a knowledge-based paradigm.

These results do highlight a weaker, though in the case of the CLIPS model still highly significant, linkage between the mean human respondents’ decision output and the output of the computer models generally. This weaker link may be seen as a limitation upon the usefulness of all the models presented here. However, it must be borne in mind
that these results represent a strict application of Fleiss’ classification test, which looks only at strict matches of category. Results based on the relaxed form of Fleiss’ technique discussed above are presented in Table 4.

Table 4: Comparisons of all pairs of variables using k with widened matching interval

                 CLIPS                X-LISP               Procedural
X-LISP           k = 0.944
                 s.e.0(k) = 0.220
                 Sig = 0.000
Procedural       k = 0.941            k = 1.000
                 s.e.0(k) = 0.240     s.e.0(k) = 0.238
                 Sig = 0.000          Sig = 0.000
Auditors         k = 0.645            k = 0.488            k = 0.543
                 s.e.0(k) = 0.181     s.e.0(k) = 0.147     s.e.0(k) = 0.140
                 Sig = 0.001          Sig = 0.001          Sig = 0.000
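To make the classification test concrete, the sketch below computes a two-rater agreement measure of this general form in Python. With tol=0 it is the usual kappa for exact category matches; with tol=1 scores within one point are treated as matching, with chance agreement estimated from the marginal score distributions. This is an illustration of the approach only; it is not the authors’ code, and the precise adaptation used to produce Tables 3 and 4 (including the significance measures) is not reproduced here.

    from collections import Counter

    def kappa(scores_a, scores_b, tol=0):
        """Chance-corrected agreement between two raters' integer scores.

        tol=0 gives the usual two-rater kappa (exact matches only); tol=1
        widens the matching interval so scores within one point agree.
        """
        n = len(scores_a)
        assert n == len(scores_b) and n > 0
        # observed proportion of agreement
        p_obs = sum(abs(a - b) <= tol for a, b in zip(scores_a, scores_b)) / n
        # chance agreement estimated from the two marginal distributions
        pa, pb = Counter(scores_a), Counter(scores_b)
        p_chance = sum((pa[i] / n) * (pb[j] / n)
                       for i in pa for j in pb if abs(i - j) <= tol)
        return (p_obs - p_chance) / (1 - p_chance)

    # Invented scores for six matched risk assessments:
    model = [3, 4, 7, 8, 2, 6]
    human = [3, 5, 7, 9, 2, 5]
    print(round(kappa(model, human), 3), round(kappa(model, human, tol=1), 3))
    # 0.455 1.0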
The relaxation of the strict assumptions of Fleiss’ approach has served to emphasise the strong linkages between the different computer models. At the same time, given the adjustment to Fleiss’ model, strong significant relationships between human decision output and all other variables are identified, which adds confidence in the idea that all the models potentially possess utility in a practical environment, at least at a decision review level.³

Whilst all models appear to perform relatively poorly as predictors of the mean practitioner response in Table 3, the relaxed assumptions of Table 4 reveal highly significant and relatively strong linkages between all models and the mean human response. It should be stressed that this relaxation in assumptions is the least possible acknowledgement of the subjective nature of audit risk assessments.² Confidence in the results in Table 4 is enhanced by the high correlation between the mean human response and the decision models reported in the previous section.

Confidence in the mean human response scores as being representative of the ‘correct’ risk assessment would be enhanced were the instrument presented to a larger number of respondents. It may be that a larger sample would yield an improvement in the correspondence of the decision models with the human response, evidenced by improved k scores. However, as noted above, the instrument is a relatively time consuming one for respondents to complete, necessitating their payment. Research funds available for the project did not enable the addition of a significant number of further respondents.

Whether these latter results are meaningful or not depends on the user’s expectations with regard to a computer model addressing a decision problem of this type. It could be argued that the improved results obtained here have no meaning beyond the inevitable improvement that results from relaxing Fleiss’ testing approach. On the other hand, the adjustments made represent the least possible relaxation of that approach and do not seem inconsistent with the spread of individual responses collected from the second case study, or with what one would expect in regard to subjective judgements made by practitioners generally.

CONCLUSIONS AND FUTURE DIRECTIONS

Conclusions

The results presented above show that it was possible to satisfactorily replicate the decision output of the two knowledge-based models, which were validated against human decision output, using the more restricted techniques available within the procedural model. This conclusion is based on results derived using measures of correlation and an adaptation of an output classification method presented in Fleiss (1981).

Whilst the results presented here do not support a claim that any of the decision models replicate human decision output precisely, it seems clear from the measures reported that the models constructed are sufficiently credible to confirm the human decision makers’ conclusions in the majority of
cases. In so doing, it is possible that they might function as a means to identify those
extreme cases that require further attention


from more highly skilled decision makers. This completely realistic and may not provide all
the information that practising auditors rely on
possibility is equally true of the procedural model
to make real-world decisions, although every
as of the knowledge-based models.
effort was made to render the case study
By examining a specific decision problem, to instruments as realistic as possible. The
which knowledge-based techniques have already
reasons underlying the format of data
been applied and finding a simple procedural
presentation in the second case study
technique to be adequate, this project highlights
instrument have been discussed above.
the need to devote time to selecting an appropriate and efficient modelling technique prior to commencing a decision-modelling project. The aim in constructing a decision model must not be limited to the construction of a valid model (something which cannot be fully assessed until the model is completed) but to do so in the most parsimonious and efficient manner possible. How this might, in general, be achieved is discussed below in the 'Future extensions and implications' section.

As suggested by Weber (1997), the most appropriate choice need not be the most advanced technology available. Whilst this project examined one narrowly defined problem, it highlights the need to consider the availability of alternative approaches before commencing a modelling project using any one technique. How this need might be developed into future research directions is discussed in the section below.

Limitations of the project

In interpreting the results outlined above it is necessary to bear in mind some limitations.

• This project employed two different packages to develop knowledge-based models, in an attempt to ensure that limitations of a specific shell would not bias the overall results of the project. The risk of this bias could be further reduced by the construction of additional models, but this option was not a practical one in the time available.
• The audit risk model is widely used in auditing methodologies. However, it is often not applied in the strictly computational sense adopted here. As such, a degree of artificiality may be introduced into the decision modelling undertaken here. This limitation would appear to be minimised by the manner in which the second case study instrument was administered, which appears to indicate that models in this form are credible to users.
• This study uses data collected using case study instruments. Such instruments can never be …
• An issue related to the realism of the case study instruments also arises. For the comparison of modelling techniques to have any validity, the data captured by the first instrument must provide a sufficiently detailed and sophisticated rendering of the risk assessment process to enable the subtleties of that process to be incorporated in the models constructed, to the full extent permitted by each modelling environment. As noted above, the first instrument was pre-tested, in part to ensure that a sufficiently sophisticated re-creation of the risk assessment problem setting was provided.
• Similarly, if the comparison between models is to have any value, the full capabilities of each modelling environment must be applied to the models' construction – it is for this reason that the capabilities of each model vary (see, for instance, the discussion of the 'implied response' issue in the discussion of model construction above).
• The relatively large amounts of time required to complete the instruments used for model construction and validation meant that this project had to be based around relatively small sample sizes. Had it been practical to do so, validity could have been enhanced with larger sample sizes. The time frame available for this research project also precluded the examination of the models using the final two phases of O'Keefe and O'Leary's testing framework, which are concerned with the success of completed models in long-term use.
• Finally, just as the data used and the models constructed must capture the full subtlety of the risk assessment process, so too must the testing process employed capture the full subtlety of the output produced by those models. In that regard, the cornerstone of the testing process employed here is Fleiss' k measure, as advocated in the widely regarded work of O'Keefe and O'Leary (1993) as a foundation for the validation of this type of decision model (an illustrative computation of this measure is sketched below).
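To make the role of this agreement measure concrete, the sketch below shows a textbook calculation of Fleiss' kappa for a set of rated cases, written in Python. The rating matrix, the five-point scale and the function name are hypothetical and are not the study's data; nor does the sketch attempt the widened matching interval discussed in note 2 below.

    def fleiss_kappa(counts):
        """counts[i][j] = number of raters placing case i in risk category j."""
        n_cases = len(counts)
        n_raters = sum(counts[0])          # assumes the same number of ratings per case
        n_categories = len(counts[0])
        # Observed agreement for each case.
        p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
               for row in counts]
        p_bar = sum(p_i) / n_cases
        # Agreement expected by chance, from the marginal category proportions.
        p_j = [sum(row[j] for row in counts) / (n_cases * n_raters)
               for j in range(n_categories)]
        p_e = sum(p * p for p in p_j)
        return (p_bar - p_e) / (1 - p_e)

    # Five hypothetical cases, each rated by four raters on a five-point risk scale.
    ratings = [
        [0, 0, 1, 3, 0],
        [0, 1, 2, 1, 0],
        [0, 0, 0, 2, 2],
        [1, 2, 1, 0, 0],
        [0, 0, 3, 1, 0],
    ]
    print(round(fleiss_kappa(ratings), 3))

Values close to 1 indicate agreement well above what chance alone would produce.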
Future extensions and implications

Brown (1991) suggests that the development of knowledge-based systems will provide a way of coping with the changing balance of the population. She suggests that as the proportion of young people entering the workforce declines there will be a greater need for knowledge-based systems to take over the roles that they traditionally fill in the workforce. Other arguments for the construction of decision models include improved accuracy, consistency and the development of training tools. If these arguments are to be accepted, they must also indicate the need for decision models to be developed in the most efficient way possible. Thus, if a procedural decision aid, with its more parsimonious approach, is adequate to model a particular decision, then that approach should be adopted. It is for these reasons that the research theme pursued in this project, the need to choose between alternate modelling approaches, is a significant one and needs to be developed further.
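As a purely illustrative aside, the contrast between the two approaches can be seen by writing the same toy control-risk heuristic first as a small procedural routine and then as declarative rules applied by a generic matcher of the kind a knowledge-based shell automates. The risk factors, score adjustments and 1–7 scale below are invented for this sketch and are not drawn from the models constructed in this study, nor from CLIPS or XLISP code.

    # The same toy control-risk heuristic, written two ways (illustrative only).

    def control_risk_procedural(client):
        # Fixed sequence of tests: parsimonious, but the logic is frozen in code.
        score = 3                                   # start at a moderate risk level
        if not client["segregation_of_duties"]:
            score += 2
        if client["prior_year_errors"] > 5:
            score += 1
        if client["internal_audit_function"]:
            score -= 1
        return max(1, min(7, score))                # clamp to a 1-7 risk scale

    # The same knowledge held as data: each rule is (condition, score adjustment).
    RULES = [
        (lambda c: not c["segregation_of_duties"], +2),
        (lambda c: c["prior_year_errors"] > 5,     +1),
        (lambda c: c["internal_audit_function"],   -1),
    ]

    def control_risk_rule_based(client):
        # A minimal "inference engine": apply every rule whose condition fires.
        score = 3 + sum(adj for cond, adj in RULES if cond(client))
        return max(1, min(7, score))

    client = {"segregation_of_duties": False, "prior_year_errors": 7,
              "internal_audit_function": True}
    print(control_risk_procedural(client), control_risk_rule_based(client))

In this toy form the procedural version is clearly the more parsimonious; the attraction of the rule-based form lies in being able to add or revise rules without touching the code that applies them.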
This development could take the form of the type of framework advocated by Michaelsen et al. (1992). That paper developed a set of global criteria to identify applications, in that instance within the executive compensation problem domain, which were suitable for knowledge-based modelling. It is possible that a similar framework might be developed for the auditing problem domain. This paper demonstrates the need for such a framework to include, as one of its criteria, the availability of alternative modelling approaches.

The conclusions reached here will not necessarily remain current. Future developments in knowledge-based modelling, or in other modelling approaches, may give knowledge-based models properties that cannot be mimicked using procedural models (or vice versa), but which offer worthwhile advantages to the decision modeller. Thus any principles of modelling technique suitability of the sort advanced in Michaelsen et al. will need to be sufficiently flexible to cope with ongoing advances in decision modelling. This requirement for flexibility has the potential to create an ongoing stream of research opportunities, since the attributes offered by new modelling approaches and environments will need either to be addressed by a set of carefully developed global principles, or to be continuously tested against existing and new problem domains to assess their suitability for those domains. Either of these routes will require significant research effort.

The findings of this paper are also lent a certain topicality by recent large-scale audit failures. In response to these, there have been calls for future auditing standards to be more principle-based and less procedural. In this light it is interesting to review the evidence here, which suggests that some specific auditing tasks can be adequately addressed by narrowly defined procedural models.

NOTES

1. See Borthwick (1987), Giarratano and Riley (1989).

2. Where the matching interval associated with k is widened in this manner, it is possible to make observations regarding the directional dispersal of those instances accepted as a successful match between the responses of two computer models, or a computer model and the human judgement. The table below presents an analysis of those responses accepted as a successful match between each of the computer models and the mean human response, where the proportion of 'matching' responses is separated into those observations falling one point below the mean human response, precise matches and those one point above. Whilst there is a tendency for the risk score yielded by the models to be higher than the mean human response, the arguments advanced in favour of the wider matching interval suggest it is unlikely that this trend is meaningful:

            CLIPS     X-LISP    Procedural
   Over     17.4%     15.8%     20.0%
   Equal    39.1%     26.3%     25.0%
   Under    43.5%     57.9%     55.0%

   However, this table should be viewed as a caution against further relaxation of the matching interval. Were significantly low risk scores to be relied upon, there is a risk of ineffective audits with opinions founded on an insufficient level of evidence. Were overstated scores relied upon, inefficient audits would result. (A simplified sketch of this directional tabulation follows these notes.)

3. It seems clear that some adjustment of the type reported here must be made, to acknowledge the manifest subjectivity of the risk assessment
problem, evidenced by a mean range of 5.0 and standard deviation of 1.3 in individual respondents' risk assessment scores as provided in response to this instrument.
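As an illustration of the directional analysis described in note 2, the Python sketch below tabulates, for hypothetical paired scores, the proportion of accepted matches in which a model's score lies above, equal to or below the mean human response under a one-point matching interval. The scores, the interval and the over/under convention are assumptions made for the sketch and do not reproduce the study's data or its exact matching rules.

    def directional_split(model_scores, human_means, interval=1):
        # Keep only the pairs accepted as a match under the widened interval.
        matched = [(m, h) for m, h in zip(model_scores, human_means)
                   if abs(m - h) <= interval]
        over  = sum(1 for m, h in matched if m > h)    # model above the human mean
        equal = sum(1 for m, h in matched if m == h)
        under = sum(1 for m, h in matched if m < h)    # model below the human mean
        total = len(matched)
        return {"over": over / total, "equal": equal / total, "under": under / total}

    model  = [5, 4, 6, 3, 5, 7, 4, 6]       # invented model risk scores
    humans = [5, 5, 5, 4, 4, 6, 4, 7]       # invented mean human responses
    print(directional_split(model, humans))

With real model and practitioner scores this is the kind of tabulation summarised, one column per model, in the table in note 2.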
REFERENCES

Abdolmohammadi, M. & Wright, A. (1987), 'An examination of the effects of experience and task complexity on audit judgements', The Accounting Review, Vol. LXII, No. 1, pp. 1–13.
Bharadwaj, A., Karan, V., Mahapatra, R. K., Murthy, U. S. & Vinze, A. S. (1994), 'APX: an integrated knowledge-based system to support audit planning', Intelligent Systems in Accounting, Finance and Management, Vol. 3, pp. 149–164.
Borthwick, A. F. (1987), 'Artificial intelligence in auditing: assumptions and preliminary development', Advances in Accounting, Vol. 5, pp. 179–204.
Brown, C. E. (1991), 'Expert systems in public accounting: current practice and future directions', Expert Systems With Applications, Vol. 3, pp. 3–18.
Chang, M. & Monroe, G. S. (1995), 'The impact of inherent risk, control risk and analytical review results on auditors' planning judgments for the purchases, cash disbursements, and inventory cycle', Accountability and Performance, Vol. 1, No. 2, pp. 31–53.
Delisio, J., McGowan, M. & Hamscher, W. (1994), 'PLANET: an expert system for audit risk assessment and planning', Intelligent Systems in Accounting, Finance and Management, Vol. 3, pp. 65–77.
Fleiss, J. L. (1981), Statistical Methods for Rates and Proportions (2nd edn), New York: John Wiley & Sons.
Giarratano, J. & Riley, G. (1989), Expert Systems – Principles and Programming (1st edn), Boston: PWS Kent Publishing Company.
Goldschmidt, P., Lloyd, P. & Monroe, G. (1995), 'The efficiency of expert systems as a technique for modelling audit judgement', International Journal of Business Studies, Vol. 3, No. 2, pp. 13–38.
McCarthy, W. E., Denna, E., Gal, G. & Rockwell, S. (1992), 'Expert systems and AI-based decision support in auditing: progress and perspectives', International Journal of Intelligent Systems in Accounting, Finance & Management, Vol. 1, No. 1, pp. 53–63.
Michaelsen, R. H., Bayer, F. A. & Swigger, K. M. (1992), 'A global approach to identifying expert system applications in compensation practice', Intelligent Systems in Accounting, Finance and Management, Vol. 1, No. 2, pp. 123–134.
O'Keefe, R. M. & O'Leary, D. E. (1993), 'Expert system verification and validation: a survey and tutorial', Artificial Intelligence Review, Vol. 7, pp. 3–42.
Peters, J. M. (1989), Assessing Inherent Risk during Audit Planning: Refining and Evaluating a Knowledge-based Model, PhD Thesis, University of Oregon.
Reitman-Olson, J. & Rueter, H. H. (1987), 'Extracting expertise from experts: methods for knowledge acquisition', Expert Systems, Vol. 4, No. 3, August, pp. 152–168.
Simon, H. (1973), 'The structure of ill-structured problems', Artificial Intelligence, Vol. 4, No. 3/4, pp. 181–201.
Sutton, S. G., Young, R. & McKenzie, P. (1994), 'An analysis of potential legal liability incurred through audit expert systems', Intelligent Systems in Accounting, Finance and Management, Vol. 4, pp. 191–204.
Turban, E. & Watkins, P. R. (1986), 'Integrating expert systems and decision support systems', MIS Quarterly, Vol. 10, No. 2, pp. 121–139.
Waterman, D. A. (1986), A Guide to Expert Systems (1st edn), Reading, MA: Addison Wesley Publishing Company.
Weber, R. (1997), Ontological Foundations of Information Systems, Coopers & Lybrand Accounting Research Methodology Monograph No. 4, Melbourne: Coopers & Lybrand and the Accounting Association of Australia and New Zealand.
White, A. (1995), 'An analysis of the need for ES and AI in accounting education', Accounting Education, Vol. 4, No. 3, pp. 259–269.

AUTHOR PROFILES

Paul Lloyd lectures in the area of accounting information systems. His research interests include the application of artificial intelligence to accounting and accounting history.

Associate Professor Peter Goldschmidt is a senior lecturer in information management. He has extensive teaching experience in the information systems and accounting information systems areas. His research interests lie in the area of artificial intelligence.