Twenty-five Year Overview of Experimental Auditing Research: Trends and

Links to Audit Quality

Roger Simnett, UNSW Sydney

and

Ken T. Trotman, UNSW Sydney

November 2017

Recommended Citation: Simnett, R., and K. T. Trotman. 2018. “Twenty-five Year Overview of Experimental Auditing Research: Trends and Links to Audit Quality.” Behavioral Research in Accounting, forthcoming.

SUMMARY: We examine 468 experimental auditing research papers that were published in ten
leading accounting and auditing journals from 1991–2015 to address three key issues. First, we
consider the trends in experimental auditing research and find that while the total number of
papers published in the leading journals has expanded, the percentage of experimental auditing
papers has decreased substantially. Second, in order to support evidence-based standard-setting
and regulation, and to identify the audit quality issues that have been addressed, we map this
research to the IAASB Framework for Audit Quality. We find that the majority of studies have concentrated on the processing stage of the Framework, at both the engagement and firm
evolution in experimental research over the 25 years of our study, as evidenced by trends in the
topics examined, types of participants used, and data collection approaches used.

Keywords: audit and assurance; experimental research; audit quality; research opportunities

INTRODUCTION

The IAASB (2014) provides a Framework for Audit Quality (Framework) that places the primary responsibility for performing quality audits with auditors, while recognizing that auditors interact with a range of other stakeholders, including management, audit committees, regulators, and users of audit reports. The purpose of judgment and decision-making (JDM) research in

auditing is to understand the judgments and decisions of auditors and those who interact with

auditors. Most JDM research in auditing uses experimental methods and our paper provides a 25-

year overview that identifies trends in this research (which we label as ‘experimental auditing

research’). Our categorization of this research includes both auditor judgments and the

judgments of the above stakeholders in respect of their interactions with auditors, auditor

characteristics and audit reports.

The aims of JDM audit research are primarily to evaluate the quality of judgments of

auditors (or those interacting with auditors), to describe how the judgments are made, to examine

the factors that affect those judgments, and to develop and test theories of the underlying

cognitive processes by which judgments are made. The output of this research allows researchers

to suggest and test potential remedies for any deficiencies found (Libby 1981; Trotman 1996). In

this paper we examine 25 years of JDM experimental auditing research (1991–2015), with three

main aims. First, to identify trends in the publication of experimental auditing research in ten

leading international accounting journals. Second, to map the experimental auditing research

undertaken from 1991–2015 to the IAASB’s (2014) Framework (described later in this review),

in order to help inform interested parties, including the IAASB and other standard-setters and

regulators, as to what research has been conducted about audit quality at the various stages of the

audit process. Third, to document the trends of this JDM audit research by examining how the

issues examined, participants used, and data collection methods have evolved over 25 years.

Much of the motivation underlying auditing research papers is that they inform regulators,

standard-setters and audit firms. The standard-setters and regulators are increasingly seeking the

best available evidence, including empirical research, as a means of informing the development

of standards, analysis of viable solutions and evidence of impact once a standard is finalized

(Cohen and Knechel 2013). The Public Company Accounting Oversight Board (PCAOB) has

recently outlined its interest in and use of auditing research. For example, Franzel (2016) at the

American Accounting Association (AAA) Auditing Section Mid-year meeting noted:

The panel discussed opportunities for academics to inform PCAOB policy and
decision making through research and other collaborative opportunities with the
PCAOB. Recent initiatives at the Board seek to advance research related to the role of
auditing in the capital markets. These initiatives have created new opportunities to
enhance PCAOB’s collaboration with the academic community in ways that further
the public interest in auditing to protect investors.

The majority of audit research uses archival or experimental methods. The AAA Auditing

Section Research Committee compiled the 33 Years Audit Research database of 1,649 auditing

papers from 1975–2008, and coded JDM experiments (466 papers, 28.3 percent) and archival studies (458 papers, 27.8 percent) as the two most common research methods used.

years there have been two major reviews of the archival audit research, both of which have had

standard-setters and regulators as part of their intended audience. DeFond and Zhang (2014), in a

comprehensive review of mainly U.S. archival auditing research, identified significant reference

to the U.S. public regulator, the PCAOB. Simnett, Carson, and Vanstraelen (2016), in their

review of international (non-U.S.) archival auditing and assurance studies published from 1995–

2014, mapped the research to the IAASB’s (2014) Framework with an aim of identifying how

the research supported evidence-based international standard-setting.

JDM audit research also has a long history of being an applied discipline. For example,

Gibbins and Swieringa (1995) note that the motivation of JDM research is to understand and

capture the essential features of an applied setting which includes incentives, tasks, structures

and other characteristics of the accounting/auditing setting. The papers related to auditing in the

Ashton and Ashton (1995) book provide numerous suggestions of the importance of the

task/auditor interaction (Libby 1995; Messier 1995; Solomon and Shields 1995), with Gibbins

and Swieringa (1995, 245) concluding:

The strong preference for using as participants people having significant task-relevant
experience and expertise (auditors, business managers, financial analysts, bankers,
and others related to accounting-information preparation and use) has helped to
produce a body of findings about ‘real people’ and ‘real tasks’ that is impressive.

We describe this body of literature over the past 25 years by considering the flow of research

over this period in terms of the number of experimental auditing papers, percentage of total

accounting papers and percentage of total auditing papers across ten major international journals.

We then map these 468 papers to the IAASB’s (2014) Framework against both its element categories (input-values, input-knowledge, processing, and output) and its level categories (engagement, firm, and national). By mapping the studies against the IAASB’s (2014)

Framework, our review has the potential to identify and inform areas which have been

researched extensively and areas where there are opportunities for further research. We then

categorize these 468 papers according to research topic area using an extended version of the

Libby and Luft (1993) conceptual categorization including ability, knowledge, environment and

motivation, and then according to any reference to the auditing standards that motivate the

papers. This allows regulators, standard-setters and researchers to identify what presently exists

and match this to what they believe is needed. Next, as informed and appropriate methodological

choices are central to both the production and consumption of research, we consider a number of

the challenges faced when undertaking experimental auditing research. Strong research designs

make it possible to address challenging problems and produce findings that contribute to a robust

body of knowledge. In turn, this provides a solid foundation for future research. In this paper, we

explore whether particular research methods have evolved over the period 1991–2015. For each

of these areas we break this 25-year period into five five-year periods, and explore how the percentage of total accounting papers, the percentage of total auditing papers, the elements and levels of the Framework, the research topics examined, the participants used, and the data collection approaches have evolved.

Our research is of interest to a number of parties, including standard-setters, regulators and

researchers. Standard-setting and policy making groups are regularly required to review and

assess the extent to which reliance can be placed on the research findings when developing

evidence-informed standards or examining current or best practice (Trochim and Donnelly 2008;

IAASB 2014). Our ability to identify the issues and examine research published in the leading

international journals enables a more thorough assessment of the incremental knowledge that has

been gained. Practitioners can examine where they see particular gaps in the literature and

consider supporting research in what they see as the important gaps. Our data on changes in

research methods over time provides some interesting trends for editors and researchers to

consider.

Background

The early research on auditor judgments started with Ashton (1974) who systematically

examined consensus, cue usage and the stability of auditor judgments. While Ashton (1974)

studied internal control evaluation, Joyce (1976) considered audit planning judgments and found

lower levels of consensus. One explanation for the differences is the nature of the tasks in the

two studies.1 This was the beginning of an extensive literature reconciling some of the

differences in results between the two papers (see Trotman and Wood (1991) for a meta-analysis

of the experiments).

To provide the setting for our 25-year review (1991–2015) we briefly outline the six key

themes of the JDM audit research in the 1980s, much of which set the foundation for the research

in the period we examine (Trotman, Tan, and Ang 2011). First, the policy capturing research

from the 1970s was extended to address a much wider array of issues, including materiality

(Messier 1983), independence (Shockley 1981), performance evaluation (Wright 1982) and to

examine accuracy instead of consensus (Kida 1980; Simnett and Trotman 1989). Second, there

were a range of papers on heuristics and biases, particularly related to both representativeness,

and anchoring and adjustment (Joyce and Biddle 1981a, 1981b; Bamber 1983; Cohen and Kida

1989). Also, there were studies that considered the belief adjustment model which follows the

anchoring and adjustment heuristic (Ashton and Ashton 1988; Knechel and Messier 1990).

Third, a line of research considered predecisional behavior, including information acquisition and hypothesis framing (Biggs and Mock 1983; Kida 1984), with hypothesis generation studies

being introduced to the literature in Libby (1985). Fourth, the recognition that many decisions in

auditing were made by groups (Schultz and Reckers 1981; Solomon 1982; Trotman, Yetton, and

Zimmer 1983), including the review process (Trotman 1985; Trotman and Yetton 1985). Fifth, the decision aid literature began to evolve (Butler 1985; Libby and Libby 1989). Sixth, a vast

literature on the role of knowledge and expertise in auditing commenced, including outlining a

1 We note that a major difference between early JDM audit research and present research is the interpretation of
what is incremental contribution. This issue of auditor consensus and cue usage is one of the few areas of JDM
audit research where there were sufficient papers to conduct a meta-analysis. A much narrower definition of
incremental contribution appears to exist in more recent years with few studies considering task differences.

set of conditions sufficient to experimentally demonstrate a knowledge effect and consideration

of the cognitive processes through which knowledge is brought to bear on a decision task

(Frederick and Libby 1986; Abdolmohammadi and Wright 1987) and increased interest in long-

term memory (Plumlee 1985).

Libby and Luft (1993) use a conceptual equation to categorize much of the research from the 1970s and 1980s: Performance = f(ability, knowledge, environment, motivation). This equation is an important part of the framework we use to categorize the audit papers in our 25-year window.

Sample, Definition and Procedure

Our review of experimental auditing research encompasses papers published during the

period 1991–2015 in the following ten journals: Accounting, Organizations and Society (AOS);

Auditing: A Journal of Practice & Theory (AJPT); Behavioral Research in Accounting (BRIA);

Contemporary Accounting Research (CAR); European Accounting Review (EAR); Journal of

Accounting and Public Policy (JAPP); Journal of Accounting & Economics (JAE); Journal of

Accounting Research (JAR); Review of Accounting Studies (RAST); and The Accounting Review

(TAR). The rationale for the selection of these ten journals is their general recognition as the

leading accounting and auditing journals based on citation and impact. They comprise the eight journals examined in Simnett, Carson, and Vanstraelen (2016), with BRIA added to reflect the behavioral focus of our categorization, and EAR added to ensure that European trends in JDM audit research were not overlooked.

Within this JDM audit research we identify experimental auditing (and assurance) papers

using the following definition and criteria. Experiment is defined as “a scientific investigation in

which an investigator manipulates and controls one or more independent variables and observes

the dependent variable or variables for variation concomitant to the manipulation of the

independent variables. An experimental design then is one in which the investigator manipulates

at least one independent variable” (Kerlinger 1973).2 We identify the experimental auditing

papers by searching the title, keywords, abstracts or hypotheses for any reference to audit or

assurance, and experimental research method.3

The AAA Auditing Section 33 Years Audit Research database (AAA 2009) includes papers published in the following eight journals: AJPT, AOS, BRIA, CAR, JAE, JAPP, JAR, and TAR, and classifies these papers by research method. The database facilitates our collection of experimental auditing papers for the years 1991 through 2008 for these eight journals.4 We then extend our sample for these eight journals to cover the period 2009–2015. As the AAA Auditing

Section 33 Years Audit Research database does not include auditing papers published in EAR and

RAST, we extend our sample to include experimental auditing papers published in these two

journals from 1991–2015 using the same definitions.

Using both the AAA Auditing Section 33 Years Audit Research database and the hand-

collection process outlined, we identify 468 experimental auditing papers published in eight of

the ten journals (there are no experimental auditing papers published in either JAE or RAST) in

2 We also include five auditing papers that only have measured variables with no manipulated variables. These papers were published from 1991–2008 and were classified as experiment papers in the 33 Years Audit Research database. These papers have measured variables such as auditors’ knowledge and auditors’ personality. We
further include six protocol studies which do not meet the strict definition of an experiment as they did not
manipulate independent variables. However, we consider them an important part of the JDM audit literature.
3 We do not include auditing papers using economic experimental methods, which have features such as “auctions
and markets by incorporating market mechanisms, repetition which consist of a series of periods, and incentives
which use monetary payments that are contingent on behaviour” (Loewenstein 1999).
4 To verify the accuracy and completeness of our sample size by using the 33 Years Audit Research database, we
manually compared the papers published in AJPT from 1991–2008 with the papers included in 33 Years Audit
Research database. The rationale for selection of AJPT for verification is that all papers published in AJPT are
auditing papers, therefore verification of this journal is more likely to identify missing experimental auditing
papers. We adjust this database by adding six experimental auditing and assurance papers published from 1991
through 2008 in AJPT that are not included in the 33 Years Audit Research database using the definitions above.
As 94.4 percent of experimental papers (102 out of 108) in AJPT from 1991 through 2008 were identified in the
33 Years Audit Research database, we rely on this database.

the years 1991–2015.5 These papers are classified by time period and journal in Table 1, and will

be discussed in the following sections.

Trends in Publications

Year and Journal

Table 1, Panel A, shows the trends in publications of experimental auditing research by each

five-year period and across each of the ten journals. Across the 25-year period, 468 experimental

auditing papers (6.7 percent) were published out of 6,991 papers. Across the five-year periods there is a clear drop in the percentage of experimental auditing papers (8.8

percent, 8.1 percent in 1991–1995 and 1996–2000 respectively) compared to the later years (6.0

percent, 5.7 percent and 5.8 percent across the latter 15 years). The number of experimental

auditing papers in 2011–2015 (108) is actually higher than in 1991–1995 (100) but the

percentage is lower (5.8 percent compared to 8.8 percent).

The largest number of experimental auditing papers is in AJPT (168) followed by TAR (81),

BRIA (78), CAR (49), AOS (44) and JAR (32). The remaining journals have published either a

minimal number of audit experiments (JAPP (13) and EAR (3)) or none (JAE and RAST). Across

the journals that publish the most audit experiments we see the following trends. First, we

discuss the three journals (TAR, CAR and JAR) that publish generally across the financial,

management and auditing research areas and across different research methods, including

archival and experimental. For TAR, while there is considerable variation in the number of

experimental papers published for the first two five-year periods (21 versus ten; 1991–1995 and

5 We also examine the number and percentage of auditing papers going back to 1981 by using the 33 Years Audit
Research database, where we find 600 experimental auditing papers in total out of 2,255 auditing and assurance
papers for the 35 years. The percentage of experimental auditing and assurance papers over the 35 years from
1981–2015 (26.6 percent) is slightly higher than over the 25 years from 1991–2015 (26.4 percent), which is mainly due to the higher number of experimental auditing papers published in JAR from 1981–1985 (30 papers) compared to the other periods examined (32 papers from 1991–2015, untabulated).

1996–2000, respectively), the number of experimental auditing papers was very consistent across

the last three five-year periods (17, 16, 17). However, the experimental auditing papers as a

percentage of all papers published in TAR halved over the period of our study, and decreased

over every five-year period, from 9.7 percent in 1991–1995 to 4.5 percent in 2011–2015. We

note that TAR also increased the number of issues from four in 2005 to five in 2006–2007 and to

six in 2008. CAR’s largest number and percentage of experimental papers were in 1991–1995 (14 papers; 10.1 percent), moving down to the lowest level in 2011–2015 (8 papers; 3.4 percent).

Finally, most of JAR’s experimental auditing papers were published in the period 1991–2000 (20 papers; 7.9 percent in total), with publication dropping off in the last ten years, 2006–2015 (6 papers overall; 1.9

percent in total). Next, we consider the more specialist journals. For AJPT, while there is some

variation across the 25 years, experimental auditing papers represent an overall mean of 27.2 percent of total audit papers. Of interest, in 2011–2015 the most audit experiments were

published (41) of any five-year period but this was also the smallest percentage of total papers

(22 percent). The main explanation is that AJPT moved from two to four issues in 2011, with

most of the additional papers being archival. The results for AOS are rather surprising in that the

number of experimental auditing papers could be seen to be low for a journal that specializes in

behavioral and organizational issues. That is, given the behavioral emphasis and the small number

of archival papers published in AOS, we would have expected more than 44 audit experiments

over 25 years (4.9 percent). The numbers and percentages vary considerably by five-year periods

with 1996–2000 (13 papers; 7.4 percent) and 2011–2015 (12 papers; 7.0 percent) the highest. On

the other hand, BRIA, which specializes in behavioral research, published a significant number of

audit experiments across all five-year periods with the highest numbers in 1991–1995 (17

papers; 32.7 percent) and 2011–2015 (24 papers; 29.3 percent).

Table 1, Panel B, shows the same 468 experimental auditing papers as a percentage of all

audit papers (rather than total accounting papers) in the same years. First, we consider the

journals that regularly publish both archival and experimental research. As a percentage of all

audit papers, TAR publishes the highest percentage of experiments (81 papers; 32.8 percent) with

JAR (32 papers; 26.4 percent) and CAR (49 papers; 24.7 percent) the next highest. In fact, these

percentages are very similar to AJPT (27.2 percent). While our a priori beliefs were that JAR did

not publish many audit experiments, the data indicates that the percentage of audit experiments

as a total of all audit papers is very similar to other major journals, with the lower absolute

number of experimental auditing papers being the result of relatively fewer audit papers in total

being published in JAR. While EAR and JAPP publish a significant number of audit papers (143 and 124, respectively), only a very small number of these are experiments (three papers, 2.1 percent, and 13 papers, 10.5 percent, respectively). Both BRIA and AOS specialize in behavioral

research. A much higher percentage of BRIA audit papers are experiments (67.2 percent) and this

is high in all five-year periods. AOS published 131 audit papers across the 25-year period of

which 44 (33.6 percent) were experiments. With the exception of 1996–2000 (43.3 percent) this

percentage is very consistent across periods and lower than what we expected. As there are a

very small number of archival auditing papers in AOS, the explanation appears to be that AOS

publishes a much broader array of audit research beyond experiments, including qualitative

research and future direction papers.

What are the potential reasons for the overall drop in percentages of experimental auditing

papers published across the 25-year period? The following suggestions are speculative but have

evolved over time based on anecdotal evidence and informal discussions with many colleagues.

While there are exceptions, experimental auditing researchers fall into two categories: (a) those who want to do audit research and choose experimental over archival methods; or (b) experimental researchers who choose to conduct auditing research

over financial/managerial research.6 For (a), new audit researchers are more likely to choose

archival over experimental research than in the past because of the increased availability of

archival data over the period (Franzel 2016; Simnett, Carson, and Vanstraelen 2016). Recent

revisions to auditor reports in many jurisdictions will provide further numerous other archival

opportunities. On the other hand, it has become much more difficult to obtain experienced

auditors as participants, at least in the U.S. This reduces the likelihood of an audit researcher

undertaking experimental research. For (b), researchers may be indifferent between addressing

audit, financial and managerial research questions. Student surrogates (undergraduates, MBAs, etc.) and/or online platforms are commonly used in both financial accounting, as non-sophisticated investors, and in management accounting, as workers. While the

appropriateness of the use of student surrogates or online platforms depends on the task, theory

and research question, it is our assessment that a large percentage of audit judgment researchers have major concerns about the use of these participants under many

circumstances. Given the increasing difficulties of obtaining professional auditor participants,

many experimental researchers are likely to find it more attractive to move away from the

auditing sub-discipline.

We also identify the countries from which experimental participants were recruited as well

as, for each study, the number of countries from which participants are sourced. Table 1, Panel C,

shows that there are 450 of the 468 papers (96.2 percent) whose participants are recruited from a

single country, with only 18 papers (3.8 percent) using participants from two or three countries.

6 We recognize that a number of researchers conduct both archival and experimental auditing research, and a number are experimentalists who conduct auditing, financial, or management accounting research.

Of the single-country studies, the most numerous auditing experiments are undertaken in the U.S.

(370 papers, 79.1 percent), with a considerable gap to the next country Canada (27 papers; 5.8

percent), followed by Australia (25 papers; 5.3 percent) and Singapore (16 papers; 3.4 percent).

Of the 18 multiple-country studies, eight papers (44.4 percent) were sourced from a combination

of the U.S. and Canada.

[INSERT TABLE 1 HERE]

THE IAASB’S FRAMEWORK FOR AUDIT QUALITY AND HOW RESEARCH HAS
INFORMED OUR KNOWLEDGE OF THE FRAMEWORK ELEMENTS

The IAASB’s Framework for Audit Quality

The objectives of the IAASB’s (2014) “Framework for Audit Quality: Key Elements that

Create an Environment for Audit Quality”, as described by the IAASB, are raising awareness of

the key elements of audit quality; encouraging key stakeholders to explore ways to improve audit

quality; and facilitating greater dialogue between key stakeholders on the topic. We coded the

468 experimental auditing papers against the IAASB’s (2014) Framework using a similar

approach to coding as that used by Simnett, Carson, and Vanstraelen (2016) in relation to the

coding of international archival studies.

As outlined in Simnett, Carson, and Vanstraelen (2016), the IAASB’s Framework describes

three different elements (Inputs, Processing and Outputs) that create an environment for audit

quality (at the engagement, firm, and national levels) together with the relevant interactions and

contextual factors. Figure 1 sets out the elements of the Framework including examples of each

element. These elements are coded in Table 2. The input element to quality audits is divided into

values and knowledge: auditors exhibiting appropriate values, ethics and attitudes (Input-Values),

and auditors being sufficiently knowledgeable, skilled, experienced and having sufficient time

allocated to them to perform the audit work (Input-Knowledge). Key attributes of Input-Values

are that the engagement team recognizes that the audit is performed in the wider public interest

and that it exhibits objectivity and integrity, independence, professional competence, due care,

and appropriate skepticism. Key attributes in Input-Knowledge are that partners and staff have

the necessary competences, understand the entity’s business, make reasonable judgments, and

have sufficient time to undertake the audit in an effective manner.

The processing element includes auditors applying a rigorous audit process and appropriate

quality control procedures that comply with laws, regulations and applicable firm and national

standards. Key attributes at the engagement level are that the engagement team complies with

auditing standards, relevant laws and regulations, and the audit firm’s quality control procedures.

Key elements at the audit firm level are that the audit methodology encourages individual team

members to apply professional skepticism and exercise appropriate professional judgment. At the

national level, external audit inspections consider relevant attributes of audit quality, both within

audit firms and on individual audit engagements.

The output element relates to quality audit outputs that are useful and timely for report users.

These include outputs from the auditor, the audit firm, the entity, and audit regulators and

include, for example, the Independent Auditor’s Report at the individual engagement level, or the

audit firm’s transparency report at the audit firm level. They also include those outputs that arise

from the auditing process but which are generally not available to those outside the audited

organization, for example, the management letter provided by the auditor to the audit committee

at the completion of the audit.

[INSERT FIGURE 1 HERE]

The fourth element of the Framework includes the key interactions between the various

participants in the financial reporting supply chain as these interactions can have an important

impact on audit quality. The five participant groups identified in the Framework are

management, those charged with governance, regulators, users and auditors. While the primary

responsibility for performing quality audits rests with auditors, audit quality is best achieved in

an environment where there is support from these other participants. In this study we include

papers using all of the above participants and identify those groups who participate in completing

each experiment in Table 5.

The fifth element relates to the contextual factors. The ten contextual factors identified in the

Framework are: Business Practices and Commercial Law, Laws and Regulations Relating to

Financial Reporting, Applicable Financial Reporting Framework, Information Systems,

Corporate Governance, Broader Cultural Factors, Audit Regulation, Litigation Environment,

Attracting Talent, and Financial Reporting Timetable. Collectively, the contextual factors have

the potential to impact audit quality and, directly or indirectly, the nature and quality of financial

reporting. Where appropriate, auditors respond to these factors when determining how best to

obtain sufficient appropriate audit evidence. In this study we examine the contextual factors by

identifying the research topics addressed in Table 3.

What Elements of the Framework Have Been Examined?

Inputs/Processing/Outputs

For inputs, processing and outputs, the IAASB’s (2014) Framework is developed at the

engagement, firm and national level. Of the 468 papers, 8.1 percent could not be coded as

informing the inputs, processing or outputs elements, leaving 430 coded papers (91.9 percent).

As 101 of these 430 papers informed more than one element (of which 96 jointly examine the

input and processing stages), it resulted in a total coding of 531 elements. As shown in Table 2, at

the elements level, 362 coded elements (68.2 percent) informed the processing element. The next

most informed element was the input (knowledge) element, with 79 coded elements (14.9 percent).

There was a relatively equal percentage of papers informing the input (values) element (46

elements; 8.7 percent) and the outputs element (44 elements; 8.3 percent). When looking at the

five-year periods, the most notable trends are the continuing growth in the outputs element over

each of the five periods, and the continual decrease in the input (knowledge) element. These

findings for elements are very different from the categorizations of the international archival

auditing research (Simnett, Carson, and Vanstraelen 2016), which were heavily associated with

the inputs and outputs stages, showing the complementarity of the two streams of research to

obtain an overall understanding of the elements of audit quality.

With regard to the levels analysis of the 430 papers, 30 papers examined two levels resulting

in 460 coded levels. Most of the coded levels informed the engagement level (380 levels;

82.6 percent), or the firm level (72 levels; 15.7 percent), and very few informed the national level

(8 levels; 1.7 percent). There were no discernable trends in the levels analysis over the individual

five-year periods.
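As a quick arithmetic check of the coding figures above, the short Python sketch below (our illustration only, not part of the authors' coding procedure) reproduces the element and level counts and percentages; the element counts follow Table 2.

```python
# Cross-check the element- and level-coding arithmetic quoted in the text.

total_papers = 468
uncoded = 38                           # the 8.1 percent that could not be coded
coded_papers = total_papers - uncoded  # 430 papers (91.9 percent)
assert round(100 * coded_papers / total_papers, 1) == 91.9

# 101 papers informed more than one element, giving 531 coded elements.
coded_elements = coded_papers + 101    # 531

# Element counts from Table 2: processing, the two input
# sub-elements, and outputs.
element_counts = [362, 79, 46, 44]
assert sum(element_counts) == coded_elements
for n in element_counts:
    print(f"{n} elements = {100 * n / coded_elements:.1f}%")
# 68.2%, 14.9%, 8.7%, 8.3% as reported

# Levels analysis: 30 of the 430 papers examined two levels.
coded_levels = coded_papers + 30       # 460
level_counts = {"engagement": 380, "firm": 72, "national": 8}
assert sum(level_counts.values()) == coded_levels
for name, n in level_counts.items():
    print(f"{name}: {100 * n / coded_levels:.1f}%")
# engagement: 82.6%, firm: 15.7%, national: 1.7%
```

The assertions confirm that the category counts sum to the reported coding totals and that the quoted percentages follow directly from them.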

[INSERT TABLE 2 HERE]

TRENDS IN RESEARCH TOPICS ADDRESSED

Table 3 uses the Libby and Luft (1993) framework which considers auditor judgment

performance as a function of ability, knowledge, environment and motivation. We further

expanded these categories as discussed below and also consider the judgments of others who rely

on audit reports, given that they are part of the IAASB’s Framework.7 Under ability, we consider

three categories: individual effects, cognitive limitations and predecisional behavior. All three

categories were addressed in the period prior to 1991 and while there was further research

7 Classifying the papers under a research topic was more difficult than expected and our original attempts to have
research assistants do this led to considerable inconsistencies. As a result, one of the researchers made the
judgment on what they thought was the main topic addressed in each paper.

between 1991 and 2000, there have been fewer of these studies since 2000. The decrease is

particularly noticeable for cognitive limitations and predecisional behavior. Research on

knowledge effects (including knowledge, memory, expertise, mindset) was particularly strong in

the period 1991–1995 (22 percent of all experiments in that period) but has gradually decreased,

particularly so for the most recent five-year period (2011–2015). Under environment, two

categories had a significant amount of research across the 25-year period, namely decision aids

and accountability. The former has been strong across all five-year periods while accountability

research has been decreasing over the last 15 years. While group decision making and other

interactions were included under ‘Environment’ in the Libby and Luft (1993) model, we included

them separately given the significant amount of research on some group topics (see Trotman,

Bauer, and Humphreys 2015 for a review). Research on the review process across the 25-year

period resulted in 36 papers (7.7 percent of all experimental papers). The noticeable trend is the

large drop off in 2011–2015. On the other hand, the brainstorming fraud detection research

started in 2006–2010 and increased in 2011–2015 with eight papers (7.4 percent).

In the other interactions topic, the major research area is auditor-client negotiations which

started in the 2001–2005 period and received strong emphasis in 2006–2010 with 11 papers (12.8

percent), tapering off to seven papers (6.5

incentives and motivation has generally been quite low across the 25-year period with a total of

nine papers (1.9 percent). Given the important emphasis on professional skepticism and the

criticism by regulators around the world of auditors using insufficient professional skepticism

(ASIC 2017; IFIAR 2017), it is surprising that there is not more published research on this topic.

However, we note that in our coding we only categorized it under the heading professional

skepticism if that was the key topic of the paper and many papers that address other topics (e.g.,

audit risk, incentives, fraud brainstorming) have implications for professional skepticism (see

Hurtt, Brown-Liburd, Earley, and Krishnamoorthy 2013).

The Libby and Luft (1993) model considers auditor judgments. We also consider papers

examining the judgments of a range of other users of audit reports and other parties listed in the

IAASB’s Framework (see Table 3 lines 9–14). One clear trend is the increase in 2011–2015 in

studies on juries and user judgments, and on financial statement preparers. Again,

we suggest that this is part of the trend away from examining auditor judgments due to the

difficulty of obtaining participants. While the papers on non-financial information assurance

have generally been extremely low, there were four papers (3.7 percent) in 2011–2015. As the

range of assurance engagements continues to expand, we expect to see an increased number of

experiments on assurance (Cohen and Simnett 2015).

[INSERT TABLE 3 HERE]

Alternative Framework—Reference to Auditing Standards

An alternative framework against which we coded the research was to examine the extent to

which the research informs the development or operationalization of specific auditing standards.

One of the main ways that auditing standard-setters such as the IAASB, or the AICPA and the

PCAOB in the U.S., drive audit quality is through the development and revision of

auditing standards. We examine the 468 research papers and identify any reference to auditing

standards (international or national equivalent). Results in Table 4, Panel A, show that 184 (39.3

percent) of research papers do not refer to specific auditing standards while 284 (60.7 percent)

do. Approximately half of the 284 studies that did refer to specific auditing standards referred to

more than one auditing standard, with 145 referring to one standard, and 139 referring to more

than one. This finding of significant references to auditing standards contrasts with the finding

that only 8.1 percent of international archival auditing papers referred to specific auditing

standards (Simnett, Carson, and Vanstraelen 2016). It is conjectured that this difference between

the various research approaches is because most auditing standards are written around the

various stages of the auditing process, and these standards can generally only be observed or

informed by experimental research (for most standards there are no outputs that can be observed

by archival research). Table 4, Panel A, further shows that other than greater reference in the

2006–2010 period, when only 25 papers (29.1 percent) did not reference auditing standards,

there has not been a significant change in the extent of references to auditing standards for

experimental research over each of the five-year periods.

[INSERT TABLE 4 HERE]

Table 4, Panel B, shows that in total there were 555 references to specific auditing standards.

Those standards most commonly referred to are ISA 240 (or national equivalent) “The Auditor’s

Responsibilities Relating to Fraud in an Audit of Financial Statements”, having 102, or 18.4

percent of the references, ISA 320 (or national equivalent) “Materiality in Planning and

Performing an Audit”, with 63, or 11.4 percent, of the references, the U.S. specific standard (and

its various iterations over time: AS 5/AS 2/AU 319/AU 320/SAS 55/SAS 78) “An Audit of

Internal Control over Financial Reporting that is Integrated with an Audit of Financial

Statements” with 59 (10.6 percent) references, and ISA 520 (or national equivalent) “Analytical

Procedures”, with 35 (6.3 percent) references. This panel also outlines the trends in references to

auditing standards for each of the five-year periods. Not surprisingly, among the trends observed

are that two of the more recently developed and key auditing standards have been explored much

more in the last ten years (ISA 315 (and national equivalents), “Identifying and Assessing the

Risks of Material Misstatement through Understanding the Entity and Its Environment”, and ISA

330 (and national equivalents) “The Auditor’s Responses to Assessed Risks”), these being the

third and fourth most referenced in this period. Both ISA 320 (or national equivalent) “Materiality

in Planning and Performing an Audit”, with references in the 1991–2005 periods ranging from

14.2 to 16.7 percent of the references to auditing standards, falling to 8.9 percent in 2006–2010

and 5.3 percent in 2011–2015, and ISA 520 (or national equivalent) “Analytical Procedures”,

with references ranging from 11.5 percent in the 1991–1995 period of the references to auditing

standards to 2.4 percent in 2006–2010, have decreased in research attention over the last ten

years. A further interesting finding is that there is very little experimental research on the broader

ISAE (or national equivalent) assurance standards (only two instances over the 25 years) or the

ISRE (or national equivalent) review standards (only one instance over the 25 years).
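The counts in Table 4 can be cross-checked in the same way; the sketch below (again only an illustration, using the figures quoted in the text) verifies the Panel A split and the shares of the most-referenced standards in Panel B.

```python
# Panel A: papers with and without references to specific auditing standards.
total_papers = 468
no_reference, with_reference = 184, 284
assert no_reference + with_reference == total_papers
assert 145 + 139 == with_reference  # one standard vs. more than one
print(f"no reference: {100 * no_reference / total_papers:.1f}%")      # 39.3%
print(f"with reference: {100 * with_reference / total_papers:.1f}%")  # 60.7%

# Panel B: 555 total references; shares of the most-cited standards.
total_refs = 555
most_cited = [("ISA 240 (fraud)", 102),
              ("ISA 320 (materiality)", 63),
              ("AS 5 family (internal control)", 59),
              ("ISA 520 (analytical procedures)", 35)]
for std, n in most_cited:
    print(f"{std}: {n} ({100 * n / total_refs:.1f}%)")
# 18.4%, 11.4%, 10.6%, 6.3% as reported
```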

HOW HAS THE RESEARCH EVOLVED OVER TIME?

Here we consider two key issues: the type of participants included in the experiments and the

extent of experimental control, including the approach to distributing research instruments.

Experiment Participants

Table 5 provides details of the types of participants found in the studies we examine. As we

are considering JDM audit research, it is not surprising that the bulk of participants are auditors

(367 papers; 78.4 percent) with the remaining participants being users of audit reports or those

that interact with auditors in their preparation or oversight of the financial statements (judges,

jurors, investors, financial statement preparers, internal auditors, audit committee members and

other managers), with all these latter categories individually being below three percent. In

addition, 37 papers (7.9 percent) exclusively used students as surrogates for auditors, users or

those interacting with auditors, and a further 31 papers (6.6 percent) used students in

combination with other participant groups.

[INSERT TABLE 5 HERE]

While traditionally there has been a strong emphasis on using auditor participants whose

experience matches the auditing task undertaken (Gibbins and Swieringa 1995),

one disturbing trend is the move away from using professional auditors. The percentage of JDM

audit papers examining the judgments of external auditors has decreased each five-year period

(88.0 percent in 1991–1995 to 63.0 percent in 2011–2015). As obtaining auditor participants is

becoming more difficult, there has been a move towards research involving other preparers and

users of financial statements and surrogates for experienced participants. While relatively small

numbers of studies used students in 1991–1995 and 1996–2000 (3 percent and 4.2 percent

respectively), these numbers increased significantly in the last two five-year periods (12.8 percent

and 13.0 percent respectively). However, we note that the percentages using students are much

lower than in JDM research in both financial and management accounting. Based on our

observation of papers presented at the AAA audit mid-year meetings, ABO meetings and

International Symposium on Audit Research (ISAR) during 2015, 2016 and 2017, we expect this

downward trend to continue given the considerable number of papers considering the judgments

of non-professional investors and jurors who are provided with audit outputs and information

about audit processes. While the latter issues are important, our concern is with the move away

from examining auditor judgments at a time when there is increasing criticism of auditing,

including professional judgment and skepticism (PCAOB 2015; ASIC 2017; IFIAR 2017).

In addition, by comparing the 2011–2015 percentages to the total percentages and earlier

year percentages in Table 5, the differences for the first two rows involving staff and senior

auditors are reasonably constant, but for the third row (audit managers and partners) both the

number and percentage of papers has approximately halved (19.0 percent in 1991–1995 to 9.3

percent in 2011–2015). This suggests that audit research is no longer matching the level of

experience of auditors to the task, or that it is choosing tasks for which more junior auditors are

appropriate. The latter can lead to some of the important audit judgments being neglected by

research because it reduces the likelihood of examining complex judgments made by

managers/partners and of considering the review process, a quality control mechanism that may

eliminate or reduce judgment errors of more junior staff.

While numbers are small, we note particularly interesting trends in Table 5 in four categories

of studies: judges and jurors, investors, financial statement preparers, and audit committee

members. First, some very interesting studies on judges were published in the period 1991–2000

but more recently the trend has been towards juror studies (both juror-eligible adults and

students). Second, most of the papers using financial statement preparers have been published in

the last ten years. These papers generally examine how important aspects of the audit

environment affect the judgments of preparers. Third, while the number of studies using

experienced investors is small, it is increasing (1.0 percent to 5.6 percent from 1991–1995 to

2011–2015 respectively). Fourth, a small number of audit studies examining the judgments of

audit committee members are appearing. Given the increasing importance of the role of audit

committees in the oversight of financial statement and audit quality, the trend is encouraging.

However, obtaining such participants is difficult and it likely requires the existence of long-term

contacts.

Controlled versus Non-controlled Experiments

Experiments can be conducted in a supervised environment, for example, in the presence of

the researcher (controlled experiment), or the material can be distributed to participants for

completion in a non-supervised environment (non-controlled experiment). The latter can be

administered by hard copies distributed by an audit firm to staff, mailed/emailed questionnaires

or more recently through web links and online platforms.

Traditionally, controlled experiments were most common (Trotman 1996) and were

particularly useful in studies involving group processes where teams interact, in memory

recognition and recall studies to avoid participants referring back to earlier materials, and studies

where the order of reading the material and/or time allocation for sub-tasks are important. These

controlled experiments often involved the researcher using different envelopes to control the

distribution of materials.

Trotman (1996) suggests a range of problems faced with conducting non-controlled

experiments. First, participants may consult additional information beyond what is in the

research instrument. This is more of a problem in studies examining cue usage but, in general, it

does take away one of the advantages of an experiment which is the ability to manipulate

particular variables and hold constant all other information. Second, it is much more difficult to

ensure all participants work independently. The extent of this problem varies depending on the

type of participants and their location. It becomes more of a problem when different treatments

are sent to a unique location, e.g., one office of a Big 4 firm. In this case, discussion between

participants may occur. On the other hand, if the participants are not in close proximity (e.g.,

audit committee members, user groups, etc.) this is unlikely to be a problem. Third, it is difficult

to control who completes the research instrument. In a controlled setting, e.g., a training program

at a Big 4 firm, confidence can be gained that all participants are employees of a particular firm

and there are checks on some demographic variables (such as seniority level). In many non-

controlled settings this information is also likely to be accurate, e.g., when a researcher sends the

email to known contacts. However, this is more likely to be a concern for many online platforms,

such as Mturk or Qualtrics, where participants are only paid if they meet certain criteria and,

therefore, additional procedures need to be included to verify the accuracy of these descriptives.

Fourth, in hard copy non-controlled settings, manipulation checks can be difficult to include

because of the possibility of making the manipulation salient or allowing the participants to refer

back to earlier parts of the task. This can be controlled for in online settings. Fifth, in non-

controlled settings, whether hard copy or online, it is more difficult to control extraneous factors

such as time to complete the task, interruptions, multi-tasking, etc.

In general, experienced researchers gain a fairly good understanding of the quality of the

research data when attending controlled experiments. In this case, quality of data includes

attention to the task, freedom from distraction, interest in the task and general diligence. Factors

that can affect this quality of data are the introduction to the experiment and/or oral instructions

from the researcher or a senior official (e.g., partner), the room setting, and the interest in the

task that may vary with familiarity, perceived relevance, etc. Usually the researcher is far less

aware of the impact of these factors and the participants’ understanding of the task in non-

controlled environments.

To determine if each study in our sample was a controlled experiment, a research assistant

read the research methods section of each paper looking for information that the experiment was

conducted in a controlled environment. This included reference to the supervision of the

experiment, reference to being conducted during staff training or classroom settings, oral

instructions at the start of the experiment or collection/distribution of materials.

The above discussion suggests that the level of control in an experiment can have clear

implications for internal validity. Table 6 considers changes in experimental control across the

25-year period. A key change is the reduced number and percentage of papers using a controlled

setting from 66 (66 percent) in 1991–1995 to 51 (47.2 percent) in 2011–2015. Not surprisingly, the

increase in non-controlled experiments has been through distribution by email or web links

which only started for papers published in 2001–2005 (4 papers; 5.1 percent) and has increased

dramatically in 2011–2015 (28 papers; 25.9 percent). This has resulted in a reduced number of

controlled experiments and a reduction in the distribution of hard copies within firms. In 2011–

2015 there was the introduction of Mturk studies in auditing. While only two papers were

included in this time period, there are a considerable number of published papers in 2016 and

2017 as well as many conference presentations using this type of platform. The potential

opportunities and pitfalls of using such online platforms in auditing are discussed in Leiby,

Rennekamp, and Trotman (2017). Our expectation is that such data sources are more likely to be

applicable to jury and non-professional investor studies rather than participants with a high

degree of expertise, e.g., auditors, audit committee members and investment analysts. However,

Leiby, Rennekamp, and Trotman suggest the potential for online alumni platforms as one source

of experienced participants. Our final finding is that there has been a gradual decrease in the use

of verbal protocols as a data collection technique, with five of the six identified studies conducted

in the first ten of the 25 years.

[INSERT TABLE 6 HERE]

Interactions among Participant Types, Countries of Participants and Experimental Conditions

Comparing experiments conducted in the U.S. versus experiments conducted in non-U.S.

countries may be of interest to inform the development of experimental auditing research

worldwide. Thus, we examine the interactions among participant types (auditors, students and

others), countries where participants came from (U.S. versus non-U.S.) and experimental

conditions (controlled versus non-controlled).

As can be seen in Table 7, Column A, in relation to the interactions between participant types

and countries of participants, using external auditors as the experiment participants in the U.S.

has dropped gradually from 80 percent in 1991–1995 to only 43.5 percent in 2011–2015. In the

last five years, slightly more non-external auditors (students and others) were used as experiment

participants in the U.S. auditing experiments than external auditors, which is probably due to the

easier access to non-auditors in the U.S. or a broadening of experimental auditing research

beyond research evaluating external auditors’ judgments. A further breakdown of non-external

auditor participant types into students and others shows that using participants other than

external auditors and students in the U.S. has increased significantly in 2011–2015 to 22.2

percent, which is more than double any previous period. While there is a small growth in the use

of participants other than external auditors for non-U.S. studies, external auditors are still the

predominant experimental participants. In fact, there are no studies using students as

experimental participants in countries other than the U.S. for the first 20 years and only two

studies in 2011–2015.

[INSERT TABLE 7 HERE]

We examine the interaction between participant types and experimental conditions in Table

7, Column B. By examining the number of external auditors under controlled experimental

conditions, it can be seen that the percentage dropped from 66.3 percent in 1991–1995 to 34.7

percent in 2011–2015. On the other hand, using students and others under controlled

experimental conditions has seen an increase from 7.8 percent (2.2 percent students + 5.6 percent

others) in the first five years to 18.9 percent (12.6 percent students + 6.3 percent

others) in the most recent five years, which indicates that the increase is almost entirely due to the

greater use of students as surrogates. In addition, using participants other than auditors and

students under non-controlled experimental conditions has gained popularity over the last 25

years, from 4.5 percent in 1991–1995 to 16.8 percent in 2011–2015.

For the interaction between experimental conditions and countries of participants, as shown

in Table 7, Column C, undertaking the research under controlled experimental conditions used to

be the predominant experimental method (68.5 percent) compared to non-controlled (22.5

percent) in the U.S. in 1991–1995. By 2011–2015 controlled experiments were less preferred in

the U.S. (35.8 percent) during the most recent five years compared to non-controlled

experimental condition (38.9 percent). On the other hand, controlled experimental conditions are

still the preferred experiment research method for countries other than the U.S. during all five-

year periods.

DISCUSSION AND CONCLUSION

In this paper we have examined 468 experimental auditing research papers that were

published in ten leading accounting and auditing journals for 1991–2015. We first analyze the

trends in these publications by each five-year period and by each of the ten journals. We find that

across the five-year periods, although the total number of experimental auditing papers has

remained reasonably constant, there is a clear downward trend over the later years in experimental

auditing papers published as a percentage of both total accounting papers and total auditing

papers. The percentage fall in experimental auditing papers is reflected in CAR, TAR and

JAR, while BRIA and AOS have shown different rates of publication but no clear decline. For

AJPT the drop only appears in 2011–2015. The remaining journals have minimal (JAAP and

EAR) or no (JAE and RAST) audit experiments.

We consider whether experimental audit research has significantly contributed to evidence

based standard-setting and regulation, by using the IAASB’s Framework for Audit Quality to

categorize and then map the experimental auditing research undertaken over 1991–2015. We find

that the experimental auditing research is relatively concentrated into either of the inputs and

processing (or both) elements, rather than the outputs elements. These findings are very different

to the categorizations of the international archival auditing research (Simnett, Carson, and

Vanstraelen 2016), which were heavily associated with the inputs and outputs stages, showing

the complementarity of the two streams of research to obtain an overall understanding of the

elements of audit quality. We further identified that most experimental research is undertaken to

examine audit quality at the individual audit engagement level.

We document the evolution of experimental auditing research by examining how the topics

examined, participants used, and data collection and research approaches used have evolved. It is

not surprising that in the bulk of the studies participants are auditors (about 80 percent), although

there is evidence of a move away from using professional auditors to using students as surrogates

in later years. In terms of the topics examined, the most prominent experimental auditing

research themes relate to environmental factors that influence auditors’ judgments. These topics

include decision aids, accountability, and fraud detection, while papers investigating the effect of

knowledge and expertise have become less popular over the last 25 years. There has also been

an increasing trend in studies addressing fraud detection factors, along with studies examining

brainstorming, negotiating, and audit committee effects. We further find that across the 25-year

period there is a reduced number and percentage of papers using a controlled setting. The major

increase in non-controlled experiments has been in distribution by email or web links which only

started in 2001–2005.

We also examine the extent to which the research papers refer to specific auditing standards.

We find that over 60 percent of studies contain reference to auditing standards, which contrasts

with the finding that less than ten percent of international archival auditing papers referred to

specific auditing standards (Simnett, Carson, and Vanstraelen 2016). It is conjectured that this

difference for the various research approaches is because most auditing standards are written

around the various stages of the auditing process, and these standards can only be observed or

informed by experimental research (for most standards there are no outputs that can be observed

by archival research).

These findings are of interest to a number of parties, including researchers, practitioners, the

standard-setters and regulators. This review aids researchers in identifying what has been

researched, informs future research opportunities and highlights some important decisions to be

made by researchers in undertaking experimental auditing research. Our analysis also aids

standard-setting and policy-making groups that are responsible for regulating auditing standards.

The greater emphasis on evidence informed standards and policy means that these groups are

required to review and assess the extent to which reliance can be placed on the research findings.

Our overall conclusion is that there appears to be a greater demand for research that informs

practice, regulators and standard-setters.

We find that experimental audit researchers tend to motivate their research around practical

issues and auditing standards and make much more frequent reference to auditing standards than

do archival researchers, and thus may have a much greater impact on the standard-setting

process, one important part of the overall audit quality process. We further find that the

experimental audit research in the last ten years now informs a greater array of the audit process,

with a much greater recent emphasis on the output stage of the process (especially important as

the standard-setters and regulators work through suggested changes, and possible further

adaptations to the auditor reporting suite of standards). However, our overall conclusion is that,

at a time when there appears to be greater demand for research that informs practice, regulators

and standard-setters, there is a relative move away from conducting experimental auditing

research towards predominantly conducting archival auditing research, and also less emphasis on

addressing issues that require experienced participants. This has been potentially caused by a

lack of available audit-qualified participants and a consequent move by researchers away from

experimental research to archival research, and/or a move by researchers to other areas of

experimental accounting research (such as management or financial) where there is less of an

expectation of the need for experienced participants.

In conclusion, we find a clear trend: while the number of papers published in the

leading journals has increased, experimental auditing papers have not. Also, as a percentage of

total audit research, experimental papers have decreased. This is due to the increase in publicly

available data for archival research and the decrease in available audit participants. There are

many potential reasons for the latter but a key reason appears to revolve around what can be

expected from a single research study. Historically, JDM research has built on previous research

papers, which is also typical of many other disciplines, including psychology and medicine. More

recently, we see an expectation of a greater incremental contribution over previous research. This

is partly due to the scarcity of journal space, but we suggest there is still a need to examine the

boundary conditions of the results for earlier papers.

Further research needs to consider why it is more difficult to obtain participants in an era

where audit firms are under increased scrutiny from regulators for audit deficiencies. We see a

‘catch 22’ situation where, as audit practitioners become more difficult to access for experiments,

audit researchers move to topics requiring less senior auditors and surrogates for auditors (online

and student surrogates). This research is seen as less informative to audit firms, standard-setters

and regulators, which in the longer term will negatively affect the type of audit research conducted.

International auditing standard-setters and regulators are seeking the results of well-designed

research studies to inform their decision making. We suggest that research directed at the issues

on the agendas of these groups, together with specific reference and links to auditing standards,

will enhance the demand for audit research and the willingness of audit firms to participate in the

research.

REFERENCES

Abdolmohammadi, M., and A. Wright. 1987. An examination of the effects of experience and
task complexity on audit judgments. The Accounting Review 62 (1): 1-13.
American Accounting Association Auditing Section Research Committee. 2009. Database: Thirty-Three Years of Audit Research. Available at: http://aaahq.org/AUD/research (accessed November 2017).
Ashton, A. H., and R. H. Ashton. 1988. Sequential belief revision in auditing. The Accounting
Review 63 (4): 623-641.
Ashton, R. H. 1974. An experimental study of internal control judgments. Journal of Accounting
Research 12: 143-157.
Ashton, R. H., and A. H. Ashton. 1995. Judgment and Decision-Making Research in Accounting
and Auditing. New York: Cambridge University Press.
Australian Securities and Investments Commission (ASIC). 2017. REP 534 Audit Inspection Program Report for 2015-16. Available at: http://asic.gov.au/regulatory-resources/find-a-document/reports/rep-534-audit-inspection-program-report-for-2015-16/ (accessed November 2017).
Bamber, E. M. 1983. Expert judgment in the audit team: A source reliability approach. Journal
of Accounting Research 21 (2): 396-412.
Biggs, S. F., and T. J. Mock. 1983. An investigation of auditor decision processes in the evaluation of internal controls and audit scope decisions. Journal of Accounting Research 21 (1): 234-255.
21 (1): 234-255.
Butler, S. A. 1985. Application of a decision aid in the judgmental evaluation of substantive test
of details samples. Journal of Accounting Research 23: 513-526.
Cohen, J. R., and R. Simnett. 2015. CSR and assurance services: A research agenda. Auditing: A
Journal of Practice & Theory 34 (1): 59-74.
Cohen, J. R., and W. R. Knechel. 2013. A call for academic inquiry: Challenges and
opportunities from the PCAOB synthesis projects. Auditing: A Journal of Practice &
Theory 32 (Supplement 1): 1-5.
Cohen, J., and T. Kida. 1989. The impact of analytical review results, internal control reliability,
and experience on auditors' use of analytical review. Journal of Accounting Research 27
(2): 263-276.
DeFond, M., and J. Zhang. 2014. A review of archival auditing research. Journal of Accounting
& Economics 58 (2/3): 275-326.
Franzel, J. M. 2016. The PCAOB’s interests in and use of auditing research. Speech delivered at
panel session entitled ‘Opportunities for Researchers to Inform the PCAOB’ at the
American Accounting Association, 2016 Auditing Section Mid-Year Meeting, January 15,
2016. Available at: https://pcaobus.org/News/Speech/Pages/Franzel-PCAOBs-Interests-
Use-Auditing-Research.aspx (accessed November 2017).
Frederick, D. M., and R. Libby. 1986. Expertise and auditors' judgments of conjunctive events.
Journal of Accounting Research 24 (2): 270-290.
Gibbins, M., and R. J. Swieringa. 1995. Twenty years of judgment research in accounting and
auditing. In Judgment and Decision-Making Research in Accounting and Auditing, edited
by R. H. Ashton and A. H. Ashton, 231-249. New York: Cambridge University Press.
Hurtt, R. K., H. Brown-Liburd, C. E. Earley, and G. Krishnamoorthy. 2013. Research on auditor professional skepticism: Literature synthesis and opportunities for future research. Auditing: A Journal of Practice & Theory 32 (Supplement 1): 45-97.
International Auditing and Assurance Standards Board (IAASB). 2014. A Framework for Audit
Quality: Key Elements That Create an Environment for Audit Quality. Available at:
https://www.ifac.org/sites/default/files/publications/files/A-Framework-for-Audit-Quality-
Key-Elements-that-Create-an-Environment-for-Audit-Quality-2.pdf (accessed November
2017).
International Forum of Independent Audit Regulators (IFIAR). 2017. Report on 2016 Survey of
Inspection Findings. Available at: https://www.ifiar.org/activities/annual- inspection-
findings-
survey/index.php?wpdmdl=2055&ind=YciMWbzCRjCac6lmBR1Jm_RhYJnZa9WJIkDxA
80gHQmIa4nKydx5DGkkbXdueu0vxvlBHv7py3Qi__3oG0aM-
vjzLdWPRrqHQ_iBs6vgWLVnzPeOIEQZUGRenrr002pj&#zoom=100 (accessed
November 2017).
Joyce, E. J. 1976. Expert judgment in audit program planning. Journal of Accounting Research
14 (3): 29-60.
Joyce, E. J., and G. C. Biddle. 1981a. Anchoring and adjustment in probabilistic inference in
auditing. Journal of Accounting Research 19 (1): 120-145.
Joyce, E. J., and G. C. Biddle. 1981b. Are auditors' judgments sufficiently regressive? Journal of Accounting Research 19 (2): 323-349.
Kerlinger, F. N. 1973. Foundations of Behavioral Research. 2nd edition. New York: Holt, Rinehart and Winston.
Kida, T. 1980. An investigation into auditors' continuity and related qualification judgments.
Journal of Accounting Research 18 (2): 506-523.
Kida, T. 1984. The impact of hypothesis-testing strategies on auditors’ use of judgment data.
Journal of Accounting Research 22 (1): 332-340.
Knechel, W. R., and W. F. Messier Jr. 1990. Sequential auditor decision making: Information
search and evidence evaluation. Contemporary Accounting Research 6: 386-406.
Leiby, J., K. Rennekamp, and K. T. Trotman. 2017. Using Online Platforms to Further Your
Research. Working paper, University of Georgia, Cornell University and UNSW Sydney.
Libby, R. 1981. Accounting and Human Information Processing: Theory and Applications. Englewood Cliffs, NJ: Prentice-Hall.
Libby, R. 1985. Availability and the generation of hypotheses in analytical review. Journal of
Accounting Research 23 (2): 648-667.
Libby, R. 1995. The role of knowledge and memory in audit judgment. In Judgment and
Decision-Making Research in Accounting and Auditing, edited by R. H. Ashton and A. H.
Ashton, 176-206. New York: Cambridge University Press.
Libby, R., and J. Luft. 1993. Determinants of judgment performance in accounting settings:
Ability, knowledge, motivation, and environment. Accounting, Organizations and Society
18 (5): 425-450.
Libby, R., and P. A. Libby. 1989. Expert measurement and mechanical combination in control
reliance decisions. The Accounting Review 64 (4): 729-747.
Loewenstein, G. 1999. Experimental economics from the vantage-point of behavioural economics. The Economic Journal 109: 25-34.
Messier, W. F., Jr. 1983. The effect of experience and firm type on materiality/disclosure judgments. Journal of Accounting Research 21 (2): 611-618.
Messier, W. F., Jr. 1995. Research in and development of audit decision aids: A review. In Judgment and Decision-Making Research in Accounting and Auditing, edited by R. H. Ashton and A. H. Ashton, 205-228. New York: Cambridge University Press.
Perreault, S., and T. Kida. 2011. The relative effectiveness of persuasion tactics in auditor–client
negotiations. Accounting, Organizations and Society 36 (8): 534-547.
Plumlee, R. D. 1985. The standard of objectivity for internal auditors: Memory and bias effects.
Journal of Accounting Research 23 (2): 683-699.
Public Company Accounting Oversight Board (PCAOB). 2015. Inspection Observations Related
to PCAOB ‘Risk Assessment’ Auditing Standards (No. 8 through No. 15). Washington,
D.C.: PCAOB.
Schultz, J. J., Jr., and P. M. J. Reckers. 1981. The impact of group processing on selected audit
disclosure decisions. Journal of Accounting Research 19 (2): 482-501.
Shockley, R. A. 1981. Perceptions of auditors' independence: An empirical analysis. The
Accounting Review 56 (4): 785-800.
Simnett, R., and K. T. Trotman. 1989. Auditor versus model: Information choice and information
processing. The Accounting Review 64 (3): 514-528.
Simnett, R., E. Carson, and A. Vanstraelen. 2016. International archival auditing and assurance
research: Trends, methodological issues, and opportunities. Auditing: A Journal of Practice
& Theory 35 (3): 1-32.
Solomon, I. 1982. Probability assessment by individual auditors and audit teams: An empirical investigation. Journal of Accounting Research 20 (2): 689-710.
Solomon, I., and M. D. Shields. 1995. Judgment and decision-making research in auditing. In
Judgment and Decision-Making Research in Accounting and Auditing, edited by R. H.
Ashton and A. H. Ashton, 137-175. New York: Cambridge University Press.
Trochim, W. M. K., and J. P. Donnelly. 2008. The Research Methods Knowledge Base. 3rd
edition. New York, NY: Atomic Dog/Cengage Learning.
Trotman, K. T. 1985. The review process and the accuracy of auditor judgments. Journal of
Accounting Research 23 (2): 740-752.
Trotman, K. T. 1996. Research Methods for Judgment and Decision Making Studies in Auditing. Accounting Research Methodology Monograph No. 3. Coopers & Lybrand and Accounting Association of Australia and New Zealand.
Trotman, K. T., and P. W. Yetton. 1985. The effect of the review process on auditor judgments.
Journal of Accounting Research 23 (1): 256-267.
Trotman, K. T., and R. Wood. 1991. A meta-analysis of studies on internal control judgments. Journal of Accounting Research 29 (1): 180-192.
Trotman, K. T., H. C. Tan, and N. Ang. 2011. Fifty-year overview of judgment and decision-
making research in accounting. Accounting & Finance 51 (1): 278-360.
Trotman, K. T., P. W. Yetton, and I. R. Zimmer. 1983. Individual and group judgments of
internal control systems. Journal of Accounting Research 21 (1): 286-292.
Trotman, K. T., T. D. Bauer, and K. A. Humphreys. 2015. Group judgment and decision making
in auditing: Past and future research. Accounting, Organizations and Society 47
(November): 56-72.
Wainberg, J. S., T. Kida, M. D. Piercey, and J. F. Smith. 2013. The impact of anecdotal data in
regulatory audit firm inspection reports. Accounting, Organizations and Society 38 (8):
621-636.
Wright, A. 1982. An investigation of the engagement evaluation process for staff auditors. Journal of Accounting Research 20 (1): 227-239.

Table 1: Trend of Experimental Audit and Assurance Papers by Journals and Years
Panel A: Experimental Audit and Assurance Papers by Total Publications in Each Journal by Five-year Periods
Journal
Year
AJPT AOS BRIA CAR EAR JAE JAPP JAR RAST TAR Total
1991-1995 32/117(27.4%) 6/186(3.2%) 17/52(32.7%) 14/139(10.1%) 0/128(0.0%) 0/104(0.0%) 2/73(2.7%) 8/123(6.5%) 0/0(0.0%) 21/216(9.7%) 100/1138(8.8%)
1996-2000 39/104(37.5%) 13/176(7.4%) 13/82(15.9%) 6/109(5.5%) 0/177(0.0%) 0/139(0.0%) 2/77(2.6%) 12/131(9.2%) 0/64(0.0%) 10/119(8.4%) 95/1178(8.1%)
2001-2005 30/111(27.0%) 5/160(3.1%) 8/48(16.7%) 10/130(7.7%) 1/164(0.6%) 0/139(0.0%) 2/99(2.0%) 6/159(3.8%) 0/83(0.0%) 17/215(7.9%) 79/1308(6.0%)
2006-2010 26/99(26.3%) 8/202(4.0%) 16/62(25.8%) 11/166(6.6%) 1/130(0.8%) 0/169(0.0%) 5/150(3.3%) 3/162(1.9%) 0/99(0.0%) 16/279(5.7%) 86/1518(5.7%)
2011-2015 41/186(22.0%) 12/171(7.0%) 24/82(29.3%) 8/238(3.4%) 1/132(0.8%) 0/171(0.0%) 2/150(1.3%) 3/159(1.9%) 0/183(0.0%) 17/377(4.5%) 108/1849(5.8%)
Total 168/617(27.2%) 44/895(4.9%) 78/326(23.9%) 49/782(6.3%) 3/731(0.4%) 0/722(0.0%) 13/549(2.4%) 32/734(4.4%) 0/429(0.0%) 81/1206(6.7%) 468/6991(6.7%)

Panel B: Experimental Audit and Assurance Papers by Auditing and Assurance Publications in Each Journal by Five-year Periods
Journal
Year
AJPT AOS BRIA CAR EAR JAE JAPP JAR RAST TAR Total
1991-1995 32/117(27.4%) 6/21(28.6%) 17/22(77.3%) 14/24(58.3%) 0/20(0.0%) 0/5(0.0%) 2/17(11.8%) 8/38(21.1%) 0/0(0.0%) 21/52(40.4%) 100/316(31.6%)
1996-2000 39/104(37.5%) 13/30(43.3%) 13/22(59.1%) 6/26(23.1%) 0/52(0.0%) 0/11(0.0%) 2/18(11.1%) 12/27(44.4%) 0/3(0.0%) 10/26(38.5%) 95/319(29.8%)
2001-2005 30/111(27.0%) 5/15(33.3%) 8/15(53.3%) 10/47(21.3%) 1/27(3.7%) 0/11(0.0%) 2/24(8.3%) 6/26(23.1%) 0/4(0.0%) 17/45(37.8%) 79/325(24.3%)
2006-2010 26/99(26.3%) 8/29(27.6%) 16/22(72.7%) 11/44(25.0%) 1/20(5.0%) 0/12(0.0%) 5/34(14.7%) 3/14(21.4%) 0/6(0.0%) 16/48(33.3%) 86/328(26.2%)
2011-2015 41/186(22.0%) 12/36(33.3%) 24/35(68.6%) 8/57(14.0%) 1/24(4.2%) 0/12(0.0%) 2/31(6.5%) 3/16(18.8%) 0/13(0.0%) 17/76(22.4%) 108/486(22.2%)
Total 168/617(27.2%) 44/131(33.6%) 78/116(67.2%) 49/198(24.7%) 3/143(2.1%) 0/51(0.0%) 13/124(10.5%) 32/121(26.4%) 0/26(0.0%) 81/247(32.8%) 468/1774(26.4%)

Panel C: Experimental Audit Papers by Countries


Participants from One Country (U.S., Canada, Australia, Singapore, Other Asia**, Europe***), Participants from Two and Three Countries*, Total
1991-1995 92 (92.0%) 2 (2.0%) 4 (4.0%) 2 (2.0%) 0 (0.0%) 0 (0.0%) 0 (0.0%) 100 (100.0%)
1996-2000 81 (85.3%) 5 (5.3%) 4 (4.2%) 4 (4.2%) 1 (1.1%) 0 (0.0%) 0 (0.0%) 95 (100.0%)
2001-2005 60 (75.9%) 7 (8.9%) 6 (7.6%) 3 (3.8%) 1 (1.3%) 0 (0.0%) 2 (2.5%) 79 (100.0%)
2006-2010 64 (74.4%) 5 (5.8%) 6 (7.0%) 6 (7.0%) 0 (0.0%) 0 (0.0%) 5 (5.8%) 86 (100.0%)
2011-2015 73 (67.6%) 8 (7.4%) 5 (4.6%) 1 (0.9%) 3 (2.8%) 7 (6.5%) 11 (10.2%) 108 (100.0%)
Total 370 (79.1%) 27 (5.8%) 25 (5.3%) 16 (3.4%) 5 (1.1%) 7 (1.5%) 18 (3.8%) 468 (100.0%)

* There are many other studies where the authors come from more than one country but in most of these studies the participants are obtained from only one country. These 18
papers are exceptions where the participants came from two or three countries.
** China (2), Hong Kong (2) and Japan (1)
*** Germany (2), Netherlands (2), Norway (2) and Spain (1)

Table 2: Coding of Elements and Levels by Five-year Periods Based on International Framework for Audit Quality

Elements/Levels 1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total


Input (Values, Ethics, Attitude) 13(10.5%) 8(7.0%) 8(8.5%) 7(7.7%) 10(9.3%) 46(8.7%)
Input (Knowledge, Skills, Experience, Time) 25(20.2%) 19(16.5%) 15(16.0%) 10(11.0%) 10(9.3%) 79(14.9%)
Processing 81(65.3%) 83(72.2%) 64(68.1%) 65(71.4%) 69(64.5%) 362(68.2%)
Output 5(4.0%) 5(4.3%) 7(7.4%) 9(9.9%) 18(16.8%) 44(8.3%)
Element Total 124(100.0%) 115(100.0%) 94(100.0%) 91(100.0%) 107(100.0%) 531(100.0%)
Engagement 90(88.2%) 82(83.7%) 65(80.2%) 65(76.5%) 78(83.0%) 380(82.6%)
Firm 12(11.8%) 14(14.3%) 13(16.0%) 18(21.2%) 15(16.0%) 72(15.7%)
National 0(0.0%) 2(2.0%) 3(3.7%) 2(2.4%) 1(1.1%) 8(1.7%)
Level Total 102(100.0%) 98(100.0%) 81(100.0%) 85(100.0%) 94(100.0%) 460(100.0%)

Table 3: Research Topics by Five-year Periods

Research Questions 1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total


1. Ability
A. Individual Effects (cognitive style, personality factors, culture) 2(2.0%) 4(4.2%) 2(2.5%) 3(3.5%) 2(1.9%) 13(2.8%)
B. Cognitive Limitations (including heuristics and biases) 13(13.0%) 9(9.5%) 1(1.3%) 3(3.5%) 3(2.8%) 28(6.0%)
C. Predecisional Behavior (including hypothesis generation, 9(9.0%) 11(11.6%) 3(3.8%) 1(1.2%) 1(0.9%) 25(5.3%)
information search, hypothesis evaluation)
2. Knowledge, Memory, Expertise, Experience 22(22.0%) 14(14.7%) 11(13.9%) 11(12.8%) 7(6.5%) 65(13.9%)
3. Environmental
A. Decision Aids (includes priming, structuring, instruction, different 6(6.0%) 7(7.4%) 9(11.4%) 7(8.1%) 9(8.3%) 38(8.1%)
methods, feedback, training)
B. Accountability and Partner Emphasis 6(6.0%) 7(7.4%) 3(3.8%) 3(3.5%) 3(2.8%) 22(4.7%)
C. Risk Factors 2(2.0%) 4(4.2%) 1(1.3%) 2(2.3%) 1(0.9%) 10(2.1%)
D. Regulation / Litigation 1(1.0%) 1(1.1%) 4(5.1%) 1(1.2%) 2(1.9%) 9(1.9%)
E. Other Environmental Factors (deadline pressure, time, effort, and 2(2.0%) 4(4.2%) 6(7.6%) 2(2.3%) 6(5.6%) 20(4.3%)
attributes of evidence)
F. Evidence Attributes 3(3.0%) 0(0.0%) 2(2.6%) 4(4.7%) 4(3.7%) 13(2.8%)
G. Other 4(4.0%) 1(1.1%) 1(1.3%) 0(0.0%) 4(3.7%) 10(2.1%)
4. Group Decision Making
A. Review Processing 5(5.0%) 9(9.5%) 9(11.4%) 10(11.6%) 3(2.8%) 36(7.7%)
B. Brainstorming and fraud detection 1(1.0%) 0(0.0%) 0(0.0%) 4(4.7%) 8(7.5%) 13(2.8%)
C. Consulting 0(0.0%) 3(3.2%) 1(1.3%) 1(1.2%) 2(1.9%) 7(1.5%)
D. Other 0(0.0%) 1(1.1%) 1(1.3%) 0(0.0%) 0(0.0%) 2(0.4%)
5. Other Interactions
A. Negotiating 1(1.0%) 0(0.0%) 2(2.5%) 11(12.8%) 7(6.5%) 20(4.3%)
B. Effect of ACs 0(0.0%) 0(0.0%) 0(0.0%) 0(0.0%) 1(0.9%) 1(0.2%)
C. Internal Auditors 3(3.0%) 1(1.1%) 1(1.3%) 3(3.5%) 2(1.9%) 10(2.1%)
D. Management (client credibility, reliability, etc.) 4(4.0%) 4(4.2%) 2(2.5%) 1(1.2%) 2(1.9%) 13(2.8%)
6. Incentives and Motivation 2(2.0%) 3(3.2%) 2(2.5%) 1(1.2%) 1(0.9%) 9(1.9%)
7. Skepticism 0(0.0%) 0(0.0%) 0(0.0%) 0(0.0%) 3(2.8%) 3(0.6%)
8. Ethics and Independence 6(6.0%) 2(2.1%) 7(8.9%) 0(0.0%) 5(4.6%) 20(4.3%)
9. Internal Auditor Judgments * 3(3.0%) 1(1.1%) 1(1.3%) 2(2.3%) 5(4.6%) 12(2.6%)
10. Audit Committee Judgments 1(1.0%) 1(1.1%) 3(3.8%) 4(4.7%) 4(3.7%) 13(2.8%)
11. Jury/Judges Judgments 4(4.0%) 4(4.2%) 4(5.1%) 6(7.0%) 9(8.3%) 27(5.8%)
12. User Judgments 1(1.0%) 2(2.1%) 2(2.5%) 4(4.7%) 6(5.6%) 15(3.2%)
13. Non-financial Information Assurance 0(0.0%) 1(1.1%) 0(0.0%) 1(1.2%) 4(3.7%) 6(1.3%)
14. Financial Statement Preparer Judgments 0(0.0%) 1(1.1%) 1(1.3%) 1(1.2%) 5(4.6%) 8(1.7%)
Total 100(100.0%) 95(100.0%) 79(100.0%) 86(100.0%) 108(100.0%) 468(100.0%)

* Categories 9 to 14 may not reconcile with Table 5 (Participants Used in Audit Experimental Research by Five-year Periods), as some of the participants were recruited as surrogates for other participant groups. Examples include the 27 papers coded under jury/judges judgments, where 17 papers use judges/jurors as participants (Table 5), while another ten papers use students as proxies for jurors or entry-level law associates (footnote in Table 5). Similarly, of the 21 papers coded under user judgments (15 papers) and non-financial information assurance (six papers), 13 papers use investors as participants (Table 5), while another eight papers use students as proxies for nonprofessional investors (footnote in Table 5). Other examples are: Wainberg, Kida, Piercey, and Smith (2013) used "207 managers and other professionals" as participants and asked them to "assume the role of an audit committee member". Perreault and Kida (2011) recruited "practicing managers and professionals with significant business experience". However, as only 2 percent had prior experience as an audit committee member, we identified the participant group as financial statement preparers, while the research question identified the most relevant theme as audit committee judgments.
Table 4: Number of Papers with Reference to Auditing Standards*

Panel A: Percentage of Reference to Auditing Standards


1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total
Papers with no reference to auditing standards 39(39.0%) 41(43.2%) 33(41.8%) 25(29.1%) 46(42.6%) 184(39.3%)
Reference to 1 international auditing standard or national equivalent category 32(32.0%) 29(30.5%) 29(36.7%) 30(34.9%) 25(23.1%) 145(31.0%)
Reference to 2 international auditing standards or national equivalent categories 17(17.0%) 8(8.4%) 8(10.1%) 13(15.1%) 21(19.4%) 67(14.3%)
Reference to 3 international auditing standards or national equivalent categories 8(8.0%) 6(6.3%) 4(5.1%) 11(12.8%) 8(7.4%) 37(7.9%)
References to more than 3 auditing standards or national equivalent categories 4(4.0%) 11(11.6%) 5(6.3%) 7(8.1%) 8(7.5%) 35(7.5%)
Total 100(100.0%) 95(100.0%) 79(100.0%) 86(100.0%) 108(100.0%) 468(100.0%)

Panel B: Specific International Standards of Auditing (ISAs) and/or National Auditing Standards Referenced
ISAs or National Equivalent U.S. Equivalent 1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total
ISA 200 Overall Objectives of the Independent
Auditor and the Conduct of an Audit in Accordance AU 110/AU 150/AU 230/AU 310/SAS 1
with International Standards on Auditing 10 (8.8%) 5 (4.6%) 3 (3.8%) 4 (3.2%) 8 (6.1%) 30 (5.4%)
ISA 220 Quality Control for an Audit of Financial
AS 7
Statements 1 (0.9%) 0 (0.0%) 0 (0.0%) 3 (2.4%) 3 (2.3%) 7 (1.3%)
AS 3/AU 339 (superseded by AS 3)/SAS
ISA 230 Audit Documentation 103/SAS 96 (superseded by SAS
103)/SAS 41 (superseded by SAS 96) 1 (0.9%) 2 (1.9%) 1 (1.3%) 8 (6.5%) 6 (4.6%) 18 (3.2%)
AU 316/SAS 99/SAS 82 (superseded by
ISA 240 The Auditor’s Responsibilities Relating to
SAS 99)/SAS 53 (superseded by SAS
Fraud in an Audit of Financial Statements
82)/SAS 16 (superseded by SAS 53) 17 (15.0%) 19 (17.6%) 17 (21.5%) 23 (18.5%) 26 (19.8%) 102 (18.4%)
ISA 260 Communication with Those Charged with
AS 16/SAS 114/SAS 90/SAS 61
Governance 0 (0.0%) 1 (0.9%) 6 (7.6%) 5 (4.0%) 3 (2.3%) 15 (2.7%)
ISA 265 Communicating Deficiencies in Internal
Control to Those Charged with Governance and SAS 60
Management 0 (0.0%) 0 (0.0%) 0 (0.0%) 1 (0.8%) 0 (0.0%) 1 (0.2%)
ISA 300 Planning an Audit of Financial Statements SAS 108/SAS 22/AU 311 3 (2.7%) 4 (3.7%) 3 (3.8%) 8 (6.5%) 1 (0.8%) 19 (3.4%)
ISA 315 Identifying and Assessing the Risks of
Material Misstatement through Understanding the AS 12/AS 8/SAS 109
Entity and Its Environment 1 (0.9%) 0 (0.0%) 1 (1.3%) 8 (6.5%) 13 (9.9%) 23 (4.1%)
ISA 320 Materiality in Planning and Performing an
AS 11/AU 312/ SAS 47/SAS 107
Audit 16 (14.2%) 18 (16.7%) 11 (13.9%) 11 (8.9%) 7 (5.3%) 63 (11.4%)
ISA 330 The Auditor’s Responses to Assessed Risks AS 13/AU 318/SAS 110 0 (0.0%) 0 (0.0%) 1 (1.3%) 5 (4.0%) 10 (7.6%) 16 (2.9%)
ISA 450 Evaluation of Misstatements Identified
AS 14/SAS 89
during the Audit 0 (0.0%) 1 (0.9%) 5 (6.3%) 5 (4.0%) 2 (1.5%) 13 (2.3%)
ISA 500 Audit Evidence AS 15/AU 326/SAS 106/SAS 80/SAS 31 9 (8.0%) 9 (8.3%) 0 (0.0%) 5 (4.0%) 7 (5.3%) 30 (5.4%)
ISA 501 Audit Evidence – Specific Considerations
SAS 12
for Selected Items 1 (0.9%) 0 (0.0%) 1 (1.3%) 0 (0.0%) 0 (0.0%) 2 (0.4%)
ISA 505 External Confirmations AU 330/SAS 67 1 (0.9%) 0 (0.0%) 1 (1.3%) 0 (0.0%) 1 (0.8%) 3 (0.5%)

ISA 520 Analytical Procedures AU 520/AU 329/SAS 56 13 (11.5%) 9 (8.3%) 4 (5.1%) 3 (2.4%) 6 (4.6%) 35 (6.3%)
ISA 530 Audit Sampling AU 350/SAS 39 8 (7.1%) 7 (6.5%) 3 (3.8%) 0 (0.0%) 3 (2.3%) 21 (3.8%)
ISA 540 Auditing Accounting Estimates, Including
Fair Value Accounting Estimates, and Related SAS 57
Disclosures 3 (2.7%) 0 (0.0%) 4 (5.1%) 5 (4.0%) 2 (1.5%) 14 (2.5%)
AU 341/SAS 59/SAS 34 (superseded by
ISA 570 Going Concern
SAS 59) 7 (6.2%) 9 (8.3%) 6 (7.6%) 3 (2.4%) 3 (2.3%) 28 (5.0%)
ISA 580 Written Representations SAS 85 0 (0.0%) 1 (0.9%) 0 (0.0%) 1 (0.8%) 0 (0.0%) 2 (0.4%)
AU 322/SAS 65/SAS 9 (superseded by
ISA 610 Using the Work of Internal Auditors
SAS 65) 4 (3.5%) 2 (1.9%) 1 (1.3%) 2 (1.6%) 2 (1.5%) 11 (2.0%)
ISA 620 Using the Work of Auditor’s Expert SAS 73/SAS 11 (superseded by SAS 73) 1 (0.9%) 1 (0.9%) 1 (1.3%) 0 (0.0%) 0 (0.0%) 3 (0.5%)
ISA 700 Forming an Opinion and Reporting on SAS 58/SAS 79
Financial Statements 5 (4.4%) 2 (1.9%) 0 (0.0%) 0 (0.0%) 0 (0.0%) 7 (1.3%)
ISA 705 Modifications to the Opinion in the
Independent Auditor’s Report 0 (0.0%) 0 (0.0%) 0 (0.0%) 0 (0.0%) 1 (0.8%) 1 (0.2%)
ISA 720 The Auditor’s Responsibilities Relating to
Other Information in Documents Containing AU 9550/SAS 8
Audited Financial Statements 0 (0.0%) 0 (0.0%) 1 (1.3%) 0 (0.0%) 0 (0.0%) 1 (0.2%)
ISAE 3000 Assurance Engagements Other than
Audits or Reviews of Historical Financial
Information 0 (0.0%) 0 (0.0%) 0 (0.0%) 1 (0.8%) 1 (0.8%) 2 (0.4%)
ISQC 1 Quality Controls for Firms that Perform QC 20/QC 30/QC 10 (superseded by QC
Audits and Reviews of Financial Statements and 20)/SQCS 7(superseded by QC 10)/
Other Assurance and Related Services Engagements SQCS 1 1 (0.9%) 3 (2.8%) 3 (3.8%) 4 (3.2%) 6 (4.6%) 17 (3.1%)
ISRE 2400 Engagement to Review Financial
Statements 0 (0.0%) 1 (0.9%) 0 (0.0%) 0 (0.0%) 0 (0.0%) 1 (0.2%)

Number of Papers Referencing National Auditing and Assurance Standards with no International Equivalents
U.S. 1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total
AS 5/AS 2 (superseded by AS 5)/ AU
319/AU 320/
SAS 55/SAS 78 An Audit of Internal
Control over Financial Reporting that is
Integrated with an Audit of Financial
Statements 7 (6.2%) 13 (12.0%) 4 (5.1%) 17 (13.7%) 18 (13.7%) 59 (10.6%)
SAS 37, SAS 48, AU 317/SAS 54, SAS
71, SAS 94, SAS 98/SAS 64 and SSCS 1 4 (3.5%) 1 (0.9%) 2 (2.5%) 2 (1.6%) 2 (1.5%) 11(2.0%)
Total 113 (100.0%) 108 (100.0%) 79 (100.0%) 124 (100.0%) 131 (100.0%) 555 (100.0%)*

* Based on references given in the bibliographies of the papers; we did not include papers that referred to international or national auditing standards in general without reference to a specific standard, or references to proposed standards or exposure drafts.
* The total citation count of 555 is greater than the total number of papers (468) because many papers cited more than one auditing standard. The total citation count can be reconciled with Table 4, Panel A.

Table 5: Participants Used in Audit Experimental Research by Five-year Periods

Participants Group 1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total


External auditors: Staff auditors only / Staff auditors and above 6(6.0%) 12(12.6%) 6(7.6%) 11(12.8%) 6(5.6%) 41(8.8%)
External auditors: Senior auditors only / Senior auditors and above 27(27.0%) 36(37.9%) 26(32.9%) 22(25.6%) 24(22.2%) 135(28.8%)
Audit managers / Audit partners 19(19.0%) 11(11.6%) 17(21.5%) 10(11.6%) 10(9.3%) 67(14.3%)
Auditors from all ranks, also include specialists such as fraud
specialists, financial institution/banking specialists,
manufacturing industry specialists, health industry specialists,
superannuation industry specialists, litigation support
specialists; other auditors without specifying the rank, or only
provide the mean of auditing experience without specifying
the range; external auditors and other professionals 24(24.0%) 22(23.2%) 13(16.5%) 13(15.1%) 21(19.4%) 93(19.9%)
External auditors and students 12(12.0%) 2(2.1%) 4(5.1%) 6(7.0%) 7(6.5%) 31(6.6%)
Subtotal of external auditors 88(88.0%) 83(87.4%) 66(83.5%) 62(72.1%) 68(63.0%) 367(78.4%)
Students * 3(3.0%) 4(4.2%) 5(6.3%) 11(12.8%) 14(13.0%) 37(7.9%)
Judges and jurors: Judges 3(3.0%) 2(2.1%) 0(0.0%) 0(0.0%) 1(0.9%) 6(1.3%)
Judges and jurors: Jurors, jury-eligible adults (e.g., U.S. adults 18 years and above) 1(1.0%) 1(1.1%) 2(2.5%) 2(2.3%) 5(4.6%) 11(2.4%)
Subtotal of judges and jurors 4(4.0%) 3(3.2%) 2(2.5%) 2(2.3%) 6(5.6%) 17(3.6%)
Investors 1(1.0%) 3(3.2%) 1(1.3%) 2(2.3%) 6(5.6%) 13(2.8%)
Financial statement preparers 0(0.0%) 1(1.1%) 1(1.3%) 2(2.3%) 7(6.5%) 11(2.4%)
Internal auditors 3(3.0%) 1(1.1%) 1(1.3%) 1(1.2%) 3(2.8%) 9(1.9%)
Audit committee members 1(1.0%) 0(0.0%) 3(3.8%) 4(4.7%) 2(1.9%) 10(2.1%)
Other managers 0(0.0%) 0(0.0%) 0(0.0%) 2(1.9%) 2(1.9%) 4(0.9%)
Total 100(100.0%) 95(100.0%) 79(100.0%) 86(100.0%) 108(100.0%) 468(100.0%)

* Sixty-eight papers use students as participants (37 papers (7.9 percent) use students only and 31 papers (6.6 percent) use both students and external auditors). Of these 68 papers, 43 (63.2 percent) use students as surrogates for staff-level or inexperienced auditors; 11 (16.1 percent) use students as proxies for jurors or entry-level law associates, both entry-level law associates and nonprofessional investors, or auditing and tax professionals; eight (11.8 percent) use students as proxies for nonprofessional investors; and six (8.8 percent) use students to examine reporting intentions for questionable acts.

Table 6: Experimental Controlled Condition by Five-year Periods

1991-1995 1996-2000 2001-2005 2006-2010 2011-2015 Total


Controlled environment* 66(66.0%) 57(60.0%) 42(53.2%) 55(64.0%) 51(47.2%) 271(57.9%)
Non-controlled environment
Hard copy: Mail 6(6.0%) 5(5.3%) 12(15.2%) 5(5.8%) 7(6.5%) 35(7.5%)
Hard copy: Contact persons for internal distribution of hard copies (including 6 papers which use both mail and contact persons) 17(17.0%) 19(20.0%) 16(20.2%) 10(11.6%) 5(4.6%) 67(14.3%)
Hard copy subtotal 23(23.0%) 24(25.3%) 28(35.4%) 15(17.4%) 12(11.1%) 102(21.8%)
Computerized: Email/link 0(0.0%) 0(0.0%) 4(5.1%) 5(5.8%) 28(25.9%) 37(7.9%)
Computerized: Mechanical Turk 0(0.0%) 0(0.0%) 0(0.0%) 0(0.0%) 2(1.9%) 2(0.4%)
Computerized subtotal 0(0.0%) 0(0.0%) 4(5.1%) 5(5.8%) 30(27.8%) 39(8.3%)
Both hard copy and computerized 0(0.0%) 0(0.0%) 0(0.0%) 3(3.5%) 2(1.9%) 5(1.1%)
Subtotal of non-controlled environment 23(23.0%) 24(25.3%) 32(40.5%) 23(26.7%) 44(40.7%) 146(31.2%)
Both controlled and non-controlled environment in the same study 7(7.0%) 8(8.4%) 2(2.5%) 4(4.7%) 10(9.3%) 31(6.6%)
Verbal protocol 1(1.0%) 4(4.2%) 0(0.0%) 1(1.2%) 0(0.0%) 6(1.3%)
Experimental conditions not identified 3(3.0%) 2(2.1%) 3(3.8%) 3(3.5%) 3(2.8%) 14(3.0%)
Total 100(100.0%) 95(100.0%) 79(100.0%) 86(100.0%) 108(100.0%) 468(100.0%)

* Controlled environment is coded based on meeting one of the following criteria: (a) participants completed the experiment under the supervision of researchers/audit firm coordinators; (b) when the presence of researchers was not specified, the experiment was conducted in a training session or classroom setting; or (c) there is a detailed description of how the experiment proceeded, such as an oral introduction at the beginning, instruction during the experiment, or monetary incentives offered at the end of the experiment.

Table 7: Interactions among Participant Types, Countries and Experimental Conditions

Participants Countries of Participants Participants Experimental Conditions* Experimental Conditions by Countries of Participants
(Column A) (Column B) (Column C)
US Non- Subtotal of Subtotal of Controlled Non- Subtotal of Subtotal of Non- Controlled Non- Subtotal of Subtotal of
US** US Non-US controlled Controlled controlled controlled Controlled Non-
controlled
1991- Auditors 80(80.0%) 8(8.0%) Auditors 59(66.3%) 19(21.3%) US 61(68.5%) 20(22.5%)
1995 Students 3(3.0%) 0(0.0%) Students 2(2.2%) 0(0.0%) Non-US 5(5.6%) 3(3.4%) 66(74.2%) 23(25.8%)
Others 9(9.0%) 0(0.0%) 92(92.0%) 8(8.0%) Others 5(5.6%) 4(4.5%) 66(74.2%) 23(25.8%)
1996- Auditors 70(73.7%) 13(13.7%) Auditors 48(59.3%) 21(25.9%) US 48(59.3%) 22(27.2%)
2000 Students 4(4.2%) 0(0.0%) Students 4(4.9%) 0(0.0%) Non-US 9(11.1%) 2(2.5%) 57(70.4%) 24(29.6%)
Others 7(7.4%) 1(1.1%) 81(85.3%) 14(14.7%) Others 5(6.2%) 3(3.7%) 57(70.4%) 24(29.6%)
2001- Auditors 50(63.3%) 16(20.3%) Auditors 36(48.6%) 26(35.1%) US 29(39.2%) 29(39.2%)
2005 Students 5(6.3%) 0(0.0%) Students 3(4.1%) 1(1.4%) Non-US 13(17.6%) 3(4.1%) 42(56.8%) 32(43.2%)
Others 7(8.9%) 1(1.3%) 62(78.5%) 17(21.5%) Others 3(4.1%) 5(6.8%) 42(56.8%) 32(43.2%)
2006- Auditors 48(55.8%) 14(16.3%) Auditors 41(52.6%) 14(17.9%) US 44(56.4%) 19(24.4%)
2010 Students 11(12.8%) 0(0.0%) Students 10(12.8%) 1(1.3%) Non-US 11(14.1%) 4(5.1%) 55(70.5%) 23(29.5%)
Others 9(10.5%) 4(4.7%) 68(79.1%) 18(20.9%) Others 4(5.1%) 8(10.3%) 55(70.5%) 23(29.5%)
2011- Auditors 47(43.5%) 21(19.4%) Auditors 33(34.7%) 26(27.4%) US 34(35.8%) 37(38.9%)
2015 Students 12(11.1%) 2(1.9%) Students 12(12.6%) 2(2.1%) Non-US 17(17.9%) 7(7.4%) 51(53.7%) 44(46.3%)
Others 24(22.2%) 2(1.9%) 83(76.9%) 25(23.1%) Others 6(6.3%) 16(16.8%) 51(53.7%) 44(46.3%)
Total Auditors 295(63.0%) 72(15.4%) Auditors 217(52.0%) 106(25.4%) US 216(51.8%) 127(30.5%)
Students 35(7.5%) 2(0.4%) Students 31(7.4%) 4(1.0%) Non-US 55(13.2%) 19(4.6%) 271(65.0%) 146(35.0%)
Others 56(12.0%) 8(1.7%) 386(82.5%) 82(17.5%) Others 23(5.5%) 36(8.6%) 271(65.0%) 146(35.0%)

* When examining the interaction with experimental conditions, we exclude papers that used both controlled and non-controlled experimental conditions, verbal protocols, and experiments where conditions are not identified; thus, in Columns (B) and (C) there are 417 papers used for analysis, where 271 use controlled environments only and another 146 use non-controlled environments only (refer to Table 6).
** Non-U.S. means that all participants are from countries other than the U.S.

Figure 1: Coding Framework Based on the International Framework for Audit Quality: Some Examples

Elements Engagement Level Firm Level National Level


Input-Values, Ethics, The engagement team is independent or is perceived to be independent, Tone at the top,
Attitude The engagement team exhibits accountability, professional scepticism, Whistleblowing in audit firms and power distance.
objectivity, and makes ethical decisions,
Auditors’ behaviour under conflict of interest situations,
Auditors’ whistleblowing, seniors reporting discovery of procedures prematurely signed off,
Auditors’ mood and locus of control impact on job performance,
Auditors’ type impact (whether auditors are principle-oriented, rules-oriented or
client-oriented).
Input-Knowledge, Skills, Partners and staff ability, experience, specialization, memory and critical Audit firm provides training or instruction,
Experience, Time thinking mindset, Feedback is provided to partners and staff,
Time pressure and time budget decision. Engagement teams’ industry specialization,
Staffing assignments decision,
Audit firm structure.
Processing The engagement team assesses client acceptance, Review team judgments, Impact of regulatory changes on
Audit planning judgment, Quality control procedures at firm level, auditor decision-making,
Analytical procedures judgment, Audit team discussion, group interaction, group Auditing standard and regulatory
Probability assessment and interpretation of probability phrases decision-making, group-assist judgment, body guidance on audit judgment.
Risk assessment, identify internal control weaknesses, brainstorming session, audit team effectiveness,
Audit inquiry, evidence collection, evaluations of management-provided Group support systems,
information, Auditor and client negotiation, Audit firm uses audit technology/strategy,
Hypothesis generation, Superiors’ views or suggestions on auditor judgment,
Auditors' error judgments, Staff performance and competence evaluation,
Detection of earnings management, Audit team time reporting,
Sampling, Planned audit investment,
Forming opinion, Audit firms’ error management,
Audit adjustment, Consult with firm experts,
Evaluation of/reliance on internal auditor, Audit firm formalization of decision processing,
Seniors reporting discovery of procedures prematurely signed off,
Auditors take action to protect their litigation reputation.
Outputs Audit report, Audit firm inspection reports, The impact of regulatory change
Internal control report, Judges' evaluation of audit firm liability, on investors’ perception of audit
Audit matter paragraphs in the audit report, Audit firm litigation exposure. quality.
Investors’ perceptions of auditors,
Investors’ judgment of hyperlinking unaudited information to audited financial
statements,
The impact of audit quality on financial reporting executives’ decision-making,
CPA letterhead on response rates for second requests for accounts receivable
confirmation,
Disclosing the auditor’s judgment processing in the audit report,
Level of attestation on investors’ decision making,
Judges' evaluation of auditors’ liability and assessment of litigation reputation.
Outside the Framework Internal auditors, audit committee members, non-financial information assurance
