
ACCOUNTING HORIZONS
American Accounting Association
Vol. XX, No. XX, MONTH YEAR, pp. 1–13
DOI: 10.2308/HORIZONS-2023-073

Improving Audit Quality with Data Analytic Visualizations: The Importance of Spatial Abilities and Feedback in Anomaly Identification
Becca N. Baaske
The University of Tampa

Marc Eulerich
University of Duisburg-Essen

David A. Wood
Brigham Young University

SYNOPSIS: Public accounting firms and internal audit departments are implementing data analytics to enhance
effectiveness and efficiency; however, there is a shortage of professionals with data analysis skills and the ability to
derive meaningful insights. We conducted a quasiexperiment to examine whether and how individuals’ spatial
abilities and types of feedback are related to anomaly identification performance. We predict and find that those with
higher spatial abilities choose better visualizations and, in turn, are more accurate at anomaly identification. Auditors
with lower spatial abilities can choose better visualizations and more accurately identify anomalies when they are
provided task property feedback (i.e., feedback about the process) rather than outcome feedback or no feedback.
Finally, a combination of high spatial abilities and task property feedback significantly reduces the number of false
positive anomalies identified for all auditors. Our findings suggest practitioners should consider measuring spatial
abilities during recruitment and when assigning visualization tasks.
Keywords: data analytics; data visualizations; task property feedback; spatial abilities; anomalies.

I. SYNOPSIS AND CONTRIBUTION TO PRACTICE


Data analytic tools, including visualization tools, are increasingly used by audit firms and internal audit functions, enhancing efficiency and effectiveness (e.g., PwC 2015; Earley 2015; Ernst & Young LLP (EY) 2018; Business-Higher Education Forum and PwC (BHEF and PwC) 2017; Austin, Carpenter, Christ, and Nielson 2021; Eulerich, Masli, Pickerd, and Wood 2023). However, these tools are not effective unless professionals possess the
necessary skills to effectively utilize the tools (e.g., Appelbaum, Kogan, and Vasarhelyi 2017; Richardson and Watson
2021), including auditors who effectively visualize and comprehend patterns in data (A. Rose, J. Rose, Sanderson, and
Thibodeau 2017). This paper explores limitations faced by auditors when choosing and using data visualizations to ana-
lyze data and tests a way to improve performance and, thus, audit quality.

We thank Uday Murthy, Kristina Demek, Hilda Carrillo, and Juliana Kralik for helpful comments and advice during the early stages of this paper.
We thank Linda Myers, Tina Carpenter, doctoral student participants at the 2019 Deloitte Consortium, attendees of the 2019 Florida Behavioral
Accounting Research Symposium, attendees of the 2019 AIS Midyear Meeting, and workshop participants at University of Richmond and The
University of Tampa for the helpful presentation feedback.
Becca N. Baaske, The University of Tampa, Sykes College of Business, Department of Accounting, Tampa, FL, USA; Marc Eulerich, University of
Duisburg-Essen, Mercator School of Management, Department of Accounting and Finance, Duisburg, Germany; David A. Wood, Brigham Young
University, BYU Marriott School of Business, School of Accountancy, Provo, UT, USA.
Supplemental materials are available online, as linked in the text.
Editor’s note: Accepted by Vernon J. Richardson, under the Senior Editorship of D. Scott Showalter.

Submitted: June 2023


Accepted: January 2024
Early Access: February 2024


Using the extended cognitive fit model, we emphasize the significance of user characteristics in establishing the fit
between anomaly tasks and data visualizations, particularly when choosing visualizations and using them to detect a pat-
tern or anomaly.1 Given that spatial abilities are pivotal for success in science, technology, engineering, and mathematics
(STEM) disciplines (Wai, Lubinski, and Benbow 2009), they might have an increasingly vital role in accounting due to
the growing use of data visualizations in practice (Austin et al. 2021).2 Specifically, our study examines how spatial abili-
ties and feedback impact visualization choices and anomaly identification. Spatial abilities are defined as an individual’s
capacity to identify and understand patterns in visual stimuli and manipulate visual patterns (Anderson 2005; Bonner
2008). We predict that those with higher spatial abilities will be better at mentally configuring a fit between the anomaly task and the visualization type and that they will also be better at visualizing anomalies within the data.

For auditors with lower spatial abilities, it is important to explore other factors that can help them overcome their
limitations while they develop greater spatial abilities, which takes considerable effort and time (Sorby 2007; Potter and
Van der Merwe 2003). Based on prior research, we expect feedback about the process or steps taken to complete a task
(i.e., task property feedback) will be more helpful than feedback that just reports whether answers are right or wrong
(i.e., outcome feedback) or no feedback at all. Thus, we predict that task property feedback will be especially helpful to
auditors with lower spatial abilities.
We conducted an experiment where we measured spatial abilities and manipulated feedback type to examine their
effects on visualization choice and anomaly identification performance. We find that auditors with higher spatial abili-
ties select superior visualizations for anomaly tasks and identify more anomalies than auditors with lower spatial abili-
ties. Interestingly, these abilities also aid in anomaly detection even if the most suitable visualization is not chosen. We
also find a significant interactive effect between spatial abilities and feedback type—task property feedback is particu-
larly beneficial for auditors with lower spatial abilities but can be harmful to individuals with higher spatial abilities if it
does not lead to improved visualization choices (we discuss the tentative evidence for this more later in the paper).
However, a combination of higher spatial abilities and task property feedback can reduce false positive anomaly identifi-
cation even when it does not lead to improved visualization choices. Task property feedback can help auditors with
lower spatial abilities match the performance of their counterparts with higher spatial abilities by helping them make
better visualization choices, but it does not completely alleviate the spatial abilities gap for identifying anomalies.
Importantly, our analyses confirm that these findings are consistent across both accounting students and professionals,
suggesting a persistent shortfall in anomaly identification that is likely hurting audit quality.
Our findings have implications for public accounting firms, internal audit functions, and other practitioners aiming
to equip auditors with data visualization skills. The inconsistent ability of auditors, both incoming and experienced, to
select superior visualizations can harm risk identification and audit quality. Until training can improve inherent spatial
abilities, we suggest measuring auditors’ abilities during recruitment and assigning those with higher spatial abilities to
visualization tasks. However, if auditors with lower spatial abilities are working on a visualization task, we recommend
that they be provided feedback on choosing and using superior visualizations prior to the task. Finally, our results dem-
onstrating that higher spatial abilities and task property feedback can reduce identification of false positive anomalies
offer practical guidance to help auditors become more efficient.
Our study, as one of the first to empirically examine auditors’ performance in anomaly identification using interac-
tive visualization tools, lends support to the extended cognitive fit model (Shaft and Vessey 2006). It supports that both
internal characteristics, like spatial abilities, and external sources of insight, like task property feedback, are helpful for
achieving an optimal “fit” for visualization tasks. We also respond to the call for research in choosing visualizations for
more complex accounting tasks (Dilla, Janvrin, and Raschke 2010), thus expanding on prior research that mainly deals
with choosing visualizations for simpler tasks.

II. BACKGROUND AND HYPOTHESIS DEVELOPMENT


The American Institute of Certified Public Accountants (AICPA) (2017) defines data analytics as “the science and
art of discovering and analyzing patterns, identifying anomalies, and extracting other useful information in data under-
lying or related to the subject matter…through analysis, modeling, and visualization.” Internal and external auditing
are increasingly making use of data analytic tools, and it is changing the nature of these fields (Earley 2015; Schneider,
Dai, Janvrin, Ajayi, and Raschke 2015; Vasarhelyi, Kogan, and Tuttle 2015; Eulerich et al. 2023). By uncovering anom-
alies or patterns in datasets that would remain otherwise unknown, data analytics can help external and internal

1 We define an anomaly as an unexpected or unusual observation or pattern in the data visualization.
2 We validated the relevance of visualization to accounting practice through personal contacts at all Big 4 firms as well as several large internal audit functions. All practitioners explained that visualizations have become an important element of their data analytic activities.


auditors identify high-risk areas, internal control weaknesses, or process inefficiencies (Vasarhelyi 2013; Brown-Liburd, Issa,
and Lombardi 2015; Austin et al. 2021; Jans and Eulerich 2022; Emett, Eulerich, Lovejoy, Summers, and Wood 2023).
Many of the implemented data analytic tools have both a data visualization and interactive component (Dilla et al.
2010; BHEF and PwC 2017; Austin et al. 2021). For instance, auditors can choose how to sort, filter, and disaggregate
the data with various graphs, giving them many options for visualizing the full population of a dataset. Then, auditors
can use the chosen visualization to identify anomalies and focus on areas of concern (Chang and Luo 2021; Brown-
Liburd et al. 2015). The helpfulness of a visualization in facilitating pattern recognition depends on various task charac-
teristics (Dilla et al. 2010). Consistent with this notion, cognitive fit theory posits that a “fit” between the way informa-
tion is presented (e.g., the way data is presented with a visualization) and task characteristics (i.e., the type of anomaly

being searched for) can result in more effective and efficient task performance (Dilla and Steinbart 2005; Vessey 1991).
Therefore, users of data visualization tools must possess the ability to select the most suitable visualization types that
align with the task at hand (Lurie and Mason 2007).
According to the extended cognitive fit model, effectiveness of performance outcomes depends not only on the
fit between the task characteristics and the way external information is presented, but also on user characteristics
(e.g., see Vera-Muñoz, Kinney, and Bonner 2001). We suggest that a user’s characteristics and insights play an
important role in improving performance with interactive data visualization tools because the user can choose how
to select, organize, and display the information. In essence, the user must draw on internal characteristics and
insights to choose the form of the visualization, establishing the extent of the fit between the task characteristics and
the visualization.
Spatial ability is one internal characteristic that is likely to influence performance with data visualizations. Prior
studies have shown the significance of spatial ability for performance in certain tasks when using visual stimuli, but not
all tasks (e.g., Speier 2006; Speier and Morris 2003). In contexts where interaction with visual stimuli is available, Luo
(2019) finds that spatial ability did not have significant effects on the choice of visualization format. However, partici-
pants in Luo’s (2019) study only chose between tabulated and graphical stimuli. Therefore, it remains uncertain whether
auditors may leverage spatial ability while choosing or interacting with various graphical visualizations. We hypothesize
that, because individuals with higher spatial abilities can better mentally manipulate visual patterns, they will be better
at formulating a fit between the anomaly task and the visualization type. Moreover, those with greater capability to per-
ceive visual patterns are predicted to be more adept at spotting anomalies, given their capacity to utilize their visual abil-
ities and select appropriate visualizations for the task at hand.
Research shows that spatial ability can be improved with training; however, it also suggests that training is less suc-
cessful in improving performance for adults than for children and can take time (Uttal, Miller, and Newcombe 2013;
Yang et al. 2020). Given there is relatively little specific training on spatial ability in accounting programs, audit firms
would likely benefit from knowing factors that can help the individuals with low spatial ability improve their perfor-
mance beyond training. Feedback is likely one type of mechanism for improving performance for individuals with low
spatial ability.
Kelton, Pennington, and Tuttle (2010) suggest that feedback impacts the mental model of a problem, influencing
task performance. Whereas feedback generally improves performance (Bonner 2008), its effect varies (Kluger and
DeNisi 1996; Leung and Trotman 2005), necessitating tailored feedback interventions. Two prevalent feedback types
studied are outcome feedback (e.g., feedback that indicates whether responses are correct or incorrect) and task property
feedback (e.g., feedback that provides information about the process or steps taken to complete a task). Since outcome
feedback can be ambiguous in complex tasks (Bryant, Murthy, and Wheeler 2009; Kluger and DeNisi 1996), task prop-
erty feedback becomes more beneficial as it provides process clarity.
Process clarity is likely to be especially beneficial for individuals who have lower spatial abilities. High spatial ability
individuals can successfully achieve the correct answer and already have the cognitive ability to match the task with the
right type of visualization. In contrast, the connection between the task and the type of visualization is not clear to low
spatial ability individuals. These individuals will benefit more from feedback that helps them learn the type of visualiza-
tion that is better for performing a task. In essence, task property feedback helps low spatial ability individuals create a
better cognitive fit, which they are not as successful at doing without the feedback. This argument also explains why we
do not expect outcome feedback to be successful as it does not help create a cognitive fit between the task and type of
visualization for identifying an anomaly.
Based on this theory and these arguments, we test the following hypotheses:
H1: Individuals with higher spatial abilities will be more likely to choose a superior visualization for
identifying anomalies and, in turn, will be more accurate at identifying anomalies than individuals with
lower spatial abilities.


TABLE 1
Demographic Information

                                       Mean    Std. Dev.   Minimum   Maximum
Age
  Low spatial                          27.38   10.24       19        64
  High spatial                         30.32    9.86       20        53
  Total                                28.87   10.12       19        64
Experience (months)
  Low spatial                          36.00   75.68        0        396
  High spatial                         58.72   79.89        0        300
  Total                                47.51   78.41        0        396
Familiarity with visualization tools
  Low spatial                           3.82    1.41        1        7
  High spatial                          4.16    1.62        1        7
  Total                                 3.99    1.52        1        7

                                       Number  Percent
Female
  Low spatial                          39      52.7
  High spatial                         27      35.5
  Total                                66      44.0
Male
  Low spatial                          35      47.3
  High spatial                         49      64.5
  Total                                84      56.0

The table presents demographic data for the 150 participants, split by low spatial abilities (n = 74) and high spatial abilities (n = 76).

H2: Individuals who receive task property feedback will be more likely to choose a superior visualization for
identifying anomalies and, in turn, will be more accurate at identifying anomalies than individuals who
receive no feedback or receive outcome feedback.
H3: The effectiveness of task property feedback on visualization choice quality and anomaly identification will be
more pronounced for individuals with lower spatial abilities compared to those with higher spatial abilities.

III. RESEARCH METHOD


We test our hypotheses using a quasiexperimental3 design with Spatial abilities as a measured variable and
Feedback type as a between-participant manipulation.4 We examine the influence of the variables on anomaly identifica-
tion performance using measures for Visualization choice quality, True positive rate (i.e., the number of correctly identi-
fied anomalies divided by the six seeded anomalies), and Positive predictive rate (i.e., the number of correctly identified
anomalies divided by the total number of identified anomalies).

Participants
We sampled 150 participants, comprising 88 accounting students and 62 internal audit professionals. Table 1
provides descriptive information about the participants. The students had prior training with interactive data visu-
alization tools. The use of student participants is appropriate, as companies and audit firms are expecting incom-
ing staff to fill a new role where they will use data visualization tools to identify risky patterns or anomalies in

3 We received exempt status from the Institutional Review Board (IRB) for using human subjects in this study.
4 We acknowledge the limitation (i.e., lack of randomization) of using measured variables instead of manipulated variables, but prior research supports the use of measured variables in certain circumstances (e.g., Gaynor, McDaniel, and Neal 2006; Libby, Nelson, and Bloomfield 2002). In this study, we examine individual abilities, which, by definition, are inherent to the person and not easily manipulated.


underlying data (PwC 2015). The experienced internal auditors came from a national chapter of professional inter-
nal auditors.

Experimental Task
Participants completed the experimental task via Qualtrics. First, participants completed the spatial aptitude test to
capture their inherent spatial abilities (discussed subsequently). Then, participants began round one of the anomaly iden-
tification tasks. For the anomaly identification tasks, participants were asked to identify various anomalies and were
provided the choice of four visualizations for each of three anomaly tasks: composition (i.e., trend analysis), compari-

sons, and relationships. A composition anomaly task involves developing pattern expectations based on trends in time.
A comparison anomaly task involves developing pattern expectations for a comparison, such as between customer bal-
ances or inventory storage locations. A relationship anomaly task involves developing pattern expectation based on the
relationship between the data of interest and other financial or nonfinancial data. Participants were asked to indicate the
anomalies, if any, for each task by checking the box for that datapoint. Additionally, participants indicated the chosen
visualization that was used to identify anomalies. After each anomaly task in round one, participants received either no
feedback, outcome feedback, or task property feedback. Each participant received the same type of feedback after each
anomaly question in round one (see the Online Appendix for screenshots).
Next, participants began round two of the experiment and were again asked to identify anomalies using new data-
sets and new visualizations. In the second round, the participants do not receive any type of feedback, most reflective of
a real audit setting outside of training or learning. Again, participants were asked to identify the anomalies and indicate
the chosen visualization. Anomaly identification performance in the second round is measured for the dependent varia-
bles of interest. Lastly, participants moved to the final section, where they completed the post-experimental question-
naire, including demographic information and manipulation checks.

Independent Variables
We measure Spatial abilities using the spatial aptitude test. This test5 makes use of short-term memory to determine
the individual’s ability to acquire patterns of data and mentally manipulate a figure or pattern (Swink and Speier 1999).
Participants must mentally rotate an object or visualize a folded version of the object without doing so physically. They
answer multiple-choice questions to identify how the figure or pattern would look based on the rotation or fold. We calculate Spatial abilities as the number of correct responses divided by eight, the total number of questions.
We manipulate Feedback type on three levels between participants. In the first round of the experiment, participants
receive either no feedback, outcome feedback, or task property feedback. In the no feedback condition, participants move
from question to question without any information regarding their response. In the outcome feedback condition, partici-
pants are told whether their response was correct or incorrect, including information indicating the true seeded anomalies.
In the task property feedback condition, participants receive information regarding the superior visualization for the specific
anomaly task and where they could have looked on the visualization to identify the true seeded anomalies.6

Dependent Variables
Anomaly identification performance is measured using scores for Visualization choice quality, True positive rate,
and Positive predictive rate from round two. The overall number of superior visualizations7 chosen serves as an indicator
of Visualization choice quality. There is one superior visualization for each of the three anomaly questions in round two.
As such, the score for Visualization choice quality can range from zero to three. The True positive rate is a measure of
the number of correct anomalies identified divided by the six total seeded anomalies. The Positive predictive rate is a
measure of the number of correct anomalies identified divided by the total number of anomalies identified (including
false positives). We also separately measure the number of incorrect anomalies identified for a supplemental analysis of
false positives.
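To make the scoring concrete, both rates reduce to simple set arithmetic over a participant's flagged datapoints. The following Python sketch is illustrative only; the function name and the handling of a participant who flags nothing are our assumptions, as the study does not publish scoring code:

```python
def anomaly_metrics(identified, seeded):
    """Score one participant's round-two anomaly identification.

    identified: datapoints the participant flagged as anomalous
    seeded:     the six anomalies actually seeded in the dataset
    """
    identified, seeded = set(identified), set(seeded)
    true_positives = len(identified & seeded)
    return {
        # correct anomalies / six seeded anomalies
        "true_positive_rate": true_positives / len(seeded),
        # correct anomalies / all flagged anomalies (incl. false positives);
        # returning 0.0 when nothing is flagged is our assumption
        "positive_predictive_rate": (
            true_positives / len(identified) if identified else 0.0
        ),
        # incorrect anomalies, used in the supplemental false-positive analysis
        "false_positives": len(identified - seeded),
    }
```

For example, a participant who flags four of the six seeded anomalies plus two spurious datapoints scores a true positive rate of 4/6, a positive predictive rate of 4/6, and two false positives.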

5 The spatial aptitude test can be found at https://www.aptitude-test.com/free-aptitude-test/simulated-spatial-ability/
6 Examples of the task property feedback are available in the Online Appendix.
7 The superior visualizations are determined by published chart suggestions based on the type of anomaly task (Abela 2006; Severino n.d.) and agree with other sources suggesting what charts are best for which purposes (Romney, Steinbart, Summers, and Wood 2021). The superior visualization is a line graph for the composition anomaly task, a bar graph for the comparison task, and a scatter plot for the relationship anomaly task. A bubble chart is also provided for each task.


IV. RESULTS
Overall, 91.3 percent of participants responded correctly to the manipulation check questions based on their condi-
tion, with no statistically significant difference in the percentage of those who answered correctly across different feed-
back type conditions. Results do not qualitatively differ when those participants who failed the manipulation checks are
excluded from the analyses; thus, we include them in our analyses.
We provide descriptive statistics for each dependent variable, by condition, in Table 2. Whereas Table 2 splits
groups into low and high spatial abilities using a median split, all analyses are performed on the continuous Spatial abilities variable. Because our hypotheses predict a mediated process, we use path analyses to test each of them. We

provide statistical testing based on the manipulated and measured variables and the dependent variables without

TABLE 2
Descriptive Statistics for Dependent Variables

                    No Feedback   Outcome Feedback   Task Property Feedback   Overall
DV: Visualization choice quality
  Low spatial       1.46          1.43               2.15                     1.64
                    [0.65]        [0.69]             [0.93]                   [0.80]
                    n = 26        n = 28             n = 20                   n = 74
  High spatial      1.92          1.74               2.14                     1.95
                    [0.81]        [0.62]             [0.80]                   [0.76]
                    n = 25        n = 23             n = 28                   n = 76
  Overall           1.69          1.57               2.15                     1.79
                    [0.76]        [0.67]             [0.85]                   [0.80]
                    n = 51        n = 51             n = 48                   n = 150
DV: True positive rate
  Low spatial       0.71          0.64               0.60                     0.65
                    [0.27]        [0.30]             [0.28]                   [0.28]
                    n = 26        n = 28             n = 20                   n = 74
  High spatial      0.85          0.83               0.78                     0.82
                    [0.16]        [0.15]             [0.18]                   [0.17]
                    n = 25        n = 23             n = 28                   n = 76
  Overall           0.77          0.73               0.70                     0.74
                    [0.23]        [0.26]             [0.24]                   [0.25]
                    n = 51        n = 51             n = 48                   n = 150
DV: Positive predictive rate
  Low spatial       0.70          0.75               0.68                     0.72
                    [0.32]        [0.34]             [0.38]                   [0.34]
                    n = 26        n = 28             n = 20                   n = 74
  High spatial      0.80          0.97               0.90                     0.89
                    [0.23]        [0.10]             [0.20]                   [0.20]
                    n = 25        n = 23             n = 28                   n = 76
  Overall           0.75          0.85               0.81                     0.80
                    [0.28]        [0.28]             [0.31]                   [0.29]
                    n = 51        n = 51             n = 48                   n = 150

Shown are means [standard deviations] of Visualization choice quality, True positive rate, and Positive predictive rate. Low Spatial = a spatial aptitude test score of 5 out of 8 or lower; High Spatial = a spatial aptitude test score of 6 out of 8 or higher; No Feedback = no feedback information; Outcome Feedback = feedback information about correct versus incorrect anomalies; Task Property Feedback = feedback information about the process of choosing visualizations and using them to identify anomalies.
Variable Definitions:
Visualization choice quality = number of superior visualizations chosen;
True positive rate = number of correct anomalies identified/six seeded anomalies; and
Positive predictive rate = number of correct anomalies identified/number of total anomalies identified, including false positives.


FIGURE 1
Tests of H1
Panel A: Conditional Mediating Effects of X (Spatial abilities) on Y (True positive rate)
Panel B: Conditional Mediating Effects of X (Spatial abilities) on Y (Positive predictive rate)

Spatial abilities is a continuous variable based on the spatial aptitude test scores. For Panel A, the overall model is significant (R² = 0.20, F(3, 146) = 12.44, p < 0.001). The direct effect of Spatial abilities on the True positive rate is significant (coefficient +0.32, 90 percent confidence interval (CI) = +0.17; +0.48), as well as the indirect effect as indicated by the bootstrapped intervals (coefficient +0.05, 90 percent CI = +0.00; +0.11). For Panel B, the overall model is significant (R² = 0.13, F(3, 146) = 7.38, p < 0.001). The direct effect of Spatial abilities on the Positive predictive rate is significant (coefficient +0.45, 90 percent CI = +0.25; +0.64), as well as the indirect effect as indicated by the bootstrapped intervals (coefficient +0.03, 90 percent CI = 0.00; +0.09). p-values are one-tailed when a directional prediction is made, and the results are consistent with that prediction.

mediators in the Online Appendix (i.e., using MANCOVAs and ANCOVAs). The results and inferences are similar
with both testing techniques.
Figure 1 reports the results of the mediation analysis to test H1. This analysis is performed using the Hayes
PROCESS model 4 (Hayes 2017). As shown in Figure 1, Panel A (Panel B), there is a significant (marginally significant)
indirect effect of Spatial abilities on the True positive rate (Positive predictive rate), through Visualization choice quality
as the mediator. There is also a statistically significant main effect of Spatial abilities on both the True positive rate and
the Positive predictive rate (p-values < 0.001). These results support H1 that individuals with higher spatial abilities are
more accurate at identifying anomalies, in part, because of their ability to better select the most appropriate visualiza-
tion type for the task.
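For readers unfamiliar with this style of analysis, the structure of a model-4 mediation can be sketched in a few lines. The Python code below is a hedged illustration, not the authors' analysis (they used Hayes's PROCESS macro): it estimates the indirect effect of X on Y through mediator M as the product of the X-to-M slope (path a) and M's slope in the regression of Y on X and M (path b), with a percentile bootstrap confidence interval:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect: path a from regressing M on X,
    path b is M's coefficient when regressing Y on X and M."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # coefficient on M
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.10, seed=0):
    """Percentile bootstrap CI (default 90 percent, matching the paper's reporting)."""
    rng = np.random.default_rng(seed)
    n, est = len(x), np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample cases with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return tuple(np.quantile(est, [alpha / 2, 1 - alpha / 2]))
```

An indirect effect whose bootstrapped interval excludes zero corresponds to what Figure 1 reports as significant mediation through Visualization choice quality.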
Figure 2 reports the results of our test for H2 with a mediation analysis performed using the Hayes PROCESS
model 4 (Hayes 2017) with feedback type coded as 0 for no feedback, 1 for outcome feedback, and 2 for task property
feedback. The results in Figure 2, Panel A indicate that there is a significant, positive effect of Feedback type on the True
positive rate, through Visualization choice quality as the mediator, as hypothesized. We provide discussion about the neg-
ative main effect after testing H3 below. The results shown in Figure 2, Panel B indicate there is also a significant indi-
rect and positive effect of Feedback type on the Positive predictive rate, through Visualization choice quality as the
mediator. These results support H2 that task property feedback results in more accurate identification of anomalies due
to the selection of more appropriate visualization types for the task.
Results testing H3 are reported in Figures 3 and 4. We use the moderated-mediation Hayes PROCESS model
8 (Hayes 2017) for testing. As shown in Figures 3 and 4, the interaction between Spatial abilities and Feedback type on
Visualization choice quality is significant (p = 0.049), but the interaction of the direct effect on the True positive rate is insignificant (p = 0.429). Therefore, the hypothesized interaction in H3 is partially supported. Figure 3, Panels A and B
indicates that individuals with higher spatial abilities are better at choosing superior visualizations when they receive no


FIGURE 2
Tests of H2
Panel A: Conditional Mediating Effects of X on Y (True positive rate)
Panel B: Conditional Mediating Effects of X on Y (Positive predictive rate)

Feedback type is dummy coded with a 0 for no feedback, 1 for outcome feedback, and 2 for task property feedback. For Panel A, the overall model is significant (R² = 0.41, F(3, 146) = 9.72, p < 0.001). The direct effect of feedback type on the True positive rate is negative and significant (coefficient −0.05, 90% CI = −0.09; −0.01), whereas the indirect effect of feedback type on the True positive rate is positive and significant (coefficient +0.02, 90% CI = +0.01; +0.04). For Panel B, the overall model is significant (R² = 0.05, F(3, 146) = 2.41, p < 0.070). The direct effect of feedback type on the Positive predictive rate is insignificant (coefficient +0.02, 90% CI = −0.03; +0.07); however, the indirect effect is significant as indicated by the bootstrapped intervals (coefficient +0.01, 90% CI = +0.00; +0.03). p-values are one-tailed when a directional prediction is made, and the results are consistent with that prediction.

feedback (coefficient +1.33, p = 0.005) or when they receive outcome feedback (coefficient +0.69, p = 0.017). However,
as shown in Figure 3, Panel C, when individuals receive task property feedback, the relationship between Spatial abilities
and Visualization choice quality is insignificant (coefficient +0.06, p = 0.451). This finding suggests that task property
feedback levels the playing field for individuals with varying spatial abilities, enabling them to make comparable
visualization choices and, in turn, improving anomaly identification. However, even when individuals receive task
property feedback, there is still a significant, positive direct effect of Spatial abilities on the True positive rate
(coefficient +0.30, p = 0.016), indicating that the spatial advantage is not entirely negated by the task property feedback
or improved visualization choices.
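The conditional effects reported here amount to probing the interaction at each level of the moderator. A hedged sketch of that probing with synthetic data follows; the simulated slopes only loosely echo the pattern above and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
feedback = rng.integers(0, 3, n)      # 0 = none, 1 = outcome, 2 = task property
spatial = rng.uniform(0.3, 1.0, n)    # spatial-abilities score (hypothetical scale)

# Assumed pattern (illustrative only): spatial abilities matter less
# for visualization choice once task property feedback is provided
true_slope = {0: 1.3, 1: 0.7, 2: 0.1}
viz_quality = np.array([true_slope[int(f)] for f in feedback]) * spatial \
              + rng.normal(0, 0.3, n)

def slope(x, y):
    """OLS slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Conditional effect of spatial abilities within each feedback condition,
# analogous to the per-panel coefficients in Figure 3
for level, label in [(0, "no feedback"), (1, "outcome"), (2, "task property")]:
    m = feedback == level
    print(f"{label}: conditional slope = {slope(spatial[m], viz_quality[m]):.2f}")
```

A large slope under no feedback and a near-zero slope under task property feedback would mirror the "leveling the playing field" result described above.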
Similarly, Figure 4, Panel A indicates that individuals with lower spatial abilities are better at choosing superior
visualizations when they receive task property feedback compared to outcome feedback or no feedback (coefficient
+0.35, p < 0.001). However, the effect of task property feedback on Visualization choice quality is insignificant
when individuals have inherently higher spatial abilities, as shown in Panel C (coefficient +0.11, p = 0.162).8 This
finding suggests that those with higher spatial abilities can already make good visualization choices, so their need for
task property feedback is reduced. However, the relationship between Feedback type and the True positive rate is only
partially mediated by Visualization choice quality, and there is also a significant, negative direct effect, shown in
Figure 2, Panel A (coefficient -0.05, p = 0.017). Further analysis (untabulated) indicates that this negative relationship
is driven by the comparison of the task property feedback and no feedback levels of the multicategorical variable
(coefficient -0.11, p = 0.044). This result suggests that task property feedback helps when it improves individuals'
visualization choices, whereas, when it does not improve visualization choices (such as when individuals have higher spatial

8 The results (untabulated) for both analyses are consistent when using the Positive predictive rate in the moderated-mediation analysis.


FIGURE 3
Tests of H3—Feedback Type as Moderator
Panel A: Conditional Effects of X on Y at the Values of the Moderator: No Feedback
Panel B: Conditional Effects of X on Y at the Values of the Moderator: Outcome Feedback
Panel C: Conditional Effects of X on Y at the Values of the Moderator: Task Property Feedback
[Panel diagrams omitted.]

Feedback type is dummy coded with a 0 for no feedback, 1 for outcome feedback, and 2 for task property feedback. The overall model is significant (R2 = 0.23, F(5, 144) = 8.59, p < 0.001). The direct effect of Spatial abilities on the True positive rate is positive and significant (Panel A: coefficient +0.34, 90 percent CI = +0.09; +0.59; Panel B: coefficient +0.32, 90 percent CI = +0.17; +0.48; Panel C: coefficient +0.30, 90 percent CI = +0.07; +0.53). The indirect effect of Spatial abilities on the True positive rate is also positive and significant for Panels A and B (Panel A: coefficient +0.11, 90 percent CI = +0.02; +0.22; Panel B: coefficient +0.06, 90 percent CI = +0.01; +0.13; Panel C: coefficient +0.01, 90 percent CI = -0.07; +0.10). p-values are one-tailed when a directional prediction is made, and the results are consistent with that prediction.

abilities), it can have a detrimental effect on their subsequent anomaly identification. Because we do not have a
data-driven explanation for this finding, we encourage future research to investigate it. One possible explanation is
that individuals with higher spatial abilities can already make good visualization choices, so the specific type of task
property feedback we provide causes them to question their initial approach, worsening performance.


FIGURE 4
Tests of H3—Spatial Abilities as Moderator
Panel A: Conditional Effects of X on Y at the Values of the Moderator: 0.50 Spatial abilities
Panel B: Conditional Effects of X on Y at the Values of the Moderator: 0.75 Spatial abilities
Panel C: Conditional Effects of X on Y at the Values of the Moderator: 0.88 Spatial abilities
[Panel diagrams omitted.]

Feedback type is dummy coded with a 0 for no feedback, 1 for outcome feedback, and 2 for task property feedback. The overall model is significant (R2 = 0.23, F(5, 144) = 8.59, p < 0.001). The direct effect of Feedback type on the True positive rate is negative and significant for Panels B and C (Panel A: coefficient -0.05, 90 percent CI = -0.10; +0.01; Panel B: coefficient -0.05, 90 percent CI = -0.09; -0.01; Panel C: coefficient -0.05, 90 percent CI = -0.11; -0.00). The indirect effect of Feedback type on the True positive rate is positive and significant for Panels A and B (Panel A: coefficient +0.03, 90 percent CI = +0.01; +0.05; Panel B: coefficient +0.02, 90 percent CI = +0.00; +0.03; Panel C: coefficient +0.01, 90 percent CI = -0.01; +0.03). p-values are one-tailed when a directional prediction is made, and the results are consistent with that prediction.

We conduct additional supplemental analyses and report the results in the Online Appendix. For brevity, we
summarize those findings as follows:
1. Accounting students and professional participants did not significantly differ in their ability to choose superior
visualizations for identifying anomalies.


2. Accounting/finance experience interacts with feedback type, showing that task property feedback is particularly
helpful for auditors with more experience.
3. Spatial abilities have a significant impact on visualization choice when combined with a high amount of
accounting/finance experience.
4. Superior visualization choices do not significantly affect the number of false positive anomalies identified.
However, the combination of high spatial abilities and feedback can reduce the number of false positive
anomalies.
5. Participants performed best in choosing visualizations for identifying composition anomalies and struggled most
with comparison anomalies. They were better at identifying true positive anomalies for relationship anomalies
than for comparison anomalies.
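The performance measures used throughout — the true positive rate (share of seeded anomalies found) and the positive predictive rate (share of flagged items that are real anomalies) — can be made concrete with a small sketch. The transaction identifiers below are hypothetical.

```python
def anomaly_metrics(flagged, actual):
    """Return (true positive rate, positive predictive rate, false positive count)."""
    flagged, actual = set(flagged), set(actual)
    tp = len(flagged & actual)                    # correctly identified anomalies
    fp = len(flagged - actual)                    # flags that are not real anomalies
    tpr = tp / len(actual) if actual else 0.0     # share of real anomalies found
    ppv = tp / len(flagged) if flagged else 0.0   # share of flags that are real
    return tpr, ppv, fp

# A participant flags four transactions; three of the five seeded anomalies are among them.
tpr, ppv, fp = anomaly_metrics(
    flagged={"txn_02", "txn_07", "txn_11", "txn_19"},
    actual={"txn_02", "txn_05", "txn_07", "txn_11", "txn_14"},
)
print(tpr, ppv, fp)  # 0.6 0.75 1
```

On this reading, finding 4 above says better visualization choices raise tpr and ppv without by themselves lowering fp; only the combination of high spatial abilities and feedback reduces fp.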



V. IMPLICATIONS
Our results confirm that spatial abilities and task property feedback influence anomaly identification performance.
Individuals with higher spatial abilities and those receiving task property feedback make better visualization choices and
identify more true anomalies. Furthermore, task property feedback is particularly valuable to individuals with lower
spatial abilities; it narrows, but does not fully eliminate, the performance gap between those with low and high
spatial abilities.
This study contributes to the literature by supporting the extended cognitive fit model and demonstrating the importance
of spatial abilities and task property feedback for choosing suitable visualizations in anomaly identification tasks.
Practically speaking, we recommend that efforts to improve visualization tasks focus on feedback about visualization
choices and the use of visualizations. However, because our results indicate that task property feedback does not
completely close the gap between high and low spatial abilities, we also recommend measuring spatial abilities when
hiring and when assigning data visualization tasks. The study also highlights the potential to reduce the identification of
false anomalies when high spatial abilities are combined with task property feedback, offering efficiency gains for
practitioners using data analytic tools.
This study has limitations that provide avenues for future research (in addition to those already discussed). External
validity is limited because participants did not use actual data analytic tools. Furthermore, this study focuses solely on
anomaly identification and does not examine other steps in the data analytics process where other types of abilities and
skills could be more useful. Future research can explore additional factors for improving visualization choices and
anomaly identification to expand the extended cognitive fit model. For example, future research can benefit from
understanding how individual visualization preferences influence visualization choices and how this relationship
interacts with various feedback types. Finally, further investigation is needed to understand why auditors struggle with
some types of anomaly tasks more than others, such as how often they identify false anomalies for trends in time tasks.

REFERENCES
Abela, A. 2006. Choosing a good chart—The Extreme Presentation(tm) method. https://extremepresentation.typepad.com/blog/
2006/09/choosing_a_good.html
American Institute of Certified Public Accountants (AICPA). 2017. Guide to Audit Data Analytics. New York, NY: AICPA.
Anderson, J. 2005. Cognitive Psychology and Its Implications, 6th edition. New York, NY: Worth Publishers.
Appelbaum, D., A. Kogan, and M. A. Vasarhelyi. 2017. Big Data and analytics in the modern audit engagement: Research needs.
Auditing: A Journal of Practice & Theory 36 (4): 1–27. https://doi.org/10.2308/ajpt-51684
Austin, A. A., T. M. Carpenter, H. Christ, and C. Nielson. 2021. The data analytics journey: Interactions among auditors,
managers, regulation, and technology. Contemporary Accounting Research 38 (3): 1888–1924. https://doi.org/10.1111/
1911-3846.12680
Bonner, S. E. 2008. Judgment and Decision Making in Accounting. Upper Saddle River, NJ: Pearson/Prentice Hall.
Brown-Liburd, H., H. Issa, and D. Lombardi. 2015. Behavioral implications of Big Data’s impact on audit judgment and decision
making and future research directions. Accounting Horizons 29 (2): 451–468. https://doi.org/10.2308/acch-51023
Bryant, S., U. Murthy, and P. Wheeler. 2009. The effects of cognitive style and feedback type on performance in an internal
control task. Behavioral Research in Accounting 21 (1): 37–58. https://doi.org/10.2308/bria.2009.21.1.37
Business-Higher Education Forum and PwC (BHEF and PwC). 2017. Investing in America’s data science and analytics talent:
The case for action. https://www.bhef.com/sites/default/files/bhef_2017_investing_in_dsa.pdf
Chang, C. J., and Y. Luo. 2021. Data visualization and cognitive biases in audits. Managerial Auditing Journal 36 (1): 1–16.
https://doi.org/10.1108/MAJ-08-2017-1637


Dilla, W., D. J. Janvrin, and R. Raschke. 2010. Interactive data visualization: New directions for accounting information systems
research. Journal of Information Systems 24 (2): 1–37. https://doi.org/10.2308/jis.2010.24.2.1
Dilla, W. N., and P. J. Steinbart. 2005. Using information display characteristics to provide decision guidance in a choice task
under conditions of strict uncertainty. Journal of Information Systems 19 (2): 29–55. https://doi.org/10.2308/jis.2005.19.2.29
Earley, C. E. 2015. Data analytics in auditing: Opportunities and challenges. Business Horizons 58 (5): 493–500. https://doi.org/
10.1016/j.bushor.2015.05.002
Emett, S. A., M. Eulerich, K. Lovejoy, S. L. Summers, and D. A. Wood. 2023. Bridging the digital skills gap in accounting: The
process mining audit professional curriculum and badge. Accounting Horizons (forthcoming). https://doi.org/10.2308/
HORIZONS-2022-131



Ernst & Young LLP (EY). 2018. Are people the most important variable in your data analytics equation? https://www.ey.com/
es_bo/digital/are-people-the-most-important-variable-in-your-data-analytics-eq
Eulerich, M., A. Masli, J. Pickerd, and D. A. Wood. 2023. The impact of audit technology on audit task outcomes: Evidence for
technology-based audit techniques. Contemporary Accounting Research 40 (2): 981–1012. https://doi.org/10.1111/1911-3846.12847
Gaynor, L. M., L. S. McDaniel, and T. L. Neal. 2006. The effects of joint provision and disclosure of nonaudit services on audit
committee members’ decisions and investors’ preferences. The Accounting Review 81 (4): 873–879. https://doi.org/10.2308/
accr.2006.81.4.873
Hayes, A. F. 2017. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, 2nd
edition. New York, NY: Guilford Press.
Jans, M., and M. Eulerich. 2022. Process mining for financial auditing. In Process Mining Handbook, edited by W. M. P. van der
Aalst and J. Carmona. Cham, Switzerland: Springer International Publishing.
Kelton, A. S., R. R. Pennington, and B. M. Tuttle. 2010. The effects of information presentation format on judgment and decision
making: A review of the information systems research. Journal of Information Systems 24 (2): 79–105. https://doi.org/10.2308/
jis.2010.24.2.79
Kluger, A. N., and A. DeNisi. 1996. The effects of feedback interventions on performance: A historical review, a meta-analysis
and a preliminary feedback intervention theory. Psychological Bulletin 119 (2): 254–284. https://doi.org/10.1037/0033-
2909.119.2.254
Leung, P. W., and K. T. Trotman. 2005. The effects of feedback type on auditor judgment performance for configural and non-
configural tasks. Accounting, Organizations and Society 30 (6): 537–553. https://doi.org/10.1016/j.aos.2004.11.003
Libby, R., M. W. Nelson, and R. Bloomfield. 2002. Experimental research in financial accounting. Accounting, Organizations and
Society 27 (8): 775–810. https://doi.org/10.1016/S0361-3682(01)00011-3
Luo, W. 2019. User choice of interactive data visualization format: The effects of cognitive style and spatial ability. Decision
Support Systems 122: 113061. https://doi.org/10.1016/j.dss.2019.05.001
Lurie, N., and C. Mason. 2007. Visual representation: Implications for decision making. Journal of Marketing 71 (1): 160–177.
https://doi.org/10.1509/jmkg.71.1.160
Potter, C., and E. Van der Merwe. 2003. Perception, imagery, visualization and engineering graphics. European Journal of
Engineering Education 28 (1): 117–133. https://doi.org/10.1080/0304379031000065216
PwC. 2015. Data driven: What students need to succeed in a rapidly changing business world. https://cpb-us-w2.wpmucdn.com/
sites.gsu.edu/dist/1/1670/files/2015/08/pwc-data-driven-paper-1wdb00u.pdf
Richardson, V. J., and M. W. Watson. 2021. Act or be acted upon: Revolutionizing accounting curriculums with data analytics.
Accounting Horizons 35 (2): 129–144. https://doi.org/10.2308/HORIZONS-19-020
Romney, M. B., P. J. Steinbart, S. L. Summers, and D. A. Wood. 2021. Accounting Information Systems, 15th edition. London,
U.K.: Pearson.
Rose, A. M., J. M. Rose, K. A. Sanderson, and J. C. Thibodeau. 2017. When should audit firms introduce analyses of Big Data
into the audit process? Journal of Information Systems 31 (3): 81–99. https://doi.org/10.2308/isys-51837
Schneider, G. P., J. Dai, D. J. Janvrin, K. Ajayi, and R. L. Raschke. 2015. Infer, predict, and assure: Accounting opportunities in
data analytics. Accounting Horizons 29 (3): 719–742. https://doi.org/10.2308/acch-51140
Severino, R. n.d. The data visualization catalogue. https://datavizcatalogue.com/index.html
Shaft, T. M., and I. Vessey. 2006. The role of cognitive fit in the relationship between software comprehension and modification.
Management Information Systems Quarterly 30 (1): 29–56. https://doi.org/10.2307/25148716
Sorby, S. A. 2007. Developing 3D spatial skills for engineering students. Australasian Journal of Engineering Education 13 (1): 1–
11. https://doi.org/10.1080/22054952.2007.11463998
Speier, C. 2006. The influence of information presentation formats on complex task decision-making performance. International
Journal of Human-Computer Studies 64 (11): 1115–1131. https://doi.org/10.1016/j.ijhcs.2006.06.007
Speier, C., and M. G. Morris. 2003. The influence of query interface design on decision-making performance. Management
Information Systems Quarterly 27 (3): 397–423. https://doi.org/10.2307/30036539
Swink, M., and C. Speier. 1999. Presenting geographic information: Effects on data aggregation, dispersion, and users' spatial
orientation. Decision Sciences 30 (1): 169–195. https://doi.org/10.1111/j.1540-5915.1999.tb01605.x


Uttal, D. H., D. I. Miller, and N. S. Newcombe. 2013. Exploring and enhancing spatial thinking: Links to achievement in science,
technology, engineering, and mathematics? Current Directions in Psychological Science 22 (5): 367–373. https://doi.org/
10.1177/0963721413484756
Vasarhelyi, M. A. 2013. The emerging role of audit analytics: Internal audit should embrace data analytics. http://raw.rutgers.edu/
node/89 (last accessed June 13, 2023).
Vasarhelyi, M. A., A. Kogan, and B. M. Tuttle. 2015. Big data in accounting: An overview. Accounting Horizons 29 (2): 381–396.
https://doi.org/10.2308/acch-51071
Vera-Muñoz, S. C., W. R. Kinney, Jr., and S. E. Bonner. 2001. The effects of domain experience and task presentation format on
accountants' information relevance assurance. The Accounting Review 76 (3): 405–429. https://doi.org/10.2308/accr.2001.76.3.405
Vessey, I. 1991. Cognitive fit: A theory-based analysis of the graphs versus tables literature. Decision Sciences 22 (2): 219–240.
https://doi.org/10.1111/j.1540-5915.1991.tb00344.x
Wai, J., D. Lubinski, and C. P. Benbow. 2009. Spatial ability for STEM domains: Aligning over 50 years of cumulative
psychological knowledge solidifies its importance. Journal of Educational Psychology 101 (4): 817–835. https://doi.org/10.1037/a0016127
Yang, W., H. Liu, N. Chen, P. Xu, and X. Lin. 2020. Is early spatial skills training effective? A meta-analysis. Frontiers in
Psychology 11: 564679. https://doi.org/10.3389/fpsyg.2020.01938

