
International Journal of Accounting Information Systems 50 (2023) 100638


Can knowledge-based systems be designed to counteract deskilling effects?

Vicky Arnold a, Philip A. Collier b, Stewart A. Leech b, Jacob M. Rose c, Steve G. Sutton a,*

a NHH Norwegian School of Economics, Norway, and University of Central Florida, United States
b University of Melbourne, Australia
c University of Northern Colorado, United States, and Monash University, Australia

A R T I C L E  I N F O

Keywords:
Knowledge-Based Systems
Artificial Intelligence
Deskilling
Knowledge Structures
Explanation Systems

A B S T R A C T

Recent calls in the information systems research community argue that we know intelligent systems deskill users, and future research
should focus on how to design systems that do not deskill, rather than continue to examine whether the phenomenon occurs. This
should be a wakeup call for public accounting firms focused on implementing restrictive audit support systems, which lead to
deskilling of novice accounting professionals. Our research focuses on redesigning knowledge-based systems to facilitate expertise
development and counteract the deskilling effects that result from use of such systems. Specifically, we manipulate the design of the
system interface by providing information cues in a screen format consistent with expert knowledge representations and manipulate
automatic provision versus voluntary use of explanations for users during task completion. Results show that after using the
knowledge-based system to complete a series of reenacted client engagements over a three-day period, both the interface design
manipulation and automatic provision of explanations had a positive effect on novice accounting professionals’ development of
expert-like knowledge structures. The results of the study have important implications for the development of knowledge-based
systems intended to support accounting professionals’ (and other knowledge workers’) expertise development processes.

1. Introduction

Researchers from multiple disciplines across the information systems (IS) spectrum are increasingly raising concerns over technology
dominance and automation bias, along with the associated deskilling and attrition of cognitive abilities (Axelson, 2014;
Balasubramanian et al., 2017; Hancock, 2014; Sutton et al., 2023; Wortmann, 2019; Asatiani et al., 2020; Rinta-Kahila et al., 2018;
Sutton et al., 2018). As these concerns heighten, researchers are exploring how to alleviate such effects. Balasubramanian et al. (2017)
review several proposed solutions that include “slowing the technology” and “difficult-to-use interfaces”, but these solutions are
almost certainly unpalatable in business environments focused on work efficiency. Asatiani et al. (2020) present more viable solutions
revolving around division of tasks between humans and computers, but again the desire of business organizations to automate and
control will inevitably conflict with such prescriptions.

* Corresponding author.
E-mail address: SGSutton@UCF.edu (S.G. Sutton).

https://doi.org/10.1016/j.accinf.2023.100638
Received 10 February 2022; Received in revised form 11 February 2023; Accepted 20 July 2023
Available online 24 July 2023
1467-0895/© 2023 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).

The major public accounting firms exemplify the conflict posed by a desire to automate and control as they increasingly implement
restrictive audit support systems designed to formalize and enforce the firm’s audit methodology (Boland et al., 2018; Dowling and
Leech, 2007, 2014), improve audit efficiency and effectiveness (Banker et al., 2002; Bedard et al., 2008; Dowling, 2009), increase
competitive advantage (Carson and Dowling, 2012), and enhance consistency of documentation in defense of regulatory pressures1
(DeFond and Lennox, 2011; Dowling and Leech, 2014; Dowling et al., 2018). These audit support systems become a form of
management control system that enforces the firm’s methodology and induces consistency in audit procedures across engagements; but, to
be effective, these systems must actively restrict auditors’ independent behavior (Dowling and Leech, 2014).
Amidst these efforts to steadily increase automation of the audit process and structure the guidance provided during audit
execution, little attention has been given to the potential for technology dominance effects and associated deskilling of auditors
(Arnold and Sutton, 1998; Dowling and Leech, 2014). Deskilling, which results from overreliance on technology, can occur in two
ways: (1) from professionals losing previously acquired knowledge as it atrophies from lack of use, or (2) from new professionals’
failure to develop knowledge that is traditionally acquired through experience (Arnold and Sutton, 1998; Sutton et al., 2023). These
effects could have significant ramifications by limiting development of auditor expertise, as auditors increasingly are dependent on
such audit support systems (Arnold and Sutton, 1998; Axelson, 2014; Dowling and Leech, 2007, 2014). Initial evidence on the effects of
restrictive audit support systems on the development of staff auditors’ knowledge indicates that such deskilling is occurring and is
exacerbated when systems are more structured and provide more extensive guidance (Axelson, 2014; Dowling et al., 2008; Stuart and
Prawitt, 2012). These findings raise the research question, “Can systems be designed to both enhance performance and mitigate the
risk of deskilling?”.
The purpose of this study is to explore the potential for specific system design enhancements to promote the development of expert-
like knowledge structures in novice accounting professionals that will enhance expertise development when using knowledge-based
systems. We focus on the approach suggested by Sutton et al. (2018), Sutton et al. (2023) to counteract deskilling effects by focusing on
design artifacts that promote expertise development—particularly relational reasoning and pattern-recognition. Two specific design
interventions are explored along these expertise-development strategies. First, research has shown that in less complex decision
environments, developing system interfaces that organize screens to represent the patterns in expert cognitive structures facilitates the
transfer of expert knowledge structures to users (Rose et al., 2012). We extend this research to consider the viability of using such an
interface when the decision environment is complex and there are vastly more information cues that must be considered. Second,
researchers have theorized that providing system explanations along with guidance to users may facilitate knowledge transfer from a
knowledge-based system when the cognitive effort required can be minimized (Gregor and Benbasat, 1999). However, high quality
explanations can be complex to implement because explanations that are not geared to the current knowledge level of users can be
dysfunctional (Arnold et al., 2006). To date, little research has examined knowledge-based systems that address these concerns. We
study the potential to enhance the value of explanations by developing a novel system that systematically alters the knowledge level of
explanations provided as accounting professionals gain experience.
This study examines the potential benefits of two system-design interventions on accounting professionals’ development of
cognitive knowledge structures. The experimental process was embedded within a three-day staff training session for accounting firm
staff. The training process adopted constructivist learning strategies that prescribe immersion of the trainee in a real-world experience
simulated to replicate actual work engagements (Crowe et al., 1996; Hannafin and Land, 2000; Hirumi, 2002; and Jonassen et al.,
2008). A series of actual engagements were modeled through computer simulations that provided access to essentially full evidence
gathering through mechanisms such as actual client documentation and reenacted client interviews to enhance realism and provide the
engagement experience in a condensed time frame. The complexity of the simulated engagements steadily increased over the three-day
training session. Six client engagements were reenacted and executed during the training, all being completed by insolvency
practitioners with the assistance of a knowledge-based system to facilitate decision-making. Participants used one of four different
knowledge-based system implementations that were derived from a 2 × 2 experimental design where we manipulated the interface
design (generic versus expert knowledge structure interface) and the provision of explanations (automatically provided or voluntarily
accessed) during system use. Other than these alternative features in the knowledge-based system, the training process was identical
across all training sessions even to the point of using pre-drafted scripts that the instructors carefully followed in facilitating the work
experience simulations that drove the process.
Data were collected from 67 novice accounting professionals completing a three-day training session. The results indicate that use
of experts’ cognitive knowledge structures to organize the system interface and use of automatic explanations enhanced expertise
development in a highly complex decision domain. We find that the potential deskilling of auditors can be mitigated, at least in part, by
systems that visually represent the knowledge of experts or reduce the cognitive demands by automating explanations and thereby
assisting in the conveyance of expertise to users.
The results of the research are important to both theory and practice. First, the results indicate that the use of experts’ cognitive
knowledge structures to drive interface design for systems supporting complex decisions can improve users’ acquisition of expert-like
knowledge structures. Second, by providing automatic explanations that match a user’s level of knowledge acquisition, users develop
cognitive knowledge structures that more closely resemble those of experts despite work activities being focused on performance-
oriented activities. This latter effect is significant in that prior research on explanations (e.g. Eining and Dorr, 1991; Mascha, 2001;
McCall et al., 2008; Smedley and Sutton, 2004, 2007; Steinbart and Accola, 1994) has focused on development of fact-based and rule-
oriented knowledge, but not on the cognitive knowledge structures that are necessary for the development of expertise (e.g. Choo and
Curtis, 2000; Davis and Yi, 2004; Day et al., 2001; Goldsmith and Davenport, 1990; Holyoak, 2012; Kraiger et al., 1993; Rose et al.,
2007; Schvaneveldt, 1990). Third, the results suggest that it is possible to counteract some of the deskilling effects from using highly
structured knowledge-based systems through system design interventions.

1 The PCAOB has evaluated the audit processes embedded in all the major accounting firms’ audit support systems.

2. Background, theory and hypotheses

Decision aids such as audit support systems are an integral part of the work environment of professional accountants (see Boland
et al., 2018; Dowling and Leech, 2007). Such decision aids are also an integral part of accounting professionals’ learning experience
and opportunities for knowledge development (Dowling et al., 2008; Libby, 1995; Rose, 2002, 2005; Rose and Wolfe, 2000). However,
the focus of such systems is largely on work performance and ensuring the efficiency and effectiveness of the work process along with
the documentation needed to meet statutory and regulatory requirements (Banker et al., 2002; Bedard et al., 2008; Boland et al., 2018;
DeFond and Lennox, 2011; Dowling and Leech, 2014; Dowling et al., 2018). Little consideration has focused on how such systems
affect the expertise development of novice professionals.
The lack of concern for the experience-building provided by such systems has both short and long-term ramifications that are
potentially significant. While novice professionals will presumably make better decisions when using these systems (Bedard et al.,
2008; DeFond and Lennox, 2011), research findings are mixed. Some research shows improvement in decision making when novices
use such systems (e.g. McCall et al., 2008), while others demonstrate the potential for such systems to have negative effects on decision
making and potentially accentuate decision biases (Arnold et al., 2004b; Masselli et al., 2002; Seow, 2011).
Theory suggests that in the long term, such systems may lead to deskilling as users become dependent on the systems to perform
tasks and do not develop the ability to perform the task themselves, nor the capacity to recognize when the system is not working
effectively (Arnold and Sutton, 1998). Preliminary evidence suggests these concerns are valid (McCall et al., 2008). Novices who used a
knowledge management system to complete a learning task performed better than their manual counterparts; but, when the system
was taken away, those learning with the knowledge management system performed significantly worse. Perhaps even more
concerning are the results from Dowling et al. (2008), which show that auditors averaging about six years of experience from firms using
restrictive audit support systems that provide strong decisional guidance performed much worse on a client business risk task than did
similar auditors from firms using less restrictive systems with voluntary use requirements. There was a pattern of deskilling in the
auditors from the firms with more restrictive audit support systems—the type of systems that are increasingly preferred by both novice
users (Malaescu and Sutton, 2015) and the major international accounting firms deploying the systems (Boland et al., 2018; Dowling
and Leech, 2007, 2014; Dowling et al., 2018) despite the deskilling effects being well-recognized by senior auditors (Axelson, 2014).
The above leads to the basic research question, “Can these systems be designed in a way that both enhances performance and
mitigates the risk of deskilling the user?” (Sutton et al., 2023, 2018). Two potential system design interventions are considered in this
study. First, Rose et al. (2012) theorize that the development of a systems interface based on a visual layout consistent with expert
knowledge structures can promote the development of similar knowledge structures by novice users. In a small-scale prototype system,
they find evidence supporting the potential effectiveness of such a strategy, and research by Bierstaker et al. (2018) finds further
support for the value of displaying experts’ knowledge structures for the development of expertise by novices in a fraud risk assessment
context. Second, researchers for some time have explored the possibility of using explanation facilities embedded in systems to
facilitate knowledge transfer to users (see Gregor and Benbasat (1999) and Smedley and Sutton (2007) for reviews of related studies).
Gregor and Benbasat (1999) provide a synthesis of the explanation facilities literature from both design science and behavioral science
views to identify how explanation facilities can be designed and implemented to provide the most benefit to system users. They
theorize that an effective explanation facility provides explanations tailored to the specific user through automatic provision during
task completion.

Fig. 1. Stages of Individual Knowledge Acquisition.

2.1. Expert knowledge structures

The development of expert knowledge structures occurs in stages with the lower forms of knowledge serving as a foundation for
higher forms of knowledge (Anderson, 1990, 2000; Kraiger et al., 1993). As shown in Fig. 1, declarative knowledge develops as an
individual explores the definitions, rules, and examples associated with a decision domain and procedural knowledge uses declarative
knowledge to solve problems, make decisions, assess decision outcomes, and refine decision processes (Anderson, 2000; McCall et al.,
2008). As declarative and procedural knowledge work together, individuals begin to create a structure, which can be used in different
situations through relational patterns of cues (Fenwick, 2000; Holyoak, 2012). In more advanced stages, individuals become experts in
a domain as their knowledge structures become more expert-like (Cooke and Schvaneveldt, 1988; Davis et al., 2003; Holyoak, 2012).
Prior research suggests that experts differ from novices in the amount, content, and organization of their domain knowledge
(Sweller, 1988; Glaser and Chi, 1988; Chi et al., 1982; Holyoak, 2012). Compared to a novice, experts can perceive large meaningful
patterns in their domain, have superior short- and long-term memory for domain-relevant information, represent problems at a deeper
level, and spend more time analyzing a problem prior to attempting a solution. Further, knowledge organization is crucial for expert
performance (Davis et al., 2003; Sweller, 1993; Holyoak, 2012) and is highly correlated with future decision performance (Kraiger
et al., 1995).
The challenge in accounting research has been one of identifying who really is an expert (Bonner and Lewis, 1990). While expe­
rience leads to the opportunity for a professional to acquire knowledge and develop expertise, experience is a necessary but insufficient
condition for expertise development (Bonner and Lewis, 1990; Bonner et al., 1997; Libby, 1995; Libby and Luft, 1993). Learning from
experience is hampered by not having a schema that is appropriate for cataloging experiences and storing the declarative and
procedural knowledge to which the decision maker is exposed (Bonner et al., 1997; Holyoak, 2012). Getting the right categorization frame
is imperative to the formation of a knowledge structure that facilitates expert decision-making (Dumas et al., 2013; Kopp & O’Donnell,
2005; O’Donnell, 2003; Schultz et al., 2010). For more complex decision processes, a holistic template can aid the decision makers’
performance and facilitate the handling of the associated large number of information cues (Brewster, 2011; Holyoak, 2012).
Technology-based systems are one way of providing the necessary frame (O’Donnell and Schultz, 2003).
Several methods are available to elicit knowledge representations, including word associations, card sorting exercises,
multidimensional scaling, recall and chunking of concepts, and Pathfinder Network Analysis (Cooke, 1994; Cooke and Schvaneveldt, 1988;
Davis et al., 2003). These methods have several steps in common, which are necessary to determine an individual’s representation and
organization of a domain of knowledge: (1) definition of the concept domain/referent structure, (2) collection of the relatedness
judgments from experts and novices, (3) use of the relatedness data to define a representation of the knowledge, and (4) interpretation
and evaluation of the representation (Cooke, 1994; Kraiger et al., 1993). The output from these knowledge mapping techniques can be
used to determine a person’s knowledge structure and to create a two-dimensional representation or knowledge map of that structure
(Goldsmith et al., 1991; Taricani and Clariana, 2006).
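
To make the network-derivation step concrete, the sketch below reduces a symmetric distance matrix to a Pathfinder network under the common PFNET(q = n - 1, r = infinity) parameterization. It is a minimal illustration in Python, not the specific implementation used in the studies cited above.

    import numpy as np

    def pathfinder_network(dist):
        """Reduce a symmetric distance matrix to a PFNET(q = n-1, r = inf).

        With r = inf, a path's weight is the largest link weight along it;
        a direct link (i, j) survives only when no indirect path between
        i and j has a smaller maximum link weight.
        """
        n = dist.shape[0]
        minimax = dist.astype(float).copy()
        # Floyd-Warshall with max() as the path-composition operator.
        for k in range(n):
            minimax = np.minimum(minimax, np.maximum(minimax[:, [k]], minimax[[k], :]))
        # Keep an edge when its direct distance equals the minimax path distance.
        return (dist <= minimax) & ~np.eye(n, dtype=bool)
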
Effective knowledge conveyance from knowledge-based systems will result in knowledge structures in novices that resemble
experts’ knowledge structures and cognitive structures that change as a person becomes more of an expert in the subject (Goldsmith et al.,
1991). Studies have investigated the differences in knowledge maps between experts and novices (Cooke and Schvaneveldt, 1988;
Gillan et al., 1992; McKeithen et al., 1981). Results indicate that experts use fewer information cues and have more organized
knowledge structures than novices. In a study of the knowledge structures of human/computer interface design experts, experts clearly
separated networks and sub-networks while novices had much less differentiation, more links, and a higher number of weak links
(Gillan et al., 1992). Research suggests that using the knowledge structure of experts to develop knowledge transfer interventions will
increase the efficacy of the intervention (e.g., Bielaczyc et al., 1995; Kraiger et al., 1995; Rose et al., 2007; Rose et al. 2012; Sanchez,
2004). This leads to the following hypothesis:
H1: Professionals using a knowledge-based system with an interface resembling the knowledge structure of an expert will develop a
knowledge structure that is closer to that of an expert than will users of systems without such interfaces.

2.2. Explanation facilities

Since the earliest development of knowledge-based systems for facilitating decision making, explanation facilities have been
perceived as an important and valued feature. However, the development and delivery of these explanation facilities have rarely been
theory driven, but rather an artifact of the designer’s intuition. Frequently they come in the form of help menus that must be searched
and often drilled down through multiple hyperlinks (Gregor and Benbasat, 1999). Gregor and Benbasat (1999) put forth a theoretical
basis for understanding how explanation facilities should be constructed, made available, and managed. Their theory suggests that the
explanations should match the level of knowledge of the user, and explanations should be automatically, but unobtrusively, provided
to the user (rather than the user expending cognitive effort to seek the explanations). Further, explanations should be provided in both
feedforward (to facilitate task completion as the work is being done) and feedback formats (explanation of the rationale behind a
decision outcome recommendation).
Anderson’s (1990, 2000) Adaptive Character of Thought-Rational (ACT-R) theory of knowledge acquisition is the most commonly
used theoretical basis for academic research looking at optimization of explanations (see Smedley and Sutton (2007) for a review). The
ACT-R version of Anderson’s theory is depicted in Fig. 1. As noted earlier, the theory posits that expertise development is a sequential
process where the earliest knowledge acquisition is at the declarative knowledge level (definitions, rules, examples), which is then
proceduralized into advanced knowledge (procedural knowledge) that refines and tunes how this knowledge is used in the decision-
making process. The process is iterative as knowledge structures developed to organize knowledge continually evolve as declarative
knowledge increases in amount and context. The outcome is the formation of more formalized knowledge structures that mature over
time as individuals begin to develop expertise. As expertise is developed, these formalized knowledge structures evolve into deep
structural knowledge of systematic relational patterns of information cues that are encoded into memory to facilitate recognition when
stimuli are received in future instances (Holyoak, 2012).
The provision of explanations that match users’ needs suggests that the type of explanation information provided should be
associated with the knowledge level of the user—declarative knowledge for new decision makers (early novices), more rule-based and
example-oriented explanations as the novice becomes more experienced, and instructive explanations as the novice begins to build
their knowledge base and proceduralize knowledge. This is consistent with the findings in Arnold et al. (2006) where novice ac­
counting professionals (staff/seniors) selected more declarative explanations and feedforward explanations that explained how to
complete the task. On the other hand, experienced accounting professionals (managers/partners) chose more feedback explanations
that explained why the system was suggesting a course of action and what information was used to determine that course of action.
This voluntary pattern of explanation use appears consistent with Anderson’s (2000) theorizations and suggests automatic explanation
provision building on a similar pattern should optimize knowledge transfer to a user, even when that user is focused on task completion
and the actual work at hand. This leads to the second hypothesis:
H2: Professionals using a knowledge-based system that systematically provides explanations that build the user’s knowledge base
will develop knowledge structures that are closer to those of an expert than will users of a system without such explanation provision.
Given that the two interventions (i.e., interface design and explanation provision) work in different fashions to promote devel­
opment of the user’s knowledge structure, synergies should arise from providing both. Prior accounting research demonstrates the
benefit of providing a cognitive frame to an accounting professional before task completion (Bonner et al., 1997; Brewster, 2011;
O’Donnell and Schultz, 2003; Schultz et al., 2010). If the interface intervention using the expert knowledge structure is effective in
transferring the knowledge structure to the user, a cognitive frame that facilitates the accounting professional’s assimilation of the
knowledge presented through the explanation facilities should develop. Thus, the inclusion of both interventions should facilitate more
rapid development of more expert-like knowledge structures than will the provision of either intervention alone. This leads to the third
and final hypothesis:
H3: Professionals using a knowledge-based system with an interface resembling the knowledge structure of an expert and that
systematically provides explanations that build the user’s knowledge base will develop knowledge structures that are closer to those of
an expert than will users who receive only the interface, only the systematic explanation provision, or neither.
One caveat that should be considered regarding this latter hypothesis is a counter hypothesis that suggests information load from all
the knowledge cues could overload a professional, and this overload could interfere with knowledge acquisition (Gregor and Benbasat,
1999; Rose and Wolfe, 2000; Mascha and Smedley, 2007). Mascha and Smedley (2007) find a strong relationship between task
complexity and the inhibiting effects of feedback on users’ task performance.

3. Methods

This study utilizes a 2 × 2 experimental design, with explanations manipulated as voluntary use vs. automatic provision and
interface design manipulated as a generic map vs. expert knowledge map. The dependent variable is the closeness of the participants’
knowledge map to that of an expert knowledge map. Sixty-seven novice insolvency professionals participated in the experiment over a
three-day period. The firm training (experiment) was conducted in an on-campus computer laboratory. The following sections will
detail the decision domain, the participants, experimental procedures and task, pathfinder analysis, development of cases, the
accelerated experience delivery system (INCASE), the knowledge-based system (INSOLVE), as well as the independent and dependent
variables.

3.1. Decision domain

Research on expertise in accounting and auditing emphasizes the critical role of task specific knowledge for promoting expert
decision-making (Bonner, 1990; Bonner et al., 1997). Even when tasks are similar, if the decision environment (such as industry being
audited) varies, then expert knowledge structures may develop differently (Moroney, 2007; Thibodeau, 2003). These differences are
particularly notable when the focus is on complex decisions that require sequential processing of multiple cues, the decision-making
process is iterative in nature, and judgments involve multiple cognitive processes such as hypothesis generation, information search,
pattern recognition, and hypothesis evaluation. While complex decision processes such as in the professional accounting environment
are difficult to study, they are the ones that provide the richest environment for studying expertise development and training
interventions (Moreno et al., 2007). These complex environments provide challenges to the researcher, however, as defining and limiting
the task environment to a level where specific experts are identifiable can be difficult (Bonner and Lewis, 1990; Libby, 1995).
For the current study, the domain of insolvency was selected due to its highly specialized nature (Arnold et al., 2004b). Insolvency
practice has existed internationally in many countries for a century and involves Certified Practicing Accountants/Chartered Ac­
countants (CPA/CA) taking over management of a company that is facing financial distress (i.e., experiencing significant going concern
issues) and evaluating the best alternative for the future of the company. Insolvency cases generally result in a complete or
partial liquidation, a sell-off to another company, or reconstruction by the CPA/CA to restore the company to solid financial footing
before returning operations to the prior directors and managers. Insolvency requires a great deal of expertise; otherwise, the
accounting firm/professional will not survive. In numerous cases, the CPA/CA assumes responsibility for losses incurred while
operations continue under their direction, and such losses must be less than the fees they can collect in order to be profitable (Arnold
et al., 2004b). Poor judgment puts the firm not only at legal risk, but also puts the professional at significant personal financial risk.
Thus, insolvency decision making provides a solid foundation from which to explore expertise development and expert knowledge
structures.

3.2. Participants

Participants were recruited from the major international accounting firms, national firms with strong insolvency practices, and
smaller boutique insolvency firms in Australia. The researchers requested that the firms send insolvency professionals with 1 to 3 years
of insolvency experience2 for three days of training on insolvency engagement decision-making. The limited experience level allows
participants to be familiar with the nature of insolvency engagements, but the training focuses on decision processes normally not an
integral part of insolvency professionals’ work until reaching at least 5 years’ insolvency experience and promotion to manager.3
Seventy insolvency professionals participated, of which 67 completed the entire training (see Table 1). Participants had a mean of
20.49 months of insolvency experience, a mean age of 25.01 years, and an approximately even split between males and females. All
participants worked for firms in Australia where insolvency practice is a major component of accounting firms’ practice. None of the
demographic variables are significantly correlated with the dependent variables.4
Each three-day session consisted of one of the four different treatments: voluntary provision of explanations with generic
knowledge map interface, voluntary provision of explanations with expert knowledge map interface, automatic provision of
explanations with generic knowledge map interface, and automatic provision of explanations with expert knowledge map interface. Because
of the nature of the training, only one treatment could be conducted at a time. It was not feasible to conduct a session with multiple
treatments as the system had to be reconfigured to support the specific manipulation being presented in that session.
In preparation for soliciting participants, we held a two-hour demonstration to a group of partners, directors, and managers
representing the various firms to demonstrate the experience-based software (INCASE) to be used at the heart of the training. Par­
ticipants were recruited by providing information about the training sessions and dates to insolvency firms across Australia and
soliciting participation, using the champions from the demonstration session to further encourage participation. Firms voluntarily
provided novice insolvency practitioners, and firms enrolled their employees in the sessions most convenient for the respective
employees.5 As a result, purely random assignment to treatments was not possible, and this also leads to unequal cell sizes.

3.3. Experimental procedures and tasks

The training sessions are based on a constructivist learning approach where the focus is on experiential learning (Crowe et al., 1996;
Hannafin and Land, 2000; Hirumi, 2002; Jonassen et al., 2008). Constructivist-based training is ideal for the experimental process in
that the training is focused on learning from actual experiences with engagement-like feedback throughout the simulation process.
Thus, the focus of the entire three days from a training standpoint is on completing re-created insolvency cases. No direct instruction is
conducted by the instructors except for explaining how to complete the knowledge mapping task, describing the experience-based
training tool (INCASE), describing the knowledge-based system (INSOLVE), announcing how to access and start each engagement
simulation, and debriefing the participants at the end of the three days. During the experimental process, a consistent script was used to
make sure each group received the same information and the same instructions from the researchers, including using the same
phrasing between all cases: “The next engagement is [XXXXX] and this one will take you about [XX] minutes to complete.” Fig. 2
provides a detailed overview of the experiment and activities that took place on each of the three days.
On Day One, participants were introduced to the nature of the training process, the philosophies behind constructivist training, and
asked to complete the informed consent. This was followed by completion of a pre-test of the knowledge mapping task, PATHFINDER
CONCEPT MAP, and collection of demographic information. Participants were then provided a short overview of the case delivery
system, INCASE, and completed a short tutorial with specific written instructions on how to use INCASE to complete a simulated
engagement.

2 Insolvency specialists often have two or more years of general audit experience before moving into the insolvency specialization area.
3 Staff and senior-level insolvency practitioners spend most of their time on financial analysis of the client business to improve understanding of financial resources and cash flows. All participants were from these position levels.
4 While there appears to be some variation between the treatments in the months of insolvency experience, the correlations with the dependent variables of interest (C-score for pretest (-0.036, p = .77), posttest (-0.008, p = .95), and change in C-score (0.025, p = .84)) are not significant. A broader categorization by senior versus staff level positions was also considered, correlating the binary categorization with the dependent variables of interest (C-score for pretest (-0.041, p = .63), posttest (0.051, p = .34), and change in C-score (0.080, p = .26)), with again none found to be significant. This is consistent with the expectation that professionals with less than three years of experience and in pre-manager positions would not have participated in the stages of insolvency decision making that are the subject of the training sessions. Demographic differences were also considered, examining the correlation of age with the dependent variables of interest (C-score for pretest (-0.089, p = .76), posttest (-0.057, p = .68), and change in C-score (0.030, p = .41)), and none are significant. Differences between the types of firm (International versus National/Local) were also considered. Again, none of the correlations of firm type with the dependent variables of interest (C-score for pretest (0.025, p = .42), posttest (-0.083, p = .75), and change in C-score (-0.091, p = .77)) are significant.
5 The training sessions were free for participants. Using grant funds, the researchers paid for participants’ accommodation and food during the three days; the firms absorbed the cost of travel and food outside of the training sessions.


Table 1
Demographic Information.

Group                                              Age                 Work Experience (months)   Insolvency Experience (months)
Generic Map with Voluntary Explanations (n = 12)   25.42 (SD = 2.021)  35.50 (SD = 10.032)        24.75 (SD = 14.001)
Generic Map with Automatic Explanations (n = 20)   24.05 (SD = 1.877)  29.90 (SD = 17.684)        21.44 (SD = 7.366)
Expert Map with Voluntary Explanations (n = 13)    26.23 (SD = 7.395)  20.08 (SD = 17.600)        15.58 (SD = 12.717)
Expert Map with Automatic Explanations (n = 22)    24.95 (SD = 2.149)  25.14 (SD = 16.551)        20.09 (SD = 12.588)
All (n = 67)                                       25.01 (SD = 3.703)  27.08 (SD = 16.464)        20.49 (SD = 11.686)

Fig. 2. Overview of Experiment.


Next, participants completed two engagement simulations using only INCASE to become familiar with the software. In
the afternoon, participants were introduced to the knowledge-based system, INSOLVE, and completed a tutorial on how to use
INSOLVE by going back through the simulated engagement from the original INCASE tutorial, but this time using INSOLVE to help
complete the task.6
Day Two was spent with participants completing steadily more complicated engagements using INSOLVE with two cases in the
morning and two cases in the afternoon. On Day Three, participants spent the morning session completing two additional
engagements, resulting in a total of eight engagement simulations. After taking a lunch break to remove them from the training focus,
participants completed a posttest knowledge mapping task for the company viability decision. This was followed by revisiting the two
original cases they had completed without INSOLVE on Day One (again without INSOLVE). The third day ended with a debriefing
session.7

3.4. Knowledge mapping task

Pathfinder network scaling provides a quantitative measure of a decision maker’s knowledge structure; and, pathfinder-based
measures of knowledge structures are predictive of performance, skill retention, and skill transfer (Choo and Curtis, 2000; Day
et al., 2001; Goldsmith and Davenport, 1990; Kraiger et al., 1993; Schvaneveldt, 1990). Rose et al. (2007) developed and tested a
graphical software application for measuring knowledge structures, pathfinder concept mapping, which allows participants to drag
and drop information concepts on a computer screen. Distances between every possible pair of concepts are calculated and used to
develop relatedness ratings referred to as C-scores. Participants were trained to use the knowledge mapping software using a very
simple task associating different animals and animal characteristics (shown in Fig. 3).
To use pathfinder network scaling for mapping insolvency decisions, the relevant concepts or information cues for decision-making
were identified. Using 27 experts to develop and 17 different experts to validate a knowledge-based system that reliably mimics
insolvency decision-making process, Leech et al. (1998, 1999) identified 25 information cues involved in assessing the viability of an
organization. The methods from Rose et al. (2007) were used to develop a software application that allowed participants to manipulate
the 25 cues identified by Leech et al. (1998, 1999) on a computer screen to measure participants’ knowledge structures. Fig. 4 shows
the screen that participants used to drag and drop insolvency concepts to visualize participants’ knowledge structures and calculate
C-scores.
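
The paper does not reproduce the closeness formula itself, so the sketch below assumes one widely used definition of C: the average Jaccard similarity between each concept’s neighborhoods in the two networks being compared. The function and variable names are illustrative.

    import numpy as np

    def closeness_c(net_a, net_b):
        """Closeness (C) of two Pathfinder networks over the same concept set.

        net_a, net_b: boolean adjacency matrices. Each concept's neighborhood
        in the two networks is compared with a Jaccard ratio, and the ratios
        are averaged across concepts.
        """
        sims = []
        for i in range(net_a.shape[0]):
            a = set(np.flatnonzero(net_a[i]))
            b = set(np.flatnonzero(net_b[i]))
            if a or b:  # concepts isolated in both networks are skipped
                sims.append(len(a & b) / len(a | b))
        return float(np.mean(sims)) if sims else 0.0

Under this definition, a participant’s posttest network would be scored against the composite expert network described later to produce the posttest C-score used as the dependent variable.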

3.5. Engagement simulation development

The engagement simulations used in this study were recreations of actual insolvency engagements using material provided by the
major insolvency firms. For each simulation, a partner from the engagement provided details of the engagement, which were recorded
and transcribed. These details were summarized and used as the basis for each engagement simulation. The firm also provided original
documents including key financial documents (which were edited to remove all identifying information). Scripts were written to
reconstruct key interactions between major parties such as insolvency practitioners, company directors, employees, and creditors/
suppliers. These scripts were used as a basis for videos in which other insolvency practitioners and staff served as actors. The actual
engagement partner who provided the engagement details then validated the simulation for completeness and accuracy, and revisions
were made as needed. Each case was then validated by an independent insolvency expert to ensure that the engagement simulation was
authentic and provided sufficient information for making an insolvency decision.
Since insolvency engagements develop over time, information was presented through INCASE in stages to simulate the evolution of
the engagement. These stages represent the various points in an engagement in which key decisions are made about the future of the
company. At the end of each stage, participants were asked to make a viability decision and were subsequently provided with feedback
from the engagement manager. The feedback, which included a recommendation and rationale for that recommendation, was
presented by the independent insolvency expert used to validate the simulations. The engagement simulations were designed to provide
the participants an opportunity to interact with a broad array of facts and information, to make decisions based on all available
information, and to compare their rationale to the rationale of an expert insolvency practitioner.

3.6. Engagement simulation delivery system—INCASE

The engagement simulation delivery process (i.e. training) was controlled through use of the INCASE system. INCASE, which was
developed to support constructivist learning, manages simulation delivery and allows content modules to be added and sequenced.
INCASE allows the experimenter, through a dashboard, to control the sequence and timing of engagement simulation components, keep
participants synced in starting engagement simulations simultaneously, monitor progress, and record all participant activity (see
Arnold et al., 2013 for a detailed description of the INCASE system).

6 All training sessions were completed in an on-campus computerized classroom where each workstation had two monitors so as to alleviate the cognitive load in moving from the INCASE software to INSOLVE.
7 Feedback during the debriefing was extremely positive. Many participants noted how much better the training was because they were actually doing activities rather than listening to an instructor, and a consistent theme was that completing the original engagement simulations a second time made it very clear that they had learned a substantial amount about insolvency practice during the training session, as the cases were much simpler to complete the second time.

Fig. 3. Pathfinder Concept Mapping Demo.


The system was used to provide the engagement information to participants including video streams of interviews with clients,
suppliers, customers, and financiers; copies of key documentation from the actual engagements; financial statements; financial
projections; and bid prices from potential suitors (see Fig. 5 for a sample screen). At the end of each stage and after making their
recommendation, the participants would enter engagement information into INSOLVE and receive the system’s recommendation,
revise or re-enter their own recommendation, document the reasoning behind their decision, and then receive feedback from the
engagement manager.

3.7. Knowledge-based system—INSOLVE

INSOLVE is a fully functional prototype knowledge-based system that mimics the decision making of expert insolvency
professionals. Participants enter information they consider relevant into INSOLVE, and INSOLVE serves as an electronic colleague by
prompting for additional information the system considers important in formulating a recommendation. The system has been
extensively validated using numerous insolvency professionals including both professionals involved in the knowledge elicitation
process and professionals independent of the development process (see Leech et al., 1998, 1999 for a detailed description). The
functionality of the system has been extended in subsequent versions, most notably with the addition of explanation facilities (Arnold
et al., 2004a). The fully functional system has also been the subject of multiple prior accounting studies exploring how knowledge-
based systems affect both expert and novice decision making as well as their interaction with the system (Arnold et al., 2004b,
Arnold et al., 2006).
INSOLVE was re-programmed for our research to replicate versions from earlier research studies (Arnold et al., 2004a, 2004b,
2006, 2013; Leech et al., 1998, 1999). Our version of INSOLVE was programmed for delivery through a web browser and incorporated
a researcher dashboard that allowed for deployment in alternative forms. The INSOLVE interface was altered so that the questions that
were automatically generated in earlier versions (information cues) were locked into a panel on the upper left of the screen, the
explanation facility was locked into a panel on the lower left of the screen, and the right part of the screen provided a graphical
representation of the information cues (see Figs. 6 and 7). The independent variables were manipulated using alternative versions of
the knowledge-based system INSOLVE.

4. Independent variables

4.1. Provision of explanations

As in past versions of INSOLVE (Arnold et al., 2004a, 2006), explanations were available via selection and search. Even though
research has shown that feedback in the form of explanations increases performance, system users often will not seek feedback
(O’Donnell and David, 2000). Explanation systems generally receive minimal use if the user must exert effort to access the information
(Arnold et al., 2006; Gregor and Benbasat, 1999). Thus, we manipulated the provision of explanations as either available via voluntary
selection (see lower left panel in Fig. 6) or automatically provided (see lower left panel in Fig. 7).
The provision of explanations was manipulated through the researcher dashboard for INSOLVE. The dashboard both controlled
whether explanations would be automatically presented on the screen and the knowledge level of those explanations. INSOLVE
provides both feedforward (to assist during actual task completion) and feedback (to explain recommendations) explanations.
Additionally, from a knowledge component perspective, both modes of explanations provide four levels of explanation: definitions
(basic knowledge), rule-trace (more advanced knowledge on how information cues are used), justification (more advanced knowledge
on examples and causal relationships between information cues), and strategic (knowledge documenting how decisions are derived by
INSOLVE) (see lower left panel in Fig. 6).
For the automatic provision of explanations (see lower left panel in Fig. 7), the system was coded as follows: (1) definitions for the
INSOLVE training simulations (Cases 1 and 2) and the first two work experience simulations (Cases 3 and 4); (2) rule-trace explanations
for Cases 5 and 6; and (3) strategic explanations for Cases 7 and 8. Thus, the level of knowledge explanations steadily increased for all
participants in the automatic explanation provision manipulation, moving from baseline declarative knowledge to procedural level
knowledge. The progressive nature of the explanations is consistent with Anderson’s (2000) theory (see Fig. 1); the latter four
engagement simulations (Cases 5–8) focused on expertise components centered around pattern recognition for assimilating
information cues (Holyoak, 2012) and relational reasoning skills (Dumas et al., 2013). Pattern recognition and relational reasoning are
considered key elements that should be promoted by knowledge-based systems to alleviate deskilling (Sutton et al., 2023, 2018).
Importantly, Anderson (2000) also notes that the learning process is recursive. Higher-level explanations in INSOLVE (i.e. rule-trace
and strategic) include hyperlinks to definition explanations that provide declarative knowledge as desired by the user as they reference
procedural level explanations.
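
As a compact restatement of this schedule, the sketch below maps case numbers to the explanation level served in the automatic-provision condition; the function name and string labels are illustrative, not part of INSOLVE.

    def explanation_level(case_number):
        """Explanation level automatically provided, per the schedule above."""
        if case_number <= 4:       # tutorial Cases 1-2 and work Cases 3-4
            return "definition"    # declarative knowledge
        elif case_number <= 6:     # Cases 5 and 6
            return "rule-trace"    # how information cues are used
        else:                      # Cases 7 and 8
            return "strategic"     # how INSOLVE derives its decisions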

4.2. Knowledge map interface

The knowledge map interface for INSOLVE was manipulated by using a generic map which displays the information cues in an
alphabetical sequence (see Fig. 7, enlarged with the ‘zoom in’ button) or as a composite expert knowledge map developed as
described below (see Fig. 6). The generic map with the alphabetical sequence used the same cues as the expert map, so that it was not
the display of information cues, but rather their visual organization, that drove any effects from the intervention.8

Fig. 4. Software that Allows Users to Drag and Drop Insolvency Concepts to Represent their Relatedness (Screens 1 and 2).

Fig. 5. INCASE Case Delivery System.
The expert knowledge map was developed using the 25 information cues previously discussed (under the Knowledge Mapping Task
heading). Prior research indicates that at least 15 information cues are needed to develop a meaningful representation of experts’
knowledge structures in complex decision domains (Rose et al., 2007, Rose et al., 2012). Thus, the 25 information cues required for an
insolvency viability decision are well-validated, exceed the number required for effective mapping of knowledge structures, and are
representative of a complex decision environment with highly specialized experts. This decision model provides a solid basis for
developing and validating the knowledge structures used to construct the expert knowledge map interface.
Fourteen expert insolvency practitioners participated in the development of the expert knowledge map. One of the researchers
visited each expert in their office, trained them on the knowledge mapping task using the animal and animal characteristics task
(shown in Fig. 3), and then the expert completed the insolvency knowledge mapping task (shown in Fig. 4). Pathfinder analysis was
used to convert the fourteen experts’ ratings into a graphical model of an expert knowledge map that represents the composite
knowledge structure of all fourteen experts.
The results of the pathfinder analysis indicate that the knowledge structures of the expert insolvency practitioners are highly
similar (C = 0.588).9 This result is consistent with expectations, indicates that a single composite knowledge map is representative of
insolvency experts’ knowledge structures, and provides a valid model for building a knowledge map interface to facilitate knowledge
transfer to a system user. The expert knowledge map is shown in Fig. 6.
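
The text does not specify the aggregation step, so the following sketch assumes one simple approach: average the fourteen experts’ distance matrices cue-by-cue and reduce the mean matrix with the pathfinder_network() routine sketched in Section 2.1. Averaging the derived networks instead would be an alternative.

    import numpy as np

    def composite_expert_network(expert_dists):
        """Composite Pathfinder network from several experts' distance data.

        expert_dists: list of symmetric distance matrices, one per expert.
        Assumes averaging of the raw distance matrices before the network
        reduction step (an illustrative assumption, not the paper's method).
        """
        mean_dist = np.mean(np.stack(expert_dists), axis=0)
        return pathfinder_network(mean_dist)  # from the earlier sketch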

4.3. Dependent variable

On Day One, participants completed a pre-test of the knowledge mapping task for the company viability decision prior to
commencing the training.

8 The choice of an alphabetical listing was deemed preferable to using a novice map organization as we were conducting actual training sessions for the firms. Thus, a negative condition would not have been ethically appropriate as it may have impaired participants’ knowledge acquisition.
9 C-scores can range from 0 to 1, with a score above 0.300 considered as representing strong agreement in knowledge structure.


Fig. 6. INSOLVE System with Expert Knowledge Map Interface and Voluntary Explanation Access.

After the training concluded on Day Three, participants completed a post-test of the knowledge mapping
task. The dependent variable of interest was the closeness (C-score) of the novice insolvency professional’s posttest knowledge map to
that of the composite expert knowledge map for the company viability decision. The pre-test C-score was used as a covariate to control
for any initial differences in knowledge structures.

5. Results

Fig. 8 reflects the change in C-scores by treatment group between the pretest and the posttest. Table 2, Panel A provides the
descriptive statistics for the C-scores by group. Fig. 9 graphically presents the posttest results. There are notable changes in the mean
posttest C-scores for participants when the interventions are made available in the knowledge-based system. Those participants
receiving neither the expert knowledge map nor automatically provided explanations have a mean posttest score of 0.197. Just
providing the knowledge map interface raises the mean to 0.272 and just automatically providing explanations raises the mean to
0.248. When both interventions are present, the mean is 0.274, which is approaching the 0.300 threshold that indicates consistency
with the expert knowledge maps.
The change in pretest to posttest C-score is even more interesting. The C-score for participants receiving neither intervention
actually decreased in the posttest (-0.015), while there are gains of 0.018 (automatic provision of explanations), 0.024 (expert
knowledge map interface) and 0.036 (both interventions) in consistency of knowledge maps for the three intervention treatment
groups.10
Table 2, Panel B provides the results of the ANCOVA analysis.11 The ANCOVA analysis is conducted using the posttest C-score as the
dependent variable, explanations (voluntary use vs. automatic provision) and interface design (generic map vs. expert knowledge
map) as the independent variables, and the pretest C-score as the covariate.
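
A minimal sketch of this model in Python with statsmodels appears below; the data file and column names (posttest_c, pretest_c, expert_map, auto_expl) are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical file: one row per participant with both factor codes,
    # the pretest C-score covariate, and the posttest C-score.
    df = pd.read_csv("cscores.csv")

    # Sum-to-zero factor coding so the Type III tests remain meaningful
    # with the unequal cell sizes noted in footnote 11.
    model = smf.ols(
        "posttest_c ~ C(expert_map, Sum) * C(auto_expl, Sum) + pretest_c",
        data=df,
    ).fit()
    print(anova_lm(model, typ=3))
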
H1 predicts that the provision of an expert knowledge map interface design for the knowledge-based system will help the user
develop more expert-like knowledge structures. The mean values indicate that by providing the expert knowledge map interface as
opposed to the generic knowledge map interface, participants’ mean posttest C-score rises from 0.229 to 0.273. The main effect for
the knowledge map interface is significant at p = 0.01.

10 The limited sample sizes available, given the desire to have experienced professionals participate, limit the power to test the significance of the differences between the pretest and posttest scores. However, in the treatment where participants received both the knowledge map and the explanations, the n = 22 allowed a t-test of differences, which is significant at p = .025.
11 Because of unequal cell sizes, a Levene’s test for homogeneity of variance was conducted, and the results indicate that the variances are not unequal (p = .98), supporting the use of the ANCOVA analyses.


Fig. 7. INSOLVE System with Generic (Alphabetic Cue) Interface and Automatic Explanation Provision.

H2 predicts that the automatic provision of explanations within a knowledge-based system will help the user develop more expert-
like knowledge structures. The mean values indicate that providing automatic explanations in a knowledge-sequenced fashion, as
opposed to only providing them for voluntary use, results in participants’ mean posttest C-score rising from 0.236 to 0.261.
The main effect for the automatic provision of explanations is marginally significant at p = 0.10.
H3 predicts that the greatest development of expert-like knowledge structures will occur when the system is designed to provide
both the knowledge map interface and the automatic explanation provision. Per Buckless and Ravenscroft (1990), we set the contrast
weight at +3 for the condition with both the knowledge map and explanations present and at -1 for each of the three other conditions.
The results of the planned contrast (-1, -1, -1, +3), as shown in Table 2, Panel C, support H3 (t = 1.633, p = 0.05).
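
To illustrate the contrast computation, here is a sketch of a pooled-variance planned contrast on the cell means; it ignores the pretest covariate for simplicity, and the function name is illustrative.

    import numpy as np
    from scipy import stats

    def planned_contrast(groups, weights=(-1, -1, -1, 3)):
        """One-tailed planned-contrast t-test across treatment cells.

        groups: one array of posttest C-scores per cell, ordered so the
        'both interventions' cell matches the +3 weight.
        """
        means = np.array([np.mean(g) for g in groups])
        ns = np.array([len(g) for g in groups])
        # Pooled within-cell variance (the one-way ANOVA mean square error).
        sse = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
        df_err = int(ns.sum()) - len(groups)
        mse = sse / df_err
        w = np.asarray(weights, dtype=float)
        t = (w @ means) / np.sqrt(mse * np.sum(w ** 2 / ns))
        return t, stats.t.sf(t, df_err)  # one-tailed p-value
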

5.1. Participant perspectives on INSOLVE system

During the debriefing at the end of the experiments, the participants' perceptions of various aspects of INSOLVE were solicited to better understand their experience. We used multi-item measures to assess perceived system restrictiveness, self-efficacy (how much control they felt over the system), system usefulness, and ease of use. All items were measured on seven-point Likert-type scales. The scales for system restrictiveness and self-efficacy were adapted from Dowling (2009), and the scales for system usefulness and ease of use were adapted from those used by Davis et al. (1989) and Venkatesh et al. (2003). A summary of the results of the debriefing is shown in Table 3.
System restrictiveness is important in that more restrictive systems can lead to greater deskilling (Arnold and Sutton, 1998; Sutton et al., 2023; Dowling et al., 2008), despite novice users' preference for restrictive systems, which they perceive as improving their task performance. Where 1 indicates high restrictiveness and 7 indicates high flexibility, participants on average perceived INSOLVE to be only slightly restrictive (3.2). This was quite consistent among the treatment groups, with the generic knowledge map, voluntary explanations (0,0) group at 3.2; the generic knowledge map, automatic explanations (0,1) group at 3.0; the expert knowledge map, voluntary explanations (1,0) group at 3.3; and the expert knowledge map, automatic explanations (1,1) group at 3.5. The difference between groups was not significant.

Fig. 8. Differences in Pretest and Posttest C-Scores by Treatment.


Self-efficacy approaches the restrictiveness dimension from an alternative perspective, focusing on the user's perceived control of the decision process. Where 1 indicates low self-efficacy and 7 indicates high self-efficacy, participants on average perceived their self-efficacy when using INSOLVE to be slightly on the low side (3.6). This was quite consistent among the treatment groups, with the (0,0) group at 3.8, (0,1) at 3.6, (1,0) at 3.9, and (1,1) at 3.5. The difference between groups was not significant.
System usefulness is often seen as key to user acceptance of technologies. In the training environment, as is typical in the professional accounting firm environment, the user had no choice but to use the system. However, system usefulness scores were not particularly high: where 1 represented not useful at all and 7 represented very useful in task completion, the overall average rating was 3.6. There was a little more variation among the treatment groups, with the (0,0) group at 4.4, (0,1) at 3.6, (1,0) at 3.7, and (1,1) at 3.2. An ANOVA on the overall ratings finds that the presence of explanations is marginally significant (p = .07) in decreasing perceptions of system usefulness.
Relatedly, we also gathered data on the perceived ease of use of INSOLVE, with 1 indicating strong disagreement and 7 indicating strong agreement that INSOLVE was easy to use. Overall mean responses were at the center of the scale (4.0). There was also some variation among the treatment groups on perceived ease of use, with the (0,0) group at 4.6, (0,1) at 3.9, (1,0) at 4.1, and (1,1) at 3.7. An ANOVA on the overall ratings finds that the presence of explanations significantly (p = 0.04) lowers perceived ease of use.
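For illustration, a minimal sketch of this comparison follows, using simulated ratings (the participant-level responses are not public); with a single two-level factor, the one-way ANOVA is equivalent to a pooled two-sample t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 7-point ease-of-use ratings, for illustration only: 25
# participants saw voluntary explanations and 42 saw automatic
# explanations, with pooled group means approximating those implied
# by the cell means in Table 3.
voluntary = np.clip(np.round(rng.normal(4.3, 1.2, size=25)), 1, 7)
automatic = np.clip(np.round(rng.normal(3.8, 1.2, size=42)), 1, 7)

# One-way ANOVA on the two-level explanations factor; with two groups
# this is equivalent to a pooled two-sample t-test (F = t^2).
f_stat, p_value = stats.f_oneway(voluntary, automatic)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```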
The latter two findings suggest that the implementation of the automatic explanations in INSOLVE may not have been completely successful at minimizing cognitive load, a concern raised by Gregor and Benbasat (1999) in their theory development. This may have limited the strength of our findings on the effect of explanations on knowledge structure improvement, where the results were only marginally significant. The result is also consistent with Mascha and Smedley's (2007) finding that feedback created greater cognitive load when participants performed complex tasks.
A group discussion was also conducted at the end of each training session, primarily to empower participants to suggest ways to improve the training. The themes were very consistent across the groups and align with the perceptions captured via the debriefing survey discussed above. Generally, participants would have preferred more feedback explanation from the manager and more training on how to use INSOLVE, which in its advanced state was challenging for them. In both cases we wanted to limit human intervention and promote learning through the engagement experiences and the use of INSOLVE. We perceived this was successful given the feedback on the use of INSOLVE in these discussions, where there was consistent agreement among participants, captured quite succinctly by one participant's quote:
“INSOLVE was difficult to use because of the interface. The menu structure was difficult. But it did make me think about things that
I might not have thought about—it prompted me to consider other issues.”

6. Conclusions

This study addresses a potential conflict between the efficiency and effectiveness gains that are perceived to arise from the use of standardized audit support systems and other knowledge-based systems for accounting professionals (Banker et al., 2002; Bedard et al., 2008; Dowling, 2009; Dowling and Leech, 2014) and the deskilling effects from use of such systems (Arnold and Sutton, 1998; Arnold et al., 2004b; Balasubramanian et al., 2017; Dowling et al., 2008; Dowling and Leech, 2014; Rinta-Kahila et al., 2018).


Table 2
Results of Hypotheses Testing.

Panel A: Descriptive Statistics for C-scores

                                      Generic Knowledge Map   Expert Knowledge Map   Total
Voluntary Provision of Explanations
  n                                   12                      13                     25
  Mean Posttest C-score               0.197                   0.272                  0.236
  St. Dev.                            0.087                   0.077                  0.089
  Change (Posttest − Pretest)         −0.015                  0.024                  0.005
  % Change                            −7.1%                   9.7%                   2.2%
Automatic Provision of Explanations
  n                                   20                      22                     42
  Mean Posttest C-score               0.248                   0.274                  0.261
  St. Dev.                            0.080                   0.079                  0.080
  Change (Posttest − Pretest)         0.018                   0.036                  0.028
  % Change                            7.8%                    15.1%                  12.0%
Total
  n                                   32                      35                     67
  Mean Posttest C-score               0.229                   0.273                  0.252
  St. Dev.                            0.085                   0.077                  0.083
  Change (Posttest − Pretest)         0.006                   0.032                  0.019
  % Change                            2.7%                    13.3%                  8.2%

Panel B: ANCOVA for Posttest C-scores

Source                                    SS      df   F        p-value
Model                                     0.087   4    3.636    0.01
Intercept                                 0.268   1    44.702   < 0.01
Interface Design                          0.030   1    5.030    0.01
Explanations                              0.010   1    1.651    0.10
Knowledge Map Interface * Explanations    0.007   1    1.131    0.15
Covariate: Pretest C-score                0.036   1    44.702   < 0.01
Error                                     0.372   62
Total                                     4.714   67

Panel C: Planned Contrasts

Contrast                                    df   t-value   p-value
Cell 4 > Cells 1, 2, 3 (+3, −1, −1, −1)     63   1.633     0.05

Fig. 9. Graphical Display of Posttest C-Scores by Condition.


We specifically examine two potential system design interventions that may help ameliorate these deskilling effects: use of an interface layout that mimics experts' knowledge structures, and the automatic provision of knowledge-based explanations during task completion.
Knowledge-based systems can lead the novice user to rely on the system for decisional guidance rather than actively engaging in guiding the decision processes (Sutton et al., 2023). Research shows that humans using such technologies are losing their ability to store information in long-term memory; their brains are adapting as the memory portion physically shrinks and they focus only on how to make the system work (Sparrow et al., 2011; Sutton et al., 2023). This means that the part of the brain that facilitates deep learning and the creation of deep knowledge structures in long-term memory is shrinking, leading to shallow decision-making based on limited information and limited cognitive effort (Loh and Kanai, 2016; Sutton et al., 2023). Knowledge-based systems commonly limit expertise development by impeding acquisition of deep, structured knowledge of systematic relational patterns (Chi and VanLehn, 2012; Holyoak, 2012; Goldwater and Schalk, 2016) and are theorized to lead to deskilling (Arnold and Sutton, 1998; Sutton et al., 2023).
The expanded Theory of Technology Dominance (TTD2) posits that countering deskilling (i.e., skilling through knowledge-based system use) requires promoting the encoding of relational patterns into memory (Sutton et al., 2023). However, research on analogical reasoning argues that the patterns need to be encoded by experts at a complex structural knowledge level that is separated from surface-level knowledge specific to particular occurrences (Gentner and Colhoun, 2010; Holyoak, 2012). As specific occurrences arise, they need to be contextualized within the relational patterns that facilitate pattern recognition at a deep knowledge level. This contextualization process requires professionals to focus on the overall decision context rather than the individual tasks they are completing (Sutton et al., 2023). To promote skill development, knowledge-based systems need to facilitate the encoding of key relational patterns that represent domain expertise and help users recognize how evidence fits into those relational patterns and expert decision processes.
This study systematically works toward effective interventions for knowledge-based systems used by professionals in the work production process, interventions that promote skilling rather than deskilling. We use a knowledge map developed from 14 expert insolvency professionals to design an interface and test its usefulness in facilitating user development of expert-like deep knowledge structures. Additionally, knowledge-based explanations are automatically provided to the novice accounting professional during task completion, both to facilitate task completion (feedforward explanations) and to assist the professional in assessing decisions (feedback explanations). These explanations help the user put evidence into the context of the overall decision process and understand how that evidence influences end decisions. The results of the study indicate that both interventions promote knowledge structure development in novice professionals who use knowledge-based systems. The combination of the two interventions yields the greatest gains in expert-like knowledge structures, despite the cognitive load potentially created by the more complex interface.
The results of this research are important to both theory and practice. From a theoretical standpoint, the research suggests that technology dominance effects such as deskilling can be addressed, at least in part, by considering how such systems are designed. While expert knowledge map interfaces may not seem as intuitive and user-friendly, the longer-term benefits may justify a greater emphasis on alternative interface designs. Further, most existing systems are built around help facilities that require the user to expend cognitive effort to seek and identify explanations that facilitate effective system use. Our research shows that when such explanations are automatically presented in a format that better matches the knowledge level of the expert, they are more effective for promoting expertise development than passive explanation facilities that require the user to seek help.
There are limitations to the study that should be considered when assessing the results. First, our sample is relatively small. The requirement for participants to attend three consecutive days of training at an offsite location placed a burden on participation, even though participants' reactions to the training were very positive. However, the long-term nature of our training task allowed us to avoid a weakness common to most studies that have examined the effects of system design features on users: most prior studies involve experiments with very brief use of a system during one session. Second, our research seeks to understand the effects of system use over time, but condensing multi-week engagements into sessions generally lasting about two hours potentially diminishes the effects of knowledge-based system use over an extended time period. Nonetheless, we were able to put participants through six engagements in an accelerated fashion while they used the knowledge-based system on a constant basis. Constructivist training has been effective in other domains for rapidly developing experience-based knowledge, but whether it also expedites the actual effects from knowledge-based system use is uncertain.

Table 3
Results of Debriefing Questions.

Perceptions of           Generic Map with         Generic Map with         Expert Map with          Expert Map with          All
INSOLVE                  Voluntary Explanations   Automatic Explanations   Voluntary Explanations   Automatic Explanations
                         (0,0 Group)              (0,1 Group)              (1,0 Group)              (1,1 Group)

System Restrictiveness   3.2                      3.0                      3.3                      3.5                      3.2
Self-efficacy            3.8                      3.6                      3.9                      3.5                      3.6
System Usefulness        4.4                      3.6                      3.7                      3.2                      3.6
Ease of Use              4.6                      3.9                      4.1                      3.7                      4.0

Notes: Items were measured on a 7-point scale.
For system restrictiveness, 1 indicates high restrictiveness and 7 indicates high flexibility.
For self-efficacy, 1 indicates low self-efficacy and 7 indicates high self-efficacy.
For system usefulness, 1 indicates not useful at all and 7 indicates very useful in task completion.
For ease of use, 1 indicates strong disagreement and 7 indicates strong agreement.


Third, prior research suggests that knowledge structures are critical to providing the frame for storing knowledge components (Kopp and O'Donnell, 2005). The short duration of the experimental treatments may have allowed the expert knowledge structures to develop more quickly but may not have provided the additional time necessary for categorizing the explanation knowledge within those structures; thus, the effect of the explanations may be stronger over a longer period of system use. Fourth, while our interventions promote knowledge structure development that is critical to expertise development, we do not know whether other deskilling effects, not captured here, may have been present. However, our research demonstrates that our interventions promote the pattern recognition that expertise specifically requires.
There are also implications for future research. First, this research initiates a discussion on how systems can be better designed to counter the effects of deskilling on professionals using such systems. Future research should consider other potential interventions that could mitigate these deleterious effects. Second, the research raises questions about the viability of active versus passive support, such as that found in explanation (i.e., help) facilities embedded in systems. The research here assumed a sequential pattern of matching explanations to the user's knowledge base without assessing differences among individual users (i.e., all users received the same progression of explanations). Future research should consider how systems can assess users' current knowledge state to better tailor explanations and other system help for those users. Finally, while this research identifies interventions through which accounting professionals can improve acquisition of expert-like knowledge structures, our understanding of the degree of deskilling effects that arise from technology use is still very limited. The risk of deskilling is great for the profession, and garnering a better understanding of how and to what degree deskilling is occurring is a critical research issue that needs greater attention.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to
influence the work reported in this paper.

Data availability

The authors do not have permission to share data.

Acknowledgements

The authors gratefully acknowledge financial support from the Australian Research Council (DP0878422), and participation by
many accounting firms in Australia. We thank Ali Abdolmohammadi, Kazeem Akinyele, Stephen Asare, Mark Beasley, Jean Bedard,
Frank Buckless, Derek Dalton, Carlin Dowling, Chris Earley, Robert Knechel, Justin Leiby, Robyn Maroney, Ania Rose, Scott Showalter,
Eileen Taylor, Jay Thibodeau, Sally Widener and participants at the International Symposium on Accounting Information Systems, and
workshops at New England Behavioral Accounting Research Symposium, Clemson University, Monash University, NHH Norwegian
School of Economics, North Carolina State University, Rutgers University, University of Florida and University of Technology Sydney
for their helpful comments on earlier versions of this paper. The authors would also like to thank Irina Malaescu and Andrew Vincent
for research assistance during this project.

References

Anderson, J., 1990. The Adaptive Character of Thought. Lawrence Erlbaum Associates, Hillsdale, NJ.
Anderson, J., 2000. Learning and Memory. John Wiley and Sons Inc, New York.
Arnold, V., Clark, N., Collier, P., Leech, S., Sutton, S., 2004a. Explanation provision and use in an intelligent decision aid. Intelligent Systems in Accounting, Finance
and Management 12, 5–27.
Arnold, V., Collier, P.A., Leech, S.A., Sutton, S.G., 2004b. Impact of intelligent decision aids on expert and novice decision-makers’ judgments. Account. Finance 44
(1), 1–26.
Arnold, V., Clark, N., Collier, P., Leech, S., Sutton, S.G., 2006. The differential use and effect of knowledge-based system explanations in novice and expert judgment
decisions. Manag. Inf. Syst. Q. 30 (1), 79–97.
Arnold, V., Collier, P., Leech, S., Sutton, S., Vincent, A., 2013. INCASE: simulating experience to accelerate expertise development by knowledge workers. Intelligent
Systems in Accounting, Finance and Management: An International Journal 20 (1), 1–21.
Arnold, V., Sutton, S., 1998. The theory of technology dominance: understanding the impact of intelligent decision aids on decision makers’ judgments. Adv. Acc.
Behav. Res. 1 (3), 175–194.
Asatiani, A., E. Penttinen, T. Rinta-Kahila, and A. Salovaara. 2020. Implementation of automation as distributed cognition in knowledge work organizations: Six
recommendations for managers. 40th International Conference on Information Systems, ICIS 2019.
Axelson, M., 2014. Technology Impeded Knowledge Acquisition and Retention: The Effects of Long-term Use of Intelligent Decision Aids on Auditor Professional
Knowledge. The University of Queensland. PhD Dissertation.
Balasubramanian, G., Lee, H., Poon, K., Lim, W., Yong, W., 2017. Towards establishing design principles for balancing usability and maintaining cognitive abilities.
Lecture Notes in Computer Science 10288.
Banker, R., H. Chang, and Y. Kao. 2002. Impact of information technology on public accounting firm productivity. Journal of Information Systems 16 (2): 209-222.
Bedard, J., D. Deis, M. Curtis, J. Jenkins. 2008. Risk monitoring and control in audit firms: A research synthesis. Auditing: A Journal of Practice & Theory 27 (1): 187-
218.
Bielaczyc, K., Pirolli, P.L., Brown, A.L., 1995. Training in self-explanation and self-regulation strategies: investigating the effects of knowledge acquisition activities on
problem solving. Cogn. Instr. 13 (2), 221–252.
Bierstaker, J., D. Downey, J. Rose, and J. Thibodeau. 2018. Effects of Stories and Checklist Decision Aids on Knowledge Structure Development and Auditor
Judgment. Journal of Information Systems 32(2): 1-24.
Boland, C., B. Daugherty, and D. Dickens. 2018. Evidence of the relationship between PCAOB inspection outcomes and the use of structured audit technologies.
Auditing: A Journal of Practice & Theory 38 (2): 57-77.


Bonner, S., 1990. Experience effects in auditing: the role of task-specific knowledge. Account. Rev. 65 (1), 72–92.
Bonner, S., Lewis, B., 1990. Determinants of auditor expertise. J. Account. Res. 28, 1–20.
Bonner, S., Libby, R., Nelson, M., 1997. Audit category knowledge as a precondition to learning from experience. Acc. Organ. Soc. 22 (5), 387–410.
Brewster, B. 2011. How a systems perspective improves knowledge acquisition and performance in analytical procedures. The Accounting Review 86 (3): 915-943.
Buckless, F., Ravenscroft, S., 1990. Contrast coding: a refinement of ANOVA in behavioral analysis. Account. Rev. 65, 933–945.
Carson, E., and C. Dowling. 2012. The competitive advantage of audit support systems: The relationship between extent of structure and audit pricing. Journal of
Information Systems 26 (1): 35-49.
Chi, M., Glaser, R., Rees, E., 1982. Expertise in problem solving. In: Sternberg, R. (Ed.), Advances in the Psychology of Human Intelligence. Erlbaum, Hillsdale, NJ.
Chi, M.T.H., VanLehn, K.A., 2012. Seeing deep structure from the interactions of surface features. Educ. Psychol. 47 (3), 177–188.
Choo, F., Curtis, M., 2000. Structural assessment in accounting research. J. Account. Lit. 19, 131–157.
Cooke, N.J., 1994. Varieties of knowledge elicitation techniques. Int. J. Hum. Comput. Stud. 41, 801–849.
Cooke, N.J., Schvaneveldt, R.W., 1988. Effects of computer programming experience on network representations of abstract programming concepts. Int. J. Man Mach.
Stud. 29 (4), 407–427.
Crowe, M., Beeby, R., Gammack, J., 1996. Constructing Systems and Information: A Process View. McGraw-Hill, London.
Davis, F., Bagozzi, R., Warshaw, P., 1989. User acceptance of computer technology: a comparison of two theoretical models. Manag. Sci. 35 (8), 982–1003.
Davis, M., Curtis, M., Tschetter, J., 2003. Evaluating cognitive training outcomes: validity and utility of structural knowledge assessment. J. Bus. Psychol. 18 (2),
191–206.
Davis, F., Yi, M., 2004. Improving computer skill training: behavior modeling, symbolic mental rehearsal, and the role of knowledge structures. J. Appl. Psychol. 89
(3), 509–523.
Day, E., Arthur Jr., W., Gettman, D., 2001. Knowledge structures and the acquisition of a complex skill. J. Appl. Psychol. 86 (5), 1022–1033.
DeFond, M., Lennox, C., 2011. The effect of SOX on small auditor exits and audit quality. J. Account. Econ. 52 (1), 21–40.
Dowling, C., 2009. Appropriate audit support system use: the influence of auditor, audit team, and firm factors. Account. Rev. 84 (3), 771–810.
Dowling, C., S. Leech, and R Moroney. 2008. Audit support system design and the declarative knowledge of long-term users. Journal of Emerging Technologies in
Accounting 5: 99-108.
Dowling, C., Leech, S., 2007. Audit support systems and decision aids: current practice and opportunities for future research. Int. J. Account. Inf. Syst. 8 (2), 92–116.
Dowling, C., Knechel, W., Moroney, R., 2018. Public oversight of audit firms: the slippery slope of enforcing regulation. Abacus 54 (3), 353–380.
Dowling, C., Leech, S., 2014. A big 4 firm’s use of information technology to control the audit process: how an audit support system is changing auditor behavior.
Contemp. Account. Res. 31 (1), 230–252.
Dumas, D., Alexander, P., Grossnickle, E., 2013. Relational reasoning and its manifestations in the educational context: a systematic review of the literature. Educ.
Psychol. Rev. 25 (3), 391–427.
Eining, M., and P. Dorr. 1991. The impact of expert system usage on experimental learning in an auditing setting. Journal of Information Systems (Spring): 1-16.
Fenwick, T.J., 2000. Expanding conceptions of experiential learning: a review of the five contemporary perspectives on cognition. Adult Educ. Q. 50 (4), 243–272.
Gentner, D. and J. Colhoun. 2010. Analogical processes in human thinking and learning. In B. Glatzeder, V. Goel, and A. Müller (Eds.), Towards a Theory of Thinking (pp. 35-48). Berlin, Heidelberg: Springer.
Gillan, D., Breedin, S., Cooke, N., 1992. Network and multidimensional representations of the declarative knowledge of human-computer interface design experts.
Mem. Cogn. 9, 422–433.
Glaser, R., Chi, M., 1988. The Nature of Expertise. Lawrence Erlbaum Associates, Hillsdale, NJ.
Goldsmith, T., Davenport, D., 1990. Assessing structural similarity of graphs. In: Schvaneveldt, R. (Ed.), Pathfinder Associative Networks: Studies in Knowledge
Organization. Ablex Publishing Corp, Norwood, NJ.
Goldsmith, T.E., Johnson, P.J., Acton, W.H., 1991. Assessing structural knowledge. J. Educ. Psychol. 83 (1), 88–96.
Goldwater, M., Schalk, L., 2016. Relational categories as a bridge between cognitive and educational research. Psychol. Bull. 142, 729–757.
Gregor, S., Benbasat, I., 1999. Explanations from intelligent systems: theoretical foundations and implications for practice. MIS Q. 23 (4), 497–530.
Hancock, P., 2014. Automation: how much is too much? Ergonomics 57 (3), 449–454.
Hannafin, M., Land, S., 2000. Student-centered learning environments. In: Jonassen, D., Land, S. (Eds.), Theoretical Foundations of Learning Environments. Lawrence
Erlbaum Associates, Mahway, pp. 1–23.
Hirumi, A., 2002. Student-centered, technological-rich learning environments (SCenTRLE): operationalizating constructivist approaches to teaching and learning.
J. Technol. Teach. Educ. 10 (4), 497–537.
Holyoak, K., 2012. Analogy and relational reasoning. In: Holyoak, K., Morrison, R. (Eds.), The Oxford Handbook of Thinking and Reasoning. Oxford University Press,
NY, pp. 234–259.
Jonassen, D., Howland, J., Marra, J., Crismond, D., 2008. Meaningful Learning with Technology. Prentice Hall, Upper Saddle River.
Kopp, L., O’Donnell, E., 2005. The influence of a business-process focus on category knowledge and internal control evaluation. Acc. Organ. Soc. 30, 423–434.
Kraiger, K., Ford, K., Salas, E., 1993. Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. J. Appl.
Psychol. 78 (2), 311–328.
Kraiger, K., Salas, E., Cannon-Bowers, J.A., 1995. Measuring knowledge organization as a method for assessing learning during training. Hum. Factors 37 (4),
804–816.
Leech, S.A., Collier, P.A., Clark, N., 1998. Modeling expertise: a case study using multiple insolvency experts. Advances in Accounting Information Systems 6, 85–105.
Leech, S.A., Collier, P.A., Clark, N., 1999. A generalized model of decision-making processes for companies in financial distress. Account. Forum 23 (2), 155–174.
Libby, R., 1995. The role of knowledge and memory in audit judgment. In: Ashton, R.H., Ashton, A.H. (Eds.), Judgment and Decision-Making Research in Accounting
and Auditing. Cambridge University Press, pp. 176–206.
Libby, R., Luft, J., 1993. Determinants of judgment performance in accounting settings: ability, knowledge, motivation, and environment. Acc. Organ. Soc. 18 (5),
425–450.
Loh, K., Kanai, R., 2016. How has the internet reshaped human cognition? Neuroscientist 22 (5), 506–520.
Malaescu, I., Sutton, S., 2015. The effects of decision aid structural restrictiveness on cognitive load, perceived usefulness, and reuse intentions. Int. J. Account. Inf.
Syst. 17, 16–36.
Mascha, M.F., 2001. The effect of task complexity and expert system type on the acquisition of procedural knowledge: some new evidence. Int. J. Account. Inf. Syst. 2
(2), 103–124.
Mascha, M.F., Smedley, G., 2007. Can computerized decision aids do “damage”? A case for tailoring feedback and task complexity based on task experience. Int. J.
Account. Inf. Syst. 8 (2), 73–91.
Masselli, J., Ricketts, R., Arnold, V., Sutton, S., 2002. The impact of embedded intelligent agents on tax compliance decisions. Journal of the American Tax Association
24 (2), 60–78.
McCall, H., V. Arnold, and S. Sutton. 2008. Use of knowledge management systems and the impact on the acquisition of explicit knowledge. Journal of Information
Systems 22 (2): 77-101.
McKeithen, K.B., Reitman, J.S., Rueter, H.H., Hirtle, S.C., 1981. Knowledge organization and skill differences in computer programmers. Cogn. Psychol. 13 (3),
307–325.
Moreno, K., Bhattacharjee, S., Brandon, D., 2007. The effectiveness of alternative training techniques on analytical procedures performance. Contemp. Account. Res.
24 (3), 983–1014.
Moroney, R. 2007. Does industry expertise improve the efficiency of audit judgment? Auditing: A Journal of Practice & Theory 26 (2): 69-94.
O’Donnell, E., 2003. The influence of process-focused knowledge acquisition on evaluative judgment during a systems assurance task. Int. J. Account. Inf. Syst. 4 (2), 115–139.


O’Donnell, E., and J. Schultz, Jr. 2003. The influence of business-process-focused audit support software on analytical procedures judgments. Auditing: A Journal of
Practice & Theory 22 (2): 265-279.
O’Donnell, E., David, J.S., 2000. How information systems influence user decisions: a research framework and literature review. Int. J. Account. Inf. Syst. 1 (3), 178–203.
Rinta-Kahila, T., E. Penttinen, A. Salovaara, and W. Soliman. 2018. Consequences of discontinuing knowledge work automation—Surfacing of deskilling effects and
methods of recovery. 51st Hawaii International Conference on System Sciences, 5244-5253.
Rose, J., 2002. Behavioral decision aid research. Decision aid use and effects. In: Arnold, V., Sutton, S. (Eds.), Researching Accounting as an Information Systems
Discipline. American Accounting Association, Sarasota, FL.
Rose, J., B. McKay, C. Norman, and A. Rose. 2012. Designing decision aids to promote the development of expertise. Journal of Information Systems 26 (1): 7-34.
Rose, J., Rose, A., McKay, B., 2007. Measurement of knowledge structures acquired through instruction, experience, and decision aid use. Int. J. Account. Inf. Syst. 8
(2), 118–137.
Rose, J., Wolfe, C., 2000. The effects of system design alternatives on the acquisition of tax knowledge from a computerized tax decision aid. Acc. Organ. Soc. 25,
285–306.
Rose, J. 2005. Decision aids and experiential learning. Behavioral Research in Accounting 17: 175-189.
Sanchez, M., 2004. Effect of instruction with expert patterns of the lexical learning of English as a foreign language. System 32, 89–102.
Schultz, J.J., Bierstaker, J.L., O’Donnell, E., 2010. Integrating business risk into auditor judgment about the risk of material misstatement: the influence of a strategic-systems-audit approach. Acc. Organ. Soc. 35 (2), 238–251.
Schvaneveldt, R., 1990. Pathfinder Associative Networks: Studies in Knowledge Organization. Ablex Publishing Corp, Norwood, NJ.
Seow, P.S., 2011. The effects of decision aid structural restrictiveness on decision making outcomes. Int. J. Account. Inf. Syst. 12 (1), 40–56.
Smedley, G., and S. Sutton. 2004. Explanation provision in knowledge-based systems: A theory-driven approach for knowledge transfer designs. Journal of Emerging
Technologies in Accounting 1: 41-61.
Smedley, G., and S. Sutton. 2007. The effect of alternative procedural explanation types on procedural knowledge acquisition during knowledge-based systems use.
Journal of Information Systems 21 (1): 27-51.
Sparrow, B., Liu, J., Wegner, D.M., 2011. Google effects on memory: cognitive consequences of having information at our fingertips. Science 333 (6043), 776–778.
Steinbart, P., Accola, W., 1994. The effects of explanation type and user involvement on learning from and satisfaction with expert systems. J. Inf. Syst. 8 (1), 1–17.
Stuart, I., and D. Prawitt. 2012. Firm-level formalization and auditor performance on complex tasks. Behavioral Research in Accounting 24(2): 193-210.
Sutton, S., V. Arnold, and M. Holt. 2018. How much automation is too much? Keeping the human relevant in knowledge work. Journal of Emerging Technologies in
Accounting 15 (2): 15-25.
Sutton, S., Arnold, V., Holt, M., 2023. An extension of the Theory of Technology Dominance: Capturing the underlying causal complexity. Int. J. Account. Inf. Syst. 50,
100626.
Sweller, J., 1988. Cognitive load during problem solving: effects on learning. Cognit. Sci. 12, 257–285.
Sweller, J., 1993. Some cognitive processes and their consequences for the organization and presentation of information. Aust. J. Psychol. 45, 1–8.
Taricani, E., Clariana, R., 2006. A technique for automatically scoring open-ended concept maps. Education Technology Research and Development Association for
Educational Communications and Technology. 54 (1), 65–82.
Thibodeau, J. 2003. The development and transferability of task knowledge. Auditing: A Journal of Practice & Theory 22 (1): 47-67.
Venkatesh, V., Morris, M., Davis, G., Davis, F., 2003. User acceptance of information technology: toward a unified view. MIS Q. 27 (3), 425–478.
Wortmann, C., 2019. Promises and Perils on the Frontier of Big Data Usage: How the Perception of Big Data Changes Managerial Decision-Making in Marketing. University of St. Gallen. PhD Dissertation.
