Decision Support Systems 163 (2022) 113862


Advancing our understanding and assessment of cognitive effort in the cognitive fit theory and data visualization context: Eye tracking-based approach

Dinko Bačić a,*, Raymond Henry b

a Department of Information Systems and Supply Chain Management, Quinlan School of Business, Loyola University Chicago, Chicago, IL, USA
b Department of Information Systems, Monte Ahuja College of Business, Cleveland State University, Cleveland, OH, USA

A R T I C L E  I N F O

Keywords: Cognitive fit theory; Cognitive fit; Cognitive effort; Eye tracking; Data visualization

A B S T R A C T

In Cognitive Fit Theory (CFT) based research, there is a consensus about cognitive effort as the underlying mechanism impacting performance. Although critical to the theory, cognitive effort and its direct empirical assessment remain a challenge. In this repeated measures experimental study, we introduce a research model and develop hypotheses based on the fundamental relationships underlying CFT while integrating eye tracking as an approach for assessing cognitive effort. Our study finds that eye tracking technology, specifically fixation-based metrics, can be used in the understanding of cognitive processes initiated by our data representation choices. Specifically, we find that in all tasks except the complex-symbolic task, users experience meaningful change in the physiological assessment of cognitive effort based on the condition of cognitive fit. We contrast our findings to existing research and find that physiological indicators of cognitive effort can provide critical insights often missed in traditional CFT research.

1. Introduction

Cognitive Fit Theory (CFT) [1,2], even after over 30 years, continues to significantly impact our understanding of how visual data representations, through their matching with problem tasks and users' mental models, impact decision performance. In the extant CFT research, there is a consensus on the critical role of cognitive effort (sometimes called cognitive workload, burden, strain, or load), yet the research largely fails to integrate cognitive effort measurement [3]. Instead, empirical studies typically only assume cognitive effort's theoretical existence as a function of cognitive fit and as the mechanism that drives decision speed and accuracy. Without a greater understanding of the theorized mechanism, the research could be exposed to criticism that challenges the fundamental underpinnings of substantial portions of CFT literature [4]. One possible reason for the existing state of the research may be found in the difficulty of measuring cognitive effort. Therefore, there is a clear need to enhance current research methods and theory by offering and testing ways cognitive effort could be measured and integrated into CFT and the broader data visualization literature.

To address this opportunity, in this research effort, we turn our attention to a physiological indicator of cognitive effort by deploying eye tracking technology through its ability to collect biometric data that describes users' visual attention. The scarcity of physiological response research in CFT is surprising, given that research findings suggest that evaluating visual interface performance without measuring cognitive processes, such as workload, may lead to incorrect conclusions about the efficiency of an interface. Driven partly by the 'eye-mind hypothesis' [5], eye tracking is considered effective in assessing cognitive processes as it reveals how the user reads and scans the displayed information by capturing users' eye fixation-derived metrics [6,7]. These metrics have been successfully linked to cognitive processes indicating various forms of attention, interest, mental effort, and cognitive load in psychology [8], neuroscience [9], art, human factors, marketing, and computer science [10]. In recent years we have also seen the adoption of eye tracking technology in IS research [11–15]. Given the mature and validated use of eye-tracking-based metrics across fields, the application of eye tracking to evaluate cognitive effort in the CFT context is timely and appropriate.

The remainder of the paper is organized as follows. We first highlight the relevant background associated with CFT and the role of cognitive effort and introduce eye tracking as an approach for measuring cognitive effort. We then present a research model and develop hypotheses based on the fundamental relationships underlying CFT. The following sections present our experimental design and data analysis. We then discuss our findings and contextualize these results by discussing implications, limitations, and potential next steps.

* Corresponding author.
E-mail address: dbacic@luc.edu (D. Bačić).

https://doi.org/10.1016/j.dss.2022.113862
Received 26 February 2022; Received in revised form 9 August 2022; Accepted 23 August 2022
Available online 30 August 2022
0167-9236/© 2022 Elsevier B.V. All rights reserved.

2. Background

2.1. Cognitive fit theory

A series of studies, starting with the Minnesota experiments [16], compared decision performance in various tasks by offering subjects information in tabular and graphical formats. These early studies yielded inconclusive results, motivating the introduction of CFT [1,2] to interpret those results and provide a theoretical basis to predict decision performance. CFT introduced the idea of cognitive fit, or a match between the problem representation and the problem-solving task, which occurs when the problem representation supports the task strategies required to perform that task [1]. The theory advocated that, when faced with a symbolic task, the user's mental solution model requires a tabular representation of the problem to achieve cognitive fit. Alternatively, a graphical representation of the problem was the most effective in supporting decision processes when solving spatial tasks. CFT, over time, gained popularity and was used across numerous domains, such as modeling [17], software tools [18], social network analysis [19], geographic representation [20], knowledge management systems [21], virtual reality [22], and decision aids [23], to name a few. In the process, CFT moved beyond symbolic and spatial tasks and was deployed across varied and more complex task types, some of which had inconclusive [24,25] or contradicting results [26]. The theory was also expanded a number of times by integrating problem-solving skills [2], external and internal representation congruence [27], and problem domains [28].

Regardless of the context or empirical findings, virtually all CFT-based work recognizes the role of cognitive effort as the underlying mechanism in theory- or hypotheses-building. For example, in the original theory-building paper, Vessey and Galletta state: "One of the ways to reduce processing effort [emphasis added] is to facilitate the problem-solving processes that human problem solvers use in completing the task. This can be achieved by matching the problem representation to the task, an approach that is known as cognitive fit" (p. 65 [2]). However, despite literature-wide consensus on the critical role of cognitive effort, the research rarely integrates cognitive effort measurement. Bačić and Henry addressed this gap and attempted to measure cognitive effort as a perceptual construct [3]. However, the findings using a perceptual measure (cognitive strain) were mixed, and a suggestion was made that perceptual and subjective measures of cognitive effort may have some drawbacks. More specifically, users may not consciously recognize their cognitive decision-making process when using visualizations [29], or they may be uncomfortable, unwilling, and unable to accurately self-report cognitive processes [30]. To address this opportunity, we turn our attention to a physiological indicator of cognitive effort by deploying eye tracking technology.

2.2. Eye tracking and cognitive effort

Cognitive effort has been defined as the cognitive resources needed to complete a task [31]. Cognitive effort research originated as a theoretical construct in cognitive psychology [32–34] whose impact on human performance is widely recognized. With advances in biometric technology, sciences such as neuroscience, vision science, and psychology have made significant gains in our ability to collect and track human neurophysiological signals and link those signals to underlying perceptual and cognitive processes, such as cognitive effort.

Based on the notion that visual attention and eye movements are cognitively controlled [6], we focus on assessing cognitive effort through eye tracking technologies' ability to capture vision and eye gaze behavior in this paper. Vision is our dominant sense [35], as the human brain is engaged in processing visual information more than processing information from any other sense [36]. Due to the cognitive control of eye movement, the importance of the vision sense, and technology capabilities to capture eye gaze behavior accurately, eye tracking has emerged as a method to assess one's cognitive processes when observing or evaluating a stimulus. Eye movement is often described through three critical events: fixations - eye movements that stabilize the retina over a stationary object of interest; saccades - rapid movements between fixations; and smooth pursuit - visually tracking a moving target. We process visual information between saccades, fixating when we keep our gaze relatively steady for short periods to reposition a new image. Fixations take up most of our viewing time and correspond to the desire to maintain one's gaze on an object of interest. Eye tracking technology allows us to capture these eye movement events by using an image of the eye based on pupil and corneal reflection to estimate the point of gaze [10].

Eye tracking-based metrics have been used across many disciplines, measuring attention in reading, psycholinguistics, website usage, online gaming, writing, and language acquisition [7]. Largely driven by the 'eye-mind hypothesis' [5], eye tracking is considered effective in assessing a user's attention and in reflecting cognitive effort [12], as it reveals how the user reads and scans the displayed information by capturing users' eye fixations, saccadic movements, blinks, and pupillary responses [9]. This study focuses on the fixation-derived metrics of fixation duration (fixation length in milliseconds) and fixation count (the number of fixations). These are the most used eye tracking metrics to measure cognitive processing [5], allowing one to investigate whether individuals mainly scan or attend to information as they reason or make judgments about it [37].

A number of studies have assessed cognitive effort through fixation-based metrics [12]. Eye fixations have been used to detect cognitive effort differences in web-browsing viewing behavior and preferences, showing that baby boomers had significantly more fixations than those of Generation Y [38]. Similarly, eye-fixation data was used to assess cognitive effort in evaluating the impact of visual complexity on how users view a webpage [39]. Fixation duration and count were used to assess the effect of executive working memory load on a search task, finding that the influence of working memory load on search times arose from changes in fixation durations [40]. Fixation frequency was used to assess cognitive effort when interacting with decision constructs in the e-business transaction context [41], while other studies suggested that in the task of free viewing, fixation duration is sensitive to memory and processing load [42], or examined the role of fixation durations and count in assessing the effect of decision framing [43]. In the context of various forms of visualizations (salient to this research), fixation-based metrics were used to understand cognitive effort in the context of the design layout of UML diagrams [44], business process visual layouts [14], and the impact of the misuse of color in dashboard design [45]. This eye-tracking research is rooted in the concept of assessing cognitive load based on cognitive load theory [46], where the term mental load often reflects the task-centered dimension of the cognitive capacity needed to process task complexity, while the term effort often reflects the human-centered, internal dimension of an individual's invested cognitive capacity [47]. In this paradigm, the intensity of effort is considered the index of cognitive load [48]. In CFT, we see three elements (task, external representation, and the mental model) elevated in cognitive load theory through the concepts of extraneous (task), intrinsic, and germane cognitive loads (representation and mental model). Therefore, it is not surprising that CFT literature uses cognitive load, workload, burden, effort, and cognitive effort interchangeably to describe cognitive fit's user impact. For that reason, our research is informed by and builds on cognitive load literature (see detailed review in [49]) and extends its assessment via eye-tracking to the concept of cognitive fit.

In summary, given the importance of cognitive fit, cognitive effort's central role in understanding data representations' performance implications, and the lack of measures to assess cognitive effort, we utilize eye tracking to address this gap in the CFT literature. Specifically, we propose using eye fixation count and average duration to assess cognitive effort.
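To make the fixation-derived metrics concrete, the sketch below illustrates one common way such metrics can be computed from raw gaze samples. It is a minimal, dispersion-threshold (I-DT style) fixation filter, not the algorithm used by the authors' eye-tracking software; the 50-pixel dispersion window and the sample format are assumptions, while the 200 ms minimum duration matches the threshold adopted later in Section 4.1.

```python
# Minimal dispersion-threshold (I-DT style) fixation detection sketch.
# Assumptions (not from the paper): gaze samples arrive as (t_ms, x, y)
# tuples in screen pixels at a fixed sampling rate; a 50 px dispersion
# window stands in for the vendor's proprietary fixation filter.

from typing import List, Tuple

Sample = Tuple[float, float, float]          # (timestamp in ms, x, y)
Fixation = Tuple[float, float]               # (onset ms, duration ms)

MIN_DURATION_MS = 200.0                      # threshold adopted in the paper
MAX_DISPERSION_PX = 50.0                     # assumed spatial threshold


def detect_fixations(samples: List[Sample]) -> List[Fixation]:
    """Group consecutive samples whose x/y dispersion stays small enough,
    keeping only groups that last at least MIN_DURATION_MS."""
    fixations, start = [], 0
    while start < len(samples):
        end = start + 1
        while end < len(samples):
            window = samples[start:end + 1]
            xs, ys = [s[1] for s in window], [s[2] for s in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > MAX_DISPERSION_PX:
                break
            end += 1
        duration = samples[end - 1][0] - samples[start][0]
        if duration >= MIN_DURATION_MS:
            fixations.append((samples[start][0], duration))
            start = end                      # skip past the detected fixation
        else:
            start += 1                       # slide one sample forward
    return fixations


def fixation_metrics(samples: List[Sample]) -> Tuple[int, float]:
    """Return (fixation count, average fixation duration in ms) for one trial."""
    fix = detect_fixations(samples)
    count = len(fix)
    avg_duration = sum(d for _, d in fix) / count if count else 0.0
    return count, avg_duration
```

Fixation count and average fixation duration for a trial then follow directly from the detected fixation list, which is the form in which cognitive effort is operationalized in the remainder of the paper.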


3. Hypothesis development

According to CFT, if an external problem representation does not match that emphasized in the task, there is nothing to guide the decision-maker in working toward a task solution. As a result, greater cognitive effort is required to transform the information into a form suitable for solving that particular type of problem [1,2]. The lens of describing cognitive effort as the emergent property of cognitive fit and as the underlying mechanism that influences decision performance dominates the CFT empirical literature [4]. For example, in the extension of CFT to natural language, the assumption was made that "cognitive fit has a characteristic such that consistent mental representations reduce the mental effort required to solve a problem" [50]. A similar mechanism was assumed in the context of spatial decision support systems and paper maps: "…this should create a decision-making environment that more consistently fits the cognitive requirements of the decision-maker and thereby reduces cognitive load" [20]. The same mechanism is declared in the context of web search and mobile devices: "If a mismatch between task and information presentation occurs, users must make extra cognitive effort to transform information into a format that is suitable for accomplishing the task" [51]. These, and many other examples across various contexts [18,21,27], suggest that cognitive fit will lead to lower cognitive effort when compared to an alternative scenario without such fit [3]. Therefore, we reintroduce cognitive effort more explicitly as an outcome stemming from cognitive (mis)fit, and we measure eye fixation-based metrics to assess the level of cognitive effort in that context (Fig. 1).

This model is contextualized in understanding the critical role of the task. The importance of task was recognized early on and was part of the initial introduction of CFT as a response to inconclusive results when representing data in tabular or graphical format [1]. Initial CFT work acknowledged that linking data representation and task characteristics would not be easy due to the large number of task characteristics [1]. As a response, a new task taxonomy was developed based on cognitive style and task requirements characteristics, resulting in the introduction of spatial and symbolic tasks. Spatial tasks consider the problem area as a whole rather than as discrete data values and require making associations or perceiving relationships in the data. A typical spatial task, for example, may require an understanding and recognition of trend patterns in data. On the other hand, symbolic tasks involve extracting discrete and precise data values [2]. Following the introduction of these two categories, the original theory and associated research stream suggest an emergence of cognitive fit and a resulting decrease in cognitive effort when a spatial task is matched with a graphical data representation (spatial format) and when a symbolic task is matched with a tabular representation (symbolic format). If, on the other hand, there is a mismatch, the user is forced to exert additional cognitive effort to mentally transform the data representation to be able to solve the task. In this study, we adopt this context of spatial vs. symbolic task and tabular vs. graphical representation as it represents the original context that gave rise to CFT. Furthermore, the table vs. graph debate continues to be relevant in the practical arena of data visualization. Lastly, it is important to understand and measure the impact of fit on cognitive effort in the original context before evaluating it in other contexts where the link to the original theory may be altered [3].

Early empirical work applied CFT in the context of simple mental tasks [2,24]. Therefore, in line with previous work [3,24,25], we hypothesize the impact of cognitive fit on cognitive effort for simple tasks as a baseline (H1, H2). On the other hand, research has applied CFT and the cognitive effort mechanism to more complex tasks in the context of financial statement analysis [26], geographic information systems [52], interruptions [25], operations management [24], quality assurance [53], and financial accounting [3]. Because of somewhat mixed findings when it comes to decision accuracy and decision time in more complex tasks, it is important to validate the impact of cognitive fit on cognitive effort for a task with higher complexity; hence we hypothesize the same nature of the relationship for complex tasks as well (H3, H4).

In H1a – H4a, we use the average fixation duration to assess cognitive effort:

H1a. For Simple-symbolic tasks, information representation in table format results in lower average eye fixation duration than in graph formats.

H2a. For Simple-spatial tasks, information representation in graph format results in lower average eye fixation duration than in table formats.

H3a. For Complex-symbolic tasks, information representation in table format results in lower average eye fixation duration than in graph formats.

H4a. For Complex-spatial tasks, information representation in graph format results in lower average eye fixation duration than in table formats.

Similarly, in H1b – H4b, we state the same relationships using eye fixation count to assess cognitive effort:

H1b. For Simple-symbolic tasks, information representation in table format results in lower eye fixation count than in graph formats.

H2b. For Simple-spatial tasks, information representation in graph format results in lower eye fixation count than in table formats.

Fig. 1. Research model.


H3b. For Complex-symbolic tasks, information representation in table format results in lower eye fixation count than in graph formats.

H4b. For Complex-spatial tasks, information representation in graph format results in lower eye fixation count than in table formats.

4. Methodology

4.1. Experimental design

The experiment was a fully randomized within-subject three-factor design: Task Complexity (simple and complex), Task Type (spatial and symbolic), and Representation (table and graph). This resulted in an 8-cell, 2 × 2 × 2 factorial design. All participants performed all experimental tasks associated with the eight cells in random order. To avoid the potential bias of an answer from the same task in one representation influencing the answer in another representation, a slightly modified version of each task was created while preserving the task's level of complexity and task type. Eye fixation count and average fixation duration were used to assess participants' cognitive effort. As most fixations are in the 200–300 ms duration range [6], we adopted 200 ms as the minimum duration threshold to be considered a fixation.

Four tasks (simple-spatial, simple-symbolic, complex-spatial, and complex-symbolic) and two data representation types (spatial (graph) and tabular (tables)) were used to conduct the study. Both tasks and data representations were adopted from Bačić and Henry [3] (prior research focused on a perceptual measure of cognitive effort), who themselves modified and adapted them based on earlier CFT task/representation work [24,25]. The tasks followed Wood's (1986) definition of task classification for simple (a low number of variables/information cues and calculations) and complex (a high number of variables/information cues and calculations) tasks [54]. In the simple-spatial task, subjects were presented with performance indicators for three company locations across months and were asked to select the month where the combined performance indicator was the highest. Following Wood's [54] methodology to classify tasks, this simple-spatial task required participants to use three information cues (performance indicator, location, and month), add the indicator metric for each month and location, and then evaluate those across six months to discover the correct answer (month). This task required assessing the relationship between data points while identifying the month associated with the highest indicator for combined locations [3,24,25] (see Appendix, Figs. A1 and A2).

The simple-symbolic task required subjects to obtain specific data by directly extracting information. More specifically, subjects were asked to decide how much the metric for a particular firm location and specific month was above the target rate. Once the value for the specific location and indicator was located, the target rate (also displayed) needed to be subtracted from it, which resulted in the correct answer. Following Wood's methodology to assess tasks [54], this simple-symbolic task involved four information cues (metric, target metric, month, and location), one behavior (calculate), and subtraction between the selected metric and its target [3,24,25] (see Appendix, Figs. A3 and A4).

In the complex-spatial task, subjects were asked to use existing information for six firms to assess which companies meet two financial scenarios. The scenarios included target requirement conditions for financial metrics such as Sales, Gross Profit, ROI, ROA, and ROE. Using the same task assessment methodology [3,54], this complex-spatial task required participants to evaluate 17 information cues and use them in nine different acts of comparison. The task called for evaluating the relationship between data points and did not necessitate precision, making it a spatial task [3] (see Appendix, Figs. A5 and A6).

In the complex-symbolic task, also from prior research [3,24,25], subjects were asked to determine which firms to invest in based on the given scenario's precise conditions. More specifically, we provided the participants with five different balance sheet (liabilities) line items/categories associated with six firms. They had to determine which firms to invest in. The complex-symbolic task required participants to assess 11 information cues (dollar amount, firm, accounts payable, accrued expenses, notes payable, bonds payable, total liabilities, fixed amount of total liabilities, fixed percent limit for notes payable, and fixed percent limit for accounts payable) and perform acts of comparison and ordering. Given the number of cues and behavioral actions [54], this task involved substantially more complexity for the user when compared to the two simple tasks. Further, the task required participants to obtain specific data by directly extracting information, which made it a symbolic task [3,24,25] (see Appendix, Figs. A7 and A8).

Participants were exposed to each experimental task using two data representation types: graph(s) and table(s). Tabular representations were tables with firms or locations placed vertically and attributes such as month, year, and various metrics horizontally. Two-dimensional bar charts and line charts were operationalized as the spatial format. All representations and task problem statements fit on one computer screen (see Appendix A).

4.2. Experimental procedure

We recruited 34 participants from various business classes at a large public university in the Midwestern United States. Students received partial course credit for their participation and a $10 gift card, and could win one of three US$50 rewards for performance in terms of accuracy per unit of time. All subjects used the same computer with a 21″ monitor. A GP3 Eye Tracker (accuracy of 0.5–1 degree of visual angle, 60 Hz update rate) was used to collect eye tracking data. To ensure high-quality data collection, each subject went through a 9-point calibration. Once the calibration process was successfully completed, each participant performed all four tasks using both tabular and graphical data representation. The order of tasks and data representation types for each subject was randomized in advance using a randomization algorithm. There was no explicit time limit to this study. After all sessions were completed, recordings were reviewed, and data for two subjects was deemed unusable. Therefore, the subsequent analysis was conducted using data from 32¹ subjects (37.5% male and 62.5% female). The median age of the participants was 28, and the average age was 30 (SD = 8.75). Seventy-seven percent of the participants were undergraduate students. Forty percent of the participants had at least some work experience in a professional or technical job, while 18% had some work experience as a manager or proprietor. Participants came from a wide number of business majors.

A manipulation check for Task Complexity was completed by asking subjects their perceptions of task complexity on a 7-point Likert scale. The difference in mean values for complex (M = 5.72, SD = 1.427) and simple (M = 2.89, SD = 1.234) tasks was found to be significant (F(32) = 132.678, p < 0.01). A manipulation check for Task Type was completed by asking subjects their perception of the level of need for data relationships and the need for precise values. The combined score for question one and the reverse-coded score for question two was used to assess the subjects' ability to detect the difference in Task Type. The difference in mean values for spatial (M = 10.19, SD = 1.554) and symbolic (M = 9.5, SD = 1.293) tasks was found to be significant (F(31) = 7.456, p < 0.01) and in the expected direction.

¹ In HCI research, one of the main domains of eye-tracking studies, a meta-study of 465 publications discovered that the most common sample size is 12 participants, and the median was 18 participants [60].
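As an illustration of the manipulation-check logic described above, the following sketch compares per-participant complexity ratings across the two task-complexity conditions. The ratings generated here are hypothetical stand-ins rather than the study's data, and the paired t-test is simply the two-condition equivalent of the repeated-measures F the authors report.

```python
# Illustrative manipulation-check comparison (not the study's raw data).
# Each participant rates perceived complexity of the simple and the complex
# tasks on a 7-point scale; a paired test checks that complex > simple.
# With two within-subject conditions, the repeated-measures F equals t**2.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 32                                           # usable participants in the study
simple_rating = rng.integers(1, 5, size=n)       # hypothetical ratings
complex_rating = rng.integers(4, 8, size=n)      # hypothetical ratings

t, p = stats.ttest_rel(complex_rating, simple_rating)
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.4f}, F = {t**2:.2f}")
```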


5. Results and data analysis

5.1. Cognitive effort - average fixation duration (CEFD)

Table 1
ANOVA Results (CEFD).

Source | Type III SS | df | Mean Square | F | Sig. | Partial Eta Sq. | Observed Power (a)
Task Complexity | 0.007 | 1 | 0.007 | 8.352 | 0.007 | 0.212 | 0.799
Error (Task Complexity) | 0.026 | 31 | 0.001 | | | |
Task Type | 0.002 | 1 | 0.002 | 4.284 | 0.047 | 0.121 | 0.518
Error (Task Type) | 0.018 | 31 | 0.001 | | | |
Representation | 0.001 | 1 | 0.001 | 0.651 | 0.426 | 0.021 | 0.122
Error (Representation) | 0.027 | 31 | 0.001 | | | |
Task Complexity * Task Type | 0.042 | 1 | 0.042 | 51.939 | <0.001 | 0.626 | 1.000
Error (Task Complexity * Task Type) | 0.025 | 31 | 0.001 | | | |
Task Complexity * Representation | 0.005 | 1 | 0.005 | 7.522 | 0.010 | 0.195 | 0.757
Error (Task Complexity * Representation) | 0.021 | 31 | 0.001 | | | |
Task Type * Representation | 0.012 | 1 | 0.012 | 10.580 | 0.003 | 0.254 | 0.883
Error (Task Type * Representation) | 0.036 | 31 | 0.001 | | | |
Task Complexity * Task Type * Representation | 0.008 | 1 | 0.008 | 15.889 | <0.001 | 0.339 | 0.971
Error (Task Complexity * Task Type * Representation) | 0.015 | 31 | <0.001 | | | |

(a) Computed using alpha = 0.05.
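A within-subject ANOVA like the one summarized in Table 1 (and later in Table 5) could be reproduced along the following lines, assuming the fixation metrics have been assembled into a long-format table with one row per subject and design cell; the file and column names are placeholders, not the authors' materials.

```python
# Sketch of the 2 x 2 x 2 within-subject ANOVA summarized in Table 1,
# assuming a long-format DataFrame with one row per subject x cell
# (column names here are assumptions, not the authors' files).

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# df columns: subject, complexity (simple/complex), task_type (spatial/symbolic),
#             representation (table/graph), cefd (average fixation duration, s)
df = pd.read_csv("fixation_metrics_long.csv")     # hypothetical file name

aov = AnovaRM(
    data=df,
    depvar="cefd",
    subject="subject",
    within=["complexity", "task_type", "representation"],
).fit()
print(aov.anova_table)    # F, numerator/denominator df, and p for main effects and interactions
```

The same call with depvar pointed at a fixation-count column would yield the CEFC model reported later in Table 5.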

Repeated measures Analysis of Variance (Table 1) showed a main effect of Task Complexity (F(1,31) = 8.352, p = 0.007, ηp² = 0.212) and Task Type (F(1,31) = 4.284, p = 0.047, ηp² = 0.121). The Analysis of Variance also showed interaction effects of Task Complexity * Task Type (F(1,31) = 51.939, p = 0.000, ηp² = 0.626), Task Complexity * Representation (F(1,31) = 7.522, p = 0.01, ηp² = 0.195), Task Type * Representation (F(1,31) = 10.58, p = 0.003, ηp² = 0.254), and Task Complexity * Task Type * Representation (F(1,31) = 15.889, p = 0.000, ηp² = 0.339). Representation was the only variable not exhibiting a statistically significant main effect on CEFD (F(1,31) = 0.651, p = 0.426, ηp² = 0.021). To test Hypotheses 1a through 4a, we need to look for a statistically significant interaction effect between Task Type (spatial vs. symbolic) and Representation (table vs. graph) on CEFD.

Since the desired interaction effect was detected, a pairwise t-test was conducted to evaluate if the differences in means between cells specifically hypothesized in H1 through H4 are significant. Pairwise comparison of the direction and statistical significance of the mean difference for Cells 2–1, 3–4, 6–5, and 7–8 (see Table 2 for descriptive statistics and Table 3 for mean differences) was conducted to evaluate H1a through H4a. Pairwise comparison of the CEFD mean difference between Simple-Spatial task – Graph (Cell 2; M = 0.359; SD = 0.044) and Simple-Spatial task – Table (Cell 1; M = 0.378; SD = 0.037) of −0.019 (SD = 0.006) was significant (p = 0.002), and in the expected direction, therefore H1a was supported. Pairwise comparison of the CEFD mean difference between Simple-Symbolic task – Table (Cell 3; M = 0.384; SD = 0.050) and Simple-Symbolic task – Graph (Cell 4; M = 0.415; SD = 0.053) of −0.031 (SD = 0.009) was significant (p = 0.002), and in the expected direction, therefore H2a was supported. Pairwise comparison of the CEFD mean difference between Complex-Spatial task – Graph (Cell 6; M = 0.376; SD = 0.032) and Complex-Spatial task – Table (Cell 5; M = 0.391; SD = 0.037) of −0.015 (SD = 0.003) was significant (p = 0.000) and in the expected direction, therefore H3a was supported. Lastly, pairwise comparison of the CEFD mean difference between Complex-Symbolic task – Table (Cell 7; M = 0.368; SD = 0.041) and Complex-Symbolic task – Graph (Cell 8; M = 0.359; SD = 0.044) of 0.009 (SD = 0.009) was not significant (p = 0.314), therefore H4a was not supported.

Table 4 summarizes the findings on the hypothesized impact of Task Type and Representation fit on subjects' CEFD.

Table 2
Within Subject Factors & Descriptive Statistics (CEFD).

Task Complexity | Task Type | Representation | Cell | Mean | Std. Error | N
Simple | Spatial | Table | 1 | 0.37765 | 0.03669 | 32
Simple | Spatial | Graph | 2 | 0.35863 | 0.04376 | 32
Simple | Symbolic | Table | 3 | 0.38446 | 0.04973 | 32
Simple | Symbolic | Graph | 4 | 0.41528 | 0.05385 | 32
Complex | Spatial | Table | 5 | 0.39070 | 0.03684 | 32
Complex | Spatial | Graph | 6 | 0.37588 | 0.03155 | 32
Complex | Symbolic | Table | 7 | 0.36835 | 0.04129 | 32
Complex | Symbolic | Graph | 8 | 0.35937 | 0.04355 | 32

5.2. Fixation count (CEFC)

The Analysis of Variance (Table 5) showed a significant impact of the main effects of Task Complexity (F(1,31) = 252.204; p = 0.000; ηp² = 0.891), Task Type (F(1,31) = 140.026; p = 0.000; ηp² = 0.819), and Representation (F(1,31) = 13.965; p = 0.001; ηp² = 0.311), as well as interaction effects of Task Complexity * Task Type (F(1,31) = 72.847; p = 0.000; ηp² = 0.701), Task Complexity * Representation (F(1,31) = 5.962; p = 0.021; ηp² = 0.161), and Task Type * Representation (F(1,32) = 11.330; p = 0.002, ηp² = 0.268).

As with CEFD, the desired interaction effect was detected using CEFC, and a pairwise t-test was conducted to evaluate if the difference in means between cells specifically hypothesized in H1b through H4b is significant. Pairwise comparison of the mean difference (see Table 6 for descriptive statistics and Table 7 for mean differences) between Simple-Spatial task – Graph (Cell 2; M = 119.53; SD = 52.89) and Simple-Spatial task – Table (Cell 1; M = 159.72; SD = 67.01) of −40.188 (SD = 10.364) was significant (p = 0.001), and in the expected direction, therefore H1b was supported. Pairwise comparison of the CEFC mean difference between Simple-Symbolic task – Table (Cell 3; M = 105.59; SD = 62.73) and Simple-Symbolic task – Graph (Cell 4; M = 133.66; SD = 67.48) of −28.062 (SD = 11.240) was significant (p = 0.018), and in the expected direction, therefore H2b was supported. Pairwise comparison of the mean difference between Complex-Spatial task – Graph (Cell 6; M = 470.63; SD = 161.62) and Complex-Spatial task – Table (Cell 5; M = 571.66; SD = 228.15) of −101.031 (SD = 30.28) was significant (p = 0.002), and in the expected direction, therefore H3b was supported. Pairwise comparison of the mean difference between Complex-Symbolic task – Table (Cell 7; M = 306.13; SD = 149.39) and Complex-Symbolic task – Graph (Cell 8; M = 286.63; SD = 86.27) of 19.500 (SD = 25.53) was not significant (p = 0.451); therefore, H4b was not supported.

Table 8 summarizes findings on the hypothesized impact of Task Type and Representation fit on cognitive effort measured through CEFC.

6. Discussion

Our study finds that, in most tested tasks (all except the complex-symbolic task), users demonstrate meaningful changes in cognitive effort, measured using eye-tracking fixation metrics, based on the condition of cognitive fit. We detected the impact beyond the individual effects of task complexity, task type, or representation.


Table 3
Pairwise Comparisons (CEFD).

Task Complexity | Task Type | (I) Cell | (J) Cell | Mean Diff. (I−J) | Std. Error | Sig. (a) | 95% CI Lower (a) | 95% CI Upper (a)
Simple | Spatial | 2 | 1 | −0.019* | 0.006 | 0.002 | −0.031 | −0.007
Simple | Symbolic | 3 | 4 | −0.031* | 0.009 | 0.002 | −0.049 | −0.013
Complex | Spatial | 6 | 5 | −0.015* | 0.003 | <0.001 | −0.021 | −0.008
Complex | Symbolic | 7 | 8 | 0.009 | 0.009 | 0.314 | −0.009 | 0.027

* The mean difference is significant at the 0.05 level.
(a) Adjustment for multiple comparisons: LSD.
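The hypothesized cell contrasts reported in Table 3 (and in Table 7 for fixation counts) amount to paired comparisons between the fit and no-fit cell for each task. A minimal sketch follows, assuming a wide per-subject table with one column per design cell; the file and column names are hypothetical.

```python
# Sketch of the hypothesized pairwise (LSD-style, i.e., uncorrected) contrasts
# behind Tables 3 and 7, assuming a wide DataFrame with one row per subject
# and one column per design cell (column names are assumptions).

import pandas as pd
from scipy import stats

wide = pd.read_csv("cefd_by_cell_wide.csv")       # hypothetical file name

# (fit cell, no-fit cell) pairs corresponding to cells 2-1, 3-4, 6-5, 7-8
pairs = {
    "simple-spatial  (2 vs 1)": ("cell2_graph", "cell1_table"),
    "simple-symbolic (3 vs 4)": ("cell3_table", "cell4_graph"),
    "complex-spatial (6 vs 5)": ("cell6_graph", "cell5_table"),
    "complex-symbolic (7 vs 8)": ("cell7_table", "cell8_graph"),
}

for label, (fit_col, nofit_col) in pairs.items():
    diff = wide[fit_col] - wide[nofit_col]
    t, p = stats.ttest_rel(wide[fit_col], wide[nofit_col])
    print(f"{label}: mean diff = {diff.mean():.3f}, t = {t:.2f}, p = {p:.3f}")
```

Because the LSD adjustment applies no correction beyond each individual contrast, the per-pair paired t-test is the whole procedure.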

Table 4
H1–H4 Summary (CEFD).

Hypotheses 1a–4a | Cells | Cell Mean Diff. in CEFD | Findings
H1a: For Simple-symbolic tasks, information representation in table format results in lower average eye fixation duration than in graph formats. | 2 vs 1 | −0.019*** | Supported
H2a: For Simple-spatial tasks, information representation in graph format results in lower average eye fixation duration than in table formats. | 3 vs 4 | −0.031*** | Supported
H3a: For Complex-symbolic tasks, information representation in table format results in lower average eye fixation duration than in graph formats. | 6 vs 5 | −0.015*** | Supported
H4a: For Complex-spatial tasks, information representation in graph format results in lower average eye fixation duration than in table formats. | 7 vs 8 | 0.009 | Not Supported

*** Sig < 0.01.

Table 6
Within Subject Factors and Descriptive Statistics (CEFC).

Task Complexity | Task Type | Format | Cell | Mean | Std. Error | N
Simple | Spatial | Table | 1 | 159.72 | 67.012 | 32
Simple | Spatial | Graph | 2 | 119.53 | 52.892 | 32
Simple | Symbolic | Table | 3 | 105.59 | 62.731 | 32
Simple | Symbolic | Graph | 4 | 133.66 | 67.483 | 32
Complex | Spatial | Table | 5 | 571.66 | 228.146 | 32
Complex | Spatial | Graph | 6 | 470.63 | 161.619 | 32
Complex | Symbolic | Table | 7 | 306.13 | 149.386 | 32
Complex | Symbolic | Graph | 8 | 286.63 | 86.269 | 32

Table 5
ANOVA Results (CEFC).

Source | Type III SS | df | Mean Square | F | Sig. | Partial Eta Sq. | Observed Power (a)
Task Complexity | 4,986,568.129 | 1 | 4,986,568.129 | 252.204 | <0.001 | 0.891 | 1.000
Error (Task Complexity) | 612,929.996 | 31 | 19,771.935 | | | |
Task Type | 958,563.379 | 1 | 958,563.379 | 140.026 | <0.001 | 0.819 | 1.000
Error (Task Type) | 212,213.246 | 31 | 6845.589 | | | |
Representation | 70,390.723 | 1 | 70,390.723 | 13.965 | 0.001 | 0.311 | 0.951
Error (Representation) | 156,260.902 | 31 | 5040.674 | | | |
Task Complexity * Task Type | 670,863.379 | 1 | 670,863.379 | 72.847 | <0.001 | 0.701 | 1.000
Error (Task Complexity * Task Type) | 285,484.746 | 31 | 9209.185 | | | |
Task Complexity * Representation | 47,007.660 | 1 | 47,007.660 | 5.962 | 0.021 | 0.161 | 0.657
Error (Task Complexity * Representation) | 244,422.465 | 31 | 7884.596 | | | |
Task Type * Representation | 89,737.691 | 1 | 89,737.691 | 11.330 | 0.002 | 0.268 | 0.903
Error (Task Type * Representation) | 245,539.934 | 31 | 7920.643 | | | |
Task Complexity * Task Type * Representation | 705.566 | 1 | 705.566 | 0.088 | 0.768 | 0.003 | 0.060
Error (Task Complexity * Task Type * Representation) | 247,821.559 | 31 | 7994.244 | | | |

(a) Computed using alpha = 0.05.

6.1. Simple tasks

In the case of simple tasks, users' attempts to solve the tasks do appear to be in line with expectations, as fixation-based metrics captured the attention and the focus of the decision maker's pupil on the data representation. If a representation made it hard for a user to assess the meaning of a particular representation area, they needed to focus more intently and more frequently to understand the information. In our study, this led to a longer average fixation duration and higher fixation counts. In the instance of the simple-spatial task, users fixated on average 19 ms (378 ms vs. 359 ms) longer (5.3% difference) when assessing information with tables than when assessing information with graphs. At the same time, those same users were forced to make, on average, over 40 (119.53 vs. 159.72) more fixations (33% difference) to arrive at a solution (Fig. 2).

To enrich our understanding of the performance associated with the impact of fit and cognitive effort, we conducted a post-hoc analysis and found that the advantage of using graphs for the simple-spatial task does result in significantly quicker decision time. The speed improvement did not impact decision accuracy, nor did it impact users' confidence in the accuracy of their answers (Table 9). The simplicity of the task used may explain the relatively high accuracy (0.94 vs. 0.84) regardless of the representation.

In the context of the simple-symbolic task, users exhibited 31 ms (384 ms vs. 415 ms) longer fixation durations (8% difference) when using graphs compared to fixation durations using tables. At the same time, those same users were forced to make, on average, about 28 (133.66 vs. 105.59) more fixations (27% difference) to reach a solution (Fig. 3).

In relative contrast to our study, another study [24] found no support for an impact on decision performance (time) for simple-symbolic tasks, indirectly contradicting the CFT-assumed role of cognitive effort. The author suggested that subjects might have believed that graphical data representation gave them no chance to arrive at an optimal solution, thus resorting to guesstimating and solving the task quicker and with less effort but with less accuracy. We were unable to support this explanation, as our subjects showed the difference in our measures of cognitive effort as predicted by CFT. Furthermore, our post-hoc analysis revealed that subjects spent less time, were more accurate, and were more confident in their answers when solving simple-symbolic tasks with tabular data representation (Table 10).
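As a quick arithmetic check, the relative differences quoted above for the simple-spatial task follow directly from the cell means in Tables 2 and 6 (the choice of the graph condition as the base for the percentages is our reading of the text, not something the paper states explicitly).

```python
# Worked check of the relative differences quoted for the simple-spatial task,
# using the cell means from Tables 2 and 6 (durations in seconds, raw counts).

cefd_table, cefd_graph = 0.37765, 0.35863      # Cells 1 and 2, average fixation duration
cefc_table, cefc_graph = 159.72, 119.53        # Cells 1 and 2, fixation count

print(f"duration gap: {(cefd_table - cefd_graph) * 1000:.0f} ms "
      f"({(cefd_table / cefd_graph - 1) * 100:.1f}% longer with tables)")
print(f"fixation gap: {cefc_table - cefc_graph:.2f} "
      f"({(cefc_table / cefc_graph - 1) * 100:.1f}% more with tables)")
```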


Table 7
Pairwise Comparisons (CEFC).

Task Complexity | Task Type | (I) Cell | (J) Cell | Mean Diff. (I−J) | Std. Error | Sig. (a) | 95% CI Lower (a) | 95% CI Upper (a)
Simple | Spatial | 2 | 1 | −40.187* | 10.364 | 0.001 | −61.326 | −19.049
Simple | Symbolic | 3 | 4 | −28.062* | 11.240 | 0.018 | −50.987 | −5.138
Complex | Spatial | 6 | 5 | −101.031* | 30.282 | 0.002 | −162.792 | −39.271
Complex | Symbolic | 7 | 8 | 19.500 | 25.530 | 0.451 | −32.568 | 71.568

* The mean difference is significant at the 0.05 level.
(a) Adjustment for multiple comparisons: LSD.

Table 8
H1b–H4b Summary (CEFC).

Hypotheses 1b–4b | Cell Mean Diff. in CEFC | Findings
H1b: For Simple-symbolic tasks, information representation in table format results in lower eye fixation count than in graph formats. | −40.188*** | Supported
H2b: For Simple-spatial tasks, information representation in graph format results in lower eye fixation count than in table formats. | −28.063** | Supported
H3b: For Complex-symbolic tasks, information representation in table format results in lower eye fixation count than in graph formats. | −101.031*** | Supported
H4b: For Complex-spatial tasks, information representation in graph format results in lower eye fixation count than in table formats. | 19.500 | Not Supported

** Sig < 0.05.
*** Sig < 0.01.

Table 9
Simple-spatial task supplemental analysis.

Simple-Spatial Task | Graph (Fit) | Table (Lack of fit) | Mean Difference | Sig.
Hypothesized:
Cognitive effort - FD | 0.38446 | 0.41528 | −0.031* | 0.002
Cognitive effort - FC | 119.53 | 159.72 | −40.187* | 0.002
Supplemental analysis:
Time | 48.056 | 75.697 | −27.641* | <0.001
Accuracy | 0.94 | 0.84 | 0.094 | 0.184
Accuracy Confidence | 6.094 | 6.281 | −0.187 | 0.311

Fig. 2. Assessing cognitive effort in simple-spatial task.

While it is possible that for some tasks users may decide to optimize and therefore exert levels of effort not in line with CFT, we caution against making this assumption for simple tasks without considering a non-perceptual assessment of cognitive effort or exploring factors other than effort. However, it becomes difficult to correctly assess any other factor's impact if one fails to measure cognitive effort.

The results for simple tasks are well aligned with the existing CFT theoretical assumptions. Our results indicate strong support for the notion that, in simple tasks, when users are provided with graphical representations while performing symbolic tasks or tabular representations while performing spatial tasks (i.e., lack of fit), they experience an increase in cognitive effort relative to when representations match the mental model used to solve the problem. However, what is novel and revealing in this study is the ability to detect the underlying cognitive effort mechanism in the context of simple tasks. One previous study [3] that attempted to detect cognitive effort as a perceptual measure using survey questions was unable to identify a meaningful change in subjects' cognitive effort. That study suggested that cognitive fit's weak impact on cognitive effort perception may have arisen because of the simplicity of the tasks. Hence, when tasks are too simple, users may not consciously recognize their cognitive decision-making process [29], or they may be unable to accurately self-report on cognitive processes such as cognitive effort [30]. Our study, however, suggests that even in these simple tasks, the CFT assumption of changes in cognitive effort is validated but should be assessed using physiological rather than perceptual measures.

6.2. Complex tasks

For complex tasks, the findings offer less clarity. For the complex-spatial task, users exhibited 15 ms (376 ms vs. 391 ms) shorter fixation durations (3.8%) when using graphs than when using tables, while at the same time exhibiting on average about 101 (470.63 vs. 571.66) fewer (17.8%) fixations (Fig. 4).


Fig. 3. Assessing cognitive effort in simple-symbolic task.

As with the simple-spatial task, and as suggested by CFT, during the complex-spatial tasks users demonstrated less effort when using graphical representations.

Our post-hoc analysis also found that the hypothesized impact on cognitive effort was associated with a difference in decision speed but with no difference in decision accuracy and confidence (Table 11). This is further evidence that the association between cognitive fit, cognitive effort, and decision accuracy for complex tasks requires continued research.

However, for the complex-symbolic tasks, we found that task-representation fit had no significant implication on cognitive effort. In the existing literature, we see examples of both similar and contradicting results when it comes to empirical studies of more complex, symbolic tasks. The only other study that measured cognitive effort directly in this context was able to detect the impact of cognitive fit on effort using a perceptually measured cognitive effort [3]. This divergence provides further evidence for the need to understand the factors driving the potential difference between the perception of effort and its physiological indication.

However, our results for the complex-symbolic task align with an explanation that users, when facing complex-symbolic problems with graphical representations, had no direct ability to calculate precise values and, at some point, potentially recognized no value in further evaluation.

Table 10
Simple-symbolic Task Supplemental Analysis.

Simple-Symbolic Task | Table (Fit) | Graph (Lack of fit) | Mean Difference | Sig.
Hypothesized:
Cognitive effort - FD | 0.38446 | 0.41528 | −0.019* | 0.002
Cognitive effort - FC | 105.59 | 133.66 | −28.062* | 0.018
Supplemental analysis:
Time | 60.963 | 47.459 | −13.503* | 0.013
Accuracy | 1.00 | 0.59 | −0.406* | <0.001
Accuracy Confidence | 6.41 | 5.25 | 1.156* | 0.001

Table 11
Complex-spatial Task Supplemental Analysis.

Complex-Spatial Task | Graphs (Fit) | Tables (Lack of fit) | Mean Difference | Sig.
Hypothesized:
Cognitive effort - FD | 0.37588 | 0.39070 | −0.015* | <0.001
Cognitive effort - FC | 470.63 | 571.66 | −101.031* | 0.002
Supplemental analysis:
Time (Sec) | 205.863 | 251.525 | −45.662* | 0.001
Accuracy (0–1) | 0.72 | 0.59 | 0.125 | 0.292
Accuracy Confidence (1–7) | 5.50 | 5.38 | 0.125 | 0.635

Fig. 4. Assessing cognitive effort in complex-spatial task.


This notion has been noted in consumer research, where consumers may avoid a particular choice selection process because it requires significant effort and opt to use an easier process instead (Cooper-Martin, 1994). Speier (2006) offered a related interpretation by introducing a complexity framework suggesting that tasks could be evaluated across a continuum, from low to high experienced complexity. This experienced complexity results from objective complexity [54], user characteristics, and the decision aid [55]. In the experienced complexity framework, tasks are sequentially categorized into simple (trivial, low demand), feasibly-solvable, trade-off, and limiting tasks based on information cues and the information processing strategies (perceptual, analytical) most likely to be deployed by the user. According to this framework, the complex-symbolic task used in our research (along with our subjects' characteristics and the provided representation) resulted in experienced complexity most accurately described as a 'trade-off' task. In trade-off tasks, users feel that additional effort will not result in adequate improvement in performance; hence they may decide to trade off lower decision performance for lower effort. Our post-hoc analysis confirms this potential explanation, as subjects, when using the graphical representation, appear to have traded lower accuracy for not spending additional effort. When using the tabular representation, subjects decided to continue expending cognitive effort that would be rewarded with higher accuracy (Table 12 – see Accuracy). Furthermore, our subjects appear to be willing to engage in this trade-off evaluation and choice, as the difference in accuracy is accompanied by a difference in the reported confidence in decision accuracy (Table 12 – see Accuracy Confidence).

A summary of the findings from our supplemental analysis indicating the impact of cognitive fit on cognitive effort and the reported post-hoc findings for decision performance (time, accuracy, and confidence) across tasks is shown in Table 13. Fit for both spatial tasks had no impact on accuracy. On the other hand, results for symbolic tasks indicate a greater role of cognitive fit in decision accuracy. For simple tasks, the association with accuracy is achieved without a trade-off with cognitive effort, while for the complex task, we find an indication of the said trade-off.

7. Implications, limitations and next steps

There are important implications, questions, and next steps emerging from this research. We identified a consensus around the role of cognitive effort in the context of the visual representation of data. Cognitive effort is the underlying mechanism and one of the key ingredients that makes user response prediction and visual representation recommendation choices possible. Yet IS literature that depends on a deeper understanding of that cognitive effort is scarce, mostly theoretical, and lacking in its direct assessment. We responded to this challenge by conducting the first study to assess, in a controlled environment and through the CFT lens, how task type and representation match impact the physiological assessment of effort. We provided evidence that eye-tracking technology, specifically fixation-based metrics, can be used in the understanding of cognitive processes initiated by our data representation choices. Our findings provide the research community with a challenge and opportunity to consider physiological sensors to assess the cognitive effort mechanism that is central to CFT. We focused on fixation-based eye-tracking measures as they reveal an important visual attention component of user interaction that remains the predominant approach to assessing cognitive processes and effort. In addition to continuing to measure both perceptual/experienced and eye-tracking-based cognitive effort (including pupillometry [56,57]), we encourage others in CFT research to consider additional physiological and neurological sensors and related data that can be useful in expanding our understanding and assessment of cognitive effort (GSR devices, facial expression detection, EEG, fMRI).

Our findings raise an important question regarding possible differences between cognitive effort perception, physiological indicators of cognitive effort, and the theoretical inferences researchers make about the cognitive effort mechanism in the absence of measurement. For example, studies in a similar context suggested that users are unable to perceive cognitive effort in simple tasks [3] or that even in simple yet non-trivial/feasibly-solvable tasks, the performance benefit of information representation and task fit may disappear [24] as users switch from analytical to perceptual processing strategies [26,58]. Our findings suggest that such theoretical interpretations are possible but potentially premature until we understand the cognitive effort phenomenon better. One way to achieve that better understanding is to measure cognitive effort more directly instead of only focusing on the interpretation of task performance as the indicator of effort. The ability to detect the change in cognitive effort physiologically, even in simple tasks, and perceptually in more complex tasks suggests that focusing on the theoretical interpretation of effort based on decision performance may not adequately describe the effort placed on users. Hence, we suggest a need to revisit interpretations others have made in CFT empirical work about the cognitive effort mechanism, especially when interpreting unsupported hypotheses. Replication studies that include a direct assessment of cognitive effort may shine a new light on a number of CFT-based studies, offering potential new insights and alternate explanations.

Next, while the condition of cognitive fit induced a similar impact on both fixation duration and count in our experimental setting, this may not be the case in other contexts. For example, in certain scenarios, a manifestation of cognitive effort may primarily stem from the user's confusion or distraction and not from a need to explore the visualization deeply [10]. In those instances, such as more novel or bespoke data displays, the user may primarily focus on assessing how to 'read' the display. In such scenarios, the user's effort may be mainly captured through a higher fixation count. Alternatively, cognitive effort may be driven by a necessity for deeper or more purposeful attention. In those instances, the user's cognitive effort may manifest through longer fixation duration and less so through fixation count. While the user is required to expend more cognitive effort to solve the task in both instances, our ability to assess cognitive fit's mechanism (effort) is enabled by tracking both fixation duration and fixation count.

Table 12
Complex-symbolic Task Supplemental Analysis.

Complex-Symbolic Task | Tables (Fit) | Graphs (Lack of fit) | Mean Difference | Sig.
Hypothesized:
Cognitive effort - FD | 0.36835 | 0.35937 | 0.009 | 0.314
Cognitive effort - FC | 306.13 | 286.63 | 19.500 | 0.451
Supplemental analysis:
Time (Sec) | 139.478 | 122.566 | 16.912 | 0.165
Accuracy (0–1) | 0.53 | 0.06 | −0.469* | <0.001
Accuracy Confidence (1–7) | 5.78 | 5.13 | 0.656* | 0.008

Table 13
Cognitive Fit and Effort Implications Summary.

 | Simple-Spatial | Simple-Symbolic | Complex-Spatial | Complex-Symbolic
Hypothesized:
Cognitive effort - FD | X | X | X | 
Cognitive effort - FC | X | X | X | 
Supplemental analysis:
Time (Sec) | X | X | X | 
Accuracy (0–1) | | X | | X
Accuracy Confidence (1–7) | | X | | X


by tracking both fixation duration and fixation count. relationships, experiments using more complicated data gathering
Additionally, we provide further evidence that understanding and predicting cognitive effort using CFT under more complex tasks is difficult and may challenge the bounds of CFT. Our study appears to confirm the notion of a potential cross-over point induced by increased task complexity, and of data representation's ability to mitigate it [24]. Although more research and evidence are needed to understand user impact when solving more complex tasks under CFT, our study suggests that cognitive fit's hypothesized and perceived benefit may disappear once users reach that cross-over point. In the case of the complex-spatial task, our subjects did not reach that point; thus, they continued to benefit from the lower effort induced by representing data in a spatial format (graphs), i.e., cognitive fit. In the case of the complex-symbolic task, however, the task may no longer be considered feasibly solvable (where decision-makers can fully deploy their analytical and perceptual processes [59]) even when information is presented in a symbolic format. Instead, our complex-symbolic task becomes a trade-off task in which users decide to satisfice and limit their effort even if that may lead to lower accuracy [32]. In the context of this possible explanation, our study offers an important contribution as it is the first study in the CFT context to empirically capture effort-limiting approaches by focusing on the effort itself rather than indirectly implying it by evaluating decision performance. Although we did not hypothesize this optimizing relationship for the complex-symbolic task, existing theoretical work on effort optimization [32,34], task complexity [59], CFT research focused on complex tasks [3,24–26], and our research findings should provide sufficient support to test this explanation more formally in future studies.

Our results and implications should be evaluated in the context of their limitations. We used a controlled experiment with student subjects, who often do not represent the data visualization user population and may not be motivated to perform the experimental tasks. We attempted to mitigate those issues by including students (i) with a diverse range of academic and professional interests and (ii) from both undergraduate and graduate-level education, and by (iii) offering a monetary award for participation. Although most of our subjects are data visualization users or represent those users well, there is great value in further studies that target additional population segments with more professional experience, design experience, domain knowledge, and decision-making responsibility. Our choice of task and data representation was purposeful, incorporating tasks and data representations that are foundational in existing CFT research and typical in current practice. In our definition of task complexity, we adopted Wood's task framework and the resulting CFT task complexity operationalization literature to conceptualize two task levels. Defining and classifying task complexity is a difficult endeavor, and other operationalizations of task complexity are possible and may yield different results. Therefore, we encourage future research incorporating cognitive assessment across other tasks and representations. Although our research detected most hypothesized relationships, methods such as eye tracking traditionally include fewer subjects. We attempted to mitigate the issue by designing a fully randomized, within-subject experiment that includes roughly double the number of subjects used in a typical eye-tracking study [60]. Future research may benefit from including more subjects to allow for more complex models and relationships. While we adopted a 200 ms threshold in our eye-fixation operationalization based on existing literature, results may vary with alternative thresholds (an illustrative sketch of such a sensitivity check follows the data availability statement below). Lastly, the results should be evaluated in the context of lower-than-expected decision accuracy for spatial tasks. This underscores the importance of measuring cognitive fit, cognitive effort, and performance together to understand the underlying relationship.

8. Conclusion

In this study, we identify and elevate the need for CFT research to focus on advancing our understanding and assessment of cognitive effort. Recognizing the difficulty in measuring cognitive effort, we offer the potential of eye-tracking technology to assess cognitive effort physiologically in the CFT context. Several important implications for phenomenon measurement, theory and empirical research, and practice emerged from our study: (i) fixation-based metrics can be useful in advancing our understanding of cognitive fit and its cognitive effort mechanism, (ii) physiological indicators of cognitive effort can provide insights that may be missed when using only perceptual measures or a theoretical-only discussion of cognitive effort, and (iii) while additional evidence is still needed, more complex tasks may push the boundary of CFT's predictive power using traditional performance measures (time and accuracy), requiring the ability to incorporate multiple measures of cognitive effort, along with other variables and theoretical lenses.

Author credit statement

We declare that this manuscript has not been published previously and that it is not under consideration for publication elsewhere. In addition, we assure that both authors (listed on the Title Page document and in that particular order) have fully participated in the research and the article preparation, and both of us have approved the submission.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

The authors do not have permission to share data.
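
To illustrate the threshold sensitivity check referenced in the limitations above, the following minimal Python sketch (our illustration only, not the study's analysis pipeline; the column name and sample values are assumptions) recomputes common fixation-based effort metrics under alternative minimum fixation-duration thresholds, including the 200 ms value adopted in this study.

```python
import pandas as pd

def fixation_effort_metrics(fixations: pd.DataFrame, min_duration_ms: float = 200.0) -> dict:
    """Compute simple fixation-based effort metrics for fixations at or above a duration threshold.

    `fixations` is assumed to contain one row per detected fixation with a 'duration_ms' column.
    """
    kept = fixations[fixations["duration_ms"] >= min_duration_ms]
    return {
        "threshold_ms": min_duration_ms,
        "fixation_count": int(len(kept)),
        "mean_fixation_duration_ms": float(kept["duration_ms"].mean()) if len(kept) else float("nan"),
        "total_fixation_duration_ms": float(kept["duration_ms"].sum()),
    }

# Hypothetical fixation durations for one participant-task trial (illustrative values only).
sample = pd.DataFrame({"duration_ms": [95, 120, 180, 210, 260, 340, 400]})

# Re-run the metrics at alternative thresholds to see how sensitive they are to the cutoff.
for threshold in (100, 150, 200, 250):
    print(fixation_effort_metrics(sample, threshold))
```

In practice, such a check would be applied per participant and task before drawing conclusions from fixation counts or durations.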

Appendix A

Fig. A1. Simple-spatial task, Tabular representation.
Fig. A2. Simple-spatial task, Graphical representation.
Fig. A3. Simple-symbolic task, Tabular representation.
Fig. A4. Simple-symbolic task, Graphical representation.
Fig. A5. Complex-spatial task, Tabular representation.
Fig. A6. Complex-spatial task, Graphical representation.
Fig. A7. Complex-symbolic task, Tabular representation.
Fig. A8. Complex-symbolic task, Graphical representation.

References

[1] I. Vessey, Cognitive fit: a theory-based analysis of the graphs versus tables literature, Decis. Sci. 22 (1991) 219–240.
[2] I. Vessey, D. Galletta, Cognitive fit: an empirical study of information acquisition, Inf. Syst. Res. 2 (1991) 63–84.
[3] D. Bačić, R.M. Henry, Task-representation fit's impact on cognitive effort in the context of decision timeliness and accuracy: a cognitive fit perspective, AIS Trans. Human-Computer Interact. (2018) 164–187.
[4] D. Bačić, A. Fadlalla, Business information visualization intellectual contributions: an integrative framework of visualization capabilities and dimensions of visual intelligence, Decis. Support. Syst. 89 (2016) 77–86.
[5] M.A. Just, P.A. Carpenter, Eye fixations and cognitive processes, Cogn. Psychol. 8 (1976) 441–480, https://doi.org/10.1016/0010-0285(76)90015-3.
[6] K. Rayner, Eye movements in reading and information processing: 20 years of research, Psychol. Bull. 124 (1998) 372–422, https://doi.org/10.1037/0033-2909.124.3.372.
[7] D. Bačić, Biometrics and business information visualization: research review, agenda and opportunities, in: F.F.-H. Nah, B.S. Xiao (Eds.), HCI in Business, Government, and Organizations, Springer International Publishing, Cham, 2018, pp. 671–686.
[8] P. Ayres, F. Paas, Cognitive load theory: new directions and challenges, Appl. Cogn. Psychol. 26 (2012) 827–832.
[9] M.K. Eckstein, B. Guerra-Carrillo, A.T.M. Singley, S.A. Bunge, Beyond eye gaze: what else can eyetracking reveal about cognition and cognitive development? Dev. Cogn. Neurosci. 25 (2017) 69–91.
[10] A.T. Duchowski, Eye Tracking Methodology: Theory and Practice, third ed., Springer, 2017.
[11] Q. Wang, S. Yang, M. Liu, Z. Cao, Q. Ma, An eye-tracking study of website complexity from cognitive load perspective, Decis. Support. Syst. 62 (2014) 1–10.
[12] M. Shojaeizadeh, S. Djamasbi, R.C. Paffenroth, A.C. Trapp, Detecting task demand via an eye tracking machine learning system, Decis. Support. Syst. 116 (2019) 91–101.
[13] P.-M. Léger, S. Sénecal, F. Courtemanche, A.O. de Guinea, R. Titah, M. Fredette, É. Labonte-LeMoyne, Precision is in the eye of the beholder: application of eye fixation-related potentials to information systems research, J. Assoc. Inf. Syst. 15 (2014) 651–678.
[14] P. Bera, P. Soffer, J. Parsons, Using eye tracking to expose cognitive processes in understanding conceptual models, MIS Q. 43 (2019) 1–22.
[15] A. Vance, J.L. Jenkins, B.B. Anderson, D.K. Bjornn, C.B. Kirwan, Tuning out security warnings: a longitudinal examination of habituation through fMRI, eye tracking, and field experiments, MIS Q. 42 (2018) 355–A15.
[16] G.W. Dickson, J.A. Senn, N.L. Chervany, Research in management information systems: the Minnesota experiments, Manag. Sci. 23 (1977) 913–934.
[17] V. Khatri, I. Vessey, V. Ramesh, P. Clay, S.-J. Park, Understanding conceptual schemas: exploring the role of application and IS domain knowledge, Inf. Syst. Res. 17 (2006) 81–99.
[18] S. Goswami, H.C. Chan, H.-W. Kim, The role of visualization tools in spreadsheet error correction from a cognitive fit perspective, J. Assoc. Inf. Syst. 9 (2008) 321–343.
[19] Z. Bin, S.A. Watts, Visualization of network concepts: the impact of working memory capacity differences, Inf. Syst. Res. 21 (2010) 327–344.
[20] B.E. Mennecke, M.D. Crossland, B.L. Killingsworth, Is a map more than a picture? The role of SDSS technology, subject characteristics, and problem complexity on map reading and problem solving, MIS Q. 24 (2000) 601–629.
[21] J.S. Giboney, S.A. Brown, P.B. Lowry, J.F. Nunamaker Jr., User acceptance of knowledge-based system recommendations: explanations, arguments, and fit, Decis. Support. Syst. 72 (2015) 1–10.
[22] K.-S. Suh, Y.E. Lee, The effects of virtual reality on consumer learning: an empirical investigation, MIS Q. 29 (2005) 673–697.
[23] X. Liu, K. Werder, A. Maedche, Novice digital service designers' decision-making with decision aids — a comparison of taxonomy and tags, Decis. Support. Syst. 137 (2020) 113367.
[24] C. Speier, The influence of information presentation formats on complex task decision-making performance, Int. J. Hum. Comput. Stud. 64 (2006) 1115–1131.
[25] C. Speier, I. Vessey, J.S. Valacich, The effects of interruptions, task complexity, and information presentation on computer-supported decision-making performance, Decis. Sci. 34 (2003) 771–797.
[26] C. Frownfelter-Lohrke, The effects of differing information presentations of general purpose financial statements on users' decisions, J. Inf. Syst. 12 (1998) 99–107.
[27] A. Chandra, R. Krovi, Representational congruence and information retrieval: towards an extended model of cognitive fit, Decis. Support. Syst. 25 (1999) 271–288.
[28] T.M. Shaft, I. Vessey, The role of cognitive fit in the relationship between software comprehension and modification, MIS Q. 30 (2006) 29–55.
[29] E. Yetgin, M. Jensen, T.M. Shaft, Complacency and intentionality in IT use and continuance, AIS Trans. Human-Computer Interact. 7 (2015) 17–42.
[30] A. Dimoka, I. Benbasat, F.D. Davis, A.R. Dennis, D. Gefen, B. Weber, Issues and opinions on the use of neurophysiological tools in IS research: developing a research agenda, MIS Q. 36 (2012) 679–702.
[31] E. Cooper-Martin, Measures of cognitive effort, Mark. Lett. 5 (1994) 43–56.
[32] E.J. Johnson, J.W. Payne, Effort and accuracy in choice, Manag. Sci. 31 (1985) 395–414.
[33] D. Kahneman, Attention and Effort, Prentice Hall, Englewood Cliffs, NJ, 1973.
[34] D. Navon, D. Gopher, On the economy of the human-processing system, Psychol. Rev. 86 (1979) 214–255.
[35] F. Hutmacher, Why is there so much more research on vision than on any other sensory modality? Front. Psychol. 10 (2019) 2246.
[36] G. Pike, G. Edgar, H. Edgar, Perception, in: N. Braisby, A. Gellatly (Eds.), Cognitive Psychology, Oxford University Press, UK, 2012, pp. 65–69.
[37] A. Glöckner, A.-K. Herbold, An eye-tracking study on information processing in risky decisions: evidence for compensatory strategies based on automatic processes, J. Behav. Decis. Mak. 24 (2011) 71–98.
[38] S. Djamasbi, M. Siegel, J. Skorinko, T. Tullis, Online viewing and aesthetic preferences of generation Y and the baby boom generation: testing user web site experience through eye tracking, Int. J. Electron. Commer. 15 (2011) 121–158.
[39] S. Djamasbi, M. Siegel, T. Tullis, Visual hierarchy and viewing behavior: an eye tracking study, in: J.A. Jacko (Ed.), Human-Computer Interact. Des. Dev. Approaches, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 331–340.
[40] J. He, J.S. McCarley, Executive working memory load does not compromise perceptual processing during visual search: evidence from additive factors analysis, Atten. Percept. Psychophysiol. 72 (2010) 308–316.
[41] M. Hogan, C. Barry, A.M. Torres, An eye tracking study of optional decision constructs in B2C transactional processes, in: 14th Int. Conf. WWW/INTERNET, Maynooth, Ireland, 2015.
[42] R.N. Meghanathan, C. van Leeuwen, A.R. Nikolaev, Fixation duration surpasses pupil size as a measure of memory load in free viewing, Front. Hum. Neurosci. 8 (2015) 1063.
[43] F.-Y. Kuo, C.-W. Hsu, R.-F. Day, An exploratory study of cognitive effort involved in decision under framing — an application of the eye-tracking technology, Decis. Support. Syst. 48 (2009) 81–91.
[44] B. Sharif, J.I. Maletic, An eye tracking study on the effects of layout in understanding the role of design patterns, in: 2010 IEEE Intl. Conf. Softw. Maint., IEEE, 2010, pp. 41–48.
[45] P. Bera, How colors in business dashboards affect users' decision making, Commun. ACM 59 (2016) 50–57.
[46] J. Sweller, Cognitive load during problem solving: effects on learning, Cogn. Sci. 12 (1988) 257–285.
[47] J. Sweller, J.J.G. Van Merrienboer, F.G.W.C. Paas, Cognitive architecture and instructional design, Educ. Psychol. Rev. 10 (1998) 251–296.
[48] F.G.W.C. Paas, Training strategies for attaining transfer of problem-solving skill in statistics: a cognitive-load approach, J. Educ. Psychol. 84 (1992) 429.
[49] S. Martin, Measuring cognitive load and cognition: metrics for technology-enhanced learning, Educ. Res. Eval. 20 (2014) 592–621.
[50] G.S. Hubona, S. Everett, E. Marsh, K. Wauchope, Mental representations of spatial language, Int. J. Hum. Comput. Stud. 48 (1998) 705–728.
[51] B. Adipat, D. Zhang, L. Zhou, The effects of tree-view based presentation adaptation on mobile web browsing, MIS Q. 35 (2011) 99–122.
[52] A.R. Dennis, T.A. Carte, Using geographical information systems for decision making: extending cognitive fit theory to map-based presentations, Inf. Syst. Res. 9 (1998) 194–203.
[53] J.M. Teets, D.P. Tegarden, R.S. Russell, Using cognitive fit theory to evaluate the effectiveness of information visualizations: an example using quality assurance data, IEEE Trans. Vis. Comput. Graph. 16 (2010) 841–853.
[54] R.E. Wood, Task complexity: definition of the construct, Organ. Behav. Hum. Decis. Process. 37 (1986) 60–82.
[55] P. Todd, I. Benbasat, The use of information in decision making: an experimental investigation of the impact of computer-based decision aids, MIS Q. 16 (1992) 373–393.
[56] A.T. Duchowski, C. Biele, A. Niedzielska, K. Krejtz, I. Krejtz, P. Kiefer, M. Raubal, I. Giannopoulos, The index of pupillary activity: measuring cognitive load vis-à-vis task difficulty with pupil oscillation, in: Conf. Hum. Factors Comput. Syst. Proc., 2018, pp. 1–13.
[57] M. Shojaeizadeh, S. Djamasbi, A.C. Trapp, Does pupillary data differ during fixations and saccades? Does it carry information about task demand?, in: Thirteenth Annu. Workshop HCI Res. MIS, 2015.
[58] I. Vessey, The effect of information presentation on decision making: a cost-benefit analysis, Inf. Manag. 27 (1994) 103–119.
[59] L. Paquette, T. Kida, The effect of decision strategy and task complexity on decision performance, Organ. Behav. Hum. Decis. Process. 41 (1988) 128–142.
[60] K. Caine, Local standards for sample size at CHI, in: Proc. 2016 CHI Conf. Hum. Factors Comput. Syst., Association for Computing Machinery, New York, NY, USA, 2016, pp. 981–992.

Dinko Bačić is an Assistant Professor of Information Systems and the founder of the UX & Biometrics lab in the Quinlan School of Business at Loyola University Chicago. He holds a DBA degree in Information Systems from Cleveland State University. His research interests include information visualization, human-computer interaction, biometrics, cognition, neuroIS, business intelligence & analytics, and pedagogy. He has papers published in premier journals such as Decision Support Systems, Communications of the Association for Information Systems, AIS Transactions on Human-Computer Interaction, Springer Computer Science Lecture Notes, and Leonardo, among others. He has over fifteen years of corporate and consulting experience in business intelligence, finance, project management, and human resources.

Raymond M. Henry is Professor of Information Systems and Associate Dean in the Monte Ahuja College of Business at Cleveland State University. He received his PhD in Information Systems from the University of Pittsburgh. His research explores topics related to information systems, human-computer interaction, knowledge management, and supply chain management. His work has been published in premier journals including Information Systems Research, Journal of Management Information Systems, Journal of the Association of Information Systems, and Journal of Operations Management, among others.