
What you read vs. what you know: a methodologically diverse approach to unraveling the neurocognitive architecture of text-based and knowledge-based validation processes during reading

Moort, M.L. van

Citation
Moort, M. L. van. (2022, March 3). What you read vs. what you know: a methodologically diverse approach to unraveling the neurocognitive architecture of text-based and knowledge-based validation processes during reading. Retrieved from https://hdl.handle.net/1887/3278025

Version: Publisher's Version
License: Licence agreement concerning inclusion of doctoral thesis in the Institutional Repository of the University of Leiden
Downloaded from: https://hdl.handle.net/1887/3278025

Note: To cite this publication please use the final published version (if applicable).
What you read vs. what you know

A METHODOLOGICALLY DIVERSE APPROACH TO UNRAVELING THE NEUROCOGNITIVE ARCHITECTURE OF TEXT-BASED AND KNOWLEDGE-BASED VALIDATION PROCESSES DURING READING

Marloes van Moort


What you read vs. what you know:
A methodologically diverse approach to unraveling the neurocognitive architecture of text-based and knowledge-based validation processes during reading

Marloes van Moort


ISBN: 978-94-6421-642-4

Cover design: ©evelienjagtman.com

Printed by: Ipskamp Printing | proefschriften.net

Copyright © M. L. van Moort, 2022

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, mechanically, by photocopy, by recording, or otherwise, without prior written permission of the author or, when appropriate, from the publishers of the publications.
What you read vs. what you know:
A methodologically diverse approach to unraveling the neurocognitive architecture of text-based and knowledge-based validation processes during reading

Doctoral Thesis

to obtain the degree of Doctor at Leiden University,
by authority of the Rector Magnificus, Prof.dr.ir. H. Bijl,
in accordance with the decision of the Doctorate Board,
to be defended on Thursday 3 March 2022 at 11:15

by

Marianne Louise van Moort

born in Zoetermeer
on 23 January 1989
Supervisor:
Prof.dr. P. van den Broek

Co-supervisor:
Dr. A.W. Koornneef

Doctorate Committee:
Prof.dr. L. R. A. Alink (Scientific Director, Institute of Education and Child Studies / Chair)
Prof.dr. S. T. Nieuwenhuis
Prof.dr. P. C. J. Segers (Radboud University)
Prof.dr. J. A. L. Hoeken (Utrecht University)
Prof.dr. T. Richter (University of Würzburg)
Table of Contents

Chapter 1  General Introduction
Chapter 2  Validation: Knowledge- and text-based monitoring during reading
Chapter 3  What you read vs what you know: Neural correlates of accessing context information and background knowledge in constructing a mental representation during reading
           Supplementary Materials
Chapter 4  Differentiating text-based and knowledge-based validation processes during reading: Evidence from eye movements
Chapter 5  Purposeful validation: Are validation processes and the construction of a mental representation influenced by reading goal?
Chapter 6  Summary and General Discussion
Nederlandse Samenvatting (Summary in Dutch)
References
Dankwoord (Acknowledgements in Dutch)
Curriculum Vitae
List of Publications


1
General Introduction
Introduction
Discourse allows us to exchange meaning in a way that many consider
to be fundamentally human or, as Graesser, Millis and Zwaan put it so eloquently,
“Discourse is fundamental. It is what makes us human, what allows us to communicate
ideas, facts, and feelings across time and space.” (Graesser et al., 1997, p. 164). To
comprehend discourse and, more generally, to comprehend the world around us, we
continuously build mental representations in which we integrate the current input with
our existing knowledge base, for example when we read a book, watch a movie or
have a conversation. Building this representation is a dynamic process; the emerging
representation must be monitored and updated continuously as new information is
encountered (e.g., Graesser et al., 1994; Kintsch & van Dijk, 1978; Trabasso et al.,
1984; van den Broek et al., 1999). An essential aspect of building such a mental
representation is that comprehenders routinely monitor the extent to which incoming
information is both coherent and accurate – a process called validation (e.g., Isberner
& Richter, 2014a; O’Brien & Cook, 2016a; Richter & Rapp, 2014; Singer, 2013, 2019;
Singer et al., 1992; Singer & Doering, 2014). Validation processes function as a
gatekeeper for the quality and coherence of the mental representation: Only
information that is successfully validated is integrated into the mental representation.
Thus, by validating incoming information readers establish coherence during
comprehension and protect the emerging mental representation against inaccuracies
or incongruencies (e.g., O’Brien & Cook, 2016a, 2016b; Richter & Rapp, 2014; Singer,
2013, 2019; Singer et al., 1992).
The studies described in this thesis focus on validation processes in the
context of reading comprehension. The rise of digital technology allows us
unprecedented access to (textual) information. This provides excellent opportunities
to acquire new knowledge, but also requires a much more vigilant, knowledgeable
reader: Anyone can put information on the internet; as a result, the texts available online
vary not only in linguistic quality, but also in accuracy and trustworthiness. In light of
these developments, it is important that we understand how readers validate (written)
materials against various sources of information. Current theoretical frameworks
propose a rudimentary cognitive architecture for validation processes, but they do not
provide detailed information on when and how different sources of information, such
as recently acquired knowledge (from the text) and readers’ background knowledge
(from memory), exert their influence. As a result, it is unclear whether these two
sources influence validation in essentially the same or in distinct ways and, hence,
whether they should be distinguished in theoretical models.

To address this issue, this dissertation examined the (neuro)cognitive
architecture of the processes involved in validating against prior text (i.e., text-based
validation) and validating against background knowledge (i.e., knowledge-based
validation) and how these processes affect the long(er)-term memory representation.
To this end, I employed research methods that tap into readers’ moment-by-moment
processing, including self-paced reading, functional magnetic resonance imaging
(fMRI) and eye-tracking, and methods that tap into the offline memory representation
to gain insight into the effects of these processes on long(er)-term memory for textual
information. In addition, as validation processes evoked by the same text may differ
from reader to reader or - for the same reader - across reading situations, the studies
in this dissertation examined to what extent text-based and knowledge-based
validation processes were modulated by individual differences in readers’ purpose for
reading a text, their available processing capacity and the novelty of the presented
text information to the reader. The remainder of this introduction will be divided into
four sections. First, I will provide a brief overview of the cognitive processes involved
in building and validating mental representations, with an emphasis on the effects of
contextual information and background knowledge on validation processes. Then I
will discuss potential effects of reading goals, processing capacity and novelty of the
presented information on validation processes, followed by an introduction of the
experimental methods used in this dissertation. To conclude, I will provide a brief
outline of this dissertation.

What it takes to comprehend – Building and validating mental representations

Reading comprehension is among the most complex of human activities. Not
because everything we read is either profound or incomprehensible, but because
reading comprehension is intertwined with many cognitive functions and processes,
including memory, perception, affective processing, problem solving and reasoning
(e.g., Graesser et al., 1997). Understanding written text is an amazing feat. Readers
engage in a myriad of cognitive processes at various linguistic levels with every
sentence they read: They recognize words, use knowledge on syntax and grammar
to combine them into sentences and integrate the meaning across sentences to form
a coherent understanding of the text as a whole (Perfetti & Frishkoff, 2008). But
comprehending written text is more than simply combining words and sentences, as
comprehension requires readers to connect the individual elements of the text to each
other and to the readers’ relevant background knowledge by meaningful relations
(Graesser et al., 1994; van den Broek, 1988; Zwaan & Singer, 2003). These relations
are the result of various passive and strategic processes that take place as a reader

moves through a text. For example, readers activate meaning from long-term
memory (Kintsch, 1988; McNamara & McDaniel, 2004; Rizzella & O’Brien, 2002),
make inferences, connect newly read elements of the text with other text elements
and with relevant background knowledge, and validate message consistency (e.g.,
Isberner & Richter, 2014a; Nieuwland & Kuperberg, 2008; O’Brien & Cook, 2016a;
Schroeder et al., 2008; Singer, 2013; Singer et al., 1992). These processes that unfold
during moment-by-moment comprehension provide the basis for the product of
comprehension: a memory representation of the text as a whole (i.e., the situation
model or mental representation) that can be accessed after reading (e.g., Goldman &
Varma, 1995; Kintsch, 1988; Kintsch & van Dijk, 1978; Trabasso & Suh, 1993; van den
Broek et al., 1999; Zwaan & Radvansky, 1998; Zwaan & Singer, 2003). This mental
representation can be thought of as a conceptual network of text elements that are
connected to each other and to relevant background knowledge (Kintsch, 1988;
O’Brien et al., 2015; Trabasso et al., 1984; van den Broek, 1994). This interconnected
representation goes beyond the meaning of individual words or sentences: It does not
represent the text itself (i.e., the words, sentences and paragraphs in the text), but
rather what the text is about (Kintsch, 1998; van Dijk & Kintsch, 1983; Zwaan &
Radvansky, 1998).
To construct such mental representations readers engage in a dynamic
pattern of passive and reader-initiated processes as they proceed through a text (e.g.,
Graesser et al., 1994; Kintsch, 1988; Kintsch & van Dijk, 1978; McNamara & Magliano,
2009; O’Brien & Myers, 1999; Trabasso et al., 1984; van den Broek et al., 1999; van
den Broek & Helder, 2017; van Dijk & Kintsch, 1983). Generally, passive processes
are conceived as associative processes by which information in the current text
element activates concepts from the reader’s memory for the preceding sentences
and from their general background knowledge (e.g., Anderson, 1983; McKoon &
Ratcliff, 1992; Myers & O’Brien, 1998; O’Brien et al., 1995; O’Brien & Myers, 1999;
van den Broek & Helder, 2017). If these passive processes yield sufficient coherence,
reader-initiated processes are not needed. However, if the results of these passive
processes do not meet the reader’s desired level of understanding (i.e., their
standards of coherence; van den Broek et al., 2011, 2015; van den Broek et al., 1995),
reader-initiated processes take place. Similar to the passive processes, reader-
initiated processes operate on both background knowledge and the memory
representation of the preceding text. However, whereas passive processes are
outside the reader’s conscious control, nonselective, and unrestricted in the kind of
information they return (e.g., Anderson, 1983; McKoon & Ratcliff, 1992; Myers &
O’Brien, 1998; O’Brien & Myers, 1999), reader-initiated processes require control and
attentional resources on the part of the reader. Therefore, reader-initiated processes
do not always take place but if they do, they can lead to comprehension beyond that

resulting from the passive processes alone (e.g., Graesser et al., 1994; McNamara &
McDaniel, 2004; Singer et al., 1994; van den Broek & Helder, 2017).
To establish coherence during comprehension and protect the emerging
mental representation against inaccuracies or incongruencies readers routinely
monitor the consistency of incoming linguistic information with previous text
information and their own background knowledge – a process called validation (e.g.,
Cook & O’Brien, 2014; Isberner & Richter, 2014a; O’Brien & Cook, 2016a; Richter &
Rapp, 2014; Singer, 2013; Singer et al., 1992). In describing the cognitive architecture
of validation, theoretical models assume distinct components of validation: a
coherence-detection component and a post-detection processing component (Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). The coherence-detection component, involved in detecting
(in)consistencies, is the main focus of the RI-Val model of comprehension (Cook &
O’Brien, 2014; O’Brien & Cook, 2016a, 2016b). In this model, validation is described
as one of three processing stages – resonance, integration, and validation – that
comprise comprehension. According to the model, incoming information activates
related information from long-term memory via a low-level passive resonance
mechanism (Myers & O’Brien, 1998; O’Brien & Myers, 1999). This activated
information then is integrated with the contents of working memory and these linkages
made during the integration stage are validated against information in memory that is
readily available to the reader (i.e., information that either already is part of working
memory or easily can be made available from long-term memory) in a single, passive
pattern-matching process (e.g., McKoon & Ratcliff, 1995; Myers & O’Brien, 1998;
O’Brien & Albrecht, 1992). These contents of active memory include both portions of
the episodic representation of the text (i.e., context) and general world knowledge. In
addition, the model includes a coherence threshold: a point at which processing is
deemed ‘good enough’ for the reader to move on in a text. This threshold is assumed
to be flexible: readers may wait for more or less information to accrue before moving
on in the text depending on variables associated with the reader, the task and the text
(O’Brien & Cook, 2016b). The three processes are assumed to have an asynchronous
onset and to run to completion over time, regardless of whether the reader has moved
on in the text (i.e., reached their coherence threshold).
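The RI-Val model's timing assumptions – asynchronous onsets, processes that run to completion, and a flexible coherence threshold – can be caricatured in a toy simulation. The sketch below is purely illustrative and is not a computational model proposed by the RI-Val authors; the function name, onset times, accrual rates, and threshold value are all invented for the example.

```python
# Toy sketch of RI-Val's timing assumptions (illustrative only; the
# onsets, rates, and threshold below are invented, not model parameters).

def rival_timecourse(threshold=0.5, steps=30):
    # each process: (onset time step, completion accrued per step);
    # the staggered onsets capture the model's asynchronous-onset claim
    processes = {"resonance": (0, 0.25),
                 "integration": (3, 0.125),
                 "validation": (6, 0.0625)}
    completion = {name: 0.0 for name in processes}
    moved_on_at = None
    for t in range(steps):
        for name, (onset, rate) in processes.items():
            if t >= onset:
                completion[name] = min(1.0, completion[name] + rate)
        # the reader moves on once validation output reaches the
        # (flexible) coherence threshold ...
        if moved_on_at is None and completion["validation"] >= threshold:
            moved_on_at = t
        # ... but all three processes keep accruing toward completion
    return completion, moved_on_at

completion, moved_on_at = rival_timecourse()
# all processes eventually run to completion, even though the reader
# moved on at an earlier time step
```

Lowering the threshold (e.g., `rival_timecourse(threshold=0.25)`) makes the simulated reader move on sooner, mirroring the model's claim that readers may wait for more or less information to accrue before moving on in the text.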
Once detected, inconsistencies may trigger further processing. Such post-
detection processes include possible efforts to repair coherence triggered by the
detection of the inconsistency as elaborated in a second validation model, the two-
step model of validation (Isberner & Richter, 2014a; Richter, 2015; Schroeder et al.,
2008). In this model, validation is described as consisting of two components:
(1) epistemic monitoring (i.e., detecting inconsistencies) during a comprehension
stage, followed by (2) optional epistemic elaboration processes (e.g., resolving
inconsistencies) during an evaluative stage (e.g., Isberner & Richter, 2014a; Richter,

2011; Richter et al., 2009; Schroeder et al., 2008). According to this model the initial
detection of inconsistencies (i.e., epistemic monitoring) is a routine part of
comprehension. Similar to the RI–Val model, these detection processes are memory-
based, pose little demands on cognitive resources, and are not dependent on readers’
goals (Richter et al., 2009). If an inconsistency is detected, readers may initiate
epistemic elaboration processes to resolve the inconsistency. Such elaboration
processes may take place during reading (e.g., generating elaborative and bridging
inferences to establish hypothetical truth conditions) or after reading of a text is
completed (e.g., searching for evidence that could support dubious information).
These processes only occur when readers are motivated and have enough cognitive
resources available, as these processes are assumed to be slow, resource-
demanding and under strategic control of the reader (Maier & Richter, 2013; Richter,
2015).
Theoretical accounts emphasize that validation processes function as a
gatekeeper for the quality of the mental representation of a text and as such assume
a close relation between validation processes and what readers remember from a
text (e.g., Isberner & Richter, 2014a; Singer, 2006, 2019). Successful validation
(i.e., information is deemed congruent and accurate) results in the integration of
incoming information into the emerging mental representation and increases the
likelihood that it will be encoded in readers’ long-term memory (Schroeder et al.,
2008; Singer, 2013, 2019). When validation fails (i.e., incoming information is deemed
inaccurate or incongruent), integration of the incoming information into the reader’s
mental representation and long-term memory also fails – making this information
harder to remember.
As described above, current theoretical frameworks offer a time course and
a rudimentary cognitive architecture for validation processes. Furthermore, they
generally agree that incoming information is routinely validated against a reader’s
evolving situation model of a text (e.g., Isberner & Richter, 2014a; Nieuwland &
Kuperberg, 2008; O’Brien & Cook, 2016a, 2016b; Schroeder et al., 2008; Singer,
2006, 2019; Singer et al., 1992). Because a situation model comprises both textual
and world knowledge information, most accounts assume that both sources can affect
validation processes, yet few accounts make an explicit distinction between these
sources in their depiction of the cognitive architecture of validation. As a result, it is
unclear whether validating against background knowledge and validating against prior
text involve a common mechanism or (partially) separate mechanisms, and what
happens when they are in conflict.
Within validation research there are a considerable number of empirical
investigations on the effects of contextual information and background knowledge on
validation processes (e.g., Albrecht & O’Brien, 1993; Cook et al., 1998b; Menenti et
al., 2009; O’Brien et al., 2004, 2010; O’Brien & Albrecht, 1992; Rapp, 2008;

Richter et al., 2009). Usually, the focus of each investigation is on one source of
potential inconsistencies or the other, whereas in reality both sources operate in
tandem. With respect to investigations of text-based validation, detection of within-text
incongruencies inevitably depends on background knowledge as well, as readers
often need some degree of background knowledge to establish the (in)coherence of
targets. For example, in paradigms in which targets (e.g., children are building a
snowman) presumably are incongruent with preceding context (e.g., it was a hot,
sunny day), detection only occurs if readers have certain background knowledge
(e.g., snow melts on a hot sunny day). Even blatant incongruencies (e.g., a guinea pig
is described as having solid brown fur in one sentence but solid white fur in the
next) require at least a minimal amount of background knowledge (e.g., brown
and white are colors and guinea pigs generally do not change color, so they cannot
be solid brown and solid white at the same time). Thus, although the role of
background knowledge is implied because it is essential for detecting the
(in)congruency of textual targets, it is not explicitly included as a factor in the research
design. With respect to studies that do include both contextual and world knowledge
manipulations, the central question tends to be whether context can override
(erroneous) world knowledge, not whether text-based monitoring is an independent
process (e.g., Creer et al., 2018; Menenti et al., 2009; Walsh et al., 2018). As a result,
it is difficult to distinguish between the respective impacts of textual information and
background knowledge and to define possibly unique influences.
Thus, although it is clear that contextual information and background
knowledge impact validation processes, it is unclear whether the two sources
influence validation in essentially the same or in distinct ways and, hence, whether
they should be distinguished in theoretical models. To address this issue, the studies
in the current dissertation aimed to gain more insight into the complex interactions
between contextual information and background knowledge in building coherent and
accurate mental representations of text.

No two persons ever read the same text – Individual differences and validation

The validation processes discussed above are assumed to be universal to all
readers. But as Robert Zend once said: “People have one thing in common: they are
all different.” Therefore, validation processes evoked by the same text may not “look”
the same in all readers, they may differ from reader to reader or, for the same reader,
across reading situations - for example, depending on the reader’s goal(s) for reading
a text, their available processing capacity (i.e., their working memory capacity), or the
degree to which the information in a text is novel to the reader (i.e., the novelty of the

presented information). In the following section I will discuss these factors and their
potential influences on validation in more detail.

Reading purpose

People rarely read a text simply for the sake of reading. Usually they have a
particular reason (or reasons) for reading a text: they can read for pleasure, to learn
for school, to obtain instructions, and so on. Intuitively, it seems likely that readers
process texts differently depending on their reason for reading. For instance, reading
in preparation for an exam or a test (i.e., reading for study) most certainly requires a
different kind of processing, and different strategies, than reading for relaxation. This
intuition has considerable empirical support, as there are a number of studies showing
that reading goals affect the cognitive processes and strategies readers use when
they proceed through a text as well as what they remember from the text (Britt et al.,
2018; Linderholm et al., 2004; Narvaez et al., 1999; van den Broek et al., 2001). For
example, individuals who were instructed to read for study generally spent more time
reading the texts (e.g., Yeari et al., 2015) and engaged in more coherence-building
processes during reading (e.g., connecting and explanatory inferences, predictive
inferences, rephrasing the current sentence) than individuals who were instructed to
read for entertainment (e.g., Linderholm & van den Broek, 2002; Lorch et al., 1993;
Narvaez et al., 1999; van den Broek et al., 2001; Yeari et al., 2015). Moreover, reading
for study results in better memory for the text than reading for entertainment (e.g.,
Lorch et al., 1993, 1995; van den Broek et al., 2001; Yeari et al., 2015). Hence, readers
tailor their cognitive processes and strategies to their reason for reading, and this
pattern of cognitive processes during reading affects what is remembered from that
text (Britt et al., 2018; Linderholm et al., 2004; van den Broek et al., 1999, 2001).
In the context of validation, readers’ purpose for reading may determine how
extensively readers validate incoming information (Singer, 2019). Studies have shown
that readers' sensitivity to false or implausible information varies with their goals (e.g.,
Rapp et al., 2014). For example, Rapp et al. (2014) presented participants with stories
containing both accurate and inaccurate assertions while manipulating the
instructions. Participants were asked to read for comprehension or to engage in
evaluative activities (e.g., fact checking and immediately correcting erroneous content
or highlighting inaccuracies without changing the content). Instructions that promote
evaluative activities reduced the intrusive effects of misinformation on post-reading
tasks (e.g., judging the validity of statements), as compared to the performance of
participants who merely read the text for comprehension. These results suggest that
(at least some) aspects of the validation processes depend on the goals with which
participants read.

Working memory capacity

Validation processes may also be affected by individual differences in
readers’ working memory capacity. Working memory is a vital part of the human
memory system that is used to process and temporarily store information that is
required to carry out complex cognitive tasks, such as reading comprehension
(e.g., Baddeley, 1998, 2000; Baddeley & Hitch, 1974; Cowan, 2017; Daneman &
Carpenter, 1980; Just & Carpenter, 1992). It is assumed to have limited resources
(e.g., Miller, 1956; Simon, 1974) that – in the context of reading comprehension – must
be shared between the processing of newly read text information and the
maintenance of relevant information from the preceding text and the readers’
background knowledge (Graesser et al., 1997; Kintsch, 1998; van den Broek, 2010).
As working memory constrains the cognitive resources available to the reader for
information processing and storage (see Baddeley, 1998; Baddeley & Hitch, 1974;
Cowan, 1988, 2017), it plays an important role in the construction of a coherent mental
representation (e.g., Hannon, 2012; Kintsch, 1988; Linderholm et al., 2004).
In the context of validation, working memory limits the amount of information
that is available for the validation process (e.g., Hannon & Daneman, 2001; Singer,
2006) and, thus, may interfere with the ability to detect and resolve inconsistencies
while reading a text. As such, it may serve as a bottleneck for the processing of
inconsistencies. To illustrate how this could be the case, it is important to consider
how validation is defined. An important assumption of most models of validation is that
incoming information can only be validated against information in memory that is
“readily available” to the reader – either because it is already part of working memory
or because it can easily be made available from long-term memory (e.g., McKoon &
Ratcliff, 1995; Myers & O’Brien, 1998; O’Brien & Albrecht, 1992). Thus, successful
validation requires that both the newly read information and the relevant information
from earlier text and/or the readers’ background knowledge are active and available.
It is crucial for the detection of inconsistencies that the two are not only active but are co-
activated in memory (van den Broek & Kendeou, 2008). Working memory could play
a role in this co-activation (e.g., Hannon & Daneman, 2001; Singer, 2006) and, as
its capacity is assumed to be limited, serve as a bottleneck for the processing of
inconsistencies. In addition, the impact of reading goals on comprehension processes
in general depends on readers’ working memory capacity (Linderholm & van den
Broek, 2002; Narvaez et al., 1999; van den Broek et al., 1993, 2001), and this may
apply to validation processes as well.

Novelty

The degree to which the information in a text is novel to the reader plays a
critical role in many online comprehension processes, including inference making
(Cain et al., 2001; Singer, 2013), comprehension monitoring (Richter, 2015), and
validation processes (e.g., Singer, 2019). With respect to validation, the more novel
the information is to the reader, the less knowledge the reader has against which to
validate the accuracy of the textual information – making it more difficult to determine
whether information is accurate or not. In addition, a reader’s memory for a text may
also be affected by the degree to which the reader has topic knowledge. In general,
knowledge about a topic facilitates encoding of new information into a long-term
memory representation, resulting in better memory for text information (e.g.,
Alexander et al., 1994; Royer et al., 1996; Schneider et al., 1989; Voss & Bisanz, 1985).
However, it is currently unclear whether this knowledge effect influences memory for
all text information, irrespective of its accuracy or congruency, or whether the effects
of having more topic knowledge depend on the accuracy and/or congruency of the
presented information.
As illustrated by our discussion above, investigating the cognitive
architecture of validation requires the consideration of potential effects of readers’
purpose for reading, their processing capacity and the degree to which the text
information is novel to them on online validation processes as well as on the offline
memory representation (i.e., the final product of reading). Therefore, the current
dissertation included examinations of the potential effects of these reader factors on
text-based and knowledge-based validation processes and on the offline memory
representation. I considered the influence of reading goals on text-based and
knowledge-based validation processes during reading (Chapter 2 & 5) as well as their
effects on the offline memory representation (Chapter 5). In addition, I included a
measure of readers’ working memory capacity and a measure of the novelty of the
presented information to readers (i.e., the degree to which the information in the texts
is novel to them) to examine the potential role of these factors in text-based and
knowledge-based validation processes. Investigating how reading goals, working
memory capacity and novelty impact validation processes may aid in differentiating
text-based and knowledge-based validation processes, as they may affect text-based
and knowledge-based validation processes in distinct ways. Moreover, it aids our
understanding of how they modulate reading process and outcomes, by going beyond
the impact of these factors on general comprehension and focusing on how they
interact with a specific component process of comprehension - validation.

Experimental methodology
To contrast validation against background knowledge and validation against
prior text in a single design, I used a contradiction paradigm based on Rapp (2008).
In this paradigm participants read short expository texts about well-known historical
topics. Each text contained a target sentence that was either true (e.g., the Statue of
Liberty was delivered to the United States) or false (e.g., the Statue of Liberty was not
delivered to the United States) relative to the reader’s background knowledge and
that was either supported (i.e., congruent) or called into question (i.e., incongruent)
by the preceding context (e.g., context that described that the construction of the
statue went according to plan vs. context that described problems that occurred
during construction of the statue) (see Table 2.1 on p. 39 for a sample text). So, the
accuracy manipulation depended on which target sentence the participant read –
whether it was true or false relative to their background knowledge – and the text
congruency manipulation depended on whether the target sentence was congruent
or incongruent with the preceding context (i.e., this depended on the combination of
target and context). In the remainder of this dissertation (in)accuracy refers to the
accuracy of the presented information with the background knowledge of the reader
(i.e., whether the target is true or false) and (in)congruency refers to the congruency
of the target with the preceding context (i.e., whether the target is congruent or
incongruent).1 To assess the degree of difficulty readers experience while reading
false or incongruent information, I compared processing of targets that contain false
information and/or information that is incongruent with the preceding text with the
processing of true and/or congruent targets. When comparing consistent and inconsistent targets (with either the preceding text or the readers’ background knowledge), readers usually show a so-called inconsistency effect: processing
inaccurate or incongruent targets takes more time and/or requires additional
resources compared to processing accurate or congruent targets. This inconsistency
effect is generally taken as evidence that readers at least detected the inconsistency,
with the additional time resulting from additional processing. In addition to assessing
the degree of difficulty readers experience while reading false or incongruent
information, I compared processing of targets containing false information with the
processing of targets containing information that is incongruent with the context to
investigate possible differences between text-based and knowledge-based validation
processes.

¹ Note that I use the term (in)consistency as a general term to refer to both types of inconsistencies without making a distinction between the two sources.
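The comparisons described above follow a 2×2 design (accuracy × congruency). As a rough illustration of how an inconsistency effect is computed from such a design, consider the following sketch; all reading times are invented for illustration and are not data from this dissertation:

```python
# Toy 2x2 contradiction design: targets are true/false relative to background
# knowledge (accuracy) and congruent/incongruent with the prior text
# (congruency). All reading times (ms) are invented for illustration.
from statistics import mean

reading_times = {
    ("true", "congruent"):    [1450, 1500, 1480],
    ("true", "incongruent"):  [1620, 1660, 1640],
    ("false", "congruent"):   [1580, 1610, 1590],
    ("false", "incongruent"): [1700, 1750, 1730],
}

def inconsistency_effect(factor):
    """Mean reading-time difference between inconsistent and consistent targets."""
    pick = 0 if factor == "accuracy" else 1
    bad = "false" if factor == "accuracy" else "incongruent"
    slow = [rt for cond, rts in reading_times.items() if cond[pick] == bad for rt in rts]
    fast = [rt for cond, rts in reading_times.items() if cond[pick] != bad for rt in rts]
    return mean(slow) - mean(fast)

knowledge_effect = inconsistency_effect("accuracy")   # slower reading of false targets
text_effect = inconsistency_effect("congruency")      # slower reading of incongruent targets
```

A positive value on either factor mirrors the inconsistency effect described above; whether the two effects arise from a single validation mechanism or from distinct ones is precisely the question the studies in this dissertation address.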

All studies in this thesis employed the same contradiction paradigm, but they
employed different methodologies, including self-paced reading (Chapter 2 & 5),
functional magnetic resonance imaging (Chapter 3), and eye-tracking (Chapter 4) to
provide a more comprehensive and in-depth overview of the text-based and
knowledge-based processes involved in validation and how these processes affect
what readers remember from a text. These methods provide complementary information on the nature and time course of these processes: the neuroimaging data (Chapter 3) indicate where in the brain these processes occur, whereas the self-paced reading and eye movement data (Chapter 4) indicate when these processes take place. Before discussing
the outline of this dissertation, I will elaborate on the experimental methods used in
this dissertation in the next section. First, I will discuss the methodologies aimed at
investigating the moment-by-moment processing that takes place during reading (i.e.,
online): self-paced reading, eye-tracking and functional magnetic resonance imaging
(fMRI). Second, I will discuss the recognition memory task I employed to investigate
the effects of these online processes on the resulting final product of reading (i.e., the
offline memory representation).

Self-paced reading

A widely used method in text comprehension research is the self-paced sentence-by-sentence reading paradigm - a computerized method of recording a
reading time for a sentence (or word or phrase, depending on the experimental set-
up). It is called self-paced because the reader determines how long to spend reading
each sentence, which contrasts with fixed-paced methods like rapid serial visual
presentation or RSVP, where reading times are pre-determined by the researcher. In
self-paced sentence-by-sentence reading paradigms such as those used in this
dissertation participants read entire sentences and press a button when they are
done, which allows for the measurement of whole-sentence reading times. Intuitively,
the idea is that such reading times provide an indication of processing complexity and
that longer reading times are associated with increased processing load and
processing difficulty (e.g., Albrecht & O’Brien, 1993; Cook et al., 1998b). Thus, reading
times are assumed to provide a metric of readers’ difficulty integrating statements into
a discourse representation as texts unfold (e.g., Albrecht & O’Brien, 1993; Cook et al.,
1998b; Rapp et al., 2001).
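The mechanics of the measure are simple: a sentence’s reading time is the interval between the button press that reveals it and the press that replaces it with the next sentence. A minimal sketch, with invented timestamps:

```python
# Self-paced reading: each button press reveals the next sentence, so a
# sentence's reading time is the interval between consecutive presses.
# Timestamps (ms since trial onset) are invented for illustration.

press_times = [0, 1820, 3650, 5120]  # press 0 reveals sentence 1, and so on
reading_times = [later - earlier for earlier, later in zip(press_times, press_times[1:])]
# reading_times == [1820, 1830, 1470]  (ms per sentence)
```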
Most studies investigating validation processes employ a specific self-paced
reading paradigm – the contradiction paradigm. These paradigms are used to assess
the degree of difficulty readers experience when they read incongruent or inaccurate
information (e.g., Albrecht & O’Brien, 1993; Cook et al., 1998b; Cook & O’Brien, 2014;

O’Brien et al., 1998, 2010; O’Brien & Albrecht, 1992). In these paradigms readers are
presented with short texts (often sentence-by-sentence) that contain sentences that
violate information established earlier in the text or information from the readers’
background knowledge. For example, a character is described as being vegetarian
early in the text and much later the text states that this character ate a hamburger
(Albrecht & O’Brien, 1993; Hakala & O’Brien, 1995; Myers et al., 1994). Reading times
on sentences that are inaccurate with the readers’ world knowledge and/or
incongruent with the preceding text are compared to those on sentences that are
accurate and congruent. A robust finding in these paradigms is that readers are
slower to read inaccurate or incongruent sections compared to accurate and
congruent sections, suggesting that readers detect inconsistencies against these two
sources on the fly (e.g., Creer et al., 2018; O’Brien & Albrecht, 1992; Rapp, 2008;
Williams et al., 2018).
Self-paced reading has proven to be a very fruitful method and has generated
many insightful findings (e.g., Albrecht & O’Brien, 1993; Carpenter & Just, 1975;
Garnsey et al., 1997; Gibson, 1998; Kaiser & Trueswell, 2004; O’Brien et al., 1998;
O’Brien & Albrecht, 1992; Swets et al., 2008). Self-paced reading paradigms have
several important advantages. From a practical point of view, the main advantage is
that they do not require specific computer hardware, therefore experiments using
such paradigms are simple in design, easy to set up and easy to analyze (and relatively
inexpensive). In addition, the method also has important theoretical advantages. It is
a relatively naturalistic way of reading, as readers can move through the text at their
natural pace. Moreover, participants do not have to stop reading and reflect back on
what they just experienced to record the level of processing difficulty they experience
(i.e., it requires no introspection of the participants). Because of these advantages the
self-paced reading methodology has been very popular over the last decades. As a
result, there is a rich tradition of using self-paced contradiction paradigms in the
psycholinguistic literature and, thus, an abundance of literature to build on. Thus,
using a self-paced sentence-by-sentence contradiction paradigm allows us to build
on this rich psycholinguistic tradition and provides a solid foundation to explore text-
based and knowledge-based validation processes. However, as with any experimental technique, it also has some important limitations.
First of all, the self-paced sentence-by-sentence reading task is somewhat
unnatural, as readers normally do not press a button when reading a text (e.g., Kaiser,
2013). Usually, the complete text is visible and available for inspection (or at least one or two pages of a longer text), and the reader is free to look back and forth during reading. When texts are presented sentence-by-sentence, readers are unable to
look back to earlier parts of the text during reading, which may interfere with normal
reading processes. For example, in natural reading readers can resolve an
inconsistency by using information from their short-term memory or looking back in

the text to related information (e.g., Garnham, 2001). During sentence-by-sentence
presentation readers cannot look back in the text, therefore, they need to rely more
on their memory representation to conduct the validation (Gordon et al., 2006). This
may be advantageous from the researcher’s point of view because it might increase
the size of the inconsistency effect, but it is not necessarily representative of the
processes occurring during every day reading of text. Second, whole sentence
reading times can be ambiguous with respect to the contents and types of processes
they index (e.g., Hasson & Giora, 2007). Take for example the strategies readers can
follow to resolve an inconsistency mentioned above.² Based on whole sentence reading times it is impossible to distinguish whether readers are looking back in the sentence or using memory-based processing to resolve the inconsistency, as both
strategies may result in longer reading times. Third, the temporal resolution of self-
paced reading is relatively low. Whole sentence reading times are relatively coarse
measures and, thus, are limited in the information they can provide on the nature and
time course of comprehension processes during normal reading.

Eye-tracking

As self-paced reading only allows a very rough comparison of the time courses of text-based and knowledge-based validation processes, I used eye-tracking
methodology to provide more detailed information on the nature and time course of
validation processes. This non-invasive method has been successfully exploited to
study many issues in language processing as it provides valuable information
regarding moment-to-moment comprehension processes (see also Rayner, 1997,
1998). There are two major components of eye movements during reading: saccades
and fixations. Although it generally feels like our eyes are gliding smoothly across the
page of text as we read, in reality they make a series of rapid movements (called
saccades, which move the eyes from one place to another in the text) separated by
pauses (called fixations). It is only during the fixations that new information is encoded,
because vision is suppressed during saccades. About 10% to 15% of the time, skilled readers make regressive eye movements, or regressions, back to previously read material.
The basic assumption of eye-tracking methods is that increased processing demands,
for example when readers encounter a comprehension problem in a text, are
associated with increased processing time or changes in the pattern of fixations
(e.g., Frazier & Rayner, 1982; Hyönä et al., 2003; Rayner et al., 2004; Rayner &
Slattery, 2009; Rinck et al., 2003; Stewart et al., 2004). Such changes are assumed to
be indicative of underlying cognitive processes. For example, readers may detect and

² Consider how hard this sentence is to understand if you were unable to look back in the text.

attempt to resolve incoherence by spending more time on the critical regions
(Yuill & Oakhill, 1991) or by engaging in rereading activities to look for the possible
source of the incoherence (Hyönä et al., 2003; Zabrucky & Ratner, 1986).
Monitoring eye movements during reading provides several advantages.
First, it provides a more naturalistic reading setting, as texts can be presented in their
entirety (e.g., Hyönä et al., 2002). As such, it places very few restrictions on the
reading process, as comprehension strategies (such as regressions to earlier parts of the text) can be applied as in normal reading. Second, this methodology allows for a
more detailed investigation of the processes that occur when one encounters a
comprehension problem (such as an inconsistency). Multiple aspects of eye
movements can be analyzed (e.g., fixation duration, saccade length, and regression
frequency), providing a window into different elements of the reading process. Third,
monitoring eye movements during reading taps into the time course of processing: it
offers high temporal resolution and the possibility to distinguish between relatively
early processing (e.g., first-pass reading times) and later processing (e.g., second-
pass reading times) (Cook & Wei, 2017; Rayner, 1998).
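Measures such as first-pass and second-pass reading times can be derived mechanically from the fixation record. The sketch below is a simplified version of the standard definitions; the region indices and fixation durations are invented for illustration:

```python
# Toy sketch of two common eye-tracking measures for a target region.
# Fixations are (region_index, duration_ms) tuples in chronological order;
# all values are invented for illustration.

fixations = [(0, 210), (1, 190), (2, 240), (2, 180), (1, 150), (2, 220), (3, 200)]

def first_pass_time(fixations, region):
    """Sum of fixation durations from first entering the region
    until the eyes leave it in any direction."""
    total, entered = 0, False
    for reg, dur in fixations:
        if reg == region:
            entered = True
            total += dur
        elif entered:
            break  # eyes left the region: the first pass is over
    return total

def second_pass_time(fixations, region):
    """Total time spent on the region after the first pass (rereading)."""
    total_time = sum(dur for reg, dur in fixations if reg == region)
    return total_time - first_pass_time(fixations, region)
```

In this invented record, the reader fixates region 2, regresses to region 1, and then rereads region 2, so region 2 accrues both first-pass time (240 + 180 ms) and second-pass time (220 ms) - the kind of early vs. late distinction whole-sentence reading times cannot make.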
Eye tracking captures reading in its most natural form, but therefore also in its utmost complexity. As a result, there are some important limitations to what eye
movements can tell us about cognitive processes. First, as eye tracking places few
restrictions on the reading process, participants can adopt a number of reading
strategies. For example, they may read a sentence (or a text) very carefully, skim
through it in order to arrive at an approximate interpretation, or adopt a reading
strategy somewhere in between these extremes. Which strategy they adopt
influences the extent to which fixation durations and locations accurately indicate
online processing differences. Second, although eye-tracking data allows the
extraction of an abundance of different dependent measures, conclusive mapping
between eye movements and specific cognitive processes is a challenge, as there is
no direct relation between the cognitive processes that take place during moment-by-
moment comprehension and specific eye movements (Boland, 2004).
In distinguishing text-based and knowledge-based validation processes, eye-tracking data can be particularly informative when the processing of text
incongruencies or knowledge inaccuracies elicits distinct reading patterns. For
example, if readers show more regressive eye movements when they encounter a
text incongruency, but spend more time processing knowledge inaccuracies, this
suggests qualitative processing differences and, thus, that there may be distinct
systems involved in text-based and knowledge-based validation. However, if readers
display similar reading patterns (e.g., they spend more time processing both types of
inconsistencies) it is harder to determine whether these reading patterns are the result
of quantitative or qualitative differences in processing. Similar reading patterns could
be the result of the same underlying cognitive processes and, thus, suggest that there

is a single validation system involved in text-based and knowledge-based validation.
However, it could also be the result of qualitatively different cognitive processes (e.g.,
longer reading times could be the result of readers engaging in a more extensive
memory search for relevant information or because they reanalyze or reprocess what
they just read) and, thus, cannot exclude the possibility of distinct text-based and
knowledge-based validation systems.

Functional magnetic resonance imaging (fMRI)

To help differentiate between cognitive theories about common versus separate validation systems and to examine where in the brain
text-based and knowledge-based validation processes take place I used functional
magnetic resonance imaging (fMRI), a non-invasive method for measuring neuronal
activity in the human brain. fMRI uses magnetic resonance imaging to measure brain
activity by measuring changes in the local oxygenation of blood, which in turn reflects
the amount of local brain activity (Poldrack et al., 2011). These changes can be
consequent to task-induced cognitive state changes (task-based fMRI; Heeger &
Ress, 2002; Linden et al., 1999; Worsley, 1997; Worsley & Friston, 1995), or the result
of unregulated processes in the resting brain when individuals are not performing an
explicit task (resting state fMRI; Fox & Raichle, 2007; Raichle et al., 2001). Task-based
fMRI studies, such as the fMRI study in Chapter 3 of the current dissertation,
manipulate the task participants perform while they are in the scanner and compare
the neural activity recorded during the different conditions of the experiment. For
example, to investigate the neural correlates of knowledge-based validation
processes I contrasted neural activation on trials in which participants read sentences
containing information that is inaccurate with the readers’ background knowledge to
the neural activation on trials in which participants read sentences containing accurate
information and vice versa, to see which regions are more active in one condition than
in the other. What this example also illustrates is that fMRI does not measure absolute
neural activity, but it measures activity differences in brain regions during one
condition or task compared to another condition or task. Thus, activation - in the fMRI
literature - is inherently relative, as it measures relative changes in blood oxygenation
(Ma & Zhang, 2020).
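The logic of such a contrast can be sketched in a few lines. The region labels and activation values below are hypothetical and greatly simplified; real fMRI analyses model the hemodynamic response within a general linear model rather than averaging raw signal:

```python
# Toy sketch of a condition contrast: for each (hypothetical) brain region,
# compare mean activation on inaccurate-target trials with that on
# accurate-target trials. All numbers are invented for illustration.
from statistics import mean

regions = ["IFG", "MTG", "ACC"]
accurate_trials   = {"IFG": [1.0, 1.1, 0.9], "MTG": [2.0, 2.1, 1.9], "ACC": [0.5, 0.6, 0.4]}
inaccurate_trials = {"IFG": [1.6, 1.7, 1.5], "MTG": [2.1, 2.0, 2.2], "ACC": [0.5, 0.5, 0.6]}

# fMRI "activation" is inherently relative: a contrast between conditions.
contrast = {r: mean(inaccurate_trials[r]) - mean(accurate_trials[r]) for r in regions}
most_responsive = max(contrast, key=contrast.get)
```

Note that a region with high raw signal in both conditions (here, the invented "MTG") shows little contrast, whereas a region whose signal differs between conditions stands out, which is why activation in the fMRI literature is always defined relative to a comparison condition.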
The temporal resolution of fMRI is quite low, as blood flow responses
are rather slow (often in the order of several seconds). However, the fMRI signal has
excellent spatial resolution, allowing researchers to pinpoint active brain areas with a
precision of several millimeters. In addition to providing information on which brain
regions are involved in validation processes, neuroimaging data can be used to
develop, select and constrain cognitive models, as assumptions on the cognitive

architecture of a process also have implications for the hypothesized neural
organization of that process (Hagoort, 2017). In the context of this dissertation,
information on the neural architecture can be used to make more specific and more
grounded claims on the cognitive architecture of text processing and how different
sources of information (such as contextual information and background knowledge)
influence processing. It can shed light on whether we should assign separate roles for
the two sources in the cognitive architecture of validation, as the neural organization
of these processes (i.e., whether they call on the same underlying brain systems or
(partly) different brain systems) can help differentiate between cognitive theories
about common versus separate validation mechanisms (e.g., Frank & Badre, 2015;
Hagoort, 2017).

Recognition memory

To examine the contents of the readers’ offline mental representation (i.e., what readers remembered from the text after reading) I designed a
recognition memory task. In this task participants were presented with single
sentences containing information that either matched or mismatched the
information they encountered in the reading task (e.g., when they were presented with
the information that the statue of liberty was delivered to the US during the reading
task they could be presented with information stating either that the statue of liberty
was delivered to the US or that it was not delivered to the US). Participants had to
indicate whether they recognized the information from the texts they read the day
before (yes/no) and how confident they were of their answer on a scale (see Chapter
3 and 5 for a more elaborate description of the task). To assess the degree of difficulty
readers experience while remembering false or incongruent information I compared
participants’ memory for targets containing false and/or incongruent information with
their memory for true and/or congruent targets.
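Scoring such a recognition task typically distinguishes hits (saying "yes" to a probe that matches the studied sentence) from false alarms (saying "yes" to a mismatching probe). A minimal sketch with invented responses:

```python
# Toy sketch of scoring a yes/no recognition task: a "hit" is a yes-response
# to a probe matching the studied sentence; a "false alarm" is a yes-response
# to a mismatching probe. Responses are invented for illustration.

responses = [
    # (probe_type, said_yes)
    ("match", True), ("match", True), ("match", False), ("match", True),
    ("mismatch", False), ("mismatch", True), ("mismatch", False), ("mismatch", False),
]

def rate(responses, probe_type):
    relevant = [yes for ptype, yes in responses if ptype == probe_type]
    return sum(relevant) / len(relevant)

hit_rate = rate(responses, "match")
false_alarm_rate = rate(responses, "mismatch")
```

Computing these rates separately for each condition of the reading task (true vs. false, congruent vs. incongruent targets) is what allows memory for inconsistent information to be compared with memory for consistent information.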
Offline measurements such as this recognition memory task can provide
insight into the nature of text representations in memory once reading is completed
and possible quantitative and qualitative differences in those representations.
In addition, they can provide insights into the relations between processing and
memory (e.g., Kaakinen & Hyönä, 2005; Kendeou & van den Broek, 2007; Magliano
et al., 1999). Moreover, combining offline measures with online measures provides
insight into how this pattern of cognitive processes during reading may be reflected
in the offline memory representation and, thus, how the online processes affect what
readers remember from a text. In the context of the current dissertation, this allows
us to investigate whether and how online validation processes translate into
an offline memory representation. In addition, combining both measures allows us to

investigate possible similarities and dissociations between the processes and the
product of reading. Thus, combining online and offline methodologies provides a more
comprehensive picture of reading than employing them in isolation.

The whole is greater than the sum of its parts

A comprehensive neurobiological model of validation should provide detailed descriptions of when and how various sources of information (such as contextual information or background knowledge) influence processing (the time
course and cognitive architecture of processing) and specify how these processes
are instantiated in and supported by the organization of the human brain (the neural
architecture). The studies described in the current dissertation allow us to take steps
towards such detailed descriptions: Whereas the eye tracking data provides detailed
information on the time course of processing, the fMRI data provides information on
the underlying neural networks. Moreover, the online measures (reading times, neural
activation and eye movements) provide insight into the readers’ moment-by-moment
processing and the offline measures provide insight into the effects of these
processes on the long(er)-term memory representation. All research methods
described above have their own strengths and weaknesses, but they provide
complementary information. Therefore, combining the insights from the different
research methods in complementary ways can result in a more comprehensive
picture of the processes involved in text-based and knowledge-based validation and,
thus, move validation research forward.

Aims of the current dissertation
The overarching aim of this dissertation is to deepen our understanding of
validation processes by examining the complex interactions between contextual
information and background knowledge in building coherent and accurate mental
representations of text. More specifically, I aimed to examine whether contextual
information and information from the readers’ background knowledge influence
validation processes and products in essentially the same way or in distinct ways
and, hence, whether the two sources should be distinguished in theoretical models.
To this end, the studies described in this dissertation aimed to:
1. Develop a contradiction paradigm that allows for the direct comparison of
text-based and knowledge-based validation processes in a single design.
2. Investigate the nature and time course of text-based and knowledge-based
validation processes by
a. Examining when and how component validation processes involved in
detecting and resolving inconsistencies are influenced by contextual
information and background knowledge.
b. Examining whether (parts of) the validation process are more
knowledge-driven or text-driven and to what extent text-based and
knowledge-based validation processes take place independently or
interactively.
c. Examining whether (parts of) the validation process are passive or
more reader-initiated.
3. Investigate whether and how online validation processes translate into an
offline memory representation and, more specifically, how text-based and
knowledge-based validation processes that take place during reading affect
readers’ long(er)-term memory for text information.
4. Identify neural correlates of text-based and knowledge-based validation
processes and examine to what extent validation against background
knowledge and validation against prior text call on the same underlying
brain systems or (partly) different brain systems.
5. Examine how a reader’s purpose for reading (whether a person is reading
for study or for comprehension) affects online validation processes and/or
the offline memory representation and whether reading goals have
differential effects on the processing of text-based and knowledge-based
inconsistencies, respectively.
6. Examine the potential role of readers’ working memory capacity and the
novelty of the presented information to readers in text-based and
knowledge-based validation processes.

Outline of this dissertation
The remainder of this dissertation consists of four empirical studies (Chapters 2-5) with the overarching goal to examine how contextual information and information
from the readers’ background knowledge influence both validation processes that
take place during reading and the resulting long-term memory representation after
reading. In all studies we used a contradiction paradigm (based on Rapp, 2008) to
investigate whether and how inaccuracies with background knowledge and
incongruencies with prior text affected online processing. All studies described in this
dissertation used the same contradiction paradigm: Participants read text on well-
known historical topics that contained a target sentence that was either true (e.g., the
Statue of Liberty was delivered to the United States) or false (e.g., the Statue of Liberty
was not delivered to the United States) relative to the reader’s background knowledge
and that was either supported (i.e., congruent) or called into question (i.e.,
incongruent) by the preceding context (e.g., context that described that the
construction of the statue went according to plan vs. context that described problems
that occurred during construction of the statue). However, different research methods
- including self-paced reading, eye-tracking, fMRI - were employed to provide a more
comprehensive and in-depth overview of the processes involved.
The study described in Chapter 2 focused on the cognitive processes that
take place during reading. The aim of this study was to examine whether the
mechanisms of validating against background knowledge are the same as those
validating against prior text or whether these mechanisms are fundamentally different.
We used reading times as a measure of readers’ difficulty integrating statements into
their discourse representation (e.g., Albrecht & O’Brien, 1993; Cook et al., 1998b;
Rapp, 2008; Rapp et al., 2001) and compared reading times on targets that were
inconsistent with either text or background knowledge with reading times on
consistent targets. Longer reading times for the targets that were inconsistent with
either text or background knowledge would suggest that readers indeed check the
incoming information against background knowledge and the preceding text on the
fly. In addition, we investigated whether validation is a passive or reader-initiated
process by considering both the influence of the task and the role of working memory.
We examined the influence of the task by manipulating the focus of the readers by
instructing them to focus on either their background knowledge or the text. If
validation is a more reader-initiated process, we would expect participants to direct
more resources to either validating against the text or validating against background
knowledge (e.g., when they are focused on the text, they would show larger
disruptions for text incongruencies than knowledge inaccuracies). However, if
validation is a passive process it should not be influenced by the task. Second, we

consider the possible role of individual differences in working memory capacity in
validation.
The behavioral data of the study described in Chapter 2 are subject to
multiple interpretations, as potential differences in reading times can be the result
of quantitative or qualitative differences in cognitive demands. Because such
behavioral data is subject to multiple interpretations, the study described in Chapter
3 combined the collection of behavioral data (reading time measures) with the
collection of neuroimaging data (fMRI) to help better characterize the cognitive
architecture of text-based and knowledge-based validation processes. The first goal
of this study was to examine to what extent validation against background knowledge
and validation against prior text call on the same underlying brain systems or (partly)
different brain systems. An additional goal was to investigate how reading information
(in)consistent with prior text or background knowledge affects participants’ long-term
memory representation of the texts by taking a behavioral measure of participants
memory for the texts after reading.
The studies described in Chapter 2 and 3 cannot distinguish between earlier
(e.g., detecting an inconsistency) and later (e.g., repairing an inconsistency) validation
processes, as the dependent variables used lack temporal resolution. Therefore, the
study described in Chapter 4 employed eye-tracking methodology – a method with
high temporal resolution – to gain insight into component validation processes
involved in detecting and resolving inconsistencies and into when and how these
processes are influenced by contextual information and background knowledge,
respectively. A secondary aim of this study was to examine whether and, if so, to what
extent these component validation processes were modulated by individual
differences in working memory capacity.
The study described in Chapter 5 followed on the two studies described in
Chapter 2 and 3. In the study described in Chapter 5, I examined text-based and
knowledge-based validation processes from the perspective of purposeful reading.
More specifically, I combined online and offline measures of comprehension to
examine the influence of reading goals on both online validation processes and the
possible impact on the final product of reading a text (i.e., the offline memory
representation). In addition, as validation processes and the way in which they are
modulated by reader goals may be influenced by individual differences between
readers, I examined to what extent the effects of reading goals were modulated by
individual differences in working memory capacity and the degree to which the
presented information was novel to the reader.
Finally, in Chapter 6 the results and conclusions from the empirical studies in
this dissertation are summarized and discussed in a broader context.

2
Validation: Knowledge- and
text-based monitoring during
reading

This chapter is based on:


Van Moort, M. L., Koornneef, A., & Van den Broek, P. (2018). Validation: Knowledge-
and Text-Based Monitoring During Reading. Discourse Processes, 55(5-6), 480-496.
doi:10.1080/0163853x.2018.1426319
Abstract
To create a coherent and correct mental representation of a text, readers
must validate incoming information; they must monitor information for consistency
with the preceding text and their background knowledge. The current study aims to
contrast text- and knowledge-based monitoring to investigate their unique influences
on processing and to investigate whether validation is passive or reader-initiated.
Therefore, we collected reading times in a self-paced experiment using expository
texts containing information that conflicts with either the preceding text or readers’
background knowledge. Results show that text- and knowledge-based monitoring
have different time courses and that working memory affects only knowledge-based
monitoring. Furthermore, our results suggest that validation could occur at different
levels of processing and perhaps draw on different mixes of passive and reader-
initiated processes. These results contribute to our understanding of monitoring
during reading and of how different sources of information can influence such
monitoring.

Keywords: validation, monitoring, reading comprehension, working memory, background knowledge, text

Introduction
A central tenet in reading research is that readers go through a text trying
to create a coherent mental representation that is continuously updated when new
information is encountered (e.g., Fletcher & Bloom, 1988; Graesser et al., 1994;
Kintsch, 1988; Kintsch & van Dijk, 1978; McNamara & Magliano, 2009; O’Brien &
Myers, 1999; Trabasso et al., 1984; van den Broek, 2010; van den Broek et al., 1999;
van Dijk & Kintsch, 1983). In a high-quality mental representation, individual elements
of the text are connected to each other and to relevant background knowledge
by meaningful relations. Together, these elements and relations create a ’situation
model’, an interpreted description of the information in the text (Kintsch, 1998;
van Dijk & Kintsch, 1983; Zwaan & Radvansky, 1998). An essential aspect of
constructing connections during reading and, thereby, updating the situation model
is that the reader monitors the extent to which incoming information is accurate.
Written materials frequently contain inconsistencies, misinformation, or even fake news, especially in this day and age. In some cases readers even integrate and use
inaccurate information when they should know that what they are reading is
inaccurate because they most likely have the accurate knowledge (Eslick et al., 2011;
Fazio et al., 2013; Fazio & Marsh, 2008a, 2008b; Marsh et al., 2003; Marsh & Fazio,
2006). It is important that we understand how readers deal with such inconsistencies
and, more generally, how they monitor incoming information to create a mental
representation of both coherent, ‘error free’ texts and texts that contain
inconsistencies (e.g., Albrecht & O’Brien, 1993; Rapp & Braasch, 2014; Richter &
Rapp, 2014).

Knowledge-based and text-based monitoring

To create a coherent and correct mental representation and to protect it
against inaccuracies, readers must validate incoming information. They can monitor
such information for congruency with previous text information and for accuracy with
their own knowledge and beliefs (Singer, 2013). There is considerable evidence
indicating that validation against background knowledge occurs (e.g., O’Brien &
Albrecht, 1992; Rapp, 2008; Richter et al., 2009), at various levels of language
processing (i.e., sentence vs. discourse level), in different text genres (e.g., narrative
- and expository texts), and through various research methods, such as behavioral
and electrophysiological measures. For example, readers of narrative (e.g., O’Brien &
Albrecht, 1992) and expository texts (e.g., Rapp, 2008) have been found to be slower
when they read target sentences that were inaccurate with their world knowledge than
when they read accurate target sentences, suggesting that they indeed check the

incoming information against background knowledge. Similar patterns have been
observed in event-related potential (ERP) studies, where an increase in the N400 ERP
signal is taken to indicate detection of inconsistency. Such increases in N400 have
been observed for sentences that contain information that is inaccurate with the
reader’s background knowledge. For example, compare the sentences “the Dutch
trains are yellow and very crowded” and “the Dutch trains are white and very
crowded”. Although both sentences are linguistically correct, the second sentence
elicited an N400 ERP in Dutch research participants, because it is a well-known fact
among Dutch people that Dutch trains are yellow and this sentence thus was
inaccurate with their background knowledge (e.g., Hagoort et al., 2004). Hence, there
is ample evidence that background knowledge influences validation.
In addition to validating information against background knowledge readers
should also validate information against prior text, because specific sections of a text
can be incongruent with information provided earlier or later in that same text.
Previous research has shown that contextual information (i.e., provided within a text)
affects processing and comprehension of texts, for example in studies investigating
lexical access processes (Colbert-Getz & Cook, 2013), anaphoric references
(O’Brien, 1987; O’Brien et al., 1990, 1995), bridging inferences (Myers et al., 2000),
and those based on the contradiction paradigm developed by O’Brien and colleagues
(Albrecht & O’Brien, 1993; O’Brien et al., 1998, 2004, 2010). Indeed, if contextual
information is strong (e.g., recently encountered or more elaborate) it interacts with
or may even fully override the influence of world knowledge during reading (e.g.,
Colbert-Getz & Cook, 2013; Cook & Myers, 2004; Leinenger & Rayner, 2013; Myers
et al., 2000; Nieuwland & Van Berkum, 2006; Rizzella & O’Brien, 2002). Similarly,
several studies using post-reading tasks such as statement-verification tasks or
general knowledge tests showed that readers defer to text information (and not world
knowledge), even when they are aware of the fact that the information provided by
the text is inaccurate (e.g., Gerrig & Prentice, 1991; Marsh et al., 2003). For example,
sometimes readers gave more incorrect answers on tests of general world knowledge
after reading texts that contained inaccurate information than when they read texts
containing accurate or neutral information (Marsh et al., 2003).
However, most studies on text-based monitoring do not explicitly exclude the
influence of background knowledge to investigate the influence of prior text (Isberner
& Richter, 2014b). Often the impacts of these two sources on processing and
comprehension are studied in tandem and, as a result, the unique influence of either
background knowledge or prior text remains unknown. For example, O’Brien and
Albrecht (1991) presented passages in which the contexts supported an explicitly
mentioned antecedent (e.g., cat) or an unmentioned concept (e.g., skunk) followed by
a sentence containing an anaphoric phrase (e.g., what had run in front of her car).
Then they presented naming probes for either the correct antecedent (e.g., cat), or

the unnamed concept (e.g., skunk). In this example the supportive context would
mention ‘a terrific odor’, assuming that readers are more likely to associate ‘a terrific
odor’ with ‘skunk’ than with ‘cat’ based on their background knowledge. They found
that the unnamed concept (skunk) was activated in memory, even despite the explicit
reference to the correct antecedent (cat). This is one example that provides
compelling evidence that contextual information influences processing, but does not
explicitly exclude the influence of background knowledge.
To summarize, it is clear that both knowledge-based and text-based
monitoring influence text processing. But it is unclear how to distinguish between the
two sources for validation and how to define their unique influences.

Passive or reader-initiated

A second important issue that is debated in the literature is whether validation
is a passive or a reader-initiated process. There is some evidence that validation
against background knowledge is an automated, routine process (e.g., Hagoort & van
Berkum, 2007; Isberner & Richter, 2013; Richter et al., 2009; Singer, 2006, 2013). For
example, Singer (2006) demonstrated with a reading time paradigm that readers
verify information presented in everyday stories even when they do not follow an
intentional validation strategy. Thus, individuals seem to routinely validate information
they encounter in a discourse context. This is in line with the assumption of memory-
based text-processing views that connections between text segments and
background knowledge are formed via an effortless, autonomous spread of activation
through existing associations in readers’ semantic and episodic memory (e.g., Myers
& O’Brien, 1998; O’Brien et al., 1995; O’Brien & Myers, 1999). This would suggest
that the detection of an inconsistency is also passive. However, Singer and colleagues
observed that reading tasks may influence the degree to which readers rely on story
contexts and prior knowledge, respectively, and, hence, the effect of these sources
on readers’ processing of texts’ (Singer, 2006; e.g., Singer et al., 1992; Singer &
Halldorson, 1996; see also van den Broek & Helder, 2017). If the degree to which
readers rely on the text or prior knowledge can be altered by the task they are given,
this would suggest that validation processes –or the subsequent processes that are
triggered by the validation process– are not completely passive and might be at least
partially reader-initiated. The above illustrates that the evidence on whether validation
is passive or reader-initiated is ambiguous.
Whether a process is passive or reader-initiated also influences the
processing capacity that is required to run it to completion. Passive processes are fast
and relatively effortless and require fewer cognitive resources than reader-initiated
processes do. Thus, it could be that, if validation is a reader-initiated process,
individual differences in processing capacity, for example in working memory,
influence the validation process. If validation is a reader-initiated process, then
perhaps working memory capacity serves as a bottleneck for the processing of
inconsistencies. To illustrate how this could be the case it is important to consider
how models of reading comprehension define validation. They state that validation is
one of three prominent processes that are active during reading, namely activation,
integration and validation. First, relevant concepts from memory are activated
(activation), then the available concepts are connected with the content of working
memory (integration), and finally the connections that are formed between concepts
are validated to ensure they make sense (validation) (Cook & O’Brien, 2014; O’Brien
& Cook, 2016a). Thus, in order to successfully validate the connections both the newly
read information and the relevant information from either background knowledge or
the previous text have to be active and available in working memory. It is crucial for
the detection of inconsistencies that the two are not only active but are co-activated
in memory (van den Broek & Kendeou, 2008). Consequently, working memory could
play a role in this co-activation (e.g., Hannon & Daneman, 2001; Singer, 2006) and,
since its capacity is assumed to be limited (Miller, 1956; Simon, 1974), could serve as
a bottleneck for the processing of inconsistencies.

The current study

Our discussion indicates that information from both background knowledge
and the text can influence the validation process. However, to the best of our
knowledge, prior studies have either not made a direct comparison between these
two types of validation or, if they did include both text and background knowledge in
their design, did not report on both types of validation. As a result, it is unclear whether
the mechanisms of validating against background knowledge are the same as those
validating against prior text or whether these mechanisms are fundamentally different.
In the current study, we aimed to tease apart these two types of validation to
investigate their unique influence on processing by directly comparing them in a
within-subjects design. Furthermore, we aimed to shed light on whether validation is
passive or reader-initiated and elucidate the role of working memory during
knowledge-based and text-based monitoring. We did so in a self-paced reading
experiment in which the participants read expository texts about historical topics. The
situations described in the texts either supported or called into question actual
historical events. Half of the texts continued with information that was incongruent
with the text or inaccurate with their background knowledge. The texts were
presented sentence-by-sentence and reading times were recorded, providing a
measure of readers’ difficulty integrating statements into a discourse representation

as texts unfold (e.g., Albrecht & O’Brien, 1993; Cook et al., 1998a; Rapp, 2008;
Rapp et al., 2001).
To investigate whether validation is a passive or reader-initiated process we
used two different approaches. First, participants received instructions during the
reading task to focus them on either background knowledge or the text. If validation
is a more reader-initiated process, we would expect participants to be able to direct
more resources to either validating against the text or validating against background
knowledge and, hence, for the type of instruction to interact with the inconsistency
effect: When the focus of the participants matches the type of inconsistency, the
inconsistency effect would be larger than when the two are different. For example,
when they are focused on the text they would show a larger inconsistency effect for
text incongruencies than for knowledge inaccuracies. However, if validation is a
passive process, we would expect the reader to be unable to influence the validation
process and their focus would not influence the inconsistency effect. Second, we
consider the possible role of individual differences in working memory capacity in
validation. As mentioned earlier, passive processes are seen as relatively effortless
and require almost no cognitive resources, whereas reader-initiated processes are
more effortful and require cognitive resources. Therefore, if working memory capacity
is found to impact the processing of inconsistencies then validation apparently
requires cognitive resources and thus likely is at least in part a reader-initiated
process.

Method
Participants

Fifty-eight university students (11 male and 47 female, mean age = 22.4
years) participated in this study for course credit or pay. All participants were native
speakers of Dutch and had no diagnosed dyslexia and/or developmental disorders
such as ADHD or Autism-Spectrum Disorders. They also had normal or corrected-to-
normal eyesight. Participants provided written informed consent prior to testing, and
all procedures were approved by the Leiden University Institute of Education and
Child Studies ethics committee and conducted in accordance with the Declaration of
Helsinki.

Materials

Norming study
The texts used in this experiment were based on materials used by Rapp
(2008). Each text contained a target statement that was either true or false with
respect to background knowledge, and a context (the sentences preceding the target)
that either supported or called into question the information in the target.
historical topics that are well known to readers. To make sure that the facts described
in the texts were common knowledge in our sample we conducted a norming study
among thirty native Dutch participants (8 male and 22 female, mean age = 22.5 years).
In the norming study, eighteen topics from the Rapp (2008) stimuli were translated
and used. In addition, we wrote texts about eighty new topics to make them more
suitable for the present sample of participants (e.g., a text about Babe Ruth was
replaced by a text about Johan Cruyff, a famous Dutch soccer player). The
participants read either the historically correct or the historically incorrect target
sentence for the 98 topics and indicated (a) whether the sentence was true or false
and (b) how sure they were of their answer on a visual analog scale (VAS) ranging
from ‘absolutely not sure’ to ‘absolutely sure’. The certainty scores were calculated
as a percentage of the VAS line. To determine familiarity with the items, a threshold
of 70% of participants selecting the correct answer was used as an indicator of high
familiarity (e.g., Marsh et al, 2003; Marsh and Fazio, 2006). After eliminating the
unfamiliar topics, a final sample of eighty topics remained. For the eighty items used
in the main study, on average, 87% of the participants answered them correctly
(SD = 9.9) and they were 77% certain of their answer (SD = 11.4).
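The familiarity criterion described above can be sketched as a simple filter. The following is an illustrative Python sketch, not the authors' analysis code; the topic names and accuracy data are hypothetical.

```python
# Illustrative sketch (not the authors' analysis code) of the familiarity
# criterion from the norming study: a topic counts as highly familiar when at
# least 70% of norming participants verified its target sentence correctly.
# Topic names and accuracy data below are hypothetical.

def familiar_topics(norming_data, threshold=0.70):
    """Return topics whose proportion of correct responses meets the threshold.

    norming_data maps each topic to a list of 0/1 accuracy scores,
    one per norming participant.
    """
    return [
        topic
        for topic, scores in norming_data.items()
        if sum(scores) / len(scores) >= threshold
    ]

norming_data = {
    "Statue of Liberty": [1, 1, 1, 1, 0, 1, 1, 1, 1, 1],  # 90% correct: keep
    "Obscure topic":     [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% correct: drop
}
print(familiar_topics(norming_data))  # ['Statue of Liberty']
```

Applied to the 98 normed topics, a filter of this kind leaves the final sample of eighty familiar topics.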

Experimental texts

Four different versions of each of the 80 texts were constructed by
orthogonally varying the context prior to the target sentence (i.e., congruent vs.
incongruent with target sentence) and the target sentence itself (i.e., true vs false with
the readers’ background knowledge). The context could either bias towards the true
or the false target, making the context either congruent (i.e., bias-true context paired
with a true target sentence or bias-false context paired with a false target sentence)
or incongruent (i.e., bias-true context paired with a false target sentence or bias-false
context paired with a true target sentence) with the target sentence. It should be noted
that the contexts that were biased towards a false target sentence did not state
with certainty that the historical events (stated in the target sentence) would not occur
or were impossible; rather, they called into question the certainty of those events.

To make sure that there were no violations of background knowledge prior to the
presentation of the target sentence all facts described in the context sentences were
historically correct.
Each text consisted of ten sentences (see Table 2.1 for a sample text). The
first two sentences were identical among all conditions, providing an introduction to
the topic. The next five sentences (Sentences 3-7) of the texts differed in content,
depending on context condition (congruent/incongruent). On average, the bias-true
context consisted of 64 words (SD = 4.20) and 399 characters (SD = 23.48) and the
bias-false context consisted of 66 words (SD = 4.30) and 407 characters (SD = 22.79).
The eighth sentence of the text was the target and provided one of two targets,
depending on target condition (true/false). The final two sentences were identical
among all conditions, providing a conclusion to the topic. On average, the texts
contained 121 words (SD = 5.66) and 766 characters per text (SD = 37.63), across all
four text versions.
Overall, the target sentences were equated for length: both true (SD = 1.92)
and false (SD = 1.90) targets contained on average 9 words. When measured by
number of characters (including spaces and punctuation), both true (SD = 10.51) and
false (SD = 10.42) targets contained on average 60 characters. Half of the true and
false targets included the word ‘not’ or ‘never’ (e.g., “Jack the Ripper was never
caught and punished for his crimes.”), and half did not (e.g., “The Titanic withstood
the damage from the iceberg collision.”).

Apparatus

Reading task

Participants read the texts on a computer screen. The texts were presented
one sentence at a time and reading times were recorded when participants pressed
a key in order to advance to the next sentence. To implement a repeated measures
design we used a Latin square to construct four lists, with each of the 80 texts
appearing in a different version (as a function of text context – congruent with target
vs. incongruent with target - and target -true vs. false-) on each list. Each participant
received one list and, hence, read one version of each text. The texts were divided
into two blocks of 40 and the order in which they were presented was randomized.
Participants received instructions at the start of each block to focus them on either
their background knowledge or the text. The order of the blocks was counterbalanced;
participants could either receive the focus on text instruction in the first block and the
focus on background instruction in the second block or vice versa. Focus on
background knowledge was promoted by the instruction that they had to write down

one thing they knew about the topic prior to reading each text. Focus on the text was
promoted by the instruction to think of a short summary that reflects the content of
the text best at the start of the block. After reading each text participants had to write
down their summary. Halfway through a block they were able to take a short break.
Each block of the experimental task was preceded by a short practice block of two
texts to familiarize participants with the task. The structure of the practice texts
mirrored the structure of the experimental items.
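The Latin-square list construction described above can be sketched as a rotation of the four versions over four lists. This is an illustrative sketch assuming a standard Latin-square rotation, not the actual experiment software.

```python
# Illustrative sketch (assumed standard Latin-square rotation; not the actual
# experiment software): assigning the four text versions (2 context types x
# 2 target types) to four presentation lists so that each list contains every
# text in exactly one version.

VERSIONS = [
    ("congruent", "true"), ("congruent", "false"),
    ("incongruent", "true"), ("incongruent", "false"),
]

def build_lists(n_texts=80, n_lists=4):
    """Rotate versions across lists: text i appears on list j in version
    (i + j) mod 4, balancing versions within and across lists."""
    return {
        list_id: [
            (text_id, VERSIONS[(text_id + list_id) % len(VERSIONS)])
            for text_id in range(n_texts)
        ]
        for list_id in range(n_lists)
    }

lists = build_lists()
# Every list contains all 80 texts, and each version occurs 20 times per list.
assert all(len(texts) == 80 for texts in lists.values())
assert all(
    sum(1 for _, v in texts if v == ("congruent", "true")) == 20
    for texts in lists.values()
)
```

Across the four lists, each text thus appears once in each of its four versions, while any given participant reads it in only one version.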

Measures

Working memory capacity

Working memory capacity was measured by means of the Swanson
Sentence Span task (Swanson et al., 1989). In this task, the experimenter reads out
sets of sentences, with set length varying from 1 to 6 sentences. At the end of each
set a comprehension question is asked about one of the sentences in the set.
Participants have to remember the last word of each sentence and recall these after
answering the comprehension question. Demands on working memory vary because
sets consist of two, three, four, five or six sentences, with two sets at each working
memory load. If participants successfully complete both tasks (recall of final words
and correctly answer the comprehension question) for at least one of the two sets at
a particular load they advance to the next higher load. Participants earned 0.25 points
for each correctly answered comprehension question or correctly recalled set of
words and the sum of these points is the index of working-memory capacity.
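The scoring rule can be sketched as follows; this is an illustrative sketch of the rule described above (0.25 points per correct comprehension answer and per correctly recalled word set), with hypothetical trial outcomes.

```python
# Illustrative sketch of the Swanson Sentence Span scoring rule described
# above: 0.25 points for each correctly answered comprehension question and
# 0.25 points for each correctly recalled set of final words. The trial
# outcomes below are hypothetical.

def swanson_score(sets):
    """sets: one (question_correct, recall_correct) pair of booleans per set."""
    return sum(
        0.25 * question_correct + 0.25 * recall_correct
        for question_correct, recall_correct in sets
    )

# Three sets: both correct; question only; recall only.
print(swanson_score([(True, True), (True, False), (False, True)]))  # 1.0
```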

Reading skill

Although the sample in this study consisted of a relatively homogeneous
group of skilled readers, we used the scores for a modified cloze task (CBM Maze
Task) to control for the influence of overall reading proficiency (Deno, 1985; Espin &
Foegen, 1996; Fuchs et al., 1992). In this task, participants read two texts. For each
text, every seventh word is deleted and replaced with three options to choose from.
As participants proceed through the texts they choose the word that best completes
the text as they read. They are given 90 seconds to provide as many correct answers
as possible. Accuracy scores are averaged across the two texts.

Table 2.1. Sample text with the four text versions (translated from Dutch original). All versions share the introduction and coda and combine one of two contexts with one of two targets. The target is congruent with the context when a bias-true context precedes the true target or a bias-false context precedes the false target, and incongruent for the two remaining combinations.

[Introduction (identical in all versions)]
In 1865, a Frenchman named Laboulaye wished to honor democratic progress in the U.S.
He conceptualized a giant sculpture along with artist Auguste Bartholdi.

[Bias True Context]
Their ‘Statue of Liberty’ would require extensive fundraising work.
They organized a public lottery to generate support for the sculpture.
American businessmen also contributed money to build the statue’s base.
Despite falling behind schedule, the statue was completed.
The statue’s base was finished as well and ready for mounting.

[Bias False Context]
Their ‘Statue of Liberty’ would require extensive fundraising work.
Raising the exorbitant funds for the statue proved an enormous challenge.
Because of financial difficulties France could not afford to make a gift of the statue.
Fundraising was arduous and plans quickly fell behind schedule.
Because of these problems, completion of the statue seemed doomed to failure.

[Target True]
The Statue of Liberty was delivered from France to the United States.

[Target False]
The Statue of Liberty was not delivered from France to the United States.

[Coda (identical in all versions)]
The intended site of the statue was a port in New York harbor.
This location functioned as the first stop for many immigrants coming to the U.S.

Procedure

Participants were tested individually. First, they completed the experimental
reading task. Participants were asked to read each sentence at their own pace. They
started with a practice block, followed by one experimental block. After completing
the first experimental block participants could take a 5–7-minute break before they
started the second experimental block (with a different focus instruction). Halfway
through each experimental block participants had the opportunity to take a short
break if needed. After they finished the experimental reading task, they completed the
Swanson Sentence Span Task and the CBM Maze Task. The total duration of the
experimental procedure was approximately 90 minutes.

Design

In the current study the factors text (target congruent or incongruent
with context), background knowledge (target true or false) and focus (focus on text or
background knowledge) were varied within participants. The factor text was defined as
congruent or incongruent based on the combination of context and target sentence
(e.g., context bias true followed by a true target sentence is congruent and context
bias true followed by a false target sentence is incongruent). The factor background
knowledge was defined as true or false with the readers’ background knowledge and
the factor focus consisted of two levels (focus on text or focus on background
knowledge) depending on the instructions the participants received at the start of
each block. The variables reading ability and working memory capacity were
between-subjects variables.

Results
To investigate whether the participants completed the tasks as instructed we
analyzed their responses. Results for trials where participants had been instructed to
focus on background knowledge (by writing down one thing they already knew about
the topic) show that they were proficient in activating background knowledge:
Participants responded with background information in 94% of the trials. Likewise,
analysis of participants’ summaries for the trials where they had been instructed to
focus on the text (by thinking of a summary and writing it down) revealed that they
completed the task as requested: They responded in 99.74% of the trials with a
summary, with an average summary length of 133 characters (SD = 55).
To investigate the effects of our manipulations on the reading process we
analyzed the reading times for four regions of interest: the pre-target (sentence

preceding the target sentence), the target sentence, the spill-over sentence (sentence
following the target sentence), and the final sentence. Sentence reading times that
were extremely short (shorter than 300ms) or extremely long (longer than 10000ms)
were excluded from the analyses, resulting in a loss of 1% of the data. Table 2.2
reports the means and standard deviations of the resulting data for reading times as
a function of focus (focus background knowledge / focus text), text (target congruent
/ incongruent with context), background knowledge (target true / false), and region
(pre-target, target, spill-over and final).
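The trimming and transformation steps described above can be sketched as follows; this is an illustrative sketch, not the authors' analysis code, and the reading-time values are hypothetical.

```python
# Illustrative sketch (not the authors' analysis code) of the trimming and
# transformation described above: sentence reading times shorter than 300 ms
# or longer than 10000 ms are excluded, and the remaining times are
# log-transformed to correct for right skewness. Values are hypothetical.
import math

def trim_and_log(reading_times_ms, lower=300, upper=10000):
    """Drop out-of-range reading times, then log-transform the rest."""
    kept = [rt for rt in reading_times_ms if lower <= rt <= upper]
    return [math.log(rt) for rt in kept]

rts = [250, 1908, 2526, 12000, 2417]  # ms; 250 and 12000 fall outside the bounds
print(len(trim_and_log(rts)))  # 3
```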

Table 2.2. Mean reading times and standard deviations (in ms) at the regions of interest (pre-target, target, spill-over and final sentence) for the experimental manipulations focus (focus on text or background knowledge), text (target congruent or incongruent with context) and background knowledge (target sentence true or false).

Focus        Text         Background   Pre-target     Target         Spill-over     Final
                          knowledge    M      SD      M      SD      M      SD      M      SD
Focus        Congruent    True         2261   1289    1908   1043    2526   1743    2417   1497
Background                False        2347   1292    2104   1187    2565   1417    2457   1755
Knowledge    Incongruent  True         2359   1278    2026   1087    2441   1371    2475   2173
                          False        2348   1282    2276   1317    2702   1670    2491   1833
Focus        Congruent    True         2581   1312    2093   1023    2783   1443    2619   1601
Text                      False        2589   1209    2373   1239    2989   1620    2668   1780
             Incongruent  True         2548   1209    2317   1031    2837   1502    2634   1600
                          False        2539   1242    2626   1453    2937   1510    2811   2002

First, we fitted a mixed-effects linear regression model to determine whether
the experimental manipulations and their interactions were significant factors in
each of the regions of interest. Models were estimated with the R package LME4
version 1.1-12 (Bates et al., 2015) and for all models Wald chi-square testing (TYPE
II), as implemented in the R-package CAR (version 2.1-4; Fox & Weisberg, 2011), was
applied to select the most parsimonious structure of fixed effects by removing non-
significant (p >.05) predictors. We considered the reading times (log transformed to
correct for right skewness) for all regions together, with the following factors: region,
background knowledge, text and focus and the interaction of these factors. The model
also included the maximal converging random structure for subjects and items. The
model showed that the experimental manipulations were significant factors in the
regions of interest.

The results of the Wald chi-square tests revealed significant main effects of
region (χ2(3) = 511.14, p <0.001), focus (χ2(1) = 514.43, p < 0.001), background
knowledge (χ2(1) = 52.03, p < 0.001) and text (χ2(1) = 15.31, p < 0.001) on the reading
times. Moreover, both the interaction between region and text (χ2(3) = 26.928,
p < 0.001) and the interaction between region and background knowledge
(χ2(3) = 37.07, p < 0.001) were significant.
To further investigate the effects of the experimental manipulations we
conducted mixed-effects linear regression analyses for each region of interest
separately. For each region of interest we started with a model that included the fixed
factors background knowledge (target true/target false), text (congruent/incongruent)
and focus (focus background knowledge / focus text) and the full interactional terms
for these factors. Participants and items were included as crossed random effects.
Next, we included reading ability and working memory capacity (both measures were
median-centered) using a forward stepwise selection procedure (Viebahn, Ernestus
& McQueen, 2012), comparing models with and without each particular characteristic.
Then we again selected the most parsimonious model as our final model. The final
model included the maximal participant and item random-effect structure that resulted
in a converging model (Barr et al., 2013). For each region of interest we only report
the final model. Furthermore, because it is not clear how to determine the degrees of
freedom for the t-statistics estimated by the mixed models for continuous dependent
variables (Baayen, 2008), we do not report degrees of freedom and p values for the
fixed estimates. Instead, statistical significance at approximately the 0.05 level is
indicated by values of the t-statistics ≥ 1.96 (see e.g., Schotter et al., 2014). We report
both the Wald tests and the estimates of the fixed effects for all models. Unless
mentioned otherwise, we only discuss effects that were significant (p < 0.05) in the
Wald-tests and the tests of the fixed estimates (t > 1.96 or t < -1.96) and for reading
ability and working memory capacity we only discuss significant interactions with the
fixed factors in our design (background knowledge, text and focus).

Target sentence

Table 2.3 reports the results of the Wald tests as well as of the tests of the
estimates of the fixed effects of the final model for the log transformed reading times
on the target sentence. The results revealed significant main effects for both
text (β = 0.08, SE = 0.01, t = 7.34) and background knowledge (β = 0.10, SE = 0.02,
t = 6.47). Reading times for false (M = 2345 ms) or incongruent targets
(M = 2311 ms) were longer than the reading times for true (M = 2085 ms) or congruent
targets (M = 2119 ms), both for knowledge inaccuracies and text incongruencies.
Furthermore, we observed a main effect of focus (β = 0.15, SE = 0.04, t = 3.64):

Participants read more slowly when they focused on the text (M = 2352 ms) than when
they focused on their background knowledge (M = 2078 ms). In addition, there was a
main effect of reading ability (β = -0.02, SE = 0.01, t = -4.37), with skilled readers
reading faster than less skilled readers. Finally, there was a significant interaction
between working memory capacity and background knowledge (β = -0.03, SE = 0.01,
t = -2.32): Participants with a larger working memory capacity showed a smaller
inconsistency effect than did participants with a smaller working memory capacity.

Table 2.3. Wald tests and estimates of fixed effects of the final model including maximum random
slopes for the log transformed reading times on the target sentence. The following R code was used:
Reading time (log) ~ 1 + Focus * Working memory capacity + Text + Background knowledge * Working
memory capacity + Reading ability + (1 + Focus + Text + Background knowledge| Subject) + (1 +
Focus + Text + Background knowledge | Item)

Wald tests χ2 df p
Focus 13.657 1 < 0.001
Text 53.916 1 < 0.001
Background Knowledge 40.755 1 < 0.001
Working memory 0.782 1 0.377
Reading Ability 19.065 1 < 0.001
Focus * Working memory 1.050 1 0.305
Working memory * Background Knowledge 5.386 1 0.020

Estimates of the fixed effects β SE t


Intercept 7.421 0.045 165.56
Focus 0.148 0.041 3.64
Text 0.077 0.011 7.34
Background Knowledge 0.098 0.015 6.47
Working memory -0.051 0.051 -1.01
Reading Ability -0.019 0.004 -4.37
Focus * Working memory 0.050 0.049 1.02
Working memory * Background Knowledge -0.029 0.012 -2.32
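The p-values in the Wald tests above follow directly from the χ² distribution with the listed degrees of freedom. For df = 1, the survival function can be computed from the standard-normal CDF, since a χ²(1) variate is a squared z-score. An illustrative check in Python (not part of the original R analysis pipeline):

```python
import math

def chi2_sf_df1(x):
    """P(X > x) for a chi-square variable with 1 df, using the identity
    X = Z^2, so P(X > x) = 2 * (1 - Phi(sqrt(x)))."""
    phi = 0.5 * (1 + math.erf(math.sqrt(x) / math.sqrt(2)))  # standard-normal CDF
    return 2 * (1 - phi)

print(chi2_sf_df1(13.657) < 0.001)   # Focus: → True (p < 0.001)
print(round(chi2_sf_df1(5.386), 3))  # WM x Background Knowledge: → 0.02 (reported as 0.020)
```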

Spill-over sentence

Table 2.4 reports the results of the Wald tests as well as of the tests of the
estimates of the fixed effects of the final model for the log transformed reading times

on the spill-over sentence. For the reading times of the spill-over region there was a
main effect of background knowledge (β = 0.056, SE = 0.013, t = 4.21), indicating that
there was a spill-over effect of knowledge inaccuracies: when the preceding target
sentence was false (M = 2799 ms) participants were slower on the spill-over sentence
than when the preceding target sentence was true (M = 2647 ms). Furthermore, there
was a main effect of focus (β = 0.12, SE = 0.05, t = 2.74): When participants focused
on the text (M = 2886 ms) they read more slowly than when they focused on
background knowledge (M = 2558 ms). Finally, there was an interaction of reading
ability and working memory capacity (β = -0.02, SE = 0.01, t = -2.23). However,
because this interaction was of secondary interest and, moreover, not reliable in the
Wald tests (χ2(1) = 1.71, p = 0.19) we refrain from further discussing this result.

Table 2.4. Wald tests and estimates of fixed effects of the final model including maximum random
slopes for the log transformed reading times on the spill-over sentence. The following R code was
used: Spill-over reading time (log) ~ 1 + Focus * Reading ability * Working memory capacity +
Accuracy + (1 + Focus * Background Knowledge |Subject) + (1 + Focus * Accuracy |Item)

Wald tests χ2 df p
Focus 11.60 1 < 0.001
Reading Ability 18.15 1 < 0.001
Working Memory 0.93 1 0.334
Background Knowledge 17.76 1 < 0.001
Focus*Reading Ability 1.71 1 0.191
Focus*Working Memory 0.10 1 0.754
Reading Ability * Working Memory 1.70 1 0.192
Focus * Reading Ability * Working Memory 3.26 1 0.071

Estimates of the fixed effects β SE t


Intercept 7.704 0.051 150.56
Focus 0.123 0.045 2.74
Reading Ability -0.012 0.008 -1.36
Working Memory -0.032 0.060 -0.54
Background Knowledge 0.056 0.013 4.21
Focus*Reading Ability -0.010 0.008 -1.32
Focus*Working Memory 0.005 0.055 0.09
Reading Ability * Working Memory -0.021 0.009 -2.23
Focus*Reading Ability*Working Memory 0.016 0.009 1.81

Taken together the results show a main effect of background knowledge on
both target and spill-over sentences, with participants reading both target and spill-
over sentences more slowly when the target sentence was false. For the target
sentence this effect depended on participants’ working memory capacity as indicated
by a significant interaction: Participants with a larger working memory capacity
showed a smaller inconsistency effect than did participants with a smaller working
memory capacity. With respect to the congruency of target and context, a main effect
of text was observed for the target sentence but not the spill-over sentence:
participants were slower to read target sentences that were incongruent with the
preceding text than target sentences that were congruent. On both the target and
spill-over sentences we found a main effect of focus, indicating that participants were
generally slower when they were instructed to focus on the text. We did not find any
interaction effects between text and background knowledge on the target or the spill-
over sentence.

Pre-critical sentence and final sentence

Tables 2.5 and 2.6 report the results of the Wald tests as well as of the
tests of the estimates of the fixed effects of the final models for the log transformed
reading times on the pre-critical sentence (Table 2.5) and the final sentence (Table 2.6).
Both models include maximum random slopes. We found a main effect of focus
on both the pre-critical sentence (β = 0.11, SE = 0.04, t = 2.62) and the final sentence
(β = 0.09, SE = 0.04, t = 2.16): For both sentences, participants read more slowly when
they were focused on the text (pre-critical: M = 2546 ms, final: M = 2683 ms) than
when they were focused on background knowledge (pre-critical: M = 2307 ms, final:
M = 2460 ms). Also, the main effect of reading ability was significant in the Wald tests
for both the pre-critical (χ2(1) = 17.18, p < 0.001) and the final sentences
(χ2(1) = 16.54, p < 0.001), indicating that a higher reading ability was associated with
faster reading times. However, the estimate of the fixed effect for the predictor reading
ability did not reach significance for either the pre-critical (β = -0.02, SE = 0.01,
t = -1.74) or the final sentence (β = -0.01, SE = 0.01, t = -1.47), indicating that the
results for this predictor must be interpreted with caution. There were no effects of
source (background knowledge or text) on the reading times for either region. Thus,
we did not find any effects of the text and background knowledge manipulations on
the pre-critical or the final sentence, indicating that the manipulations did not have an
effect prior to the presentation of the target sentence and that the spill-over effect of
background knowledge was no longer present on the final sentence.

Table 2.5. Wald tests and estimates of fixed effects for the final model including maximum random
slopes for the log transformed reading times on the pre-critical sentence. The following R code was
used: Z7.RT_Log ~ 1 + Focus * Reading ability * WM capacity + (1 + Focus | Subject) + (1 + Focus |
Item)

Wald tests χ2 df p
Focus 10.999 1 < 0.001
Reading Ability 17.178 1 < 0.001
Working Memory 0.005 1 0.942
Focus*Reading Ability 1.413 1 0.235
Focus*Working Memory 2.131 1 0.144
Reading Ability * Working Memory 0.565 1 0.452
Focus * Reading Ability * Working Memory 2.789 1 0.095

Estimates of the fixed effects β SE t


Intercept 7.642 0.050 152.10
Focus 0.107 0.041 2.62
Reading Ability -0.015 0.008 -1.74
Working Memory -0.050 0.059 -0.85
Focus*Reading Ability -0.009 0.007 -1.20
Focus*Working Memory 0.063 0.051 1.24
Reading Ability * Working Memory -0.017 0.009 -1.76
Focus * Reading Ability * Working Memory 0.013 0.008 1.67

Table 2.6. Wald tests and estimates of fixed effects for the final model including maximum random
slopes for the log transformed reading times on the final sentence. The following R code was
used: Z10.RT_Log ~ 1 + Focus * Reading ability * WM capacity + (1 + Focus |Subject) + (1 + Focus
|Item)

Wald tests χ2 df p
Focus 7.481 1 0.006
Reading Ability 16.535 1 < 0.001
Working Memory 0.4532 1 0.465
Focus*Reading Ability 1.452 1 0.228
Focus*Working Memory 0.193 1 0.660
Reading Ability * Working Memory 1.742 1 0.187
Focus * Reading Ability * Working Memory 2.265 1 0.132

Estimates of the fixed effects β SE t


Intercept 7.665 0.053 144.19
Focus 0.094 0.043 2.16
Reading Ability -0.013 0.009 -1.47
Working Memory -0.033 0.062 -0.54
Focus*Reading Ability -0.009 0.008 -1.22
Focus*Working Memory 0.014 0.054 0.25
Reading Ability * Working Memory -0.019 0.010 -2.00
Focus * Reading Ability * Working Memory 0.013 0.009 1.50

Discussion
In the present study we aimed to contrast validation against background
knowledge (knowledge-based monitoring) and validation against prior text (text-
based monitoring) to investigate their unique influences on processing. Additionally,
we wanted to shed light on whether validation is a passive or a reader-initiated process
and elucidate the possible role of working memory. First, in line with previous studies
participants took longer to read both the false and the incongruent targets (e.g.,
Albrecht & O’Brien, 1993; O’Brien et al., 1998; Rapp, 2008). Both prior text and
background knowledge influenced readers’ moment-by-moment processing: both
types of inconsistencies elicited an effect on the target sentence. However, only
inconsistencies with background knowledge elicited a spill-over effect on the next
sentence. Thus, it seems that text-based monitoring and knowledge-based monitoring
show distinct time courses, which suggests processing differences. Second, we
investigated whether validation is a passive or reader-initiated process by considering
both the influence of the task and the role of working memory. We examined
the influence of the task by manipulating the focus of the readers, by instructing them
to focus on either their background knowledge or the text. The task influenced the
reading process of the text as a whole (i.e., a main effect of focus: readers were slower
when instructed to focus on the text than when they were instructed to focus on
background knowledge), but did not influence the validation process (i.e., there was
no interaction between reading focus and inconsistency). Furthermore, we
investigated the role of working memory in validation and observed that working
memory influences processing of inconsistencies with background knowledge but not
processing of inconsistencies with prior text.
The results show that both background knowledge and prior text have an
influence on readers’ moment-by-moment processing. Moreover, they suggest
distinct time courses for validation against background knowledge and prior text,
because only knowledge violations elicit a spill-over effect. Similar results have been
found by Rapp (2008) in a series of experiments where he used texts with familiar or
unfamiliar topics to make the readers’ prior knowledge relevant or irrelevant. In two
of these experiments he used familiar topics and in the third experiment unfamiliar
topics, thereby minimizing - but not eliminating - the influence of background
knowledge. Rapp found that minimizing the influence of background knowledge (more
text-based monitoring) resulted in main effects of the inconsistencies only on the
targets but no spill-over effects. In contrast, when prior knowledge was made relevant
(more knowledge-based monitoring), he did observe spill-over effects. Although the
aim of Rapp’s study was not to compare the two types of monitoring, the pattern of
spill-over effects when knowledge-based monitoring was important and no spill-over

effects for text-based monitoring can be interpreted in light of the current study. In
our study we made a direct comparison between the two types within a single design
to investigate their differences. The results showed that indeed text-based and
knowledge-based monitoring have different time courses and that knowledge
violations seem to have a different - and perhaps larger – impact on processing than
text violations.
There are several possible explanations for the time-course differences
between text-based and knowledge-based monitoring. First, the time course 2
differences may reflect differences in the possible repair processes that follow
the validation process. It could be that the repair processes are different for
knowledge violations than for text violations. For example, if it is more difficult to repair
an inconsistency with the more extensive background knowledge than one with a text
representation, then this would lead to the observed longer reading times.
Second, the differences may be related to how easily relevant information for
validation is accessed and activated. Earlier two-stage models of comprehension
(Kintsch, 1988, 1998; Long & Lea, 2005; Rizzella & O’Brien, 1996; Rizzella & O’Brien,
2002; Sanford & Garrod, 1989) assume that comprehension consists of two stages:
activation and integration. More recent models, such as the RI-Val model of reading
(Cook & O’Brien, 2014; O’Brien & Cook, 2016a, 2016b), have included validation as
an additional stage of comprehension. All aforementioned models assume that
information needs to be activated before it can be integrated or validated. Relevant
concepts are activated through spread of activation (e.g., Anderson, 1983): a passive
and unrestricted process that continues until the system reaches an equilibrium
(Kintsch, 1988). Presumably, the memory representation of background knowledge
consists of a much larger and therefore richer network of possibly relevant concepts
(especially for well-known topics) than the memory representation of the text
(especially for relatively short texts). If so, the onset of knowledge-based validation
may be later because it takes longer for the system to reach an equilibrium during
the activation and integration stages, resulting in spill-over effects for knowledge
violations.
A third, related, possibility is that the time course differences are caused by
differences in the duration of the validation processes. Because the memory
representation of background knowledge consists of a larger and richer network it
seems likely that after the initial activation and integration stages more concepts are
available as input for the validation process than for the memory representation of the
text. Validation may take longer because there simply are more possible concepts
against which the new information has to be validated, thus causing knowledge-based
validation to take longer.

Common to these three explanations for the time-course differences between
knowledge- and text-based monitoring is that the former requires more processing
resources than the latter, either because the repair processes are more demanding or
because the amount of information that needs to be validated is larger. This
interpretation of the results is supported by the observation that working memory
capacity only played a role when readers validated inaccuracies with background
knowledge. Interestingly, a larger working memory capacity decreased – but not
eliminated- the inconsistency effect for the target sentences. Although a smaller
inconsistency effect could be taken to reflect less or even inferior validation, this
interpretation seems unlikely in the current situation assuming that high-capacity
readers are proficient validators. Hence, a more plausible explanation of the
attenuation of the inconsistency effect for high-capacity readers is that they either
validated the inconsistent information more quickly, or that they repaired their mental
representation more efficiently. Furthermore, our observation that working memory
capacity only played a role on the target sentence and not on the spill-over sentence
could indicate that its influence occurs relatively early during processing (i.e., when
readers integrate and validate the information) rather than at later processing (i.e.,
repair processes that follow validation). However, given the temporal resolution of the
current design, this account is speculative. Thus, although we can conclude that
working memory capacity affects the processing of knowledge violations, more
research is needed to clarify its specific role in validation and repair processes.
Whether validation is a passive or reader-initiated process remains an open
question. Evidence for a passive account of validation is the finding that, although the
focus manipulation successfully altered the reading process as a whole (i.e., a main
effect of focus), it did not influence the validation process (i.e., no interaction between
focus and the type of inconsistency). This suggests that the validation process is not
influenced by reading task and thus is a passive process, which would be in line with
previous studies (e.g., Hagoort & van Berkum, 2007; Isberner & Richter, 2014b;
Richter et al., 2009; Singer, 2006, 2013). Of course, one should be cautious
interpreting this null effect: It also is possible that the specific reading tasks used in
the present study were unable to influence the validation process. Evidence for a
reader-initiated account of validation comes from the finding that working memory
plays a role in the processing of knowledge violations but not text violations. This
suggests that validation – at least against background knowledge – is not entirely
effortless and thus not entirely passive.
The interpretation of these results depends on how one conceptualizes
passive and reader-initiated processes. Passive processes generally are
conceptualized as outside the reader’s conscious control, nonselective, and
unrestricted in the kind of information they return (e.g., Anderson, 1983; McKoon &
Ratcliff, 1992; Myers & O’Brien, 1998; O’Brien & Myers, 1999), whereas

reader-initiated processes are conceptualized as requiring control and attentional
resources. Based on this view our results create a complicated picture, because on
the one hand validation was not influenced by the reading task, but on the other hand
validation against background knowledge did require cognitive resources (i.e.,
working memory capacity). These seemingly contradictory findings may be
reconciled by a more refined conceptualization, namely that validation consists of sub-
components that operate at different levels of processing (more passive or more
reader-initiated). This interpretation would be in line with the suggestion that reader-initiated processes lie on a continuum reflecting the degree to which they are
constrained by the text, ranging from processes that are close to the actual text itself
(almost passive) to processes that go well beyond the information in the text (more
interpretive) (van den Broek & Helder, 2017). If processes indeed range on such
a continuum our task-manipulation would tap into a relatively high level of
reader-initiated processes whereas the interaction between working memory and
knowledge-based monitoring would tap into processes closer to the text itself (and
thus closer to passive). The fact that we did not find an effect of focus but did find an
interaction with working memory capacity could mean that validation – at least against
background knowledge – indeed is a reader-initiated process, but at a lower level on
this continuum. Future research could try to determine whether there indeed is a
continuum of reader-initiated processes and, if so, where exactly on this continuum
the process of validation lies and whether this is the same for text-based and
knowledge-based monitoring.
In summary, we have shown that both prior text and background knowledge
have a unique influence on processing and that the processing of inconsistencies
against these two sources follow different time courses and, therefore, may involve
different mechanisms. The current study has taken a first step in elucidating the
processing differences between text-based and knowledge-based monitoring. Future
studies should examine why these processes differ and what exactly the differences
are. Another future challenge would be to design experiments that allow us to pinpoint
exactly when the various component processes of validation start and finish by using
more fine-grained measures to obtain better insight in their time course. Our results
suggest that validation could occur at different levels of processing and perhaps draw
on different mixes of passive and reader-initiated processes, but they call into
question whether validation categorically can be described as passive or reader-
initiated. Furthermore, the results show that working memory plays a role - in
particular in the processing of knowledge violations. This is a first step to elucidate the
possible role of working memory in comprehension monitoring and validation. An
interesting research avenue would be to determine the conditions under which
working memory does or does not play a role, for example by varying the demands a

task places on working memory and determine whether and, if so, how this influences
the processing of different types of inconsistencies.
The current results contribute to our understanding of monitoring processes
during reading and how different sources of information, such as text or background
knowledge, can influence this process. Many theoretical models of reading
comprehension (e.g., Albrecht & O’Brien, 1993; Gerrig et al., 2005; Gerrig & McKoon,
1998; Johnson-Laird, 1983; McKoon & Ratcliff, 1998; O’Brien & Albrecht, 1992; van
Dijk & Kintsch, 1983) make a distinction in the origin (i.e., the text or background
knowledge) of information that is included in a mental representation. The current
study shows that a similar distinction can be made with respect to the origin of
information – text or background knowledge - against which incoming information is
validated during reading.


3

What you read vs what you know: Neural correlates of accessing context information and background knowledge in constructing a mental representation during reading

This chapter is based on:


van Moort, M. L., Jolles, D. D., Koornneef, A., & van den Broek, P. (2020). What you
read vs what you know: Neural correlates of accessing context information and prior
knowledge in constructing a mental representation during reading. Journal of
Experimental Psychology: General, 149(11), 2084–2101.
Abstract
A core issue in psycholinguistic research is what the online processes are by
which we combine language input and our background knowledge to construct the
meaning of a message. We investigate this issue in the context of reading. To build a
coherent and correct mental representation of a text readers monitor incoming
information for consistency with the preceding text and with their background
knowledge. Prior studies have not distinguished between text-based and knowledge-
based monitoring, therefore it is unclear to what extent these two aspects of text
comprehension proceed independently or interactively. We addressed this issue in a
contradiction paradigm with coherent and incoherent versions of texts. We combined
behavioral data with neuroimaging data to investigate shared and unique brain
networks involved in text-based and knowledge-based monitoring, focusing on
monitoring processes that affected long-term memory representations. Consistent
with prior findings, behavioral results indicate that text and background knowledge
each have a unique influence on processing. However, neuroimaging data suggests
a more nuanced interpretation: Text-based and knowledge-based monitoring involve
shared and unique brain regions, as well as regions that are sensitive to interactions
between the two sources. It appears that the (d)mPFC and hippocampus -which are
important for the influence of existing knowledge on encoding processes in non-
reading contexts- are particularly involved in knowledge-based monitoring. In
contrast, the right IFG is primarily involved in text-based monitoring, whereas left IFG
and precuneus are implicated in integration processes. Furthermore, processes
during reading affect recall of information (in)consistent with prior text or background
knowledge.

Introduction
Constructing meaning (i.e., a mental representation that makes sense) from
discourse is a fundamental human ability. To comprehend the world around us we
build mental representations in which we integrate the current input (or recently
acquired knowledge) with our existing knowledge base (stored in long term memory),
for example when we read a book, watch a movie, or have a conversation. Building
this representation is a dynamic process; the representation must be monitored and
updated continuously as new information is encountered (e.g., Graesser et al., 1994;
Kintsch & van Dijk, 1978; Trabasso et al., 1984; van den Broek et al., 1999). Moreover,
it is crucial that the incoming information is validated to protect the mental
representation against incongruencies or inaccuracies (e.g., Isberner & Richter,
2014a; Singer, 2013). The current study investigates these validation processes in the
context of reading comprehension. Validating (written) materials against various
sources of information is increasingly important, as they frequently contain
inconsistencies, misinformation, or even fake news, especially today. Current
theoretical frameworks presume a rudimentary cognitive architecture for validation
processes, but they do not provide detailed information on when and how different
sources of information, such as recently acquired knowledge (from the text) and
readers’ background knowledge (from memory), exert their influence. The current
study examines the (neuro)cognitive architecture of the processes involved in text-
based and knowledge-based validation.

Behavioral correlates of text-based and knowledge-


based monitoring

Constructing meaning is a crucial ability in many major cognitive tasks,


including learning, memory, perception, decision making, and language processing.
The processes by which such meaning construction takes place have been
investigated extensively in the context of language processing, in particular reading,
as the interpretation of (written) language draws on many cognitive processes
involved in meaning construction (Graesser et al., 1997). Although frameworks that
describe how we construct meaning from language vary in approach and scope, they
all attempt to describe the architecture and time course of processing: how and when
the different language processes interact with each other. Some focus on the interplay
between syntactic and extra-syntactic systems in processing individual sentences or
very short discourse (e.g., Friederici, 2002; Hagoort, 2003), whereas others focus on

how readers construct mental models when they process more extended discourse
or texts (e.g., Kintsch, 1988; van den Broek et al., 1999; Zwaan et al., 1995).
A core aspect of constructing mental models from texts is the interplay
between semantic and linguistic cues in the text (the text base) and relevant portions
of world knowledge (which is integrated into the final mental representation) (e.g.,
Fodor, 1983; Kintsch, 1988; Millis & Just, 1994; van den Broek & Helder, 2017).
Current theoretical frameworks of text comprehension provide elaborate descriptions
of the cognitive processes involved, however they are underspecified on how and
when various sources of information, such as contextual information or background
knowledge, exert their influence on processing. For example, two prominent models
of validation, the two-step model of validation and the RI-Val model, describe
validation as the process where incoming information is evaluated for consistency
with stored knowledge (Isberner & Richter, 2014a) or all information activated from
long-term memory (Cook & O’Brien, 2014; O’Brien & Cook, 2016a, 2016b). This
stored knowledge or activated information includes both information from the episodic
representation of the text as well as general world knowledge. Although these models
state that the two sources of information can impact validation processes, they do not
describe the underlying architecture of text-based and knowledge-based validation
processes. As a result, it is unclear whether validating against background knowledge
and validating against prior text involve a common mechanism or (partially) separate
mechanisms, and what happens when they are in conflict.
Interestingly, models that focus on processing individual sentences or very
short discourse describe very similar issues (e.g., the interplay between syntax and
semantics) but provide more detailed descriptions of the cognitive architecture of
processing, and specifically, when and how various sources of information (e.g.,
syntax, semantics) influence processing (e.g., Ferreira & Clifton, 1986; Friederici,
2002; Hagoort, 2005; Hagoort et al., 2004; Hagoort & van Berkum, 2007; Jackendoff,
2007; van Berkum et al., 1999). To achieve such detailed descriptions neurobiological
data has proven useful in developing, selecting, and constraining cognitive models,
as assumptions on the cognitive architecture of a process also have implications for
the hypothesized neural organization of that process (Hagoort, 2017). Analogous to
models of sentence comprehension, models of text comprehension could benefit from
neuroimaging research: Information on the neural architecture can be used to make
more specific and more grounded claims on the cognitive architecture of text
processing. More specifically, it can be used to make more specific claims on how
different sources of information -such as contextual information and background
knowledge- influence processing.
There is a considerable amount of behavioral data on the effect of contextual
information and background knowledge on discourse processing (e.g., Albrecht &
O’Brien, 1993; Nieuwland & Van Berkum, 2006; O’Brien & Albrecht, 1992; Rapp,

58
2008; Richter et al., 2009). The influence of these two sources of information on
integration and updating processes is often examined using contradiction paradigms
where participants read both coherent and incoherent versions of sentences or texts
(e.g., Albrecht & O’Brien, 1993; Rapp, 2008). A robust finding in these paradigms is
that readers slow down when they read sections that are inconsistent with their world
knowledge, suggesting that they check the incoming information against background
knowledge on the fly (O’Brien & Albrecht, 1992; Rapp, 2008). Background knowledge
affects sentence and discourse processing at various levels of language processing
(i.e., sentence and discourse level) and in different text genres (e.g., narrative and
expository texts) (e.g., O’Brien & Albrecht, 1992; Rapp, 2008; Richter et al., 2009;
Rodd et al., 2016; Wiley et al., 2018). Similarly, contextual information (i.e., provided
within a text) can also affect processing and comprehension (e.g., Colbert-Getz &
Cook, 2013; Myers et al., 2000). In fact, if contextual information is strong (e.g.,
recently encountered or more elaborate) it may even override the influence of world
knowledge during reading (e.g., Colbert-Getz & Cook, 2013; Myers et al., 2000;
Nieuwland & Van Berkum, 2006; Rizzella & O’Brien, 2002). For example, Nieuwland
and Van Berkum (2006) used a paradigm in which they presented readers with the
sentence ‘The peanut is in love’ or ‘The peanut is salted’. Although both are
grammatically correct, the first sentence is more difficult to process than the second
sentence because it violates background and lexico-semantic knowledge – i.e.,
peanuts can be salted but they are inanimate objects and therefore cannot be in love.
However, when the same sentences were presented in a story about a peanut singing
a song about his new girlfriend, readers would show the opposite pattern, because
‘the peanut is in love’ is appropriate given the context, whereas ‘the peanut is salted’
is not.
Taken together, there is ample evidence that both contextual information and
background knowledge influence the construction of a mental representation.
However, studies on text-based monitoring do not always control for the influence of
background knowledge when investigating the influence of prior text (Isberner & Richter,
2014a; van Moort et al., 2018). Instead, the influence of these two sources on
processing and comprehension often are studied in tandem, for example in paradigms
where the target (e.g., children are building a snowman) can only be incoherent with
the preceding context (e.g., it was a hot sunny day) if readers had certain background
knowledge (e.g., snow melts on a hot sunny day). As a result, it is unclear whether the
mechanisms of validating against background knowledge are the same as those
validating against prior text or whether these mechanisms are fundamentally different.
Addressing this issue, van Moort et al. (2018) developed a paradigm that
explicitly contrasts validation against background knowledge and validation against
prior text. Participants read expository texts about well-known historical topics
containing a target sentence that was either true or false relative to the readers’
background knowledge and that was either supported or called into question in the
preceding context. Reading time measures on targets provided a measure of readers’
difficulty integrating statements into their discourse representation (e.g., Albrecht &
O’Brien, 1993; Cook et al., 1998b; Rapp, 2008; Rapp et al., 2001). Results indicated
that both prior text and background knowledge influenced readers’ moment-by-
moment processing on the target sentence, but only inconsistencies with background
knowledge elicited a spill-over effect on the next sentence. This suggests that text-
based monitoring and knowledge-based monitoring follow distinct time courses.
Based on these results, Van Moort et al. (2018) speculated that text-based and
knowledge-based monitoring may involve different cognitive systems. However, an
alternative explanation suggests that there is a single system involved with these
different types of validation, and the observed differences in reading times are the
result of quantitative rather than qualitative differences in cognitive demands.
Because the behavioral data is subject to multiple interpretations, neuroimaging data
may help to better characterize the cognitive architecture of text-based and
knowledge-based validation. More specifically, the main goal of the current study is
to investigate whether we should distinguish between the influence of contextual
information and background knowledge and, more specifically, whether we should
assign separate roles for the two sources in the cognitive architecture of validation.
By combining behavioral measures with neuroimaging data we aim to examine to what
extent validation against background knowledge and validation against prior text call
on the same underlying brain systems or (partly) different brain systems, as the neural
organization of these processes can help differentiate between cognitive theories
about common versus separate validation mechanisms (e.g., Frank & Badre, 2015;
Hagoort, 2017). As background, we highlight some crucial findings in the
neurocognitive literature below.

Division of labor in the coherence-monitoring network

Functional magnetic resonance imaging (fMRI) studies have begun to reveal
a network of regions that contribute to the construction of coherent mental
representations of texts (e.g., Egidi & Caramazza, 2013; Ferstl et al., 2008;
Ferstl & Von Cramon, 2001; Mason & Just, 2006; Moss & Schunn, 2015;
Virtue et al., 2006; Yarkoni et al., 2008). A meta-analysis of neuroimaging studies on
text comprehension processes showed a network of regions that was more active for
coherent compared to incoherent (or less coherent) language, including the left
inferior frontal gyrus (IFG), medial prefrontal cortex (mPFC), posterior cingulate cortex
(PCC), precuneus, and several temporal lobe regions (Ferstl et al., 2008). These areas
are thought to be involved in coherence building processes (Ferstl & Von Cramon,
2001, 2002; Kuperberg et al., 2006; Maguire et al., 1999; Mellet et al., 2002). However,
as most studies in the meta-analysis compared processing coherent stories with
processing sets of unrelated sentences it is difficult to determine to what extent
regions in this network contribute to specific processes involved in coherence
monitoring, such as detecting inconsistencies with prior text or background
knowledge.
To study coherence monitoring in more detail, fMRI studies have employed
variations of the contradiction paradigm (e.g., Ferstl et al., 2005; Hasson et al., 2007;
Helder et al., 2017; Menenti et al., 2009). Two of these studies are particularly relevant
for the current study. First, Helder, van den Broek, Karlsson, and Van Leijenhorst
(2017) examined the neural correlates of coherence-break detection during reading
and found a network of regions that were more active in response to incoherent than
coherent target sentences, including the left IFG, precuneus, (dorso)medial prefrontal
cortex (dmPFC), right supramarginal gyrus, and a number of temporal lobe regions.
These findings suggest that the coherence-building network (Ferstl et al., 2008)
becomes more active when inconsistencies are encountered. In addition, they found
activation in subcortical clusters, including left hippocampus (HC), left amygdala,
and bilateral caudate. Because the (in)coherence of targets could only be established
in the context of readers’ background knowledge the authors speculated that the
left HC may have played a specific role in the reactivation of episodic memory traces
of the text in combination with the retrieval of background knowledge. However,
the paradigm they used did not allow distinguishing between these sources of
information. Second, Menenti et al. (2008) investigated how context information
provided within a text influences the processing of (erroneous or implausible) world-
knowledge information. Their results suggest that the left IFG is particularly sensitive
to the consistency of information with background knowledge (showing increased
activation for false information), whereas the right IFG also takes into account the
ongoing discourse (showing a reduced effect of false information if the context
provides an explanation) (Menenti et al., 2008). These findings suggest that both world
knowledge and discourse context affect the integration of world knowledge into the
mental representation, and do so by recruiting partly different sets of brain areas.
As in the behavioral research, most neuroimaging studies do not make a
clear distinction between text-based and knowledge-based monitoring. The study by
Menenti et al. (2008) was the only study that included contextual and world knowledge
manipulations in a single design, but their main goal was to examine whether the
context can override world knowledge, thereby disregarding text-based monitoring
as an independent process. As a result, it is difficult to disentangle potentially separate
monitoring networks. However, these studies do suggest that the regions involved,
i.e., (d)mPFC, precuneus, left and right IFG, display a division of labor with respect to
building, monitoring, and updating the situation model (e.g., Ferstl et al., 2008;
Hagoort et al., 2004; Helder et al., 2017; Menenti et al., 2009).
The dmPFC and the precuneus are both key nodes of a ‘default-mode
network’ that is involved in, among other processes, building mental models (e.g.,
Andrews-Hanna et al., 2010). During reading, these regions are suggested to be
important for monitoring and inferencing in complex or ambiguous situations as well
as in situation-model building (Ferstl et al., 2008). Their functional division is still under
debate, but it seems that the dmPFC may be particularly important for coherence
monitoring (e.g., Ferstl et al., 2005; Hasson et al., 2007; Helder et al., 2017), whereas
the precuneus/PCC may play a crucial role in building and updating mental
representations (e.g., Speer et al., 2009).
The left IFG is a key region in a left lateralized perisylvian language network
that is associated with basic language processing on the word and sentence level, as
well as with more complex processes such as coherence building (Ferstl et al., 2008).
As described above, it is sensitive to sentences that are incoherent with the text given
that readers have certain background knowledge (e.g., Helder et al., 2017) and is
involved in recruiting world knowledge during comprehension (e.g., Hagoort et
al., 2004; Menenti et al., 2009). Hagoort et al. (2004) showed that sentences
participants regarded as false elicited an activity increase in the left IFG compared to
true sentences. The left IFG is sensitive to the consistency of information with
background knowledge, even if the preceding discourse overrides this knowledge
(Menenti et al., 2008). So, although the left IFG seems to play a role in both text-based
and knowledge-based monitoring, it may be more dedicated to world-knowledge
validation processes. Similar claims have been made for the right IFG, although this
region is thought to play a larger role in integrating previously stored knowledge with
discourse information (Menenti et al., 2008).
In summary, both prior text and background knowledge influence the
neurocognitive processes that take place during reading comprehension. These two
informational sources are processed in overlapping neural networks, consisting of
several core brain regions (dmPFC, precuneus/PCC and left and right IFG). It is not
yet clear whether these core regions are uniquely involved in either text-based or
knowledge-based processing, or whether they contribute to both types of validation.
Therefore, the second goal of the current study is to provide a more detailed picture
of the division of labor between these regions during text-based and knowledge-
based monitoring.
Updating the memory representation

When new information is encountered during reading, the long-term memory
representation can be updated or revised to take this information into account
(Kendeou, 2014). However, it is not yet clear how reading an inconsistency affects the
long(er) term memory representation. Therefore, the third goal of the current study is
to investigate the influence of text-based and knowledge-based monitoring on the
long-term memory representation. With respect to memory performance, reading
inconsistent information could affect the memory representation in different ways: It
could be that information that fits with pre-existing memory representations (i.e., the
readers’ background knowledge or the mental representation of the text) is
remembered better (e.g., Anderson et al., 1983). Alternatively, it could be that
inconsistent information is remembered better, as it is relatively ‘new’ information with
respect to the mental representation or the readers’ background knowledge and
novelty is suggested to improve retention of that information (e.g., Kormi-Nouri et al.,
2005). These possible memory performance differences may be reflected in
differences in neural activation during encoding. This is illustrated by a study by
Hasson et al. (2007), who showed that activation in the precuneus, dmPFC and right
superior temporal gyrus predicted memory for spoken narratives. This study
suggested that at least part of the coherence-monitoring network is involved in both
detecting (and repairing) inconsistencies and in encoding them in memory. Thus,
differences in memory encoding between inconsistent and consistent information
may affect the neural activation during the task; therefore, it is important to take
memory performance into account.

The current study

As current theoretical frameworks of text comprehension are underspecified
on when and how contextual information and background knowledge influence
validation processes, it is unclear whether we should distinguish between the
influence of contextual information and background knowledge and, more specifically,
whether we should assign separate roles for the two sources in the cognitive
architecture of validation. Moreover, models of text comprehension are not grounded
in the neural architecture of the brain, as neuroimaging research investigating
coherence monitoring is still in its early stages. To investigate the specific roles of
contextual information and background knowledge in the cognitive architecture of
validation processes and explore the neural underpinnings of these processes, the
current study combined behavioral data with neuroimaging data in a contradiction
paradigm with coherent and incoherent versions of text. To investigate to what extent
text-based and knowledge-based monitoring call on the same or (partly) different
brain regions we investigated specific regions of interest that are known to play a key
role in coherence monitoring (i.e., dmPFC, precuneus, left and right IFG) to examine
a possible division of labor between them. Moreover, we investigated whether other
relevant regions for text-based and knowledge-based monitoring could be identified
and compared the results to studies that (tentatively) proposed a more extended
coherence monitoring network (e.g., Helder et al., 2017). Finally, to allow for a more
grounded interpretation of the neural correlates of online text processing and later
recall, we investigated whether the processes during initial reading affect the recall of
information (in)consistent with prior text or background knowledge. In addition, as
these possible differences in memory encoding may affect the neural activation during
the task, we took memory performance into account in our fMRI analyses by focusing
specifically on successfully retrieved targets. Therefore, we only included trials in our
analyses if the targets were remembered correctly as we can be confident that in
these trials the text representation was updated successfully.
We employed a self-paced sentence-by-sentence reading paradigm, with
recording of reading times and neuroimaging data during reading. Reading times
provide a behavioral measure of readers’ difficulty integrating statements into a
mental representation as texts unfold (e.g., Albrecht & O’Brien, 1993; Cook et al.,
1998b). Participants read expository texts in which half of the texts included a target
sentence that contained information that could be either true or false with respect to
the readers’ background knowledge. At the same time, the context sentences prior to the target
provided contextual information that made it more or less likely that the event
described in the target sentence occurred, while remaining historically accurate (see
Table 1 for a sample text). In line with previous behavioral studies (e.g., Albrecht &
O’Brien, 1993; Cook et al., 1998b; Rapp, 2008) we expected longer reading times for
the targets that were inconsistent with either text or background knowledge, which
would suggest that readers indeed check the incoming information against
background knowledge and the preceding text. To examine the respective influences
of information from the text and from background knowledge on the mental
representation and participants’ memory of the texts, we employed a surprise memory
task the next day. The aim of this memory task was twofold: First, recall data would
give us insight in the effects of processing inconsistencies on the memory
representation. Second, it allowed us to elaborate on the neurocognitive processes
involved in updating the mental representation of a text during reading by specifically
considering the fMRI results for correctly remembered items.
Method
Participants

Thirty-one right-handed native speakers of Dutch (11 men, 20 women) aged
19-28 years (M = 23, SD = 3) participated for monetary compensation. All participants
had normal eyesight and none reported having neurological and/or psychiatric
disorders or using psychotropic medication. Participants provided written informed
consent. Procedures were approved by the internal review board at the Leiden
University Medical Centre and conducted in accordance with the Declaration of
Helsinki. Anatomical scans were cleared by a radiologist.
Materials

The 40 texts were a subset of the materials used in Chapter 2 of this
dissertation (based on Rapp, 2008). The texts were normed to ensure that the
presented facts were common knowledge in our sample. The presented facts were
familiar to at least 80% of the participants in the norming study (see Chapter 2 of this
dissertation for a more detailed description of the norming study). Furthermore, on
only 1.5% of the items did participants in the current study indicate that they had little
or no knowledge of the topic on a questionnaire assessing their background
knowledge, which they filled in afterwards. The texts are about well-known historical topics
and each text contains a target sentence that is either true or false (with the readers’
background knowledge); at the same time the preceding text could either support
(i.e., unambiguous context) or call into question (i.e., suspenseful context) the
information in the targets. Four different versions of each text were constructed by
orthogonally varying the target sentence itself (true vs. false) and the context prior to
the target (congruent vs. incongruent with target). More specifically, the context could
bias towards either the true or the false target, making the context either congruent
or incongruent with the target (see sample text in Table 3.1). It is important to note
that bias-false contexts did not include erroneous information. Although the phrasing
of the context sentences called into question the certainty of events stated in the
target, all facts described in the context sentences were historically correct.
Each text consisted of ten sentences (see Table 3.1). Sentences 1-2 were
identical across conditions and introduced the topic. Sentences 3-7 differed in
content, depending on context condition (congruent/incongruent). On average, the
bias-true context consisted of 64 words (SD = 4) and 400 characters (SD = 22) and
the bias-false context consisted of 66 words (SD = 4) and 406 characters (SD = 27).
Sentence 8 was the target sentence, which was either true or false. Overall, targets
were equated for length: true and false targets contained on average 9 words
(SD = 2) and 60 characters (SDtrue = 11; SDfalse = 10). Half of the true targets and half
of the false targets included the word “not” or “never” (e.g., “Jack the Ripper was
never caught and punished for his crimes.”) and half did not (e.g., “The Titanic
withstood the damage from the iceberg collision.”). Sentences 9-10 concluded the
text and did not elaborate on the fact potentially called into question in the target. On
average, texts contained 121 words (SD = 5) and 763 characters (SD = 37), across
all four text versions.
To implement a repeated-measures design we used a Latin square to
construct four lists, with each text appearing in a different version as a function of
target (true or false) and text context (congruent or incongruent with target) on each
list. Each participant was assigned to one list and, hence, read one version of each
text.
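The Latin-square assignment described above can be sketched in a few lines. The text identifiers and condition labels below are hypothetical illustrations, not the actual coding of the materials.

```python
# Sketch of the Latin-square counterbalancing described above.
# Text IDs and condition labels are hypothetical.
from itertools import product

# The 2 x 2 crossing of background knowledge and text congruency:
CONDITIONS = list(product(["true", "false"], ["congruent", "incongruent"]))

def build_lists(n_texts=40, n_lists=4):
    """Rotate every text through the four conditions across lists, so each
    list shows each text exactly once and contains 10 texts per condition."""
    lists = []
    for shift in range(n_lists):
        assignment = {}
        for text_id in range(n_texts):
            target, context = CONDITIONS[(text_id + shift) % len(CONDITIONS)]
            assignment[text_id] = (target, context)
        lists.append(assignment)
    return lists

lists = build_lists()
# On any single list, each condition occurs equally often:
per_condition = {c: sum(1 for v in lists[0].values() if v == c)
                 for c in CONDITIONS}
```

Because each participant reads only one list, every text contributes data to all four conditions across participants while no participant sees the same text twice.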
Table 3.1. Sample text with the four text versions (translated from Dutch original)

The four versions orthogonally cross knowledge accuracy (target true vs. target false) with text congruency (target congruent vs. incongruent with the preceding context): in the congruent versions the true target follows the bias-true context and the false target follows the bias-false context; in the incongruent versions these pairings are reversed.

[Introduction – identical across versions]
In 1865, a Frenchman named Laboulaye wished to honor democratic progress in the U.S.
He conceptualized a giant sculpture along with artist Auguste Bartholdi.

[Bias True Context]
Their ‘Statue of Liberty’ would require extensive fundraising work.
They organized a public lottery to generate support for the sculpture.
American businessmen also contributed money to build the statue’s base.
Despite falling behind schedule, the statue was completed.
The statue’s base was finished as well and ready for mounting.

[Bias False Context]
Their ‘Statue of Liberty’ would require extensive fundraising work.
Raising the exorbitant funds for the statue proved an enormous challenge.
Because of financial difficulties France could not afford to make a gift of the statue.
Fundraising was arduous and plans quickly fell behind schedule.
Because of these problems, completion of the statue seemed doomed to failure.

[Target True]
The Statue of Liberty was delivered from France to the United States.

[Target False]
The Statue of Liberty was not delivered from France to the United States.

[Coda – identical across versions]
The intended site of the statue was a port in New York harbor.
This location functioned as the first stop for many immigrants coming to the U.S.
Apparatus

Reading task

Participants read the 40 texts in the scanner in two blocks. Texts were
presented sentence-by-sentence, while reading times were recorded. The
presentation rate was self-paced. Participants were instructed to read for
comprehension (“Please read the texts carefully, it is important that you understand
the texts”) and to advance to the next sentence by pressing a button with their left
index finger. Sentences remained on screen for a maximum of 10 seconds. A fixation
cross was presented between texts (variable duration 7000 – 14800 ms). One second
before the next trial started, the cross turned red to alert participants to the
start of the trial. A short practice block preceded the experimental blocks.

Recognition Memory Task

The recognition memory task consisted of 160 sentences: 40 target, 40
context, 40 neutral, and 40 distractor sentences. Participants were presented with
sentences that either matched or did not match sentences they had encountered in
the reading task (e.g., when they were presented with ‘the statue of liberty was
delivered to the US’ during the reading task they could be presented with either ‘the
statue of liberty was delivered to the US’ or ‘the statue of liberty was not delivered to
the US’). For each sentence participants indicated whether they had read the
sentence during the reading task in the scanner or not and rated confidence in their
answer on a scale ranging from (1) very uncertain to (7) very certain. Half of the
recognition items were consistent with the version that was presented in the reading
task (correct response ‘yes’), the other half was not (i.e., correct response ‘no’). Half
of the presented items contained the word ‘not’ or ‘never’ and half did not (both for
true and false items). Half of the recognition items were from context versions that
were presented in the reading task; the other half were from the other context version.
Thus, correct recognition responses included correct hits (sentence was present
during the reading task and participants indicated that they had read the sentence)
and correct rejections (sentence was not present during the reading task and
participants indicated that they did not read the sentence). Neutral sentences were
presented in the reading task and stemmed from neutral parts of the text (i.e.,
sentence 1,2,9 or 10). Distractor sentences were sentences that were not presented
at all in the reading task.
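The scoring rule described above (a response is correct if it is a hit or a correct rejection) can be made concrete with a small sketch; the trial representation below is hypothetical, not the actual data format used in the study.

```python
# Minimal sketch of the recognition scoring described above: accuracy pools
# hits (old items endorsed) and correct rejections (new items rejected).
# Trial fields are hypothetical illustrations.

def score_recognition(trials):
    """trials: list of dicts with 'was_presented' (bool) and 'said_yes' (bool).
    Returns the proportion correct (hits + correct rejections)."""
    correct = 0
    for t in trials:
        hit = t["was_presented"] and t["said_yes"]
        correct_rejection = (not t["was_presented"]) and (not t["said_yes"])
        if hit or correct_rejection:
            correct += 1
    return correct / len(trials)

demo = [
    {"was_presented": True,  "said_yes": True},   # hit
    {"was_presented": True,  "said_yes": False},  # miss
    {"was_presented": False, "said_yes": False},  # correct rejection
    {"was_presented": False, "said_yes": True},   # false alarm
]
acc = score_recognition(demo)  # 0.5: one hit and one correct rejection of four
```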
Procedure

Participants were tested individually in two sessions. In the first session they
completed the reading task in the MRI scanner (total duration ca. 75 min). The second
session took place about 24 hours after the first. In this session they completed the
memory task and a questionnaire to assess whether they had the required
background knowledge or not by indicating for each topic how much they knew about
it prior to reading the text on a scale from 1 (nothing) to 7 (a lot).

Behavioral data analysis

To investigate the effects of the manipulations on the reading process we
conducted mixed-effects linear regression analyses on the log-transformed reading
times for target and spill-over sentences (sentence following the target). For each
sentence we started with a full interactional model that included the interaction
between the fixed factors background knowledge (target true/false), text (target
congruent/incongruent with preceding context) and the random factors subjects and
items. Based on the results of this analysis we selected the most parsimonious model
and included the maximal participant and item random-effect structure that resulted
in a converging model (Barr et al., 2013). To investigate the effects of the
manipulations on participants’ recall of the texts we conducted mixed-effects logistic
regression analyses on memory and certainty scores using the same approach. We
will only report final models for each analysis. As it is not clear how to determine the
degrees of freedom for the t statistics estimated by mixed models for continuous
dependent variables (Baayen, 2008), we do not report degrees of freedom and p
values. Instead, statistical significance at approximately the 0.05 level is indicated by
t ≥ 1.96 (e.g., Schotter et al., 2014). We report both likelihood tests and tests of the
fixed estimates for all models. Unless mentioned otherwise, we only discuss effects
that were significant in both likelihood tests (p < 0.05) and fixed-estimates tests
(t > 1.96).
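Two conventions in this paragraph, log-transforming reading times and treating |t| ≥ 1.96 as significant at approximately the .05 level, can be illustrated with a toy sketch. The actual analyses fit mixed-effects models; the sample values below are simply the condition means reported in Table 3.2.

```python
# Toy illustration of two conventions used in the analyses: log-transforming
# reading times and the |t| >= 1.96 significance criterion. The input values
# are condition means from Table 3.2, not trial-level data.
import math

raw_rts_ms = [2715, 2938, 2810, 3141]          # target-sentence condition means
log_rts = [math.log(rt) for rt in raw_rts_ms]  # compresses the right skew of RTs

def approximately_significant(t_value, criterion=1.96):
    """Two-tailed ~.05 criterion, used when df for mixed models are
    hard to estimate (Baayen, 2008)."""
    return abs(t_value) >= criterion

sig = approximately_significant(3.37)  # e.g., the background-knowledge effect
```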

fMRI data acquisition

fMRI data were acquired using a standard whole-head coil on a 3-Tesla
Philips Achieva MRI scanner. Foam inserts were used to minimize head movement.
Prior to the functional runs, a high-resolution 3DT1-weighted anatomical scan
was obtained for registration purposes (repetition time (TR) = 9.7 ms; echo time
(TE) = 4.60 ms, flip angle = 8°, 140 axial slices, field of view (FOV) = 224 mm × 168 mm
× 177.333 mm, 0.275 mm slice gap and voxel size = 0.875 mm × 0.875 mm × 1.2 mm).
For fMRI, T2*-weighted whole-brain Echo-Planar Images were acquired in two runs
with the following parameters: TR = 2.2 s, TE = 30 ms, flip angle = 80°, 38 axial slices,
FOV = 220 × 115 × 220 mm, 2.75 mm isotropic voxels, 0.275 mm slice gap,
voxel size = 2.75 mm × 2.75 mm × 2.75 mm and max 800 volumes per run. Five
dummy scans preceded each run to allow for equilibration of T1 saturation effects.
Because the task was self-paced the number of volumes per run varied. When the
participants finished the task the T2* scan was stopped. Stimuli were projected using
E-prime software (version 2.0.10.147, Psychology Software Tools) onto a screen at
the head of the scanner bore which participants viewed through a mirror attached to
the head-coil.

fMRI preprocessing and data analyses

Data pre-processing was performed using FSL (version 5.0.9) and consisted
of motion correction using MCFLIRT (Jenkinson et al., 2002), non-brain removal using
BET (Smith, 2002), 5 mm Gaussian kernel FWHM spatial smoothing, grand-mean
intensity normalization of the entire 4D dataset by a single multiplicative factor, and
high-pass temporal filtering (Gaussian-weighted least-squares straight line fitting, with
sigma = 100 s). Functional scans were registered to the T1-weighted images, and then
to the 2 mm MNI-152 standard space template.
Data analysis was performed using FEAT (version 6.00), part of FSL. A set of
whole-brain analyses was conducted to identify regions involved in text-based and
knowledge-based monitoring. We focused specifically on trials that were remembered
correctly in the memory task because for these trials we can be confident that the text
representation was updated successfully3. Trials that were remembered incorrectly in
the memory task were included in the model but excluded from the contrasts of
interest. For each participant in each run, ten EVs with their temporal derivatives were
included in the general linear model, representing the presentation of (1) sentences
1-7, (2) congruent true target, correctly remembered, (3) congruent true target,
incorrectly remembered, (4) congruent false target, correctly remembered, (5)
congruent false target, incorrectly remembered, (6) incongruent true target, correctly
remembered, (7) incongruent true target, incorrectly remembered, (8) incongruent
false target, correctly remembered, (9) incongruent false target, incorrectly
remembered and (10) sentences 9-10. Onset of the EVs was determined using
custom-written scripts in R Studio (version 0.99.903, RStudio, Inc.), based on each
participant’s button presses. EVs were convolved with a double gamma hemodynamic
response function.
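The onset bookkeeping in this paragraph might look as follows in Python rather than R. The toy press times are invented, and FSL's three-column custom-timing format (one row per event: onset, duration, weight) is assumed as the output target.

```python
# Hedged sketch of deriving sentence onsets from cumulative button-press
# times and writing one EV in FSL's three-column custom-timing format
# (onset, duration, weight). The press times below are invented; the study
# itself used custom R scripts for this step.
import io

def sentence_onsets(press_times, run_start=0.0):
    """Given cumulative button-press times (s), return (onset, duration) per
    sentence: each sentence is on screen from the previous press to the next."""
    events, prev = [], run_start
    for t in press_times:
        events.append((prev, t - prev))
        prev = t
    return events

def write_three_column(events, fh):
    """Write one EV as rows of onset, duration, weight = 1."""
    for onset, duration in events:
        fh.write(f"{onset:.3f}\t{duration:.3f}\t1\n")

# Toy run: three sentences, advanced at 2.5 s, 5.1 s and 8.0 s after run start.
events = sentence_onsets([2.5, 5.1, 8.0])
buf = io.StringIO()
write_three_column(events, buf)
```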

3 Analyses including all trials can be found in the supplementary material
At the single subject level, we created the following contrasts: (1) incongruent
> congruent, (2) congruent > incongruent, (3) false > true, (4) true > false, (5) positive
interaction, i.e., voxels where false > true is larger for congruent than for incongruent
texts, (6) negative interaction, i.e., voxels where false > true is larger for incongruent
than for congruent texts. We did not create contrasts for incorrectly remembered trials
because some of the conditions did not have enough incorrect trials. A direct
comparison between correctly and incorrectly remembered trials could not be
performed for the same reason.
Contrasts were combined across the runs on a subject-by-subject basis using fixed-
effects analyses (Woolrich et al., 2004), and submitted to whole-brain mixed-effect
group analyses. Resulting whole-brain statistical maps were corrected for multiple
comparisons using cluster-based correction (p < 0.05, initial cluster-forming
threshold Z > 2.3). All local maxima are reported as MNI coordinates. Anatomical
location was determined using the Harvard-Oxford Cortical and Subcortical structural
atlas for FSL.
ROI analyses were performed using Featquery and SPSS version 23 (IBM
Corp., Armonk, NY, USA), focusing on dmPFC, IFG, and precuneus. We created
8 mm spherical ROIs centered at local maxima described by Helder et al. (2017):
dmPFC [-9, 48, 21], precuneus [-12, -45, 42], left IFG [-42, 24, -12] and its right
homologue [42, 24, -12]. Repeated-measures ANOVAs were conducted to examine
the effects of text (congruent, incongruent) and background knowledge (true, false)
on the mean activation in each ROI on correctly remembered targets.
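To illustrate what an 8 mm spherical ROI amounts to on a 2 mm grid, the geometry can be sketched as below. This is an illustrative sketch only, not the Featquery pipeline actually used, and it assumes the peak coordinate itself sits on a voxel centre.

```python
# Illustrative sketch (not the actual Featquery/FSL pipeline): which voxel
# centres of a 2 mm grid fall inside an 8 mm sphere around a peak coordinate.
# Assumes the peak itself lies on a voxel centre.
def sphere_voxel_centers(center_mm, radius_mm=8.0, voxel_size_mm=2.0):
    """Return mm coordinates of voxel centres within radius_mm of center_mm."""
    cx, cy, cz = center_mm
    r_vox = int(radius_mm // voxel_size_mm)
    centers = []
    for dx in range(-r_vox, r_vox + 1):
        for dy in range(-r_vox, r_vox + 1):
            for dz in range(-r_vox, r_vox + 1):
                dist = voxel_size_mm * (dx * dx + dy * dy + dz * dz) ** 0.5
                if dist <= radius_mm:
                    centers.append((cx + voxel_size_mm * dx,
                                    cy + voxel_size_mm * dy,
                                    cz + voxel_size_mm * dz))
    return centers

# dmPFC peak from Helder et al. (2017); an 8 mm sphere at 2 mm resolution
# covers 257 voxel centres.
roi = sphere_voxel_centers((-9, 48, 21))
```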

Results
Reading task

To examine the effects of the manipulations we conducted mixed-effects
linear regression analyses on the log transformed reading times on target and spill-
over sentence (see Table 3.2 for descriptives). The Wald chi-square test revealed
main effects of background knowledge (χ2 (1) = 11.34, p < 0.001) and of text
(χ2 (1) = 8.05, p = 0.005): reading times were longer for false than for true targets
(β = 0.09, SE = 0.026, t = 3.37) and reading times were longer for incongruent than
for congruent targets (β = 0.05, SE = 0.018, t = 2.84). On the spill-over sentence the
Wald chi-square tests revealed a marginal effect of background knowledge
(χ2 (1) = 3.39, p = 0.066), and no effect of text.
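For a single coefficient (1 df), the Wald chi-square statistic is the squared ratio of the estimate to its standard error, i.e., the squared t-value; squaring the t-values reported above recovers the reported chi-squares up to rounding. A minimal sketch of this relationship:

```python
def wald_chi2(beta, se):
    """Wald chi-square (1 df) for one regression coefficient."""
    return (beta / se) ** 2

# Squared t-values reproduce the reported chi-squares up to rounding:
print(round(3.37 ** 2, 2))  # 11.36 (reported: 11.34)
print(round(2.84 ** 2, 2))  # 8.07 (reported: 8.05)
print(wald_chi2(2.0, 1.0))  # 4.0
```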

Table 3.2. Mean reading times and standard deviations (in ms) at the regions of interest (target
and spill-over sentence) for the experimental manipulations of background knowledge (target
true or false) and text (target congruent or incongruent with preceding context).

Text          Background knowledge    Target M    Target SD    Spill-over M    Spill-over SD
Congruent     True                    2715        1051         3370            1349
Congruent     False                   2938        1118         3520            1348
Incongruent   True                    2810        970          3343            1309
Incongruent   False                   3141        1301         3433            1236
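From the target means in Table 3.2, the size of each inconsistency effect on the raw reading times can be computed directly (note that the reported tests were run on log-transformed times, so these raw differences are illustrative only):

```python
# Target reading-time means (ms) from Table 3.2
means = {
    ("congruent", "true"): 2715, ("congruent", "false"): 2938,
    ("incongruent", "true"): 2810, ("incongruent", "false"): 3141,
}

# Knowledge effect: false minus true, averaged over the text conditions
knowledge_effect = ((means[("congruent", "false")] + means[("incongruent", "false")])
                    - (means[("congruent", "true")] + means[("incongruent", "true")])) / 2
# Text effect: incongruent minus congruent, averaged over the knowledge conditions
text_effect = ((means[("incongruent", "true")] + means[("incongruent", "false")])
               - (means[("congruent", "true")] + means[("congruent", "false")])) / 2

print(knowledge_effect)  # 277.0
print(text_effect)       # 149.0
```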

Memory task

Participants were proficient in distinguishing whether they had read a sentence or not. Overall, they scored on average 77% correct and they were 75%
certain of their answer. On sentences originating from the task (target, context, and
neutral) they scored on average 73% correct and were 73% certain of their answer.
On distractor sentences they scored on average 91% correct and were 72% certain
of their answers. This shows they had read the texts attentively. To investigate the
effect of the manipulations on memory for the texts, we conducted mixed-effects logistic and linear regression analyses on the accuracy and certainty scores for target sentences, respectively.
The Wald chi-square test revealed main effects of background knowledge
(χ2(1) = 8.327, p = 0.003) and of text (χ2(1) = 7.354, p = 0.006) on the accuracy scores,
indicating that false targets were remembered less well than true targets (β = -0.86,
SE = 0.298, z = -2.89) and that targets that were incongruent with the preceding text
were also remembered less well than congruent targets (β = -0.58, SE = 0.215,
z = -2.71). We did not find any effects of text or background knowledge on the
certainty scores.

Region-of-interest analyses

For the mean activation in the dmPFC we found a main effect of background
knowledge (F(1) = 8.85, p = 0.006) but not of text (F(1) = 0.48, p = 0.492): false targets
elicited more activation than true targets (Fig. 3.1). We observed a trend in the
interaction (F(1) = 3.95, p = 0.056). For the mean activation in the precuneus we found
a text x background knowledge interaction (F(1) = 4.28, p = 0.047), but no main effects
of text (F(1) = 0.09, p = 0.769) or background knowledge (F(1) = 0.21, p = 0.649) (Fig. 3.1). For the mean activation in the left IFG we found a text x background
knowledge interaction (F(1) = 10.86, p = 0.003), but no main effects of text
(F(1) = 0.42, p = 0.521) or background knowledge (F(1) = 1.12, p = 0.298) (Fig. 3.1).
For the mean activation in the right IFG we found a main effect of text (F(1) = 5.24,
p = 0.029) but not of background knowledge (F(1) = 3.29, p = 0.080): congruent
targets elicited more activation than incongruent targets. We did not find a significant
interaction (F(1) = 0.40, p = 0.842) (Fig. 3.1).
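In a 2 × 2 within-subject design, each 1-df effect in the repeated-measures ANOVA is equivalent to a one-sample t-test on the appropriate per-subject difference scores (with F = t²). The sketch below demonstrates this on synthetic per-subject ROI means; the subject count and effect sizes are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30  # hypothetical number of subjects
# Columns: congruent/true, congruent/false, incongruent/true, incongruent/false
roi = rng.normal(0.2, 0.1, size=(n, 4))
roi[:, [1, 3]] += 0.1  # build in a "false > true" main effect

# Main effect of background knowledge: false minus true, averaged over text
d_knowledge = roi[:, [1, 3]].mean(axis=1) - roi[:, [0, 2]].mean(axis=1)
t_k, p_k = stats.ttest_1samp(d_knowledge, 0.0)

# Text x knowledge interaction: difference of the two false-minus-true differences
d_inter = (roi[:, 1] - roi[:, 0]) - (roi[:, 3] - roi[:, 2])
t_i, p_i = stats.ttest_1samp(d_inter, 0.0)

print(f"knowledge: F(1,{n - 1}) = {t_k ** 2:.2f}, p = {p_k:.4f}")
print(f"interaction: F(1,{n - 1}) = {t_i ** 2:.2f}, p = {p_i:.4f}")
```

Only the built-in knowledge effect should come out significant here; the interaction term is pure noise by construction.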

Figure 3.1. ROI mean activation in response to true or false targets and congruent or incongruent
targets, for 8mm spherical ROIs centered at MNI coordinates [-9, 48, 21] (dmPFC), [-12, -45, 42]
(precuneus), [-42, 24, -12] (left IFG), and [42, 24, -12] (right IFG).

Whole-brain analyses of text-based and knowledge-based monitoring

We conducted a set of whole-brain analyses to explore the involvement of other regions in text-based and knowledge-based monitoring. To examine the neural
correlates of knowledge-based monitoring, we contrasted activation on trials in which
participants read true targets with activation on trials in which they read false targets
and vice versa. The whole-brain contrast of true versus false resulted in clusters of
activation in the left and right lateral occipital cortex extending into the fusiform gyrus,
and the left HC (Table 3.3; Fig. 3.2). The reverse contrast yielded a cluster of activation
in the dmPFC4, left angular gyrus (AG), and bilateral lateral PFC. To examine the
neural correlates of text-based monitoring, we contrasted activation on trials in which
participants read targets that were incongruent with the context with activation on
trials in which they read congruent targets and vice versa. The whole-brain contrast
of congruent versus incongruent targets resulted in clusters of activation in bilateral
inferior temporal occipital cortex, bilateral superior parietal lobule, precuneus 5, and
supplementary motor cortex (Table 3.3; Fig. 3.2). The reverse contrast did not yield
any significant clusters. The text by background knowledge interaction resulted in
clusters of activation in the bilateral middle frontal gyrus (MFG), left IFG6, and right AG
(Table 3.3; Fig. 3.2; Fig 3.3). These regions showed more activation for incongruent
than congruent targets if targets reflected true world knowledge information, but more
activation for congruent than incongruent targets if targets reflected false world
knowledge information. The reverse contrast did not yield any significant clusters.

4 The same portion of the dmPFC was activated in both whole-brain and ROI analyses.
5 A different portion of the precuneus was activated in the whole-brain contrast congruent > incongruent than in the
ROI analyses.
6 In the whole-brain analyses a more dorsal portion of the left IFG was activated than in the ROI analyses.


Figure 3.2. Whole-brain statistics maps for (a) the contrast true > false targets, (b) the contrast false >
true targets, (c) congruent > incongruent targets, and (d) interaction text x background knowledge
across all participants (thresholded at z = 2.3 and p < 0.05). The left side of the brain is plotted on the
right side of the image.

Table 3.3. Whole-brain group activations for the correctly remembered targets in response to
true/false and congruent/incongruent targets.

Anatomical region    L/R    Z max    MNI X    MNI Y    MNI Z    voxels    p
a. Results for contrast true > false
40% Inferior Lateral Occipital Cortex L 3.85 -38 -72 4 3317 p < 0.001

57% Inferior Lateral Occipital Cortex L 3.83 -42 -76 -8

35% Superior Lateral Occipital Cortex L 3.8 -22 -80 24

48% Superior Lateral Occipital Cortex L 3.63 -24 -82 28

55% Posterior Temporal Fusiform Cortex L 3.48 -32 -40 -16

28% Occipital Fusiform Gyrus L 3.42 -38 -70 -10

39% Inferior Lateral Occipital Cortex R 3.64 36 -84 4 1723 p < 0.001

46% Inferior Lateral Occipital Cortex R 3.4 40 -82 -2

62% Temporal Occipital Fusiform Cortex R 3.33 30 -50 -14

16% Temporooccipital Inferior Temporal Gyrus R 3.28 42 -58 -8

31% Inferior Lateral Occipital Cortex R 3.27 46 -62 -8

39% Occipital Fusiform Gyrus R 3.1 22 -64 -12

39% Superior Lateral Occipital Cortex R 3.83 26 -64 34 535 0.008

60% Superior Lateral Occipital Cortex R 3.45 32 -72 30

58% Superior Lateral Occipital Cortex R 2.96 34 -78 26

58% Superior Lateral Occipital Cortex R 2.88 24 -74 52

73% Superior Lateral Occipital Cortex R 2.82 30 -74 44

31% Superior Lateral Occipital Cortex R 2.81 20 -70 46

b. Results for contrast false > true


36% Superior Frontal Gyrus R 3.86 2 36 54 3515 p < 0.001

17% Superior Frontal Gyrus L 3.78 0 30 58

62% Paracingulate Gyrus R 3.74 2 44 30

59% Paracingulate Gyrus R 3.7 6 52 16

27% Superior Frontal Gyrus R 3.64 8 50 30

66% Paracingulate Gyrus R 3.63 6 52 12

25% Inferior Frontal Gyrus, pars triangularis7 R 4.3 50 22 12 1667 p < 0.001

65% Frontal Orbital Cortex R 4.18 28 20 -14

57% Frontal Orbital Cortex R 4.09 44 30 -12

76% Frontal Orbital Cortex R 4.08 34 24 -20

36% Frontal Orbital Cortex R 3.58 46 26 -4

77% Frontal Pole R 3.57 48 40 -14

7 Note that there is no overlap in activation in the right IFG between the whole-brain and the ROI results.

40% Posterior Supramarginal Gyrus L 3.69 -58 -52 26 467 0.018

49% Angular Gyrus L 3.32 -56 -58 30

39% Posterior Supramarginal Gyrus L 3.27 -64 -42 24

46% Posterior Supramarginal Gyrus L 3.2 -58 -50 34

53% Superior Lateral Occipital Cortex L 3.16 -56 -62 30

45% Posterior Supramarginal Gyrus L 3.07 -64 -44 28

62% Middle Frontal Gyrus L 3.26 -42 20 42 444 0.024

40% Middle Frontal Gyrus L 3.26 -34 24 38

35% Frontal Pole L 3.22 -22 38 40

48% Middle Frontal Gyrus L 3.21 -44 20 38

65% Middle Frontal Gyrus L 3.15 -46 16 42

30% Superior Frontal Gyrus L 2.96 -26 28 48


53% Frontal Orbital Cortex L 3.53 -28 22 -14 414 0.034

36% Frontal Orbital Cortex L 3.46 -32 26 -12

37% Frontal Orbital Cortex L 3.29 -28 12 -16

c. Results for contrast congruent > incongruent


22% Lingual Gyrus L 3.57 -10 -72 -14 1538 p < 0.001

10% Lingual Gyrus L 3.38 -18 -66 -16

100% Cerebellum L 3.3 -20 -60 -20

42% Inferior Lateral Occipital Cortex L 3.29 -46 -64 -4

2% Temporal Occipital Fusiform Cortex L 3.24 -26 -60 -22

36% Temporooccipital Inferior Temporal Gyrus L 3.21 -42 -54 -8

60% Temporal Occipital Fusiform Cortex R 3.42 30 -48 -20 865 p < 0.001

60% Temporal Occipital Fusiform Cortex R 3.42 32 -42 -22

71% Inferior Lateral Occipital Cortex R 3.27 44 -82 -10

72% Temporal Occipital Fusiform Cortex R 3.24 40 -50 -22

61% Inferior Lateral Occipital Cortex R 3.14 50 -68 6

66% Inferior Lateral Occipital Cortex R 3.09 54 -64 8

65% Supplementary Motor Cortex R 3.43 2 0 54 756 0.009

58% Paracingulate Gyrus R 3.29 2 10 48

34% Anterior Cingulate Gyrus R 3.25 10 10 40

66% Paracingulate Gyrus R 3.23 6 16 46

22% Paracingulate Gyrus L 3.2 -10 8 44

14% Paracingulate Gyrus R 2.97 14 18 34

50% Superior Parietal Lobule L 3.16 -34 -52 52 586 0.005

40% Precuneous Cortex8 L 3.07 -14 -70 30

8 Note that the whole-brain and the ROI analyses showed activation in different portions of the precuneus.

13% Superior Parietal Lobule L 3.04 -24 -46 42

62% Superior Lateral Occipital Cortex L 2.98 -26 -66 46

55% Superior Lateral Occipital Cortex L 2.95 -24 -68 42

35% Superior Parietal Lobule L 2.93 -34 -58 60

26% Angular Gyrus R 3.38 36 -54 44 449 0.025

54% Superior Lateral Occipital Cortex R 3.29 28 -64 44

41% Superior Parietal Lobule R 2.92 36 -44 44

33% Precuneous Cortex R 2.88 16 -64 42

19% Posterior Supramarginal Gyrus R 2.87 38 -36 40

d. Results for contrast incongruent > congruent


No significant clusters

e. Results for contrast interaction


24% Inferior Frontal Gyrus, pars opercularis L 3.61 -50 22 24 966 p < 0.001

41% Middle Frontal Gyrus L 3.38 -40 12 42

62% Middle Frontal Gyrus L 3.32 -42 12 52

25% Inferior Frontal Gyrus, pars triangularis L 3.31 -58 24 16

8% Middle Frontal Gyrus L 3.28 -30 10 36

17% Inferior Frontal Gyrus, pars opercularis L 3.17 -40 22 20

39% Posterior Superior Temporal Gyrus L 3.55 -60 -40 4 687 0.001

36% Posterior Superior Temporal Gyrus L 3.42 -50 -40 6

73% Temporal Pole L 3.35 -50 8 -22

36% Posterior Superior Temporal Gyrus L 3.17 -52 -14 -8

53% Posterior Middle Temporal Gyrus L 3.02 -60 -38 -4

41% Posterior Superior Temporal Gyrus L 2.99 -50 -26 -4

33% Middle Frontal Gyrus R 3.62 46 18 28 472 0.013

44% Middle Frontal Gyrus R 3.48 54 26 32

43% Middle Frontal Gyrus R 3.39 54 22 32

61% Middle Frontal Gyrus R 3.19 50 20 40

55% Middle Frontal Gyrus R 3.1 44 16 40

13% Precentral Gyrus R 3.04 58 16 32

62% Angular Gyrus R 3.24 52 -56 26 451 0.017

47% Angular Gyrus R 3.15 56 -58 34

34% Superior Lateral Occipital Cortex R 3.13 42 -62 26

46% Superior Lateral Occipital Cortex R 3.04 50 -60 26

31% Superior Lateral Occipital Cortex R 2.96 60 -62 22

50% Angular Gyrus R 2.88 46 -50 20

f. Results for contrast negative interaction


No significant clusters


Figure 3.3. Activation diagrams for the four significant clusters in the whole-brain positive interaction.
ROI mean activation in response to true/false and congruent/incongruent targets for peaks of the four
significant clusters of activation in the whole-brain positive interaction contrast centered at MNI
coordinates [52, -56, 26] (right AG), [46, 18, 28] (right MFG), [-50, 22, 24] (left IFG) and [-60, -40, 4]
(left PSTG). The left side of the brain is plotted on the right side of the image.

Discussion
Although current theoretical frameworks of text comprehension are clear that
contextual information and background knowledge influence processing, they are
underspecified on how and when these sources exert their influence. Therefore, the
current study aimed to explicate these models by investigating whether we can
distinguish between the influence of contextual information and background
knowledge in coherence monitoring and whether we should assign separate roles for
the two sources in the cognitive architecture of validation. Furthermore, we examined
the neural correlates of text-based and knowledge-based monitoring to ground
models of validation in the neural architecture of the brain and broaden our
understanding of the neural underpinnings of coherence monitoring. Finally, we examined how reading inconsistent information affects the long(er) term memory
representation to shed light on how validation processes during reading affect long-
term memory.
In line with our prior work, behavioral results showed inconsistency effects
for both text and background knowledge on targets, in the absence of an interaction
between text and knowledge effects (van Moort et al., 2018). Furthermore, we found
that validation processes also affected the long(er)-term memory representation: mismatching information, either with the text or with background knowledge, is remembered less well than matching information. To examine whether the behavioral effects were driven by a single validation system or by two separate systems, we
studied the underlying brain networks involved in text- and knowledge-based
processing. The strongest evidence in favor of a multiple systems account would be
a double dissociation in terms of the brain regions involved in both types of validation.
Our results provide some evidence of such a dissociation, but indicate that there are
interactions between the systems as well. More specifically, our ROI data revealed a
main effect of knowledge (but not of text) in the dmPFC, and a main effect of text (but
not of knowledge) in the right IFG, and an interaction between knowledge and text in
the precuneus and left IFG. These results were largely confirmed by whole-brain
analyses. We found a large network of regions involved only in knowledge-based
monitoring (including the mPFC and right HC, Table 3.3; Fig. 3.2), a somewhat smaller
network of regions that were involved only in text-based monitoring (including the
ACC, Table 3.3; Fig. 3.2), and a number of regions that showed an interaction (e.g.,
left IFG, right AG and bilateral MFG, Table 3.3; Fig. 3.2). Finally, in the whole-brain
data, there were also a number of regions where text-based and knowledge-based
monitoring processes had similar effects, showing two main effects but no interaction
(e.g., inferior temporal occipital cortex and superior parietal lobule).

Cognitive architecture of validation

Our first aim was to explicate the specific roles of contextual information and
background knowledge in the cognitive architecture of validation and investigate
whether we should assign separate roles for these two sources. Van Moort et al.
(2018) suggested that prior text and background knowledge each have a unique
influence on processing and that knowledge-based inconsistencies have a more
pronounced effect on processing. Consistent with this notion, our behavioral results
showed inconsistency effects for both text and background knowledge on targets, in
the absence of an interaction between text and knowledge effects. However, in
contrast to van Moort et al. (2018) our behavioral results did not show a more pronounced effect for knowledge-based inconsistencies, as we did not find a spill-
over effect of background knowledge.
To further differentiate between text-based and knowledge-based validation,
we performed neuroimaging analyses. The results of these analyses are largely in line
with the hypothesized multiple-systems account, but they paint a more nuanced
picture. In line with the multiple-systems account, we found a number of regions that
showed a main effect of either text or background knowledge, suggesting that readers
process text-based and knowledge-based information separately, at least to some
extent (e.g., in the right IFG, dmPFC, and HC). Furthermore, whole-brain results
indicated that knowledge-based monitoring recruits a larger network of brain regions
than text-based monitoring. Thus, whereas the behavioral results did not show a more
pronounced effect for knowledge-based inconsistencies, the neuroimaging results
did. There are a couple of possible explanations for this pattern of results. First, a
process-based explanation suggests that knowledge-driven validation requires more
processing resources than validating against prior text. In contrast, a task-based
explanation suggests that the current findings were caused by differences in the
strength of the inconsistencies in the current paradigm. Because the knowledge
inaccuracies were outright errors whereas the text incongruencies were merely
incongruent, the knowledge inaccuracies might have been stronger than the text
incongruencies and therefore recruited a larger number of brain regions. Varying the
strength of the inconsistencies can provide insight into which of these interpretations
is accurate.
Another interesting difference between text-based and knowledge-based
validation is that all regions that were involved in text-based monitoring were sensitive
to coherence rather than incoherence of information with the text, whereas
knowledge-based monitoring involved both regions that were more active for true
information and regions that were more active for false information. A common
assumption in coherence-monitoring research is that processing inconsistent
information requires more effort and, thus, more resources than processing
consistent information. Therefore, most neuroimaging studies focus on brain regions
that are more active in response to incongruent than to congruent information.
However, our results show that text-based monitoring only involves regions that are
sensitive to congruency rather than incongruency of information. In itself, this
congruency effect may not be a striking finding, as it is plausible that there are regions
specifically involved in fitting congruent information into the existing mental
representation (see also our discussion of the role of the right IFG) and some studies
have also shown more activation for coherent than incoherent texts (although these
studies compared cohesion and coherence, which are both manipulations on a text-
level, e.g., Ferstl & Von Cramon, 2001; Siebörger et al., 2007). However, what is
surprising is that we did not find any brain regions that showed sensitivity to incongruency of information with the text, which suggests that processing an
incongruent target does not recruit other brain regions than those that are already
active in processing congruent information. Interestingly, the behavioral results did
show incongruency effects in terms of reading times. Thus, it seems that processing
incongruent information takes longer than congruent information, but it does not
activate additional brain regions. We can speculate on possible explanations for this
pattern of results. The extent to which incongruent texts are processed may depend
on specific task demands. Prior work has shown that passive comprehension, compared to consistency judgment, prompts a processing approach that is focused
primarily on establishing local rather than global coherence (e.g., Egidi & Caramazza,
2016). Hence, the passive reading task used in the present study may have decreased
the effects of the contextual manipulation. However, the behavioral results show
effects of both text and background knowledge, which suggests participants were still
focused on establishing both local and global coherence. A second possibility is that
the text incongruencies in the current study elicit a weaker neural response because
they are not outright contradictions with the preceding text.
Taken together, our results provide evidence that there may be (partially)
different cognitive mechanisms involved and that separate roles for contextual
information and background knowledge should be assigned when describing the
cognitive architecture of validation. However, there were also similarities in activation
patterns between the two conditions, as well as interaction effects (e.g., in left IFG and
precuneus), suggesting that readers integrate information from both sources to
construct a coherent mental representation. The current study is an important first
step in explicating the neurocognitive architecture of text-based and knowledge-
based validation processes. Moreover, our results provide a fruitful base for
constructing and testing more specific hypotheses about the interaction between the
two systems.

Division of labor in the coherence monitoring network

Our second aim was to examine the neural correlates of text-based and
knowledge-based monitoring by investigating the division of labor between specific
regions that are known to play a key role in coherence monitoring. Furthermore, we
aimed to identify additional shared and unique brain regions involved in either text-
based or knowledge-based monitoring. Our ROI results showed different activation
patterns in the dmPFC, precuneus and the left and right IFG, suggesting that these
regions contribute differently to coherence monitoring processes. In addition to the
pre-specified regions, the whole-brain results revealed activation in the AG, MFG, and HC during text-based validation and/or knowledge-based validation, indicating that these
regions contribute to coherence monitoring as well. We will discuss the potential
division of labor between these regions in more detail below.
Prior studies suggested a general role for the dmPFC in coherence-
monitoring and integration processes (Ferstl et al., 2005; Hasson et al., 2007;
Helder et al., 2017; Mason & Just, 2006). The findings in the current study allow for a
more refined picture of the role of the dmPFC as the region appears to be particularly
sensitive to false information (i.e., information that is inconsistent with readers’
background knowledge). This finding complements research outside the reading
domain showing that the dmPFC is involved in memory and decision making and is
activated when an error is detected to signal to other brain regions that changes in
cognitive control are needed to address the error (e.g., Ridderinkhof et al., 2004).
Extrapolating to coherence monitoring during reading this would imply that the
dmPFC may communicate to other brain regions that an inaccuracy is detected in the
text and that additional processing is needed to resolve this inaccuracy (e.g., inhibition
of inaccurate information or inference generation to assimilate mismatching
information). The observation that the dmPFC is most sensitive to inconsistencies with
background knowledge is compatible with this proposal, as such inconsistencies can
be classified as outright ‘errors’ whereas inconsistencies with the preceding text are
merely incongruent and thus may not evoke the same response. Note, however, that
the thresholds for categorizing implausible input as actual errors could show variation
across readers and situations as the judgments depend on a particular reader’s
background knowledge and self-calibrated evaluation criteria (Zacks & Ferstl, 2015).
For the precuneus we observed an interaction between text-based and
knowledge-based monitoring, which is consistent with previous proposals that it is
involved in coherence-building processes (Ferstl & Von Cramon, 2001, 2002;
Kuperberg et al., 2006; Maguire et al., 1999; Mellet et al., 2002) and updating the
mental representation of a text (Speer et al., 2009). However, whereas previous
studies suggested that it is primarily sensitive to the incongruency of target
information with prior text (e.g., Ferstl et al., 2008; Helder et al., 2017), we observed a
more complex activation pattern. That is, the precuneus became particularly active
when the target was either false, but part of a congruent text, or when the target was
true, but incongruent with prior text. When the target matched or mismatched with
both context and background knowledge, activation in the precuneus decreased
significantly. This pattern is consistent with the idea that the precuneus becomes
active when the situation model requires a major update (Speer et al., 2009). This is
the case when the information provided by the context must be inhibited, adjusted, or
overwritten to fit readers’ knowledge of the world or when world-knowledge
information becomes less reliable (or more ambiguous) due to prior text. Targets that
matched with both context and background knowledge only required a minor update and therefore may not elicit activation in the precuneus. Furthermore, targets that
mismatch with both context and background knowledge make little sense and may
not elicit an update at all.
In addition, our study showed an interaction between text-based and
knowledge-based monitoring in the left IFG. These results support the idea that the
left IFG plays a general role in coherence building (Ferstl & Von Cramon, 2001, 2002;
Kuperberg et al., 2006; Maguire et al., 1999; Mellet et al., 2002) and in processing
world knowledge violations (Menenti et al., 2009). Our results, however, allow for a
finer-grained account. Although the left IFG showed high activation for all target
sentences, the activation was most prominent when readers encountered false
targets in a congruent text. In other words, it is sensitive to world knowledge violations,
but evaluates these violations in the context of the text and becomes particularly
involved when the context presents a reason to assume that information that seems
false relative to world knowledge should not be dismissed easily.
Although the dmPFC, precuneus and left IFG showed different activation
patterns, a common denominator is that they all became more active in the case of a
mismatch (i.e., the target was false and/or incongruent with text). Interestingly, the
right IFG displayed a different activation pattern, with higher activation for targets that
were congruent with prior text9. This finding suggests that the right IFG is involved in
integrating sentences with the preceding discourse, but only when the target presents
a plausible continuation of that discourse. Moreover, since no influence of background
knowledge was observed, discourse integration in the right IFG seems to proceed
without validating the current input against world-knowledge information, indicating
that some integrative aspects of text-based monitoring occur independently of
knowledge-based monitoring. This conclusion is similar, yet not identical, to a
proposal put forward by Menenti et al. (2008). They also suggested that the right
IFG is sensitive to the (in)congruency between a target and the discourse context.
However, they assumed that long-term memory -including world-knowledge of the
type being tested with the materials in the present study- affects how information of
current linguistic input is integrated with information of prior text, while in the current
study we did not observe any influence of background knowledge in the right IFG.
In addition to these ROI-based results, the whole-brain results revealed
involvement of several additional regions, including the AG, MFG and HC. The right
AG and the MFG both showed an interaction between text-based and knowledge-
based monitoring. Previous work implicated the AG in inconsistency processing

9 It may be relevant to note that the whole-brain results show a main effect of background knowledge in the right IFG (MNI coordinates 50, 22, 12). However, this is a slightly different region than the ROI discussed here.

(e.g., Helder et al., 2017; Menenti et al., 2009; Moss & Schunn, 2015), semantic and
episodic memory retrieval (Binder & Desai, 2011), and in sustaining activated
representations to support cognition (Guerin & Miller, 2011). With respect to
coherence monitoring, perhaps the AG is involved in situation-model updating (similar
to the precuneus) or detecting or resolving contextual or semantic conflicts
(e.g., Ye & Zhou, 2009). The MFG is part of the frontoparietal control network (Vincent
et al., 2008) and has been implicated in monitoring and manipulating cognitive
representations in general (e.g., Koechlin et al., 2003) and, more specifically, in
coherence-break detection (Hasson et al., 2007; Mason & Just, 2006) and coherence-
break resolution (Helder et al., 2017). Thus, the MFG may be involved in retaining and
manipulating the mental representation.
The HC was particularly sensitive to the ‘correctness’ of information (i.e., increased activation for true versus false targets), whereas the dmPFC was sensitive to the ‘incorrectness’ of information. To better understand this pattern of results,
it may be relevant to look at neural models of schema formation and adjustment (van
Kesteren et al., 2010, 2013; van Kesteren et al., 2012). Specifically, the Schema-
Linked Interactions between Medial prefrontal and Medial temporal regions (SLIMM)
model (van Kesteren et al., 2012) has identified HC-mPFC interactions as important
for the influence of existing knowledge on encoding, consolidation, and retrieval
processes. According to this model, the HC can be seen as a register that stores the
links to several parts of a memory, which is particularly important for novel (rather
than schematic) memories (van Kesteren et al., 2017). In contrast, the mPFC helps integrate new memories with the existing knowledge base. Based on this framework, one might predict that false information would elicit more HC activation because the existing knowledge base would need to be adjusted, whereas true information would
elicit more mPFC activation because the information could easily be integrated into
the existing knowledge structures. Interestingly, the direction of the activation in the
current study was the exact opposite to what the SLIMM model would predict: the HC
was more active for true than for false targets, whereas the mPFC was more active
for false than for true targets10. This pattern of activation may be explained by the task
used in the present experiment, which differed crucially from the tasks often used in
memory studies. In most studies investigating memory participants are instructed to
remember the information, even if it is false according to their background knowledge.
The HC would then play a role in adjusting existing knowledge structures or in building
new ones altogether. In contrast, in the current study participants were instructed to
read the texts attentively for comprehension, but not necessarily to encode them in
memory. Therefore, if participants encountered false information, the mPFC may have

10 Note that the current paradigm activated a more dorsal portion of the mPFC than the region identified by van Kesteren et al. (2010). In fact, a more ventral region in mPFC was deactivated across all four conditions.

signaled the HC that the information is false and knowledge structures should not be
updated, resulting in a different activation pattern than during other memory studies.
Thus, our findings suggest that the HC-mPFC interaction during memory encoding is
strongly influenced by the goal of the task (e.g., the reading goal).
Together, our results indicate a division of labor for the regions of interest.
The dmPFC is mostly oriented towards knowledge-based processing, whereas the
right IFG is mostly involved in text-based processing. Furthermore, the two streams
of information affect the precuneus and left IFG interactively. Based on the pattern of
results it seems that the dmPFC and the left IFG are involved in different aspects of
inconsistency detection, as the dmPFC seems to detect erroneous world-knowledge
information and signal the HC that existing knowledge structures should not be
updated, while the left IFG evaluates world knowledge violations in the context of the
text. The precuneus may be involved in repair processes, as it becomes deactivated
either when there is nothing to repair (entirely congruent) or when the target makes
little sense and is perhaps impossible to repair (entirely incongruent). Whereas all
aforementioned regions seem involved in validating or monitoring incoming
information, both the HC and right IFG seem involved in ‘normal’ integration
processes when there is no explicit task to store the information in long-term memory.
Interpreted as such, our results not only indicate that text-based and knowledge-
based processing mechanisms recruit both shared (precuneus, left IFG, AG) and
unique (dmPFC, right IFG, HC) brain regions but also that integration (in the right IFG)
and validation processes (in the dmPFC, precuneus, and left IFG) operate
independently to some extent. This seems in line with the assumptions of recent
theoretical frameworks on text comprehension where connections in a text are formed
in an initial integration stage and then checked against long-term memory in a
validation stage (e.g., O'Brien & Cook, 2016b). Note, however, that neither the fMRI
nor the behavioral data can confirm (or disconfirm) this integration-precedes-
validation order of processing because neither dependent variable has sufficient
temporal resolution to disentangle the time courses of these mechanisms.
In addition, validation processes seem predominantly knowledge-driven, as
we only found regions involved in the detection (dmPFC) and evaluation (left IFG) of
knowledge-based inconsistencies, but no regions involved in text-based detection
and/or repair processes. Only after a knowledge inaccuracy is detected does this information seem to be evaluated in the context of the text (in the left IFG). This
evaluation could be part of initial detection processes (as it could indicate that
congruency with the context is evaluated) but it could also be part of knowledge-
based repair processes (e.g., using contextual information to resolve the knowledge
inaccuracy).

Effects of processing inconsistencies on the memory representation

Finally, our third aim was to investigate how processes during reading affect
the memory representation of a text. Behavioral results show that mismatching
information (either with the text or with background knowledge) is remembered less
well than matching information. These findings support the account that information
that mismatches pre-existing memory representations, i.e., the readers’ background
knowledge or the mental representation of the text, is remembered less well (e.g.,
Anderson, 1983; Johnson-Laird, 1983; Zwaan & Radvansky, 1998). From the current
results it is not clear whether these findings solely reflect a retrieval problem. In fact,
mismatching information may not have been encoded in memory in the first place.
Because of potential differences during encoding, our neuroimaging analyses only
included trials that were remembered correctly in the memory task. To explore
encoding differences between trial types, future research should compare brain activation between items that were remembered and items that were forgotten.
Due to the small number of trials in some of the conditions, this was not possible in
the current study.
To conclude, an important goal of reading is to learn from texts; hence, it is
crucial that we understand how processes during reading affect the (long-term)
memory representation. Results suggest that existing models of memory encoding
and consolidation (e.g., the SLIMM model by van Kesteren et al., 2013) can aid our
understanding of the memory processes involved in reading and provide a fruitful
base for developing neurocognitive models of learning from texts, as they seem to
share underlying neural processes. In turn, studies of reading comprehension can
inform memory research, as investigating memory for texts presents an ecologically
valid situation for testing models of memory.

Conclusion

Constructing meaning from discourse is a fundamental human ability that is intertwined with virtually all cognitive processes, including learning, memory, perception, decision making and language processing. A core issue in psycholinguistic research is when and how different sources of information (e.g., syntax, semantics, discourse, pragmatics) interact. The current study examined the complex
semantics, discourse, pragmatics) interact. The current study examined the complex
interplay between recently acquired knowledge (from the text) and long-term
knowledge (from memory) in constructing meaning from language. More specifically,
we studied how these sources affect online processing and the (offline) memory representation. By combining theoretical models originating from the discourse
comprehension literature with (more) specific predictions of neurocognitive models
of memory, we aimed to further our understanding of the neural underpinnings of text
comprehension. Results of the current study are relevant for models of sentence and
discourse processing and, moreover, for our understanding of how we construct
meaning in a broader context (for example from spoken or visual input).

SUPPLEMENTARY MATERIALS
Results of the model without splitting correctly and incorrectly remembered items

We ran additional analyses where we also included the incorrectly remembered items. For the whole-brain analyses, the overall pattern of results is very
similar. The contrasts that showed significant clusters were very similar to our original
analysis; only the clusters in the congruent > incongruent contrast did not reach
significance. With respect to the ROI analyses, the main difference in the activation patterns is in the incongruent false condition, as the activation in the other conditions
does seem similar to our original analysis. It is difficult to ascertain what caused these
differences, as there may be different underlying processes involved. However, this
pattern of results does seem in line with the notion that misremembered items may
be processed differently compared to the correctly remembered items, as the items
that showed the largest discrepancy were also the ones that were remembered the
least on the memory task.

Region-of-interest analyses

For the mean activation in the dmPFC we found a main effect of background
knowledge (F(1) = 10.50, p = 0.003): False targets elicited more activation than true
targets. We observed a trend in the effect of text (F(1) = 3.71, p = 0.064). We did not
find a text x background knowledge interaction (F(1) = 0.88, p = 0.357) (Fig. 3.4). For
the mean activation in the left IFG we found a main effect of background knowledge
(F(1) = 9.46, p = 0.004): False targets elicited more activation than true targets. We
did not find main effects of text (F(1) = 1.60, p = 0.216) or a text x background
knowledge interaction (F(1) = 0.45, p = 0.509) (Fig. 3.4). For the mean activation in
the right IFG we found a main effect of background knowledge (F(1) = 11.22,
p = 0.002) and a text x background knowledge interaction (F(1) = 5.54, p = 0.025), but
no significant effect of text (F(1) = 0.09, p = 0.772) (Fig. 3.4). The right IFG showed
more activation for congruent than incongruent targets if targets reflected true world
knowledge information, but more activation for incongruent than congruent targets if
targets reflected false world knowledge information. For the mean activation in the
precuneus we did not observe any significant effects (Fig. 3.4).


Figure 3.4. ROI mean activation in response to true/false and congruent/incongruent targets, for 8mm
spherical ROIs centered at MNI coordinates [-9, 48, 21] (dmPFC), [-12, -45, 42] (precuneus), [-42, 24,
-12] (left IFG), and [42, 24, -12] (right IFG). The left side of the brain is plotted on the right side of the
image.
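The ROIs in Figure 3.4 are 8 mm spheres around fixed MNI coordinates. A sphere mask of this kind can be sketched as below: mark every voxel whose millimeter-space center lies within the radius of the seed coordinate. The grid shape and affine shown are those of the standard 2 mm MNI152 template and are an assumption for illustration, not the study's reported acquisition or registration parameters.

```python
import numpy as np

def spherical_roi(center_mm, radius_mm, shape, affine):
    """Boolean mask of voxels whose mm-space centers lie within radius_mm
    of center_mm; `affine` maps voxel indices (i, j, k, 1) to mm coordinates."""
    ii, jj, kk = np.indices(shape)
    vox = np.stack([ii, jj, kk, np.ones(shape)], axis=-1)  # homogeneous coords
    mm = vox @ affine.T                                    # voxel -> mm space
    dist = np.linalg.norm(mm[..., :3] - np.asarray(center_mm), axis=-1)
    return dist <= radius_mm

# Assumed grid: the standard 2 mm MNI152 template.
shape = (91, 109, 91)
affine = np.array([[-2.0, 0, 0, 90],
                   [0, 2.0, 0, -126],
                   [0, 0, 2.0, -72],
                   [0, 0, 0, 1.0]])
mask = spherical_roi([-9, 48, 21], 8.0, shape, affine)  # dmPFC sphere from Fig. 3.4
print(mask.sum(), "voxels inside the 8 mm sphere")
```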

Whole-brain analyses of text-based and knowledge-based monitoring

We conducted a set of whole-brain analyses to explore the involvement of other regions in text-based and knowledge-based monitoring. To examine the neural
correlates of knowledge-based monitoring, we contrasted activation on trials in which
participants read true targets with that on trials in which they read false targets and
vice versa. The whole-brain contrast of false versus true targets resulted in clusters
of activation in the right IFG, right superior frontal gyrus, right paracingulate gyrus, left
posterior supramarginal gyrus, bilateral AG and bilateral caudate (Table 3.4; Fig. 3.5).
The reverse contrast did not yield any significant clusters. To examine the neural
correlates of text-based monitoring, we contrasted activation on trials in which
participants read targets that were incongruent with the context to activation on trials in which they read congruent targets and vice versa. Both the whole-brain contrast of
congruent versus incongruent targets and the opposite contrast did not yield any
significant clusters of activation. The text by background knowledge interaction
resulted in clusters of activation in the left IFG and bilateral middle frontal gyrus (MFG)
(Table 3.4; Fig. 3.5). These regions showed more activation for incongruent than
congruent targets if targets reflected true world knowledge information, but more
activation for congruent than incongruent targets if targets reflected false world
knowledge information. The reverse contrast did not yield any significant clusters.
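These whole-brain maps were thresholded at z = 2.3 with a cluster-level p < .05. The cluster-forming step (label contiguous suprathreshold voxels, then keep only clusters exceeding an extent threshold) can be sketched as follows. The fixed minimum cluster size used here is purely illustrative; a principled cluster correction instead derives the extent threshold statistically (e.g., from random field theory).

```python
import numpy as np
from scipy import ndimage

def suprathreshold_clusters(zmap, z_thresh=2.3, min_voxels=50):
    """Sizes of contiguous clusters of voxels with z > z_thresh that
    survive a fixed extent threshold (a stand-in for a principled
    cluster-level correction)."""
    binary = zmap > z_thresh
    labels, n = ndimage.label(binary)  # default 6-connectivity in 3D
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return sorted((int(s) for s in sizes if s >= min_voxels), reverse=True)

# Toy z-map: Gaussian noise plus one solid "active" 6x6x6 block.
rng = np.random.default_rng(1)
zmap = rng.normal(size=(40, 40, 40))
zmap[10:16, 10:16, 10:16] += 5.0
print(suprathreshold_clusters(zmap))
```

On this toy map, scattered noise voxels exceed the z threshold but never form a large enough cluster, so only the injected block survives.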

Figure 3.5. Whole-brain statistics maps for (a) the contrast false > true targets and (b) the interaction
text x background knowledge across all participants (thresholded at z =2.3 and p < 0.05). The left side
of the brain is plotted on the right side of the image.

Table 3.4. Whole-brain group activations in response to true/false and congruent/incongruent targets.

Anatomical region    L/R    Z max    X    Y    Z (MNI)    voxels    p
a. Results for contrast false > true
47% Superior Frontal Gyrus R 4.23 14 14 60 4934 p < 0.001
77% Paracingulate Gyrus R 4.18 8 48 10
73% Paracingulate Gyrus R 4.14 4 52 12
41% Superior Frontal Gyrus R 4.08 2 22 52
55% Superior Frontal Gyrus R 4.03 6 30 54

47% Superior Frontal Gyrus R 4.02 6 30 50
32% Inferior Frontal Gyrus, pars triangularis R 4.91 50 22 6 3099 p < 0.001
59% Inferior Frontal Gyrus, pars opercularis R 4.5 56 18 8
43% Frontal Orbital Cortex R 4.21 50 26 -12
45% Frontal Orbital Cortex R 4.16 46 32 -12
10% Inferior Frontal Gyrus, pars opercularis R 4.07 58 20 -2
71% Frontal Orbital Cortex R 4.05 36 24 -20
39% Insular Cortex L 5.07 -32 22 -6 1301 p < 0.001
53% Frontal Operculum Cortex L 4.03 -44 18 0
37% Frontal Orbital Cortex L 3.97 -28 12 -16
52% Frontal Orbital Cortex L 3.39 -38 22 -20
12% Frontal Orbital Cortex L 3.35 -52 24 -12
21% Inferior Frontal Gyrus, pars triangularis L 3.26 -54 22 -4
41% Posterior Supramarginal Gyrus L 4.45 -56 -50 26 569 0.011
33% Angular Gyrus L 3.57 -52 -52 32
40% Angular Gyrus L 3.5 -54 -60 32
48% Angular Gyrus L 3.48 -50 -56 34
54% Angular Gyrus L 3.32 -56 -56 36
41% Angular Gyrus L 3.31 -48 -54 40
98% Right Caudate R 3.9 14 12 12 534 0.016
82% Right Caudate R 3.52 14 2 16
77% Left Cerebral White Matter L 3.47 -10 6 0
71% Right Pallidum R 3.24 14 6 -2
82% Left Caudate L 3.12 -14 10 10
89% Left Caudate L 2.77 -12 0 16

42% Angular Gyrus R 3.58 58 -46 32 438 0.043
71% Angular Gyrus R 3.57 60 -50 30
72% Angular Gyrus R 3.23 54 -52 40
62% Angular Gyrus R 3.11 52 -52 36
48% Posterior Supramarginal Gyrus R 3.04 54 -44 40
26% Angular Gyrus R 2.9 46 -52 16

b. Results for contrast true > false


No significant clusters

c. Results for contrast incongruent > congruent


No significant clusters

d. Results for contrast congruent > incongruent


No significant clusters

e. Results for contrast interaction


17% Inferior Frontal Gyrus, pars triangularis L 3.51 -58 26 18 605 0.004
57% Middle Frontal Gyrus L 3.19 -42 14 54
36% Middle Frontal Gyrus L 3.15 -36 8 44
40% Inferior Frontal Gyrus, pars triangularis L 3.14 -54 28 20
55% Middle Frontal Gyrus L 3.12 -42 12 48
59% Middle Frontal Gyrus L 3.08 -44 16 38
41% Middle Frontal Gyrus R 3.44 44 16 30 420 0.030
61% Middle Frontal Gyrus R 3.34 50 20 40
50% Middle Frontal Gyrus R 3.26 52 22 34
48% Middle Frontal Gyrus R 3.13 50 18 34
36% Middle Frontal Gyrus R 3.01 54 22 30
54% Middle Frontal Gyrus R 2.76 52 30 30

f. Results contrast negative interaction


No significant clusters

4
Differentiating text-based and
knowledge-based validation
processes during reading:
Evidence from eye movements

This chapter is based on:


van Moort, M. L., Koornneef, A., & van den Broek, P. W. (2021). Differentiating Text-
Based and Knowledge-Based Validation Processes during Reading: Evidence from
Eye Movements. Discourse Processes, 58(1), 22–41.
Abstract
To build a coherent, accurate mental representation of a text, readers routinely
validate information they read against the preceding text and their background
knowledge. It is clear that both sources affect processing, but when and how they
exert their influence remains unclear. To examine the time course and cognitive
architecture of text-based and knowledge-based validation processes we employed
eye-tracking methodology. Participants read versions of texts that varied
systematically in (in)coherence with prior text or background knowledge.
Contradictions with respect to prior text and background knowledge both were found
to disrupt reading, but in different ways: The two types of contradiction led to distinct
patterns of processes and, importantly, these differences were evident already in early
processing stages. Moreover, knowledge-based incoherence triggered more
pervasive and longer (repair) processes than did text-based incoherence. Finally,
processing of text-based and knowledge-based incoherence was not influenced by
readers’ working memory capacity.

Keywords: reading comprehension, validation, eye-tracking, background knowledge, (in)coherence

Introduction
Successful comprehension requires readers to build a coherent, meaningful
mental representation or situation model of a text (Graesser et al., 1994; van den
Broek, 1988; Zwaan & Singer, 2003). An essential aspect of building such a mental
representation is that readers routinely monitor to what extent incoming information
is both coherent and accurate (e.g., Isberner & Richter, 2014a; Singer, 2013). Recent
theoretical models of epistemic monitoring (Isberner & Richter, 2014a) and validation
(Richter, 2015; Singer, 2013) suggest that such monitoring consists of validation
processes involved in detecting possible inconsistencies and (optional) epistemic
elaboration processes involved in attempting to resolve detected inconsistencies
(e.g., Isberner & Richter, 2014a; Richter, 2011; Richter et al., 2009; Schroeder et al.,
2008).
These monitoring processes can be influenced by various sources
of information, most notably contextual information (from preceding text) and the reader's background knowledge. Prior behavioral work using self-paced sentence-by-sentence reading paradigms suggests that each of these two sources has a unique
influence on monitoring (van Moort et al., 2018, 2020).
Such results highlight the importance of distinguishing between text-based
and knowledge-based monitoring processes, but they do not provide a detailed
picture of the constituent processes of validation against each source, as reading times
aggregate over all such processes. As a result, for example, they cannot distinguish
between earlier (e.g., detecting an inconsistency) and later (e.g., repairing an
inconsistency) validation processes. Also, the self-paced sentence-by-sentence
reading task is somewhat unnatural as readers typically are unable to look back in the
text (e.g., Hyönä et al., 2002). The current study aims to provide a more detailed
picture of the component validation processes involved in coherence monitoring by
adopting eye-tracking methodology. This method offers high temporal resolution and
various indices of processing in a more natural reading situation (as texts are
presented in their entirety), allowing us to elucidate when and how processes involved
in detecting and resolving inconsistencies are influenced by contextual information
and background knowledge, respectively.

Building and validating mental representations of texts

Understanding discourse requires comprehension of individual words and sentences as well as integration across sentences, to form a coherent understanding of the discourse as a whole (Perfetti & Frishkoff, 2008). As readers proceed through
a text, they continually use various forms of information – e.g., semantic (the meaning
of words), syntactic (grammatical) and pragmatic (their understanding of the world) –
to build an overall representation of the discourse meaning (Johnson-Laird, 1983;
Kintsch, 1988). To build a coherent and accurate mental representation readers
validate incoming information against various sources of information, most notably the
preceding text and the readers’ background knowledge (Isberner & Richter, 2014a;
Nieuwland & Kuperberg, 2008; O’Brien & Cook, 2016a, 2016b; Schroeder et al., 2008;
Singer et al., 1992). Successful validation is widely considered to be a prerequisite for
comprehension accuracy or, more specifically, situational updating (Cook & Myers,
2004; Ferretti et al., 2008), but theoretical models differ in how they define the
validation process.
For example, the RI-Val model of comprehension (Cook & O’Brien, 2014;
O’Brien & Cook, 2016a, 2016b) describes validation as one of three processing stages
(resonance, integration, and validation) that comprise comprehension. According
to this model, incoming information activates related information from long-term
memory via a low-level passive resonance mechanism (Myers & O’Brien, 1998;
O’Brien & Myers, 1999). This activated information then is integrated with the contents
of working memory. Finally, the initial linkages formed by integration are validated
against information in memory that is ‘readily available’ to the reader. Thus, it is
validated against information that either already is part of working memory or easily
can be made available from long-term memory (e.g., McKoon & Ratcliff, 1995; Myers
& O’Brien, 1998; O’Brien & Albrecht, 1992). These contents of active memory
include both portions of the episodic representation of the text (i.e., context) and general world knowledge; therefore, each source has the potential to influence
validation at any point during comprehension. The three processes (resonance, integration, and validation) run in parallel, but their onsets are asynchronous: Activation must
produce a minimum of two concepts (or ideas) before integration can begin, and
integration must produce a minimum of one linkage before validation can begin. Thus,
the RI-Val model presents validation as a single, passive pattern-matching process
that is part of comprehension and is involved in detecting mismatches between the
linkages made during the integration stage and the contents of active memory. RI-Val
focuses on the (in)consistency detection component of coherence monitoring rather
than on potential (repair) processes triggered by a detected inconsistency.
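The asynchronous-onset assumption described above (integration cannot begin until activation has produced at least two concepts; validation cannot begin until integration has produced at least one linkage) can be made concrete with a toy discrete-time sketch. This is purely illustrative and not an implementation by the model's authors; the one-concept-per-cycle pacing and one-step stage lags are arbitrary assumptions.

```python
# Toy discrete-time sketch of RI-Val's asynchronous stage onsets.
# Illustrative only; pacing and lags are arbitrary assumptions.

def rival_toy(n_steps=6):
    concepts, linkages, validated, log = [], [], [], []
    for t in range(n_steps):
        # Validation: check one previously formed, not-yet-validated linkage
        # (can only start once integration has produced >= 1 linkage).
        if len(linkages) > len(validated):
            validated.append(linkages[len(validated)])
        # Integration: link the two most recently activated concepts
        # (can only start once activation has produced >= 2 concepts).
        if len(concepts) >= 2 and len(linkages) < len(concepts) - 1:
            linkages.append((concepts[-2], concepts[-1]))
        # Resonance: one new concept becomes active each cycle.
        concepts.append(f"c{t}")
        log.append((t, len(concepts), len(linkages), len(validated)))
    return log

for step in rival_toy():
    print(step)  # (cycle, active concepts, linkages, validated linkages)
```

Running the sketch shows the staggered onsets: activation produces output from the first cycle, integration from the third, and validation from the fourth, even though all three stages run on every cycle.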
A second model describes validation as consisting of two components: (1)
epistemic monitoring (i.e., detecting inconsistencies) during a comprehension stage,
followed by (2) optional epistemic elaboration processes (i.e., attempting to resolve
an inconsistency) during an evaluative stage (e.g., Isberner & Richter, 2014a; Richter,
2011; Richter et al., 2009; Schroeder et al., 2008). According to this model only the
initial detection of inconsistencies (i.e., epistemic monitoring) is a routine part of comprehension. Similar to the RI-Val model, these detection processes are memory-based and carried out routinely and efficiently, i.e., they place few demands on
cognitive resources and are not dependent on readers’ processing goals (Richter et
al., 2009). However, whereas the RI-Val model focuses in detail on the detection
component of coherence monitoring, the two-component model focuses on the
resources-demanding processes that may be triggered by the detection of the
inconsistency in the epistemic monitoring component. Specifically, readers may
initiate evaluation processes, including epistemic elaboration or repair processes, in
an attempt to resolve an inconsistency. For example, they may doubt the validity of
their current mental representation/situation model (e.g., perhaps they misunderstood
earlier parts of the text), they may disbelieve the target sentence rather than their
situation model (e.g., perhaps the target sentence contains a mistake), or they may
try to solve the comprehension problem by elaborating possible solutions to the
apparent inconsistency (Hyönä et al., 2003). These processes are optional and only
occur when readers are motivated and have enough cognitive resources available, as
they are assumed to be slow, resource-demanding and under at least some strategic
control of the reader (Isberner & Richter, 2014a; Richter, 2003, 2015).
These examples illustrate that current theoretical models presume a
rudimentary cognitive architecture and time course for validation processes. They
generally agree that incoming information is routinely validated against elements of
the current situation model or world knowledge during the comprehension stage and
that contextual information and background knowledge both have the potential to
influence processing (e.g., Cook & Myers, 2004; Kintsch, 1988; O’Brien & Cook,
2016a; Richter, 2011; Richter et al., 2009; Rizzella & O’Brien, 2002; Schroeder et al.,
2008; Singer, 2013; van den Broek & Helder, 2017). However, it is unclear when and
how contextual information and the readers’ background knowledge exert their
influence.
General models of discourse comprehension (i.e., without a specific focus on
the validation aspect of comprehension) present different viewpoints about the
respective influences of context and background knowledge on comprehension.
Some accounts presume a fully interactive architecture with contextual information
and background knowledge immediately influencing processing (e.g., the memory-
based text processing view; Cook et al., 1998b; Gerrig & McKoon, 1998; Myers &
O’Brien, 1998; O’Brien & Myers, 1999; Rizzella & O’Brien, 2002). Other accounts
presume an architecture in which one of the informational sources plays a more
dominant (and sometimes earlier) role. In some of these accounts, background
knowledge is regarded as the dominant source (i.e., driving comprehension) as
incoming information is first connected to general world knowledge and only later
integrated in the discourse context (e.g., Garrod & Terras, 2000; Kintsch, 1988;
Sanford & Garrod, 1989). In other accounts, contextual information is regarded as the dominant source, as it can influence language comprehension
immediately (e.g., Hess et al., 1995; Marslen-Wilson & Tyler, 1980; van Berkum et al.,
1999) and can fully override background knowledge (e.g., Nieuwland & Van Berkum,
2006).
Within validation research there are a considerable number of empirical
investigations of the effects of contextual information and background knowledge on
validation processes (e.g., Albrecht & O’Brien, 1993; Menenti et al., 2009; O’Brien et
al., 1998, 2004, 2010; O’Brien & Albrecht, 1992; Rapp, 2008; Richter et al., 2009).
Usually the focus of each investigation is on one source of potential inconsistencies
or the other, whereas in reality both sources operate in tandem. With respect to
investigations of text-based monitoring detection of within-text incongruencies
inevitably depends on background knowledge as well. For example, in paradigms in
which targets (e.g., children are building a snowman) presumably are incongruent
with preceding context (e.g., it was a hot, sunny day) detection only occurs if readers
have certain background knowledge (e.g., snow melts on a hot sunny day). Even
blatant incongruencies (e.g., a car is described as solid blue in one sentence, but as
solid red in the next) require at least a minimal amount of background knowledge
(e.g., red and blue are colors and something cannot be solid red and blue at the same
time). Thus, although the role of background knowledge is implied because it is
essential for detecting the (in)congruency of textual targets, it is not explicitly included
as a factor. With respect to studies that do include both contextual and world
knowledge manipulations the central question tends to be whether context can
override (erroneous) world knowledge, not whether text-based monitoring is an
independent process (e.g., Creer et al., 2018; Menenti et al., 2009; Walsh et al., 2018).
As a result, it is difficult to distinguish between the respective impacts of textual
information and background knowledge and to define possibly unique influences.
To address this issue, van Moort et al. (2018) contrasted validation against
background knowledge and validation against prior text in a single design.
Participants read expository texts about well-known historical topics in a self-paced,
sentence-by-sentence manner. Each text contained a target sentence that was either
true (e.g., the Statue of Liberty was delivered to the United States) or false (e.g., the
Statue of Liberty was not delivered to the United States) relative to the reader’s
background knowledge and that was either supported or called into question by the
preceding context (e.g., context that described that the construction of the statue went
according to plan vs. context that described problems that occurred during
construction of the statue). Results indicated that both prior text and background
knowledge influenced readers’ moment-by-moment processing on targets, but only
inaccuracies with background knowledge elicited spill-over effects. This suggests that
both sources of information have unique influences on processing. Furthermore, a
recent study used the same reading paradigm while collecting neuroimaging data (fMRI) to examine the neural underpinnings of text-based and knowledge-based
validation (van Moort et al., 2020). Consistent with Van Moort et al. (2018), the
neuroimaging data suggested a ‘division of labor’ for text-based and knowledge-
based validation processes. The medial Prefrontal Cortex (mPFC) seems to be
oriented towards knowledge-based processing, whereas the right Inferior Frontal
Gyrus (right IFG) is more involved in text-based processing. Interestingly, the
Precuneus and the left Inferior Frontal Gyrus (left IFG) seem to combine the
information provided by a text with the information stored in long-term memory. Taken
together, the results from these two studies suggest that both sources impact
processing and that text-based and knowledge-based validation processes may
involve (partially) different cognitive mechanisms.

The current study

The aim of this study was to provide insight into component validation
processes involved in coherence monitoring and into when and how contextual
information and background knowledge influence these processes. Specific questions were whether (parts of) the validation process are more knowledge-driven
or text-driven, and to what extent text-based and knowledge-based validation
processes take place independently or interactively. We employed eye-tracking
methodology because it offers the possibility to distinguish between relatively early
processing (e.g., first-pass reading times) and later processing (e.g., second-pass
reading times) (Cook & Wei, 2017; Rayner, 1998). It also allows for relatively naturalistic reading, as texts are presented in their entirety (e.g., Hyönä et al., 2002). The basic assumption of eye-tracking methods is that increased processing demands, e.g., when
readers encounter a comprehension problem in a text, are associated with increased
processing time or changes in the pattern of fixations (e.g., Frazier & Rayner, 1982;
Hyönä et al., 2003; Rayner et al., 2004; Rayner & Slattery, 2009; Rinck et al., 2003;
Stewart et al., 2004). Such changes are assumed to be indicative of underlying
processes. For example, readers may detect and attempt to resolve incoherence by
spending more time on the critical regions (Yuill & Oakhill, 1991), by engaging in
rereading activities to look for the possible source of the incoherence (Hyönä et al.,
2003; Zabrucky & Ratner, 1986), or by making regressions to earlier parts of the text
to reinstate information from the text that they would like to elaborate or reactivate in
working memory (Hyönä & Lorch, 2004).
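The eye-movement indices referred to here (first-pass time, rereading, regressions) are all derivable from an ordered fixation sequence annotated with interest-area labels. The sketch below uses a hypothetical data format, a list of (region index, duration) pairs, to compute first-pass time (fixations on a region before it is first exited), second-pass time (all later fixations on it), and whether the first exit was a regression to an earlier region.

```python
def region_measures(fixations, region):
    """Score one interest area from an ordered fixation sequence.

    fixations: list of (region_index, duration_ms) pairs in temporal order.
    Returns (first_pass, second_pass, regression_out): first-pass time sums
    fixations from first entering the region until it is first exited;
    second-pass time sums all later fixations on it; regression_out is True
    if the first exit moved to an earlier region.
    """
    first_pass = second_pass = 0
    entered = exited = regression_out = False
    for r, dur in fixations:
        if r == region:
            entered = True
            if not exited:
                first_pass += dur
            else:
                second_pass += dur
        elif entered and not exited:
            exited = True
            regression_out = r < region
    return first_pass, second_pass, regression_out

# Hypothetical trial over regions 0-3: the reader leaves region 2 with a
# regression back to region 0, then returns to reread region 2.
trial = [(0, 210), (1, 185), (2, 240), (2, 160), (0, 190), (2, 220), (3, 200)]
print(region_measures(trial, 2))  # -> (400, 220, True)
```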
Participants read expository texts containing information that conflicted with
the preceding text and/or readers’ background knowledge (based on Van Moort et
al., 2018), while their eye movements were recorded as they freely read through the
texts. Assuming that initial validation processes (i.e., detection of inconsistencies) occur relatively early in processing, and elaboration and repair processes (i.e.,
attempts to resolve the inconsistency) occur later in processing, recording eye
movements allows us to investigate whether text and background knowledge affect
early and later validation processes independently or interactively and, conversely,
whether early and late processes (or both) are predominantly knowledge-driven or
context-driven.
The secondary aim was to investigate whether working memory capacity
affects text-based and knowledge-based validation processes differentially and
whether validation components are impacted by individual differences in processing
capacity. An important assumption of most models of validation is that incoming
information can only be validated against information that is available and activated
during comprehension. Therefore, working memory capacity is likely to play a role in
validation as it limits the amount of information that can be activated (e.g., Hannon &
Daneman, 2001; Singer, 2006). For instance, two-component models of validation
suggest that epistemic elaboration or repair processes may be particularly impacted
by individual differences in processing capacity, as they are assumed to be reader-
initiated and resource-demanding. Working memory indeed has been found to play a
role in at least knowledge-based validation (van Moort et al., 2018) but it is unclear
whether it impacts epistemic elaboration as well. To investigate these possibilities
we obtained a measure of participants’ working memory capacity.

Method
Participants

Forty-seven native speakers of Dutch (39 females, 8 males) aged 18-27 years
(M = 21, SD = 2) participated in this study11. All participants had normal or corrected-
to-normal eyesight and none had diagnosed reading or learning disability. Participants
provided written informed consent prior to testing and were paid for participating. All
procedures were approved by the Leiden University Institute of Education and Child
Studies ethics committee and conducted in accordance with the Declaration of
Helsinki.

11
In total 70 participants were tested. Due to technical issues with the eye tracker at the start of the study the data
of 23 participants was of insufficient quality and could not be analyzed.

Materials

We used the texts from Chapter 2 of this dissertation (based on Rapp, 2008), which cover well-known historical topics. The texts were normed to ensure that the presented facts were common knowledge in our sample
(see Chapter 2 of this dissertation for a more detailed description of the norming
study). Each text contains a target sentence that is either true or false (with respect to
the readers’ background knowledge); at the same time the preceding text could either
support or call into question the information in the targets. More specifically, the
context could bias towards either the true or the false target, making the context either
congruent or incongruent with the target (see sample text in Table 4.1). Four different
versions of each of the 80 texts were constructed, by orthogonally varying the target
sentence (i.e., true versus false) and the context prior to the target sentence (i.e.,
congruent versus incongruent with target). It is important to note that contexts
biasing towards false targets did not include erroneous information. Although the
phrasing of the context sentences called into question the certainty of events stated
in the target, all facts described in the context sentences were historically correct.
Each text consisted of ten sentences (see Table 4.1 for a sample text).
Sentences 1 and 2 were identical across all conditions, providing an introduction to the
topic. Sentences 3–7 differed in content, depending on context condition
(congruent/incongruent). On average, the context biasing towards the true target
consisted of 64 words (SD = 4.20) and 399 characters (SD = 23.48) and the context
biasing towards the false target consisted of 66 words (SD = 4.30) and 407 characters
(SD = 22.79). Sentence 8 was the target sentence and was either true or false. Overall,
targets were equated for length: both true (SD = 1.92) and false (SD = 1.90) targets
contained on average 9 words and both true (SD = 10.51) and false (SD = 10.42)
targets contained on average 60 characters (including spaces and punctuation). Half
of the true targets and false targets included the word ‘not’ or ‘never’ (e.g., “Jack the
Ripper was never caught and punished for his crimes.”), and half did not (e.g., “The
Titanic withstood the damage from the iceberg collision.”). Sentences 9 and 10 concluded
the text. They contained a general conclusion that did not elaborate on the fact
potentially called into question in the target sentence and maintained historical
accuracy. On average, the texts contained 121 words (SD = 5.66) and 766 characters
per text (SD = 37.63), across all four text versions.
To implement a repeated-measures design we used a Latin square to
construct four lists, with each text appearing in a different version as a function of
target (true or false) and text context (congruent or incongruent with target) on each
list. Each list was randomized. Each participant was assigned to one list and, hence,
read one version of each text.
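The rotation described above can be sketched as follows. This is an illustrative reconstruction (the function and variable names are my own, not taken from the study materials), assuming the four conditions simply rotate across the four lists:

```python
# Illustrative sketch of a Latin-square assignment of the four text versions
# (target true/false x context congruent/incongruent) to four lists, so that
# each text appears in a different version on each list and each list is
# balanced across conditions. Names are hypothetical, not from the study.

CONDITIONS = [
    ("true", "congruent"),
    ("true", "incongruent"),
    ("false", "congruent"),
    ("false", "incongruent"),
]

def build_lists(n_texts=80, n_lists=4):
    """Return, for each list, one condition per text, rotating conditions."""
    return [
        [CONDITIONS[(text + shift) % len(CONDITIONS)] for text in range(n_texts)]
        for shift in range(n_lists)
    ]

lists = build_lists()
```

Each participant is then assigned one list, so every participant reads each text exactly once while, across lists, every text occurs in all four versions.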

Table 4.1. Sample text with the four text versions (translated from Dutch original). Versions are defined by text congruency (target congruent or incongruent with context) and knowledge accuracy (target true or false).

Target congruent with context, target true:
[Introduction] In 1865, a Frenchman named Laboulaye wished to honor democratic progress in the U.S. He conceptualized a giant sculpture along with artist Auguste Bartholdi.
[Bias True Context] Their ‘Statue of Liberty’ would require extensive fundraising work. They organized a public lottery to generate support for the sculpture. American businessmen also contributed money to build the statue’s base. Despite falling behind schedule, the statue was completed. The statue’s base was finished as well and ready for mounting.
[Target True] The Statue of Liberty was delivered from France to the United States.
[Coda] The intended site of the statue was a port in New York harbor. This location functioned as the first stop for many immigrants coming to the U.S.

Target congruent with context, target false:
[Introduction] As above.
[Bias False Context] Their ‘Statue of Liberty’ would require extensive fundraising work. Raising the exorbitant funds for the statue proved an enormous challenge. Because of financial difficulties France could not afford to make a gift of the statue. Fundraising was arduous and plans quickly fell behind schedule. Because of these problems, completion of the statue seemed doomed to failure.
[Target False] The Statue of Liberty was not delivered from France to the United States.
[Coda] As above.

Target incongruent with context, target true:
[Introduction] As above.
[Bias False Context] As above.
[Target True] The Statue of Liberty was delivered from France to the United States.
[Coda] As above.

Target incongruent with context, target false:
[Introduction] As above.
[Bias True Context] As above.
[Target False] The Statue of Liberty was not delivered from France to the United States.
[Coda] As above.

Reading task

Participants read 80 texts while their eye movements were recorded. Each text
was presented in its entirety and participants were instructed to read for
comprehension at their normal pace and to advance to the next text by pressing a
button. Between texts, a fixation cross was presented for 300 ms at the position of
the first word of the first sentence. The task consisted of two blocks of 40 texts.
Each block started with a calibration of the eye-tracking apparatus. Participants
performed a short practice block.

Apparatus

Eye movements were recorded using an EyeLink 1000 desktop-mounted eye
tracker (SR Research, Oakville, Canada). The sampling frequency was 1000 Hz and
spatial accuracy was approximately 0.4°. Viewing was binocular, but only the right
eye was tracked. A chin-and-head rest was used to minimize participants’ head
movements. The texts were presented in their entirety on a 19-inch screen at
approximately 65 cm from the participant.

Measures

Working memory capacity

Working memory capacity was measured with the Swanson Sentence Span
task (Swanson et al., 1989). In this task, the experimenter reads out sets of sentences,
with set length increasing from 1 to 6 sentences as the test progresses. At the
end of each set a comprehension question is asked about one of the sentences in the
set. Participants have to remember the last word of each sentence and recall these
words after answering the comprehension question. The test is terminated when the
participant’s error rate exceeds a given threshold. Participants earn points for each
correct answer to a comprehension question and for each correctly recalled set of
words. The sum of these points is the index of working memory capacity.
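As a rough sketch, the scoring rule can be expressed as follows. The one-point-per-outcome scheme below is a simplified assumption for illustration, not the official Swanson scoring protocol:

```python
# Simplified illustration of the span-task scoring described above: one point
# for each correct comprehension answer and one for each fully recalled set of
# sentence-final words; the total serves as the working memory index.
# The exact point scheme is an assumption, not the official scoring.

def working_memory_index(sets):
    """sets: list of (comprehension_correct, all_final_words_recalled) pairs."""
    return sum(int(comp) + int(recall) for comp, recall in sets)

# Example: three sets, one failed recall -> 3 comprehension + 2 recall points.
score = working_memory_index([(True, True), (True, False), (True, True)])
```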

Procedure

Participants were tested individually. The eye tracker was calibrated by
means of a 9-point calibration grid that covered the entire computer screen. Upon
successful calibration participants completed the reading task. Next, they completed

the Swanson Sentence Span Task and a questionnaire assessing their background
knowledge on the text topics. The duration of the total session was approximately 90
minutes.

Eye-fixation measures

For each text, eye-fixation measures were calculated for two regions of
interest: the target sentence and the spill-over sentence (the sentence following
the target). We examined several measures for each of these regions. First-pass
reading times reflect initial processing of a sentence (Rayner et al., 1989) and were
computed by summing all first-pass fixations for each word (all fixations on a word
from the first fixation on that word until the first time the reader exits that word) in the
sentence. First-pass probability of a regression reflects the probability that a
regression was made to an earlier section of the text and was computed for each word
in a sentence (Clifton Jr et al., 2007). Second-pass reading times (or re-reading times)
reflect later processing or re-processing of the words in a sentence after the words
were exited for the first time (Rayner et al., 1989). They were computed by summing
all fixations on each word in the sentence excluding first-pass fixations on these
words. The probability of rereading reflects the probability that the sentence was read
again after the reader exited the sentence for the first time and was computed binarily
(rereading present vs. absent) for each sentence. The regression path duration or go-
past duration is computed by summing all fixations from first entering a region until
exiting in the forward direction (Duffy et al., 1988; Rayner & Duffy, 1986; Rayner &
Liversedge, 2012). This measure includes any regression out of a region prior to
moving forward in the text.
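The measures defined above can be illustrated with a minimal sketch. The data format (a chronological list of (word index, duration) pairs) and the function names are assumptions for illustration; this is not the analysis pipeline used in the study:

```python
# Minimal sketch of the eye-movement measures defined above, assuming
# `fixations` is a chronological list of (word_index, duration_ms) pairs and
# `region` is the set of word indices belonging to a sentence.

def first_pass_time(fixations, region):
    """Sum, over words in the region, of fixations on each word from the first
    fixation on that word until that word is first exited."""
    per_word, exited, prev = {}, set(), None
    for word, dur in fixations:
        if prev is not None and word != prev:
            exited.add(prev)          # the previous word has now been exited once
        if word not in exited:
            per_word[word] = per_word.get(word, 0) + dur
        prev = word
    return sum(t for w, t in per_word.items() if w in region)

def second_pass_time(fixations, region):
    """All fixation time on the region excluding first-pass fixations."""
    total = sum(dur for word, dur in fixations if word in region)
    return total - first_pass_time(fixations, region)

def go_past_duration(fixations, region):
    """Regression-path (go-past) duration: all fixations from first entering
    the region until exiting it in the forward direction, so regressions to
    earlier text are included."""
    total, entered = 0, False
    for word, dur in fixations:
        if not entered:
            if word in region:
                entered = True
                total += dur
        elif word > max(region):
            break                     # exited in the forward direction
        else:
            total += dur              # includes regressive fixations
    return total
```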

Analyses

To investigate the effects of the manipulations on the reading process we
conducted mixed-effects linear regression analyses using the R package lme4,
version 1.1-21 (D. Bates et al., 2015). For each measure on each sentence we started
with a full interactional model that included the interaction between the fixed factors
background knowledge (target true/false), text (target congruent/incongruent),
working memory capacity (median centered), and the random factors subjects and
items. Effect coding was applied in the main analyses (true was coded as -0.5 and
false as 0.5; congruent was coded as -0.5 and incongruent as 0.5). We did not include
random slopes in our models. We report the relevant fixed-effects estimates and the
associated t-values (for the continuous dependent variables) and z-values (for the
categorical dependent variables) in tables (see Tables 4.3 and 4.4). To obtain
fixed-effects estimates and the associated statistics for the relevant simple effects of
an interaction, we performed pairwise comparisons. The results of the follow-up
analysis are provided in the text. As it is not clear how to determine the degrees of
freedom for the t statistics estimated by mixed models for continuous dependent
variables (Baayen, 2008), we do not report degrees of freedom and p values. Instead,
statistical significance at approximately the 0.05 level is indicated by |t| ≥ 1.96 (e.g.,
Schotter et al., 2014).
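The coding scheme and the significance criterion can be made concrete with a small sketch. The models themselves were fitted in R with lme4; this fragment only reproduces the predictor coding and the |t| ≥ 1.96 rule, with hypothetical names:

```python
# Effect codes as described above (true/congruent = -0.5, false/incongruent
# = +0.5); the interaction predictor is the product of the main-effect codes.
# Sketch only: the actual models were fitted in R with lme4.

BACKGROUND = {"true": -0.5, "false": 0.5}
TEXT = {"congruent": -0.5, "incongruent": 0.5}

def predictors(background, text):
    """Main-effect codes plus their product, which carries the interaction."""
    b, t = BACKGROUND[background], TEXT[text]
    return {"background": b, "text": t, "background_x_text": b * t}

def is_significant(t_value, criterion=1.96):
    """Approximate 0.05-level criterion used when df are unknown."""
    return abs(t_value) >= criterion
```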

Results
Data for one experimental text were dropped from the analyses, as it
concerned Stephen Hawking, whose death changed the truth value of the text. For the
regions of interest (target and spill-over sentence) in the remaining texts, first-pass
reading time, first-pass probability of a regression, re-reading probability, second-pass
reading time, and regression path duration were determined (see Table 4.2 for
descriptive statistics).
Target sentence

First-pass reading times

Wald chi-square tests revealed a main effect of background knowledge, but
not of text. Furthermore, we observed a background knowledge * text interaction on
the log-transformed first-pass reading times (see Table 4.3 for fixed-effects estimates
and associated statistics). Post-hoc multiple comparisons showed longer first-pass
reading times for false targets than for true targets, both when the target was presented
in a congruent context (β = -0.07, SE = 0.02, z = -3.92) and when it was presented in
an incongruent context (β = -0.12, SE = 0.02, z = -6.90). This effect of background
knowledge was modulated by congruency of the target with the preceding context:
False targets presented in an incongruent context (e.g., the target states that the Statue of
Liberty was not delivered to the United States but context suggests it was) elicited
longer first-pass reading times than false targets presented in a congruent context
(e.g., the target states that the Statue of Liberty was not delivered to the United States
and context also suggests that it was not) (β = -0.05, SE = 0.02, z = -2.64). However,
true targets showed no effect of (in)congruency with the preceding context (β = 0.01,
SE = 0.02, z = 0.38) (Figure 4.1a). We did not find any effects of working memory
capacity on the first-pass reading times (Table 4.3).

Table 4.2. Means and standard deviations for the dependent variables at the regions of interest (target
and spill-over sentence) for the experimental manipulations regarding text (target congruent /
incongruent with context) and background knowledge (target true / false).

                                  Background                    Target           Spill-over
Measure                           knowledge   Text              M       SD       M       SD
First-pass reading time (in ms)   True        Congruent         1471    737      1897    846
                                  True        Incongruent       1454    673      1948    943
                                  False       Congruent         1611    848      1991    886
                                  False       Incongruent       1658    805      2028    920
Second-pass reading time (in ms)  True        Congruent         405     396      476     447
                                  True        Incongruent       439     412      496     488
                                  False       Congruent         480     428      452     457
                                  False       Incongruent       543     520      527     559
First-pass probability of         True        Congruent         0.15    0.36     0.15    0.35
regression                        True        Incongruent       0.18    0.38     0.16    0.37
                                  False       Congruent         0.19    0.39     0.16    0.37
                                  False       Incongruent       0.20    0.40     0.17    0.37
Regression path duration (in ms)  True        Congruent         2032    1883     2429    1698
                                  True        Incongruent       2134    2714     2701    2687
                                  False       Congruent         2218    2015     2600    1646
                                  False       Incongruent       2360    1619     2773    1870
Re-reading probability            True        Congruent         0.48    0.50     0.50    0.50
                                  True        Incongruent       0.54    0.50     0.52    0.50
                                  False       Congruent         0.58    0.49     0.54    0.50
                                  False       Incongruent       0.62    0.48     0.54    0.50

First-pass probability of regression

In addition to a text * background knowledge interaction, results showed main
effects of background knowledge and of text on the first-pass probability of
regressions (see Table 4.3 for fixed-effects estimates and associated statistics).
Post-hoc multiple comparisons showed that readers were more likely to make
regressions on false targets than on true targets, both when the target was presented in
a congruent context (β = -0.03, SE = 0.06, z = -5.02) and when it was presented in an
incongruent context (β = -0.14, SE = 0.06, z = -2.44) (Figure 4.1b). However, there
was only an effect of (in)congruency with the context on targets containing true world-knowledge
information: Readers were more likely to make first-pass regressions when
true targets were presented in an incongruent context than when they were presented
in a congruent context (β = -0.02, SE = 0.06, z = -3.96). False targets showed no effect
of (in)congruency with the preceding context (β = -0.08, SE = 0.06, z = -1.40) (Figure
4.1b). We did not find any effects of working memory capacity on the first-pass
probability of a regression (Table 4.3).

Regression path duration

Results showed main effects of background knowledge, text, and working
memory capacity on the regression path duration (see Table 4.3 for fixed-effects
estimates and associated statistics). False targets elicited longer regression-path
durations than true targets (Figure 4.1e). Similarly, incongruent targets elicited longer
regression-path durations than congruent targets (Figure 4.1e). Furthermore,
participants with a larger working memory showed shorter regression-path durations
than participants with a smaller working memory.

Re-reading probability
Results showed main effects of background knowledge, text and working
memory capacity on the probability of rereading targets (see Table 4.3 for fixed-
effects estimates and associated statistics). Participants were more likely to reread
false targets than true targets (Figure 4.1c). Similarly, they were more likely to reread
incongruent targets than congruent targets (Figure 4.1c). Furthermore, participants
with a larger working memory were less likely to reread targets than participants with
a smaller working memory.

Second-pass reading times

Results showed main effects of background knowledge, text, and working
memory capacity on the second-pass reading times on targets¹² (see Table 4.3 for
fixed-effects estimates and associated statistics). Participants spent more time
re-reading false targets than true targets (Figure 4.1d). Similarly, participants spent
more time re-reading incongruent targets than congruent targets (Figure 4.1d).
Furthermore, participants with a larger working memory spent less time re-reading
targets than participants with a smaller working memory.

¹² Note that second-pass reading times were only included in this measure if a second pass was made.

Figure 4.1. Reading patterns for (a) first-pass reading time, (b) first-pass probability of regression, (c)
probability of rereading, (d) second-pass reading time, and (e) regression path duration on target
sentences as a function of match with text (congruent / incongruent) and background knowledge
(true / false). Error bars represent standard errors of the means.

Table 4.3. Fixed-effects estimates and the associated statistics of the sum-coded models fitted for the
dependent variables on the target sentence. Note. The following R code was used for all models:
dependent variable ~ 1 + Text * Background * Working Memory + (1|Subject) + (1|Item).

Measure / Fixed effect              B        SE      t/z

First-pass reading time
  Intercept                         7.23     0.04    197.69*
  Text                              0.21     0.01    1.71
  Background                        0.10     0.01    7.45*
  WM                                -0.05    0.05    -1.64*
  Text * Background                 0.05     0.03    2.16*
  Text * WM                         0.03     0.02    1.47
  Background * WM                   -0.02    0.02    -1.25
  Text * Background * WM            0.02     0.04    0.38

First-pass probability of a regression
  Intercept                         -1.59    0.09    -18.22*
  Text                              0.16     0.04    3.82*
  Background                        0.22     0.04    5.20*
  WM                                -0.23    0.13    -1.83
  Text * Background                 -0.17    0.08    -2.01*
  Text * WM                         -0.01    0.07    -0.12
  Background * WM                   -0.05    0.07    -0.73
  Text * Background * WM            -0.16    0.13    -1.19

Regression path duration
  Intercept                         7.50     0.04    192.39*
  Text                              0.06     0.02    3.68*
  Background                        0.10     0.02    6.33*
  WM                                -0.11    0.05    -2.14*
  Text * Background                 0.05     0.03    1.43
  Text * WM                         0.004    0.02    0.19
  Background * WM                   -0.01    0.02    -0.39
  Text * Background * WM            -0.02    0.05    -0.47

Re-reading probability
  Intercept                         0.29     0.15    2.03*
  Text                              0.29     0.08    3.75*
  Background                        0.41     0.08    5.23*
  WM                                -0.45    0.21    -2.19*
  Text * Background                 0.01     0.15    0.09
  Text * WM                         0.004    0.12    0.04
  Background * WM                   0.01     0.12    0.07
  Text * Background * WM            -0.17    0.23    -0.73

Second-pass reading time
  Intercept                         5.78     0.04    146.30*
  Text                              0.16     0.03    4.60*
  Background                        0.07     0.03    2.18*
  WM                                -0.13    0.06    -2.23*
  Text * Background                 0.04     0.07    0.51
  Text * WM                         -0.05    0.06    -0.84
  Background * WM                   -0.01    0.06    -0.22
  Text * Background * WM            0.03     0.11    0.30

* indicates a |z| or |t| score ≥ 1.96 and thus significance at the 0.05 level.

Table 4.4. Fixed-effects estimates and the associated statistics of the sum-coded models fitted for the
dependent variables on the spill-over sentence. The following R code was used for all models:
dependent variable ~ 1 + Text * Background * Working Memory + (1|Subject) + (1|Item).

Measure / Fixed effect              B        SE      t/z

First-pass reading time
  Intercept                         7.48     0.04    183.26*
  Text                              0.15     0.01    1.27
  Background                        0.04     0.01    3.18*
  WM                                -0.02    0.05    -0.45
  Text * Background                 0.02     0.03    0.74
  Text * WM                         0.02     0.05    1.15
  Background * WM                   -0.03    0.05    -1.71
  Text * Background * WM            0.06     0.04    1.75

First-pass probability of a regression
  Intercept                         -1.77    0.09    -19.19*
  Text                              0.18     0.04    2.05*
  Background                        0.11     0.04    2.91*
  WM                                -0.32    0.14    -2.35
  Text * Background                 0.01     0.08    0.18
  Text * WM                         -0.05    0.06    -0.81
  Background * WM                   0.02     0.06    0.27
  Text * Background * WM            0.09     0.12    0.80

Regression path duration
  Intercept                         7.71     0.04    175.51*
  Text                              0.53     0.01    3.70*
  Background                        0.05     0.01    3.70*
  WM                                -0.10    0.06    -1.68
  Text * Background                 0.01     0.03    0.27
  Text * WM                         0.01     0.02    0.23
  Background * WM                   -0.004   0.02    -0.18
  Text * Background * WM            0.06     0.04    1.27

Re-reading probability
  Intercept                         0.13     0.14    0.89
  Text                              0.04     0.07    0.49
  Background                        0.16     0.08    2.17*
  WM                                -0.42    0.20    -2.06*
  Text * Background                 -0.08    0.15    -0.52
  Text * WM                         -0.10    0.11    -0.89
  Background * WM                   -0.06    0.11    -0.48
  Text * Background * WM            0.09     0.23    0.37

Second-pass reading time
  Intercept                         5.79     0.04    139.91*
  Text                              0.002    0.03    -0.06
  Background                        0.10     0.03    2.89*
  WM                                -0.19    0.06    -3.24*
  Text * Background                 0.11     0.07    1.65
  Text * WM                         0.01     0.05    0.16
  Background * WM                   -0.02    0.05    -0.37
  Text * Background * WM            0.03     0.11    0.31

* indicates a |z| or |t| score ≥ 1.96 and thus significance at the 0.05 level.

Spill-over sentence

First-pass reading times

Results showed a main effect of background knowledge on the first-pass
reading times on spill-over sentences (see Table 4.4 for fixed-effects estimates and
associated statistics). Readers were slower to read spill-over sentences following false
targets than those following true targets. We did not find any effects of text or
working memory capacity.

First-pass probability of regression

Results showed main effects of background knowledge, text, and working
memory capacity on the first-pass probability of regressions on spill-over sentences (see
Table 4.4 for fixed-effects estimates and associated statistics). Spill-over sentences
that were preceded by false targets were more likely to elicit regressions to earlier
text than those preceded by true targets. Similarly, spill-over sentences following
incongruent targets were more likely to elicit regressions than those following
congruent targets. Furthermore, readers with a larger working memory were less
likely to make regressions on the spill-over sentence than readers with a smaller
working memory.

Regression path duration

Results showed main effects of background knowledge and text on the
regression path duration on spill-over sentences (see Table 4.4 for fixed-effects
estimates and associated statistics). Spill-over sentences following false targets
elicited longer regression paths than those following true targets. Similarly, spill-over
sentences following incongruent targets elicited longer regression paths than those
following congruent targets.

Re-reading probability

Results showed main effects of background knowledge and working
memory capacity on the re-reading probability on spill-over sentences (see Table 4.4
for fixed-effects estimates and associated statistics). Participants were more
likely to reread spill-over sentences preceded by false targets than those preceded
by true targets. Furthermore, readers with a smaller working memory were more likely
to reread spill-over sentences than readers with a larger working memory.

Second-pass reading times

Results showed main effects of text and working memory capacity on the
second-pass reading times on spill-over sentences (see Table 4.4 for fixed-effects
estimates and associated statistics). Participants spent more time re-reading spill-over
sentences following incongruent targets than those following congruent targets.
Furthermore, readers with a larger working memory spent less time re-reading the
spill-over sentence than readers with a smaller working memory.

Discussion
The aim of the current eye-movement study was to investigate when and how
contextual information and background knowledge, respectively, influence validation
processes during text comprehension. Additionally, we examined the role of working-
memory capacity in validating against these two sources of information.
During first-pass reading, target sentences that contained world-knowledge
inconsistencies induced longer reading times than did target sentences that did not
contain such inconsistencies. Sentences that contained text incongruencies induced
longer reading times than did sentences that contained congruent targets but,
interestingly, only when the target contained false (inaccurate) world-knowledge
information. In addition, knowledge inaccuracies elicited more regressions during
first-pass reading. Text incongruencies also elicited more regressions during first-
pass reading, but only in the absence of a knowledge inaccuracy (i.e., when targets
contained true world-knowledge information). No interactions between text-based
and knowledge-based processing were observed in later processing measures
(i.e., regression-path duration, re-reading probability, and second-pass reading time).
Instead, these later measures revealed reliable main effects of both context and
background knowledge: Readers were more likely to re-read the target, and displayed
both longer regression paths and longer re-reading times, when they encountered
targets that were either false with respect to their world knowledge or incongruent
with the preceding context.
In addition to these effects on target sentences, we observed a spill-over
effect of knowledge inaccuracies during first-pass reading (i.e., longer first-pass
reading times on the spill-over sentence if it was preceded by a target containing false
world knowledge information) but not of contextual incongruencies. Furthermore,
readers were more likely to regress to earlier parts of the text and displayed longer
regressions on spill-over sentences following false or incongruent targets. Finally, they
were more likely to re-read spill-over sentences following false or incongruent targets
and, when they reread a sentence, spent more time doing so.

Working memory did not affect early processing but it did influence later
processing (e.g., regressions and re-reading). Readers with a larger working memory
were less likely to make regressions or to reread targets and, if they did reread, spent
less time doing so. Importantly, working memory did not interact with the two types of
inconsistencies, indicating that the effects of incongruency with text or inaccuracy
with background knowledge did not depend on readers’ working-memory capacity.

Differentiating text-based and knowledge-based validation processes

These findings point to several conclusions. First, they support the notion that
both incongruencies with context and inaccuracies with background knowledge have
a profound impact on validation processes during reading, as evidenced by early and
later eye-movements on both target and spill-over sentences. Second, although
contextual incongruencies and knowledge inaccuracies are not completely
independent (i.e., contextual incongruencies must involve, at the very least, some
violation of logic that exists in the reader’s background knowledge), the results show
that they trigger distinct processes. Third, the distinct patterns of processing for text
and knowledge violations already are evident at an early stage in the processing of
incoming text information.
Fourth, first-pass reading times and regression probabilities revealed
interaction patterns that further differentiate processing of text incongruencies and
knowledge inaccuracies. Knowledge inaccuracies consistently disrupted initial
processing (i.e., longer first-pass reading times and more first-pass regressions); Text
incongruencies also disrupted initial processing but the way in which they did
depended on whether there also was a knowledge inaccuracy present. If the target
also violated background knowledge, then the contextual incongruency resulted in
further slowdown in reading (additional first-pass reading times); In contrast, if the
target was accurate according to background knowledge then the contextual
incongruency elicited more re-readings (more first-pass regressions). Thus, the
target’s accuracy/inaccuracy with respect to background knowledge modulates the
type of effect that a text incongruence elicits: if incoming text information is
inconsistent with both earlier text and the reader’s knowledge, then reading becomes
extra slow; if the incoming information is inconsistent only with earlier text then it is
more likely to be reread.
Fifth, text incongruencies and knowledge inaccuracies differed in the
strength of the disruption they caused: Knowledge inaccuracies appeared to induce
a more intensive, prolonged disruption of the reading process than did text
incongruencies, as reflected in spill-over effects (i.e., effects on first-pass reading of
the spill-over sentence) for background-knowledge but not for context contradictions.
This pattern is consistent with the notion of dissociable text-based and knowledge-
based validation processes.
It is interesting to speculate about possible mechanisms underlying these
findings. Drawing an analogy to sentence-processing literature, one possibility is to
assume a serial mechanism where knowledge-based information is processed first
and text-based information comes into play at a later stage during processing (e.g.,
Frazier, 1987; Frazier & Fodor, 1978; Frazier & Rayner, 1982). Another possibility is
an interactive, constraint-based mechanism in which both sources of information are
processed simultaneously (similar to interactive constraint models; Bates &
MacWhinney, 1989; MacDonald et al., 1994; Marslen-Wilson, 1973; Tanenhaus &
Trueswell, 1995). The serial account fits with the finding that contextual incongruence
increases first-pass reading time only if a knowledge-based inaccuracy is detected.
However, in the absence of a knowledge inaccuracy, contextual incongruencies do
increase the probability of a first-pass regression. In so far as first-pass regressions
reflect early processes, this latter finding suggests an interactive mechanism. To
distinguish between these two possible mechanisms more detailed investigation of
the discourse-level processes is necessary. Within the eye-tracking context this would
require a fine-grained analysis of the moments at which the lengthening of reading
times and the regressions occur; other methods with high temporal resolution, such
as ERP/EEG, may also be useful. Regardless, the current results do show that
context and background knowledge interact very early in the processing of
incoming information and together constrain validation. Such a conclusion is in line with
spread-of-activation mechanisms posited in the discourse comprehension literature,
such as the memory-based processing view (e.g., Cook et al., 1998b; Gerrig &
McKoon, 1998; Myers & O’Brien, 1998; O’Brien & Myers, 1999; Rizzella & O’Brien,
2002) and cohort activation within the Landscape model view (van den Broek et al.,
1999; van den Broek & Helder, 2017).
In this context, the observation in the current study that inaccuracy with world
knowledge had stronger and longer effects than incongruency with context may
reflect a structural property of the monitoring mechanisms – that validation always
occurs first or primarily against the reader’s background knowledge. It is also
possible, however, that the observed dominance of world-knowledge is not an
inherent property of the system but emerged due to other factors. For example, the
dominance of one informational source over the other may depend on the strength of
the reader’s text-relevant general world knowledge (Cook, 2014) versus the strength
of the contextual information (e.g., Cook & Guéraud, 2005; Myers et al., 2000; O’Brien
& Albrecht, 1991). In the current study, knowledge inaccuracies tended to be stronger
than the text incongruencies, as the former were outright errors whereas the latter
were merely unlikely. To determine whether background knowledge is structurally
dominant, future studies could systematically vary the strength of background
knowledge, similar to how studies have varied the strength of the context (e.g., Creer et
al., 2018; Walsh et al., 2018).

Early and late processes in validation

Theoretical models of validation assume distinct components of validation: a
coherence-detection component and a post-detection processing component (Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). Models such as the RI-Val model (Cook & O’Brien, 2014;
O’Brien & Cook, 2016a, 2016b) focus on the passive, memory-based processes that
are presumed to be involved in the initial detection of an inconsistency. Once
detected, inconsistencies may trigger further processes, for example processes
aimed at repairing the inconsistency (as described in the two-step model of validation;
Isberner & Richter, 2014a; Richter, 2011; Richter et al., 2009; Schroeder et al., 2008).
The models are not specific with respect to the relation between these components
(e.g., does the detection component finish before possible repair processes, do the
two components overlap, do detection processes interact with post-detection
processes by triggering renewed detection processes?) but they generally agree that,
as processing proceeds, the balance gradually shifts from detection to post-detection
(repair) processes. Thus, although all eye movements may be influenced by both
components of validation, early eye-tracking measures such as first-pass reading
times are considered to reflect early processing (e.g., Clifton Jr et al., 2007; Keith
Rayner & Liversedge, 2012) and therefore are relatively close to the detection
processes. Conversely, later eye-tracking measures such as rereads and spill-over
effects on subsequent sentences reflect later processing and are relatively more
sensitive to reader-initiated (including possible repair) processes.
The current findings show that text-based and knowledge-based validation
processes follow distinct trajectories in the very early stages of the processing
of incoming information. Knowledge-based validation influences all early processes
considered in this study; validation against earlier text also influences these processes,
but in qualitatively different ways depending on the presence or absence of
knowledge violations. If the textual information is incongruent with the preceding text
but fits the reader’s background knowledge, then the reader is likely to re-inspect
the textual information. In contrast, if the textual information is incongruent with
prior text and also violates the reader’s background knowledge, then the combined
inconsistencies lead to longer reading times (over and above the already longer times
due to the background knowledge inaccuracy), possibly reflecting more pervasive
checking of textual input with background knowledge.

Interestingly, whereas initial text-based and knowledge-based validation
processes show different processing patterns, later text-based and knowledge-based
validation processes (e.g., regression path duration, re-reading probability,
second-pass reading time, and several measures on the spill-over sentences) seem
relatively similar. Insofar as later processing measures reflect repair processes,
results suggest that repair processes for both types of inconsistencies involve a
similar palette of actions and sources. This may reflect that the final, adjusted mental
representation must fit with both contextual information and the reader’s existing
knowledge base. It is worth noting that the processing of knowledge inaccuracies
required a more intensive, prolonged validation process (in line with van Moort et al.,
2018), reflected in the presence of some spill-over effects (i.e., effects on first-pass
reading of the spill-over sentence) for inconsistencies with background knowledge
but not with text. Thus, knowledge inaccuracies in our study seemed more difficult to
repair than textual incongruencies. It could be that in the case of knowledge
inaccuracies the information has to be validated against a more elaborate network
(i.e., the existing knowledge base) than against the episodic memory trace of the text
representation, and therefore it may take longer to activate relevant information. As
mentioned above, this could also be caused by differences in strengths for the two
types of inconsistencies.
In all, the results provide compelling evidence that the source of the
incoherence influences processing from a very early stage. Both types of
inconsistencies are detected early in processing with each triggering different
processes. In comparison, in later processing the toolbox of (repair) processes for
text incongruencies and knowledge inconsistencies seems rather similar.

The role of working memory in validation

The findings indicate that working memory modulates later processing (i.e.,
regressions, rereading): readers with a larger working-memory capacity made fewer
regressions and were less likely to reread targets than those with smaller working-
memory capacity. This suggests that readers adapt their later processing strategies
depending on the resources they have available. Speculatively, readers with a larger
working-memory capacity may have more relevant information available for
processing, enabling them to avoid costly re-reading and reprocessing. In contrast,
readers with a smaller working-memory capacity have less relevant information
activated and, thus, may need to look back in the text to construct a coherent situation
model.
No working-memory capacity effects were observed for the early processes.
This suggests that early validation processes require few resources and is consistent
with the notion that such processes are relatively passive (e.g., RI-Val and two-step
model of validation) whereas later validation processes are more resource demanding
and reader-initiated.
In an earlier study using sentence-by-sentence self-paced reading,
van Moort et al. (2018) did observe an effect of working-memory capacity on reading
times for knowledge inaccuracies but not text incongruencies, suggesting that
knowledge-based validation is, in fact, resource demanding. Because the studies
used the same materials but differed in presentation mode (sentence-by-sentence vs.
texts presented in their entirety), it seems plausible that the constraints imposed by
presentation mode may account for the different patterns of results. For example,
during sentence-by-sentence presentation readers cannot look back to related
information to resolve an inconsistency. Therefore, they may attempt to validate
information for each sentence immediately and meticulously before proceeding in the
text (Chung-Fat-Yim et al., 2017; Koornneef et al., 2019; Koornneef & Van Berkum,
2006) and, also, may need to rely more on their memory representation to conduct
the validation (Gordon et al., 2006). As a result, sentence-by-sentence reading may
elicit a greater effect of differences in working-memory capacity than reading of a text
presented in its entirety. The potential effect of presentation mode on discourse-level
comprehension processes is worth closer scrutiny as it would have important
consequences for the interpretation of results from studies using sentence-by-
sentence reading.

Broader implications and future directions

Successful comprehension requires readers to build a coherent, meaningful
mental representation or situation model of a text. An essential aspect of building such a
mental representation is that readers routinely validate to what extent incoming
information is consistent with what they already know. The current study shows that
the processes involved in coherence monitoring depend on validation against both
contextual information and background knowledge. Moreover, these sources exert
their influence very early in the processing of new text information and they do so in
distinct ways. The current conclusions are consistent with, but also considerably
expand, current models of validation (e.g., the RI-Val model; Cook & O’Brien, 2014;
O’Brien & Cook, 2016a, 2016b; the two-step model of validation; Isberner & Richter,
2014). They also are consistent with neuroimaging findings (Chapter 3 of this
dissertation), revealing brain regions that seem mostly involved in either knowledge-
based processing (e.g., dorsomedial Prefrontal Cortex) or text-based processing
(right Inferior Frontal Gyrus) and regions that are affected by the two sources of
information interactively (e.g., Precuneus and left Inferior Frontal Gyrus).

In addition to the text features that were the focus of this study, individual
differences are likely to affect monitoring processes, especially in the later
components of validation. We considered working memory, which indeed seemed
to impose a capacity constraint on these later processes. Other individual difference
factors that may differentially affect monitoring and repair processes are the standards
of coherence that the reader applies or the toolbox of repair strategies he/she
has available.
Comprehension monitoring occurs in the context of reading, as in this study
and most validation research, but of course also in many other contexts of life. When
encountering (fake) news, for example, one needs to validate whether the news is
internally congruent and accurate with respect to world knowledge. Paradigms and
models such as those discussed in this paper may provide a fruitful starting point for
investigations of people’s susceptibility to such (un)reliable information sources.

5
Purposeful validation: Are
validation processes and the
construction of a mental
representation influenced by
reading goal?

This chapter is based on:


Van Moort, M.L., Koornneef, A., & van den Broek, P.W. (under revision). Purposeful
validation: Do reading goals affect monitoring processes during reading and the
construction of a mental representation? Journal of Educational Science.
Abstract
People read for many different reasons. These goals affect the cognitive
processes and strategies they use during reading. Understanding how reading goals
exert their effects requires investigation of whether and how they affect specific
component processes, such as validation. We investigated the effects of reading goal
on text-based and knowledge-based validation processes during reading and on the
resulting offline mental representation. We employed a self-paced sentence-by-
sentence contradiction paradigm with versions of texts containing target sentences
that varied systematically in congruency with prior text and accuracy with background
knowledge. Participants were instructed to read for general comprehension or study.
Memory for text information was assessed the next day. We also measured the degree
to which each text topic was novel to each reader, as well as readers’ working-memory capacity.
Results show that reading goals affect readers’ general processing but provide no
clear evidence that reading goals influence online validation processes. However,
reading goal effects on readers’ memory for target information did depend on the
congruency of that information with the preceding text: Reading for study generally
resulted in better memory for target information than reading for comprehension did,
but not for target information that was incongruent with prior text. These results
suggest that reading goals may not influence validation processes directly but affect
the processes that take place after the detection of an inconsistency – particularly in
the case of incongruencies with prior text.

Keywords: comprehension, reading goals, monitoring, background knowledge, memory, validation

Introduction
Reading is a purposeful activity. People can have many different reasons
for reading a text: they can read for pleasure, to learn for school, to obtain instructions,
and so on. It is clear that these different goals affect the cognitive processes and
strategies readers use when they proceed through a text (Britt et al., 2018; Linderholm
et al., 2004; McCrudden et al., 2011; van den Broek et al., 1999, 2001). Changes in
cognitive processes, in turn, affect readers’ memory for text information (Lorch et al.,
1993, 1995; van den Broek et al., 2001), particularly memory for text information that
is relevant to their reading purpose (Anderson, Pichert, & Shirey, 1983; Baillet &
Keenan, 1986; Hyönä, Lorch, & Kaakinen, 2002; Pichert &
Anderson, 1977).
To understand how reading goals modulate reading processes and outcomes,
it is necessary to investigate how they affect specific component processes that occur
during reading. Successful comprehension requires readers to continually use
various forms of information – for example, semantic (the meaning of words), syntactic
(grammatical), and pragmatic (their understanding of the world) – to build a coherent,
meaningful mental representation or situation model of a text (Graesser et al., 1994;
Johnson-Laird, 1983; Kintsch, 1988; van den Broek, 1988; Zwaan & Singer, 2003).
An essential aspect of building such representations is that readers monitor to what
extent the incoming information is both coherent with prior information in the text (i.e.,
congruent) and valid with respect to their background knowledge (i.e., accurate)
– a process called validation (O’Brien & Cook, 2016a; Richter & Rapp, 2014; Singer,
2013, 2019; Singer et al., 1992). By validating incoming information readers establish
coherence during reading and protect the emerging mental representation against
incongruencies and inaccuracies.
In the current study we investigated whether a reader’s purpose for reading
affects validation processes and, if so, in what manner. As described below, prior
studies on the influence of reading purpose on comprehension involved examinations
of how people read and learn valid, accurate information (Bohn-Gettler & Kendeou,
2014; Linderholm & van den Broek, 2002; Narvaez et al., 1999; Salmerón et al., 2010;
van den Broek et al., 2001; Yeari et al., 2015). But in daily life people are not always
presented with accurate information; they frequently encounter ideas and concepts
that are inaccurate or incongruent, representing misinformation or even fake news
(Richter & Rapp, 2014). Therefore, it is crucial that we understand how reading
purpose may affect the processing of texts containing false or incongruent information
and readers’ subsequent memory for those texts. To this end, we considered the
influence of reading goals on validation processes during reading as well as their
effects on the final product of reading a text (i.e., the offline memory representation).

Purposeful reading

There is considerable evidence that readers’ purpose for reading affects
general reading processes and comprehension (Britt et al., 2018; Cain, 1999;
Kaakinen & Hyönä, 2005, 2010; Narvaez et al., 1999; van den Broek et al., 2001; Van
den Broek et al., 1995). The development of such reading goals is generally assumed
to be influenced by the instructions provided by the reader – either directly or in
interaction with the personal intentions of the reader (e.g., McCrudden, Magliano, &
Schraw, 2010; McCrudden & Schraw, 2007). Such instructions can highlight discrete
text elements by posing pre-reading questions or objectives (e.g., by prompting
readers to identify specific text segments), prompt individuals to read a text from a
designated reference point (e.g., from a particular perspective) or prompt individuals
to read for a general purpose (e.g., reading for study, reading for entertainment,
reading for general comprehension) (Bohn-Gettler & Kendeou, 2014; Bråten &
Samuelstuen, 2004; Cerdán & Vidal-Abarca, 2008; Linderholm & van den Broek,
2002; Narvaez et al., 1999; Rouet et al., 2001; Salmerón et al., 2010; van den Broek
et al., 2001; Yeari et al., 2015). For example, in some studies readers were asked to
read a text describing a location and evaluate whether that location would be suitable
for living (e.g., Hyönä et al., 2002; McCrudden et al., 2010; McCrudden & Schraw,
2007) or to read a text describing a house from the perspective of a potential home
buyer or a burglar (e.g., Baillet & Keenan, 1986; McCrudden, Schraw, & Kambe,
2005). In other studies, more general instructions were given to modify readers’
criteria for how well or how deeply they should process the text, for example by
contrasting superficial or lower-effort reading purposes (e.g., reading for pleasure or
proofreading) with deeper or more effortful reading purposes (e.g., reading in
preparation for an exam) (Bohn-Gettler & Kendeou, 2014; Linderholm & van den
Broek, 2002; Narvaez et al., 1999; Salmerón et al., 2010; van den Broek et al., 2001).
Results of these studies show that the way people read and what information
they acquire systematically varies as a function of reading purpose. These studies
investigated various aspects of reading comprehension. Some studies have focused
on the effects of reading goals on the cognitive processes that take place during
reading (i.e., online), whereas others have focused on the outcome of comprehension
(i.e., the offline representation). Online studies have shown that relevance instructions
affect readers’ attention toward relevant and irrelevant information (Goetz et al., 1983;
Kaakinen et al., 2002; McCrudden et al., 2005). Furthermore, readers with more
effortful general reading purposes (e.g., reading for study) spend more time reading
the texts (Yeari et al., 2015) and engage in more coherence-building processes during
reading (e.g., generating connecting, explanatory, and predictive inferences) than
readers with superficial or lower-effort reading purposes (e.g., reading for
entertainment) (Linderholm & van den Broek, 2002; Lorch et al., 1993; Narvaez et al.,
1999; van den Broek et al., 2001). In general, it seems that demanding reading
purposes lead to more careful text processing than superficial reading purposes
(Lorch et al., 1993, 1995; van den Broek et al., 2001). With respect to the offline mental
representation, more demanding reading purposes result in the construction of a
more coherent text representation and in better memory for the text than superficial
reading purposes (Britt et al., 2018; Linderholm et al., 2004; Lorch et al., 1993, 1995;
van den Broek et al., 2001; Yeari et al., 2015).

Validating mental representations

To protect the mental representation of a text against incongruencies and
inaccuracies, readers routinely validate incoming information against various sources
of information – most notably the preceding text and their own background knowledge
(Isberner & Richter, 2014a; Nieuwland & Kuperberg, 2008; O’Brien & Cook, 2016a,
2016b; Schroeder et al., 2008; Singer, 2006). In describing the cognitive architecture
of validation, theoretical models assume distinct components of validation: a
coherence-detection component and a post-detection processing component (Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). The coherence-detection component, involved in detecting
(in)consistencies, is the main focus of the RI-Val model of comprehension (Cook &
O’Brien, 2014; O’Brien & Cook, 2016a, 2016b). In this model, validation is described
as one of three processing stages – resonance, integration, and validation – that
comprise comprehension. According to the model, incoming information activates
related information from long-term memory via a low-level passive resonance
mechanism (Myers & O’Brien, 1998; O’Brien & Myers, 1999). This activated
information then is integrated with the contents of working memory and these linkages
made during the integration stage are validated against information in memory that is
readily available to the reader (i.e., information that either already is part of
working memory or easily can be made available from long-term memory) in a single,
passive pattern-matching process (e.g., McKoon & Ratcliff, 1995; Myers & O’Brien,
1998; O’Brien & Albrecht, 1992). These contents of active memory include both
portions of the episodic representation of the text (i.e., context) and general world
knowledge. In addition, the model includes a coherence threshold: a point at which
processing is deemed ‘good enough’ for the reader to move on in a text. This
threshold is assumed to be flexible: readers may wait for more or less information to
accrue before moving on in the text depending on variables associated with the
reader, the task and the text (O’Brien & Cook, 2016b). The three processes are
assumed to have an asynchronous onset and to run to completion over time,
regardless of whether the reader has moved on in the text (i.e., reached their
coherence threshold).
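To make the flexible-threshold idea concrete, the mechanism described above can be caricatured in a few lines of code. This is purely an illustrative sketch of our own, not code from the RI-Val authors: the linear evidence accrual, the onset and rate values, and the mapping of a reading goal onto a particular threshold are all arbitrary assumptions made for illustration.

```python
# Illustrative toy sketch only -- not from the RI-Val authors. It caricatures
# two ideas from the model: output of the validation process accrues over
# time, and a flexible coherence threshold determines when the reader moves
# on. The onset, rate, and threshold values are arbitrary assumptions.

def validation_evidence(t, onset=2.0, rate=0.5):
    """Evidence from the (asynchronously starting) validation process,
    accruing linearly once the process has begun."""
    return max(0.0, (t - onset) * rate)

def move_on_time(threshold, dt=0.01, t_max=20.0):
    """First moment at which accrued evidence reaches the reader's
    coherence threshold, i.e., when processing is 'good enough'."""
    t = 0.0
    while t < t_max:
        if validation_evidence(t) >= threshold:
            return t
        t += dt
    return None  # threshold never reached within the window

# A lower threshold ('good enough' sooner) lets the reader move on earlier;
# a higher threshold (e.g., under a demanding reading goal) delays moving on,
# leaving less validation to spill over into the next sentence.
lenient = move_on_time(threshold=0.5)
strict = move_on_time(threshold=1.5)
```

In this caricature, raising the threshold simply postpones the move-on point; in the model's terms, more of the validation process completes before the reader proceeds, consistent with the idea that processes continue regardless of whether the reader has moved on.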
Once detected, inconsistencies may trigger further processing. Such
post-detection processes include possible efforts to repair coherence triggered by
the detection of the inconsistency as elaborated in a second validation
model, the two-step model of validation (Isberner & Richter, 2014a; Richter, 2011;
Richter et al., 2009; Schroeder et al., 2008). In this model, validation is described
as consisting of two components: (1) epistemic monitoring (i.e., detecting
inconsistencies) during a comprehension stage, followed by (2) optional epistemic
elaboration processes (e.g., resolving inconsistencies) during an evaluative stage
(e.g., Isberner & Richter, 2014; Richter, 2011; Richter, Schroeder, & Wöhrmann, 2009;
Schroeder et al., 2008). According to this model the initial detection of inconsistencies
(i.e., epistemic monitoring) is a routine part of comprehension. Similar to the RI-Val
model, these detection processes are memory-based, pose few demands on
cognitive resources, and are not dependent on readers’ goals (Richter et al., 2009). If
an inconsistency is detected, readers may initiate epistemic elaboration processes to
resolve the inconsistency. Such elaboration processes may take place during reading
(e.g., generating elaborative and bridging inferences to establish hypothetical truth
conditions) or after reading of a text is completed (e.g., searching for evidence that
could support dubious information). These processes only occur when readers are
motivated and have enough cognitive resources available, as these processes are
assumed to be slow, resource-demanding and under strategic control of the reader
(Maier & Richter, 2013; Richter, 2011).
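The two-step logic can be summarized in a small decision sketch. Again, this is our own hedged pseudoformalization rather than code from the model's authors; the function name, argument names, and return labels are invented shorthand for the two stages described above.

```python
# Hedged pseudoformalization of the two-step model (our own labels, not the
# authors'): step 1, routine epistemic monitoring, always runs; step 2,
# epistemic elaboration, is optional and gated on motivation and resources.

def two_step_validation(consistent, motivated, has_resources):
    # Step 1: epistemic monitoring -- routine, memory-based, low-cost.
    inconsistency_detected = not consistent
    if not inconsistency_detected:
        return "integrate"       # information passes the gatekeeper
    # Step 2: epistemic elaboration -- slow, resource-demanding, and under
    # strategic control, so it runs only when the reader can and wants to.
    if motivated and has_resources:
        return "elaborate"       # e.g., bridging inferences, repair attempts
    return "detect_only"         # inconsistency noticed but not resolved
```

The sketch makes the model's asymmetry explicit: detection is unconditional, whereas elaboration depends jointly on the reader's motivation and available cognitive resources.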
Theoretical accounts such as the two-step model of validation emphasize that
validation processes function as a gatekeeper for the quality of the mental
representation of a text and as such assume a close relation between online validation
processes and offline memory products (e.g., Isberner & Richter, 2014; Singer, 2006,
2019). On the one hand, successful validation (i.e., information is deemed congruent
and accurate) should result in the integration of the incoming information into the
emerging mental representation thereby increasing the likelihood that it will be
encoded in readers’ long-term memory (Schroeder et al., 2008; Singer, 2006, 2019).
On the other hand, if validation fails (i.e., incoming information is deemed inaccurate
or incongruent), integration of the incoming information into the reader’s mental
representation and long-term memory fails – making this information harder to
remember. Consistent with this idea, readers tend to have poorer memory for
inaccurate or incongruent text information than for accurate or congruent text
information (e.g., Schroeder et al., 2008; van Moort, Jolles, et al., 2020).
As described, current theoretical frameworks offer a time course and a
rudimentary cognitive architecture for validation processes. Furthermore, they
generally agree that incoming information is routinely validated against a reader’s
evolving situation model of a text (e.g., Isberner & Richter, 2014; Nieuwland &
Kuperberg, 2008; O’Brien & Cook, 2016a, 2016b; Schroeder et al., 2008;
Singer et al., 1992; Singer, 2006, 2013). Because a situation model comprises
both textual and world knowledge information, most accounts assume that both
sources can affect validation processes, yet few accounts make an explicit distinction
between these sources in their depiction of the cognitive architecture of validation.
Recent research shows that such a distinction is essential, as incoming information may
be validated against contextual information and background knowledge through
dissociable, interactive, validation channels involving (partially) distinct neurocognitive
mechanisms (van Moort et al., 2018, 2020, 2021). Furthermore, although readers are
assumed to use both sources of information for validation, the dominance of one
informational source over the other may depend on the strength of the reader’s topic-
relevant world knowledge (Cook & O’Brien, 2014) versus the strength of the
contextual information (Cook & Guéraud, 2005; Myers, Cook, Kambe, Mason, &
O’Brien, 2000; O’Brien & Albrecht, 1991).

Validation and reading goals

The degree to which readers validate incoming information may depend on


their purpose for reading (Singer, 2019). Studies have shown that readers' sensitivity
to false or implausible information varies with their goals (e.g., Rapp, Hinze, Kohlhepp,
& Ryskin, 2014). For example, Rapp et al. (2014) presented participants with stories
containing both accurate and inaccurate assertions while manipulating the
instructions. Participants were asked to read for comprehension or to engage in
evaluative activities (e.g., fact checking and immediately correcting erroneous content
or highlighting inaccuracies without changing the content). Instructions that promoted
evaluative activities reduced the intrusive effects of misinformation on post-reading
tasks (e.g., judging the validity of statements), as compared to the performance of
participants who merely read the text for comprehension.
The potential role of reading goals in online validation processes may take
several forms, depending on when reading goals exert their influence. First, they may
affect coherence-detection processes. In general, the RI-Val model (Cook & O’Brien,
2014; O’Brien & Cook, 2016a, 2016b) and the two-step model of validation
(Isberner & Richter, 2014; Richter, 2015; Schroeder et al., 2008) do not predict strong
effects of reading goals on the coherence-detection component of validation
(i.e., epistemic monitoring) because both accounts assume that coherence detection
involves routine processes that are, by and large, not under strategic control of the
reader. However, in terms of the RI-Val model, people who read for study may set a
higher coherence threshold (i.e., set it later in time), accumulating more ‘evidence’
from the validation process before deeming information (in)consistent and continuing
to the next sentence. If so, reading will be slower for people with a study goal and
there is a greater chance that validation is complete at the end of reading a sentence.
As a result, the chance that validation processes continue while reading subsequent
sentences of a text (i.e., spill over) decreases. Second, reading goals may affect post-
detection epistemic elaboration processes (cf. Isberner & Richter, 2014; Richter,
2015; Schroeder et al., 2008). Reading for study may result in investing more effort
and resources in resolving inconsistencies than does reading for general
comprehension. These elaborative repair processes can be performed immediately
after the detection of an inconsistency – thereby inflating the processing times of
incoherent sections of a text – but they may also (or still) be carried out after a text
has been read. Consequently, the influence of reading goals may manifest itself early
or late in the epistemic elaboration phase. Third, it is possible that reading goals do
not influence the manner in which readers validate incoming information – neither in
the coherence-detection phase nor in the epistemic elaboration phase – and only
influence post-validation processes such as consolidating the newly read information
in memory.
In reflecting on these options, it should be emphasized that all possible effects
of reading goals on validation may vary depending on the source of a violation. For
example, reading for study may focus readers more on the text itself or, alternatively,
encourage them to recruit more relevant background knowledge into the mental
representation. Hence, knowledge inaccuracies or contextual incongruencies (or
both) may become more salient due to a study reading goal, resulting in strengthening
of any observed effects.
As mentioned above, reading goals may also affect post-validation or offline
memory products. Prior research shows that reading for study generally results in
better memory for congruent and accurate text information (e.g., Lorch et al., 1995,
1993; van den Broek et al., 2001; Yeari et al., 2015) but whether the same holds for
incongruent and inaccurate text information is unknown. Given that the quality of the
offline text representation is assumed to be influenced by the cognitive processes that
readers perform online (Goldman & Varma, 1995; Kintsch, 1988; Trabasso & Suh,
1993; van den Broek et al., 1999; Zwaan & Singer, 2003), comparing online and offline
patterns may provide insight into the underlying mechanisms. For example, if reading
for study triggers more extensive attempts to integrate the inconsistency into the
representation, you would expect readers to show increased online processing
difficulty and better memory for the inconsistent information. However, if reading for
study triggers more thorough validation processes the detection and correction of the
inconsistency may hinder the integration of the inconsistent information into the
representation. If so, you would expect increased online processing difficulty and
poorer memory for the inconsistent information.

Reader factors and validation

The degree to which the information in a text is novel to the reader plays a
critical role in many online comprehension processes, including inference making
(Cain et al., 2001; Singer, 2013), comprehension monitoring (Richter, 2015), and
validation processes (e.g., Singer, 2019). With respect to validation, the more novel
the information is to the reader, the less knowledge the reader has against which to
validate the accuracy of the textual information. Thus, novelty likely affects
knowledge-based validation processes in particular: Disruptions due to inaccurate
information will be stronger when a reader is highly familiar with a topic than when
most of the information in a text is new to the reader. In contrast, topic novelty is likely
to either have little impact on the degree of conflict readers experience when they
encounter contextual violations (i.e., incongruencies), or to have a reverse impact:
Readers that lack sufficient topic-relevant knowledge may validate primarily against
contextual information, resulting in stronger disruptions for textual incongruencies.
A reader’s memory for a text may also be affected by the degree to which
the reader has topic knowledge. In general, knowledge about a topic facilitates
encoding of new information into a long-term memory representation, resulting in
better memory for text information (e.g., Alexander, Kulikowich, & Schulze, 1994;
Royer, Carlo, Dufresne, & Mestre, 1996; Schneider, Körkel, & Weinert, 1989; Voss &
Bisanz, 1985). This knowledge effect may influence memory for all text information,
irrespective of its accuracy or congruency, or only memory for accurate or congruent
information. For inaccurate or incongruent information, having prior knowledge about
a topic may have no positive effect or it may even impede memory, as conflicting
information in the existing knowledge base may interfere with the encoding and/or
retrieval of the inconsistent information. These scenarios may apply to both text and
knowledge violations but perhaps most to knowledge violations, given that processing
of text violations depends less on readers’ prior knowledge.
In addition to topic novelty, individual differences in working-memory
capacity may affect validation processes (Singer & Doering, 2014). Working memory
constrains the cognitive resources available to the reader for information processing
and storage (Baddeley, 1998; Baddeley & Hitch, 1974; Cowan, 1988, 2017). In the
context of our study, it limits the amount of information that is available for the
validation process (Hannon & Daneman, 2001; Singer, 2006) and, thus, may interfere
with the ability to detect and resolve inconsistencies while reading a text. As a result,
it may create a bottleneck during validation processes that may manifest itself in
different ways. On the one hand, if the bottleneck primarily affects the detection of
inconsistencies (i.e., coherence-detection phase), readers with a lower working-
memory capacity may experience less disruption due to inconsistent information in a
text than readers with a higher working-memory capacity, because lower working-
memory capacity readers are less likely to detect the inconsistency. On the other
hand, if the bottleneck primarily affects the repair processes that are triggered by
inconsistencies (i.e., epistemic elaboration phase), readers with a lower working-
memory capacity may experience more disruption due to inconsistent information
than readers with a higher working-memory capacity, because lower working-
memory capacity readers may have relatively fewer resources to execute the
necessary inconsistency resolution. Finally, the impact of reading goals on
comprehension processes in general depends on readers’ working-memory capacity
(Linderholm & van den Broek, 2002; Narvaez et al., 1999; van den Broek et al., 1993,
2001), and this may apply to validation processes as well.

The current study

The current study aims to provide insight into how reading goals affect
validation processes and products. We compare potential online and offline effects of
reading for study (readers were instructed to memorize the text information as their
memory for the text contents would be tested – a commonly used high-effort reading
purpose) with reading for general comprehension (readers were instructed to read
for general comprehension and were unaware of the memory test – the most commonly
investigated purpose and the default assumption for reading comprehension models)
(Kendeou et al., 2011). In addition, we examine how online validation processes are
translated into offline memory representations. Because text-based and knowledge-
based validation processes may be partially distinct (van Moort et al., 2018, 2020,
2021), we distinguish between these sources in our examinations. Participants read
expository texts that either did or did not contain information that conflicted
with the preceding text and/or readers’ background knowledge (based on
Van Moort et al., 2018) in a self-paced sentence-by-sentence reading task. Reading
times were recorded as a measure of readers’ difficulty integrating statements into a
mental representation as texts unfold (Albrecht & O’Brien, 1993; Cook et al., 1998b).
To assess post-reading memory, we employed a recognition memory task the
following day. A secondary aim was to investigate whether potential effects of reading
goals on validation processes and products were modulated by the degree to which
the text topic is novel to a reader and by individual differences in working-memory
capacity. As participants’ knowledge on the information presented in a text may vary
across texts, we asked participants to indicate for each text (immediately after reading
the text) how much of the information in that text was novel to them. To investigate
the possible effect of individual differences in working-memory capacity we used the

Swanson Sentence Span task (Swanson et al., 1989) as a measure of participants’
working-memory capacity.

Methods
Participants

One hundred and twenty undergraduate students who were native speakers
of Dutch (25 men, 95 women), aged 18-34 years (M = 21.6, SD = 3.13), participated.
All participants had normal or corrected-to-normal
eyesight and none had diagnosed reading or learning disabilities. Participants
provided written informed consent prior to testing and received financial
compensation for participating. All procedures were approved by the Leiden
University Institute of Education and Child Studies ethics committee and conducted
in accordance with the Declaration of Helsinki.

Materials

We used the texts of van Moort, Jolles, et al. (2020), which were based on Rapp (2008). The forty
texts are about well-known historical topics. All texts were on different topics and their
contents were unrelated. The texts were normed to ensure that the
presented facts were common knowledge in our sample (see Chapter 2 for a more
detailed description of the norming study). Each text contained a target sentence that was either
true or false with respect to the readers’ background knowledge; at the same time, the
target could be either supported or called into question by the preceding text. Hence,
the context could bias towards either the true or the false target, making it either
congruent or incongruent with the target (see sample text in Table 5.1). Four different
versions of each of the 40 texts were constructed, by orthogonally varying the
accuracy of the target with background knowledge (i.e., true/false) and the
congruency of the target with the preceding context (i.e., congruent/incongruent). It
is important to note that contexts biasing towards false targets did not include
erroneous information; although the phrasing of the context sentences called into
question the certainty of events stated in the target, all facts described in the context
sentences were historically correct.

Table 5.1. Sample text with the four text versions (translated from Dutch original)

Introduction (identical across versions):
In 1865, a Frenchman named Laboulaye wished to honor democratic progress in the U.S.
He conceptualized a giant sculpture along with artist Auguste Bartholdi.

Bias-true context (sentences 3-7):
Their ‘Statue of Liberty’ would require extensive fundraising work.
They organized a public lottery to generate support for the sculpture.
American businessmen also contributed money to build the statue’s base.
Despite falling behind schedule, the statue was completed.
The statue’s base was finished as well and ready for mounting.

Bias-false context (sentences 3-7):
Their ‘Statue of Liberty’ would require extensive fundraising work.
Raising the exorbitant funds for the statue proved an enormous challenge.
Because of financial difficulties France could not afford to make a gift of the statue.
Fundraising was arduous and plans quickly fell behind schedule.
Because of these problems, completion of the statue seemed doomed to failure.

True target (sentence 8):
The Statue of Liberty was delivered from France to the United States.

False target (sentence 8):
The Statue of Liberty was not delivered from France to the United States.

Coda (identical across versions):
The intended site of the statue was a port in New York harbor.
This location functioned as the first stop for many immigrants coming to the U.S.

The four versions cross knowledge accuracy with text congruency:
Target congruent, target true: bias-true context + true target
Target congruent, target false: bias-false context + false target
Target incongruent, target true: bias-false context + true target
Target incongruent, target false: bias-true context + false target

Each text consisted of ten sentences (see Table 5.1). Sentences 1-2 were
identical across conditions and introduced the topic. Sentences 3-7 differed in
content, depending on context condition (congruent/incongruent). On average, the
bias-true context consisted of 64 words (SD = 4) and 400 characters (SD = 22) and
the bias-false context consisted of 66 words (SD = 4) and 406 characters (SD = 27).
Sentence 8 was the target sentence, which was either true or false. Overall, targets
were equated for length: true and false targets contained on average 9 words
(SD = 2) and 60 characters (SDtrue = 11; SDfalse = 10). Half of the true targets and half
of the false targets included the word not/never and half did not; accuracy of the
targets was manipulated by either adding or omitting this negation. Sentences 9-10
were identical across conditions. Sentence 9 was the spill-over sentence and did not
elaborate on the fact potentially called into question in the target. Sentence 10
concluded the text. On average, texts contained 121 words (SD = 5) and 763
characters (SD = 37), across all four text versions.
To implement a repeated-measures design we used a Latin square to
construct four lists, with each text appearing in a different version as a function of text
context (congruent or incongruent with target) and target (true or false) on each list.
The order of the texts was randomized. Each participant received one list and, hence,
read one version of each text.
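The Latin-square assignment can be sketched as follows. This is an illustrative reconstruction in Python, not the authors’ materials code; the version labels and the rotation rule are assumptions that merely satisfy the constraints described above.

```python
from itertools import product

# The four text versions: accuracy (true/false) crossed with congruency
# (congruent/incongruent). Labels are illustrative.
VERSIONS = [f"{acc}/{cong}" for acc, cong in
            product(["true", "false"], ["congruent", "incongruent"])]

def latin_square_lists(n_texts=40, n_lists=4):
    """Rotate versions over texts so that each text appears in a different
    version on every list and each list contains all versions equally often."""
    return [{text: VERSIONS[(text + l) % n_lists] for text in range(n_texts)}
            for l in range(n_lists)]
```

Each participant receives one list, so across the four lists every text is read in all four versions while any single participant sees each text only once.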

Experimental tasks

Reading task

Participants read the 40 texts in two blocks. Texts were presented sentence-
by-sentence, while reading times were recorded. The presentation rate was self-
paced and sentences remained on screen for a maximum of 10s. A fixation cross
(1000 ms) was presented between texts.
At the start of the reading task, participants were instructed to read for study
(“Read the texts attentively. It is important that you memorize the information in the
texts, as your memory for their contents will be tested tomorrow”) or to read for
general comprehension (“Read the texts attentively”). Participants that were
instructed to read for general comprehension were unaware of the memory test and
were told that they had to perform additional cognitive tests during the second session
that were part of another experiment. Participants were reminded of the instructions
between blocks.

Novelty rating

After reading each text, participants indicated how much of the information
in the text they just read was new to them on a visual analog scale. This scale was
presented as a horizontal 100-mm line on which the novelty of the information in the
text was represented by a point between the extremes ‘nothing is new’ and
‘everything is new’. Participants’ responses on this scale yielded a score ranging from
0 (nothing is new) to 100 (everything is new), indicating how familiar
they were with the contents of each text.

Recognition memory task

The recognition memory task (based on van Moort et al., 2020) consisted of
160 items (40 target, 40 context, 40 neutral, and 40 distractor items) that were
presented in random order. Participants were presented with single sentences
containing information that either matched or mismatched the information they
encountered in the reading task (e.g., when they were presented with the information
that the statue of liberty was delivered to the US during the reading task they could
be presented with information stating either that the statue of liberty was delivered to
the US or that it was not delivered to the US). The sentences that were presented in
the memory task were not the exact sentences that were presented during the reading
task. They were adapted to make them comprehensible outside the context of the text
(e.g., anaphoric references were replaced with the original antecedent to facilitate
sentence comprehension). Participants were instructed to base their answers on the
information presented in the sentences, not on whether they had seen this exact
sentence before. For each sentence participants indicated whether they recognized
the information from the texts they read the day before (yes/no). Half of the recognition
items were consistent with the version that was presented in the reading task (correct
response ‘yes’), the other half was not (correct response ‘no’). Half of the presented
items contained the word ‘not’ or ‘never’ and half did not (both for true and false
items). Half of the recognition items were from context versions that were presented
in the reading task; the other half were from the other context version. Thus, correct
recognition responses included correct hits (sentence was present during the reading
task and participants indicated that they read the sentence) and correct rejections
(sentence was not present during the reading task and participants indicated that they
did not read the sentence). Neutral sentences were presented in the reading task and
stemmed from neutral parts of the text (i.e., sentence 1, 2, 9 or 10). Distractor
sentences were sentences that had not been presented in the reading task.
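The scoring of these yes/no responses can be sketched as follows; a minimal illustration (function and label names are my own, not from the materials) of how a response counts as correct only when it is a hit to presented information or a rejection of non-presented information.

```python
def score_response(information_presented: bool, said_yes: bool) -> str:
    """Classify a single recognition response.

    information_presented: whether the sentence's information matched the
    version the participant actually read (not whether the exact wording
    was repeated)."""
    if information_presented:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

def proportion_correct(trials):
    """Proportion of correct hits and correct rejections over a list of
    (information_presented, said_yes) pairs."""
    correct = sum(score_response(p, r) in ("hit", "correct rejection")
                  for p, r in trials)
    return correct / len(trials)
```

For instance, a ‘yes’ to a target sentence whose information was read counts as a hit, while a ‘no’ to the other version of that target counts as a correct rejection; both enter the proportion-correct score.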

Measures

Working-memory capacity

Working-memory capacity was measured with a Dutch version of the
Swanson Sentence Span task (Swanson et al., 1989). In this task, the experimenter
reads out sets of sentences, with set length increasing from 1 to 6 sentences as the
test progresses. At the end of each set a comprehension question is asked about one
of the sentences in the set. Participants have to remember the last word of each
sentence and recall these after answering the comprehension question. The test is
terminated when participants incorrectly recall a set of words or give an incorrect
answer to the comprehension question twice in one set. Participants earn 0.25 points
for each correct answer on the comprehension questions and each correctly recalled
set of words. The sum of these points (ranging between 0-5) is the index of working-
memory capacity.
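The scoring rule can be expressed as a one-line computation; the sketch below is illustrative (the per-set input format is an assumption), with a maximum score of 5 implying 20 correct responses in total under this scheme.

```python
def swanson_score(comprehension_correct, recall_correct):
    """Working-memory index for the Swanson Sentence Span task.

    Each correct answer to a comprehension question and each correctly
    recalled set of final words earns 0.25 points; the sum ranges 0-5.
    Inputs are per-set booleans (list lengths here are illustrative)."""
    return 0.25 * (sum(comprehension_correct) + sum(recall_correct))
```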

Procedure

Participants were tested individually in two sessions. In the first session they
completed the reading task (max. 60 minutes). After reading each text they provided
a novelty rating for that text. In the second session, which took place about 24 hours
after the first, they completed the recognition memory task (10-15 minutes), followed
by the Swanson Sentence Span Task (max. 5 minutes) and various additional
cognitive tests that were not part of the current experiment.

Analyses

To investigate the effects of the manipulations on the reading process we
conducted mixed-effects linear regression analyses on the log-transformed
reading times on target and spill-over sentences (i.e., sentences 8 and 9) and mixed-
effects logistic regression analyses on memory performance scores (i.e., probability
correct) for targets using the R package LME4 version 1.1.21 (Bates et al., 2015).
For each measure we tested a model that included the random factors
subjects and items and the following fixed factors: the main effects of our experimental
manipulations goal (study / comprehension), accuracy (target true / false),
congruency (target congruent / incongruent with context) and their interactions. In
addition, we included the main effect of novelty (the amount of novel information per
text, individual scores were median-centered) and the interactions between our
experimental manipulations and novelty. Finally, we included the main effect of
working memory capacity (individual scores were median-centered) and the
interactions between our experimental manipulations and working memory capacity.
Sum coding was applied in the main analyses (comprehension was coded as -0.5 and
study was coded as 0.5; true was coded as -0.5 and false as 0.5; congruent was coded
as -0.5 and incongruent as 0.5). For each model, residuals were normally distributed,
and variance of the random effects residuals was equal across groups for subjects
and items. We report the relevant fixed-effects estimates and the associated t-values
(for the continuous dependent variables) and z-values (for the categorical dependent
variables) in tables (see Table 5.4, 5.5, and 5.6). For ease of interpretation, we report
raw means and standard errors (in ms) for relevant main effects (in text) and back-
transformed estimates for interactions on a secondary y-axis (in figures). To obtain
fixed-effect estimates and the associated statistics for the relevant simple effects of
an interaction, pairwise comparisons were performed using the EMMEANS package
(version 1.4.4) in R. In these comparisons continuous variables were centered on
scores one standard deviation above and below the mean, respectively. We report
odds ratios (OR) as indices of effect size for logistic mixed models and estimated
effect sizes (Cohen’s d) for differences in condition means based on the approximate
formula proposed by Westfall, Kenny, and Judd (2014) for linear mixed models with
contrast codes and single-degree-of-freedom tests (see also Judd, Westfall, & Kenny,
2017). Results of the follow-up analyses are provided in the text. We do not report
degrees of freedom and p values. Instead, statistical significance at approximately the
0.05 level is indicated by |t| or |z| ≥ 1.96 (Schotter et al., 2014).
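To make these reporting conventions concrete, the sketch below (Python, purely illustrative; the analyses themselves were run in R) shows the sum codes described in the text and how estimates on the transformed scales map back to interpretable quantities: exponentiating a coefficient from the log reading-time models gives a multiplicative effect on the millisecond scale, exponentiating a logistic coefficient gives an odds ratio, and the Westfall, Kenny, and Judd (2014) approximation divides a contrast estimate by the square root of the summed variance components.

```python
import math

# Sum codes used in the main analyses (taken from the text).
CODES = {"comprehension": -0.5, "study": 0.5,
         "true": -0.5, "false": 0.5,
         "congruent": -0.5, "incongruent": 0.5}

def rt_ratio(beta_log):
    """Multiplicative effect on raw reading times implied by a coefficient
    estimated on log-transformed reading times."""
    return math.exp(beta_log)

def odds_ratio(beta_logit):
    """Odds ratio implied by a logistic-regression coefficient."""
    return math.exp(beta_logit)

def cohens_d(beta, var_subject, var_item, var_residual):
    """Approximate standardized effect size for a contrast-coded fixed
    effect in a linear mixed model (Westfall, Kenny, & Judd, 2014):
    the estimate divided by the square root of the summed variance
    components (illustrative parameterization)."""
    return beta / math.sqrt(var_subject + var_item + var_residual)
```

With the -0.5/+0.5 codes, a one-unit change in a predictor spans the two condition means, so a goal effect of β = 0.097 on log target reading times corresponds to rt_ratio(0.097) ≈ 1.10, i.e., roughly 10% longer reading times when reading for study.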

Results
Data for six participants were dropped from the analyses, as the Swanson
Sentence Span was terminated incorrectly and, thus, a reliable score for working-
memory capacity could not be calculated. In addition, items to which participants had
not responded in time on the target or spill-over sentences (i.e., within 10 sec) were
excluded from all analyses (resulting in a total loss of 0.4% of the data). Reading times
were log-transformed to correct for right-skewness. On the memory task, participants
were generally proficient in distinguishing whether they had read the information of a
sentence or not. Averaged across all targets, they scored 79% correct (SD = 40). On
sentences originating from the task (target, context, and neutral) they scored on
average 76% correct (SD = 42). On distractor sentences they scored on average 90%
correct (SD = 30). This shows they had read the texts attentively.

Effects of reading goals on validation

On target and spill-over reading times (see Table 5.2 for descriptive statistics)
we observed main effects of goal but no interactions with congruency or accuracy
(see Tables 5.4 and 5.5 for fixed-effects estimates and associated statistics). The main
effects of goal indicated that participants that read for study showed longer reading
times than participants that read for comprehension, both on target (Mstudy = 2723 ms,
SEstudy = 28; Mcomp = 2488 ms, SEcomp = 25, β = 0.097, SE = 0.03, t = 2.277, d = 0.23)
and spill-over sentences (Mstudy = 3294 ms, SEstudy = 34; Mcomp = 3027 ms, SEcomp = 31,
β = 0.102, SE = 0.05, t = 2.138, d = 0.24).
On memory scores we observed a main effect of goal (β = 0.289, SE = 0.11, z
= 2.604, OR = 1.32) and a goal * congruency interaction (see Table 5.6 for fixed-effects
estimates and associated statistics). To interpret the main and interaction effects of
congruency and goal, we conducted post-hoc pair-wise comparisons. Overall,
congruent targets were remembered better than incongruent targets both when
participants read for study (β = 0.561, SE = 0.01, z = 5.042, OR = 1.75) and when they
read for comprehension (β = 0.197, SE = 0.01, z = 2.004, OR = 1.22). As displayed in
Figure 5.1b this congruency effect was most prominent when participants read for
study. This modulation of the effect of congruency emerged because goal had a
profound influence on memory for congruent targets, i.e., targets of congruent texts
were remembered better when individuals read for study than when they read for
comprehension (β = 0.463, SE = 0.137, z = 3.389, OR = 1.59), yet no reliable simple
main effect of goal was observed for incongruent targets (β = 0.099, SE = 0.13,
z = 0.771).

Figure 5.1. Fixed effect estimates of the logit memory performance scores (probability correct) on (a)
true and false targets and (b) congruent and incongruent targets as a function of reading goal
(comprehension or study). Scales of exponentiated log-values (i.e., approximating untransformed
values) are provided as secondary y-axes on the right side of the graphs.

Table 5.2. Mean reading times and standard deviations (in ms) and mean novelty scores and standard
deviations at the regions of interest (target and spill-over sentence) for the experimental manipulations
(target congruent/incongruent, target true/false and reading goal comprehension/study).

Reading goal    Accuracy  Congruency    Target M (SD)    Spill-over M (SD)   Novelty M (SD)
Comprehension   True      Congruent     2266 (1009)      2890 (1353)         32.10 (25.81)
Comprehension   True      Incongruent   2421 (1137)      2953 (1395)         34.52 (26.86)
Comprehension   False     Congruent     2523 (1206)      3095 (1572)         39.66 (26.77)
Comprehension   False     Incongruent   2747 (1323)      3174 (1530)         38.45 (26.89)
Study           True      Congruent     2440 (1126)      3172 (1505)         35.07 (24.82)
Study           True      Incongruent   2646 (1176)      3213 (1418)         35.34 (24.90)
Study           False     Congruent     2770 (1336)      3348 (1579)         39.09 (24.40)
Study           False     Incongruent   3045 (1333)      3446 (1587)         38.60 (23.74)

Table 5.3. Mean memory performance scores on targets (in %) and standard deviations for the
experimental manipulations reading goal (comprehension/study), congruency with context (target
congruent/incongruent) and accuracy with background knowledge (target true/false).

Reading goal    Accuracy  Congruency    Memory performance M (SD)
Comprehension   True      Congruent     81 (39)
Comprehension   True      Incongruent   77 (42)
Comprehension   False     Congruent     64 (48)
Comprehension   False     Incongruent   61 (49)
Study           True      Congruent     88 (33)
Study           True      Incongruent   80 (40)
Study           False     Congruent     72 (45)
Study           False     Incongruent   61 (49)

Table 5.4. Fixed effects estimates and the associated statistics of the sum-coded model fitted for log-
transformed reading times on target sentences.

Fixed effect                                        Beta     SE      t
Intercept                                            7.772   0.031   247.367
Reading goal                                         0.097   0.043     2.277
Accuracy                                             0.118   0.011    11.079
Congruency                                           0.081   0.011     7.616
Novelty                                              0.001   0.000     3.717
WMC                                                 -0.028   0.029    -0.989
Reading goal * Accuracy                              0.039   0.021     1.833
Reading goal * Congruency                            0.024   0.021     1.118
Accuracy * Congruency                                0.015   0.021     0.716
Reading goal * Novelty                               0.000   0.000     0.728
Accuracy * Novelty                                  -0.001   0.000    -2.437
Congruency * Novelty                                -0.000   0.000    -0.667
Reading goal * WMC                                   0.013   0.057     0.233
Accuracy * WMC                                       0.005   0.014     0.391
Congruency * WMC                                     0.015   0.014     1.032
Reading goal * Accuracy * Congruency                -0.003   0.043    -0.063
Reading goal * Accuracy * Novelty                   -0.000   0.001    -0.104
Reading goal * Congruency * Novelty                  0.000   0.001     0.489
Accuracy * Congruency * Novelty                      0.001   0.001     1.773
Reading goal * Accuracy * WMC                        0.046   0.028     1.637
Reading goal * Congruency * WMC                      0.043   0.028     1.539
Accuracy * Congruency * WMC                         -0.027   0.028    -0.950
Reading goal * Accuracy * Congruency * Novelty      -0.000   0.002    -0.255
Reading goal * Accuracy * Congruency * WMC          -0.048   0.056    -0.855

Note. The following R code was used: Reading times ~ 1 + Reading goal * Accuracy * Congruency * Novelty +
Reading goal * Accuracy * Congruency * Working Memory + (1|Subject) + (1|Item).

Table 5.5. Fixed effects estimates and the associated statistics of the sum-coded model fitted for log-
transformed reading times on spill-over sentences.

Fixed effect                                        Beta     SE      t
Intercept                                            7.961   0.035   225.165
Reading goal                                         0.102   0.048     2.138
Accuracy                                             0.061   0.010     5.875
Congruency                                           0.026   0.010     2.506
Novelty                                              0.001   0.000     3.919
WMC                                                 -0.030   0.032    -0.941
Reading goal * Accuracy                              0.000   0.021     0.011
Reading goal * Congruency                           -0.005   0.021    -0.222
Accuracy * Congruency                                0.012   0.021     0.565
Reading goal * Novelty                               0.001   0.000     2.474
Accuracy * Novelty                                  -0.001   0.000    -1.575
Congruency * Novelty                                 0.000   0.000     0.550
Reading goal * WMC                                  -0.007   0.064    -0.117
Accuracy * WMC                                      -0.004   0.014    -0.286
Congruency * WMC                                     0.010   0.014     0.721
Reading goal * Accuracy * Congruency                -0.008   0.042    -0.191
Reading goal * Accuracy * Novelty                   -0.001   0.001    -1.184
Reading goal * Congruency * Novelty                  0.001   0.001     1.610
Accuracy * Congruency * Novelty                     -0.000   0.001    -0.479
Reading goal * Accuracy * WMC                        0.014   0.027     0.513
Reading goal * Congruency * WMC                     -0.012   0.027    -0.444
Accuracy * Congruency * WMC                         -0.039   0.028    -1.418
Reading goal * Accuracy * Congruency * Novelty      -0.000   0.002    -0.052
Reading goal * Accuracy * Congruency * WMC           0.114   0.055     2.077

Note. The following R code was used: Reading times ~ 1 + Reading goal * Accuracy * Congruency * Novelty +
Reading goal * Accuracy * Congruency * Working Memory + (1|Subject) + (1|Item).

Table 5.6. Fixed effects estimates and the associated statistics of the sum-coded model fitted for
memory scores on targets.

Fixed effect                                        Beta     SE      z
Intercept                                            1.113   0.077    14.472
Reading goal                                         0.289   0.111     2.604
Accuracy                                            -0.953   0.076   -12.493
Congruency                                          -0.371   0.076    -4.895
Novelty                                             -0.001   0.002    -0.398
WMC                                                  0.002   0.074     0.030
Reading goal * Accuracy                             -0.139   0.152    -0.912
Reading goal * Congruency                           -0.341   0.152    -2.246
Accuracy * Congruency                                0.144   0.152     0.951
Reading goal * Novelty                              -0.002   0.003    -0.809
Accuracy * Novelty                                   0.006   0.003     1.931
Congruency * Novelty                                -0.002   0.003    -0.592
Reading goal * WMC                                   0.234   0.147     1.590
Accuracy * WMC                                      -0.102   0.101    -1.012
Congruency * WMC                                     0.014   0.101     0.143
Reading goal * Accuracy * Congruency                -0.080   0.303    -0.265
Reading goal * Accuracy * Novelty                   -0.002   0.006    -0.397
Reading goal * Congruency * Novelty                 -0.005   0.006    -0.822
Accuracy * Congruency * Novelty                     -0.006   0.006    -1.067
Reading goal * Accuracy * WMC                       -0.248   0.200    -1.237
Reading goal * Congruency * WMC                     -0.119   0.200    -0.594
Accuracy * Congruency * WMC                         -0.001   0.202    -0.007
Reading goal * Accuracy * Congruency * Novelty       0.010   0.012     0.893
Reading goal * Accuracy * Congruency * WMC          -0.184   0.401    -0.459

Note. The following R code was used: Memory Performance ~ 1 + Reading goal * Accuracy * Congruency * Novelty
+ Reading goal * Accuracy * Congruency * Working Memory + (1|Subject) + (1|Item).

Text-based vs. knowledge-based validation

We observed inconsistency effects of congruency (β = 0.081, SE = 0.01,
t = 7.616, d = 0.26) and accuracy (β = 0.118, SE = 0.01, t = 11.079, d = 0.26) on
targets, with longer reading times for incongruent (M = 2707 ms, SE = 27) than
congruent (M = 2493 ms, SE = 25) targets and longer reading times for false
(M = 2765 ms, SE = 28) than true targets (M = 2438 ms, SE = 24), respectively. In
addition, we observed spill-over effects of congruency (β = 0.026, SE = 0.01, t = 2.506,
d = 0.06) and accuracy (β = 0.061, SE = 0.01, t = 5.875, d = 0.13), with longer reading
times on spill-over sentences following incongruent (M = 3190 ms, SE = 32) than
congruent targets (M = 3119 ms, SE = 32) and longer reading times on spill-over
sentences following false (M = 3260 ms, SE = 34) than true targets (M = 3051 ms,
SE = 30).
On memory for target information, we observed main effects of accuracy
(β = -0.953, SE = 0.08, z = -12.493, OR = 2.53) and congruency (β = -0.371, SE = 0.08,
z = -4.895, OR = 1.46): true targets (M = 0.81, SE = 0.01) were remembered better
than false targets (M = 0.65, SE = 0.01) and congruent targets (M = 0.76, SE = 0.01)
were remembered better than incongruent targets (M = 0.70, SE = 0.01) (Figure 5.1a
and 5.1b).
To investigate whether the effects we observed of congruency and accuracy
on readers’ memory for target information were mediated by their reading times on
targets we conducted multilevel structural equation modeling (MSEM) in Mplus
(version 7.31; Muthén & Muthén, 1998-2012). We specified our MSEM consistent with
the recommendations of Preacher, Zyphur, and Zhang (2010) for modeling multilevel
mediation when all variables contain both Level 1 (within-person) and Level 2
(between-person) variance (i.e., 1-1-1 mediation). Mediation analysis was performed
separately for congruency and accuracy. For both manipulations we tested a 1-1-1
mediation model with a cross-classified structure with random effects for subjects
and items that included either congruency (congruent/incongruent) or accuracy
(true/false) as a level 1 predictor, (log-transformed) reading times on targets as
mediator (level 1) and memory performance scores on targets as dependent variable
(level 1). For the estimation, a Bayesian procedure (BAYES estimator in Mplus) was
used. Results showed that the within indirect effect of congruency on memory
performance through reading times was significant (β = -0.003, SD = 0.002, p = 0.033,
95% CI [-0.006, 0.000]): longer reading times on incongruent targets (i.e., a larger
online incongruency effect) resulted in poorer memory for those targets (i.e., a larger
offline incongruency effect). The within indirect effect of accuracy on memory
performance scores was not mediated by target reading times (β = 0.001, SD = 0.002,
p = 0.393, 95% CI [-0.004, 0.005]).
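The logic of this product-of-coefficients indirect effect can be illustrated with a deliberately simplified, single-level ordinary-least-squares sketch (the actual analysis was a Bayesian multilevel SEM in Mplus; everything below, including the function names, is illustrative): the indirect effect is the path a (predictor to mediator) multiplied by the path b (mediator to outcome, controlling for the predictor).

```python
def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def slopes_two_predictors(y, m, x):
    """OLS slopes of y on m and x, via the normal equations on centered data."""
    n = len(y)
    yc = [v - sum(y) / n for v in y]
    mc = [v - sum(m) / n for v in m]
    xc = [v - sum(x) / n for v in x]
    smm = sum(v * v for v in mc)
    sxx = sum(v * v for v in xc)
    smx = sum(a * b for a, b in zip(mc, xc))
    smy = sum(a * b for a, b in zip(mc, yc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    det = smm * sxx - smx * smx
    return (smy * sxx - sxy * smx) / det, (sxy * smm - smy * smx) / det

def indirect_effect(x, m, y):
    """Product-of-coefficients indirect effect of x on y through m:
    a (x -> m) times b (m -> y, controlling for x)."""
    a = slope(x, m)
    b, _ = slopes_two_predictors(y, m, x)
    return a * b
```

In the study's terms, x would be the congruency code, m the log reading time on the target, and y memory performance; the multilevel version additionally separates within-person from between-person variance.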

Reader factors and validation

Novelty

We observed main effects of novelty on both target and spill-over reading
times: reading times increased when the amount of novel information participants
encountered increased. In addition, we observed an accuracy * novelty interaction on
target reading times and a goal * novelty interaction on spill-over reading times. We
observed no effects of novelty on memory scores.
To interpret these interactions, we conducted post-hoc comparisons
by centering the model on the novelty ratings on one standard deviation below (11)
and above (62) the mean, respectively. As illustrated in Figure 5.2, comparisons for
the target sentences revealed that reading times were generally longer for false
targets (M = 2765 ms, SE = 28) than for true targets (M = 2438 ms, SE = 24). This
inaccuracy effect was most prominent in texts that contained the least novel
information for participants (β = 0.14, SE = 0.02, t = 9.274, d = 0.33). The inaccuracy
effect diminished as a function of novelty (β = 0.09, SE = 0.02, t = 5.844, d = 0.20). This
modulation of accuracy effects emerged because novelty had an influence on the
reading times of true targets (i.e., true targets of texts with low novelty scores were
read faster than true targets of texts with high novelty scores; β = 0.0014, SE = 0.0003,
t = 4.473), yet no reliable simple main effect of novelty was observed for false targets
(β = 0.0004, SE = 0.0003, t = 1.247).
As illustrated in Figure 5.3, post-hoc pair-wise comparisons for the spill-over
sentences revealed that reading times were longer for participants that read for study
than for participants that read for comprehension. This effect of goal was more
prominent if texts contained more new information for participants (β = 0.14,
SE = 0.05, t = 2.757, d = 0.30) and diminished for texts that contained less new
information, to such an extent that for texts with lower novelty ratings no reliable
differences between reading times were observed for different reading goals
(β = 0.08, SE = 0.05, t = 1.603). This modulation of the effect of goal emerged because
participants that read for study showed increased reading times as the amount of
novel information increased (β = 0.0015, SE = 0.0003, t = 4.404), whereas participants
that read for comprehension showed no such effect of novelty (β = 0.0004,
SE = 0.0003, t = 1.276).

Figure 5.2. Fixed effect estimates for log-transformed reading times on true and false targets (in log
ms) as a function of novelty (i.e., participants’ ratings of how much of the information they encountered
in the text was novel to them on a scale from 0 (nothing is new) to 100 (everything is new)). Error
bars represent the SE of the mean at novelty ratings one standard deviation above (62) and below
(11) the mean. Scales of exponentiated log-values (i.e., approximating untransformed values) are
provided as secondary y-axes on the right side of the graphs (in ms).

Figure 5.3. Fixed effect estimates of reading times on spill-over sentences (in log ms) for participants
that read for study and participants that read for comprehension as a function of novelty (i.e.,
participants' ratings of how much of the information they encountered in the text was novel to them
on a scale from 0 (nothing is new) to 100 (everything is new)). Error bars represent the SE of the mean
at novelty ratings one standard deviation above (62) and below (11) the mean. Scales of exponentiated
log-values (i.e., approximating untransformed values) are provided as secondary y-axes on the right
side of the graphs.

Working-memory capacity

Overall, we observed no effects of working memory capacity, apart from a
four-way interaction between goal, congruency, accuracy and working memory
capacity on spill-over sentence reading times (Figure 5.4). To understand this four-
way interaction we ran separate linear models (including the full factorial interactions
between the fixed factors congruency, accuracy and working memory capacity) for
participants that read for study and participants that read for comprehension.
We observed a main effect of accuracy in both models: Longer reading
times on spill-over sentences following false targets (Mstudy = 3397 ms, SEstudy = 49;
Mcomp = 3134 ms, SEcomp = 46) than on spill-over sentences following true targets
(Mstudy = 3193 ms, SEstudy = 45; Mcomp = 2922 ms, SEcomp = 41) both when participants
read for study (β = 0.060, SE = 0.02, t = 3.914, d = 0.13) and when they read
for comprehension (β = 0.063, SE = 0.01, t = 4.630, d = 0.14). However, the two
models also showed an important difference: In addition to the main effect of
accuracy, participants that read for comprehension showed a three-way interaction
between accuracy, congruency and working memory capacity (β = -0.089, SE = 0.04,
t = -2.174), whereas participants that read for study showed no other main effects
or interaction effects. These results indicate that the four-way interaction in our
main model is the result of an accuracy * congruency * working memory capacity
interaction that only occurs when participants read for general comprehension, not
when they read for study (see Figure 5.4).
To further characterize the three-way interaction in the reading for general
comprehension condition, we conducted post-hoc pairwise comparisons separately
for lower-capacity readers and higher-capacity readers by centering the model on
working-memory scores one standard deviation below (1.5) and above (3.0) the mean
(M = 2.25, SD = 0.75), respectively. In comparison to the spill-over sentences in the
true-congruent condition, lower-capacity readers show longer reading times for
sentences following false-incongruent targets (β = 0.083, SE = 0.03, t = 2.885,
d = 0.20), but not for sentences following true-incongruent targets (β = 0.027,
SE = 0.03, t = 0.951) and false-congruent targets (β = 0.026, SE = 0.03, t = 0.907) (see
Figure 5.4, left side). For higher-capacity readers a different pattern is observed. In
comparison to the spill-over sentences in the true-congruent condition, higher-
capacity readers show longer reading times for sentences following false-congruent
targets (β = 0.082, SE = 0.03, t = 2.671, d = 0.18), true-incongruent targets (β = 0.069,
SE = 0.03, t = 2.230, d = 0.15), and false-incongruent targets (β = 0.091, SE = 0.03,
t = 2.969, d = 0.20) (see Figure 5.4, right side).
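The centering procedure behind these post-hoc comparisons — refitting the model with the continuous covariate shifted so that its zero point lies one standard deviation below or above the mean, which turns the coefficient of a categorical predictor into its simple effect at that capacity level — can be illustrated with a small sketch. The original analyses were linear mixed-effects models; the plain least-squares fit, variable names, and simulated data below are illustrative assumptions only, not the dissertation's analysis code.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400
wm = rng.normal(2.25, 0.75, n)               # simulated working-memory scores
acc = rng.integers(0, 2, n).astype(float)    # 0 = true target, 1 = false target
# simulated log reading times with an accuracy x working-memory interaction
log_rt = (8.0 + 0.06 * acc - 0.02 * wm
          - 0.04 * acc * wm + rng.normal(0, 0.1, n))

def simple_effect_of_acc(center):
    """Least-squares fit of log_rt ~ acc * (wm - center).
    The acc coefficient is the simple effect of accuracy at wm == center."""
    wm_c = wm - center
    X = np.column_stack([np.ones(n), acc, wm_c, acc * wm_c])
    beta, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
    return beta[1]

m, sd = wm.mean(), wm.std()
lower = simple_effect_of_acc(m - sd)   # simple effect for lower-capacity readers
upper = simple_effect_of_acc(m + sd)   # simple effect for higher-capacity readers
print(f"accuracy effect at -1 SD: {lower:.3f}, at +1 SD: {upper:.3f}")
```

Because re-centering is a linear reparametrization, the fitted model itself is unchanged; only the point at which the coefficients are evaluated shifts, which is why the same device also yields the novelty simple effects reported earlier.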
Taken together, results of the post-hoc analyses of the four-way interaction
show knowledge inaccuracy spill-over effects in both reading goal conditions.
However, for participants that read for comprehension spill-over effects are
modulated by the readers’ working-memory capacity: Higher-capacity readers show
spill-over effects of inaccuracy (world knowledge) and incongruency (contextual),
whereas lower-capacity readers show a more restricted pattern with spill-over effects
only emerging when target information is inconsistent with both sources.

Figure 5.4. Fixed effect estimates of reading times on spill-over sentences (in log ms) for participants
with a lower working-memory capacity (one SD below the mean) and participants with a higher
working-memory capacity (one SD above the mean) as a function of reading goal (reading for general
comprehension/reading for study), congruency (target congruent/incongruent with context) and
accuracy (target true/false). Error bars represent the SE of the mean at working memory capacity
scores one standard deviation above (3.0) and below (1.5) the mean. Scales of exponentiated log-
values (i.e., approximating untransformed values) are provided as secondary y-axes in the center of
the graphs.

Discussion
The aim of this study was to determine whether readers’ purpose for reading
affects online validation processes and readers’ memory for (in)consistent
information. In addition, we investigated whether and how online validation processes
translate into offline (memory) products. In doing so, we distinguished between
text-based and knowledge-based validation processes. A secondary aim was to
investigate whether these effects were influenced by the novelty of the text
information (i.e., the degree to which the topic of the text was novel to each individual
reader) and readers’ working-memory capacity.

Effect of reading goals on validation

In line with prior findings (Lorch et al., 1993, 1995; van den Broek et al., 2001;
Yeari et al., 2015), we observed general effects of reading goal on comprehension,
with reading for study resulting in slower reading and in better memory than reading
for general comprehension. Furthermore, we found no clear evidence that reading
goals influence online validation processes. Reading goals did have distinct effects on
readers’ offline memory for (in)consistent target information of the texts. Specifically,
the stronger memory observed for participants that read for study applied when the
targets in the reading task contained information that was congruent with the
preceding text, but not when they contained incongruent information. In the latter case,
reading for study did not result in a stronger memory representation. We elaborate on
each of these findings below.
Results showed no evidence that reading goals affect validation processes
that occur while readers are processing a text – neither in the target sentences nor in
the spill-over sentences. This finding is consistent with the idea that the coherence-
detection (or epistemic monitoring) component of validation, as described in the RI-
Val model (Cook & O’Brien, 2014; O’Brien & Cook, 2016a, 2016b) and the two-step
model of validation (Isberner & Richter, 2014; Richter, 2015; Schroeder et al., 2008),
is a passive and routine process that takes place regardless of people’s goals
for reading a text. Furthermore, we hypothesized that readers may apply a more
stringent coherence threshold (a key component of the RI-Val model) when they read
for study. In that case, spill-over effects due to incongruent or inaccurate information
should be less prominent for people that read for study than for people that read for
comprehension, because the former are more inclined to complete the validation
process of inconsistent sentences before moving on in the unfolding text. Our results
do not support this hypothesis as we did not observe such modulations of spill over
as a function of reading goal.

With regard to the repair processes posited by validation models – most
explicitly described in the two-step model of validation (Isberner & Richter, 2014;
Richter, 2015; Schroeder et al., 2008) – the interpretation of our data is less
straightforward. As discussed in the Introduction, elaborative processes to repair and
resolve an inconsistent section of a text are under strategic control of the reader and
may take place during or after reading a text (Maier & Richter, 2013; Richter, 2011).
Our reading time data seems incompatible with the idea that reading goals modulate
epistemic elaboration during reading, yet that does not rule out that post-reading
repair and reflective processes vary as a function of reading goal – i.e., these
validation processes may become more intensive when people read to study.
On a methodological note, it is possible that we did not observe an interaction
between reading goals and online validation because sentence-by-sentence reading
times are not sensitive enough to detect changes in validation processes elicited by
reading goal manipulations. However, sentence-by-sentence reading time measures
have been used in other studies investigating the influence of task demands on online
validation processes (e.g., Williams et al., 2018). Williams et al. (2018) used changes
in task demands (i.e., varying the number of comprehension questions participants
had to answer after reading each text) rather than explicit instructions (as in the
current study) to manipulate readers’ coherence threshold, observing that these
subtle changes affected reading times for the target sentences. Thus, sentence
reading times in principle are sensitive enough to pick up validation effects. The
absence of reading goal effects in the present study therefore suggests that variations
in global goals for reading the texts affect validation processes less strongly, if at all,
than properties of the immediate learning context, such as the task
demands used by Williams et al. (2018).
Considering the on- and offline results together yields an interesting contrast:
reading for study led to more careful processing of all target types; it also led to
stronger memory for all textual information except for incongruent information, which
was processed more extensively, just like the other portions of the texts, but in
contrast was not remembered better. Given that readers did detect all violations –
including those involving text incongruencies – as all violations resulted in increased
reading times, this pattern suggests that incongruency with the text is dealt with
differently than inaccuracy relative to the reader’s background knowledge. Because readers
that read for study are more likely to put effort into building a comprehensive,
coherent representation of the text than are readers with a simple comprehension
goal (e.g., Britt et al., 2018; Lorch et al., 1995), they are more likely to try and resolve
incongruencies. Indeed, they take more time to read the texts than their counterparts
that read for comprehension, thus providing support for this prediction. As noted
earlier, this added processing time did not result in better memory for incongruent
target information, suggesting that the effort generally did not lead to successful
resolution or attained resolution by adjusting the representation of the target
information to fit the context (i.e., making it congruent) – and thus lowering memory for
the precise target sentence.

Text-based and knowledge-based validation

In addition to the effects of reading goals on validation, the current study
considered potential differences between text-based and knowledge-based
validation. In line with prior findings (Albrecht & O’Brien, 1993; Menenti et al., 2009;
O’Brien et al., 1998, 2004, 2010; O’Brien & Albrecht, 1992; Rapp, 2008; Richter et al.,
2009), our results showed inconsistency effects of both text and knowledge violations.
Furthermore, the current results replicated those of earlier studies (van Moort et al.,
2018, 2021) by showing that knowledge violations generally elicited a prolonged
disruption of the reading process frequently spilling over to the next sentence.
However, our current results also contradicted prior findings. Unlike earlier studies
using a similar paradigm (van Moort et al., 2018, 2020), in the current study we
observed prolonged disruptions due to text violations as evidenced by spill-over to
the next sentence. These mixed patterns of spill-over effects across studies are
puzzling. One possible explanation would be that subtle variations in samples,
instructions, and research methodologies (cf. van Moort et al., 2018; van Moort, Jolles,
et al., 2020; van Moort, Koornneef, et al., 2020) affected the settings of readers’
coherence thresholds (see RI-Val model; Cook & O’Brien, 2014; O’Brien & Cook,
2016a, 2016b), resulting in small but detectable differences in the amount of spill over
across studies.
To obtain a detailed picture of the relation between the on- and offline results
for text-based and knowledge-based validation, we conducted a series of mediation
analyses. The results showed that the reading times at the target sentence mediated
the offline memory results. Specifically, when readers encounter a sentence that is
incongruent with the context, the reading time for that sentence increases and the
magnitude of this increase, in turn, predicts the decrease in performance on the
memory test. Thus, the probability of correctly recalling an incongruent section of a
text seems to diminish as the intensity of the repair processes that occur after
detecting that incongruency increases. Repair processes may encompass different
strategies. For example, readers may adjust the incoming information to make it fit
with the representation of the preceding text, they may decide to dismiss the
incongruent information and ‘remove’ it from their developing situation model, and so
on. It would be useful for both theory and instruction to investigate the range of repair
processes in which readers engage in response to within-text inconsistencies and
how the different processes relate to comprehension and memory for the text as a
whole.
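The mediation logic described here — the condition affects target reading time (the a-path), and that reading time in turn predicts memory performance when controlling for condition (the b-path), so the indirect effect is the product of the two — can be sketched as follows. The dissertation's estimation details are not reproduced; the product-of-coefficients estimator, percentile bootstrap, variable names, and simulated data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
incongruent = rng.integers(0, 2, n).astype(float)      # 0 = congruent, 1 = incongruent
rt = 3.0 + 0.4 * incongruent + rng.normal(0, 0.3, n)   # a-path: condition -> reading time
memory = 2.0 - 0.5 * rt + rng.normal(0, 0.3, n)        # b-path: reading time -> memory

def ols(y, *cols):
    """Least-squares coefficients for y ~ 1 + cols."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(rt, incongruent)[1]           # effect of condition on the mediator
b = ols(memory, rt, incongruent)[1]   # effect of the mediator on memory, controlling for condition
indirect = a * b                      # product-of-coefficients estimate of the indirect effect

# percentile bootstrap confidence interval for the indirect effect
boot = np.empty(2000)
for i in range(2000):
    s = rng.integers(0, n, n)
    boot[i] = ols(rt[s], incongruent[s])[1] * ols(memory[s], rt[s], incongruent[s])[1]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

With the simulated full mediation above, the indirect effect is negative (slower target reading predicts poorer memory) and its bootstrap interval excludes zero; the reported absence of mediation for world-knowledge inaccuracies would correspond to an interval that includes zero.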
In contrast, the effects of world knowledge inaccuracies on memory
performance were not mediated by reading time. This absence of a mediation effect
can be interpreted in several ways. It could mean that reading time disruptions that
are observed when readers encounter inaccurate sentences do not index post-
detection repair processes. If that is the case, our world knowledge manipulations
seem to influence the (offline) memory products primarily via mechanisms that
occur after a text has been read. Alternatively, it could mean that the online repair
processes to resolve world-knowledge inaccuracies in our materials are relatively
straightforward and do not result in detectable changes in sentence reading time.
A final possibility is that efforts to repair inaccuracies elicit detectable processing costs
but that the amount of time spent on them does not reflect the quality or effectiveness
of those processes. In that case increased reading time durations will not correlate
with reduced performance on the memory test.
In conclusion, although the mediation analysis cannot tell the full story, it is a
powerful tool to decipher whether and how online (reading time) processes translate
into offline (memory) products. In the context of our discussion on text-based versus
knowledge-based validation, the mediation analyses complement prior findings by
indicating that these types of validation have different processing signatures and may
trigger different coping mechanisms to protect emerging and final mental
representations of readers against inconsistencies (cf. van Moort et al., 2018; van
Moort, Jolles, et al., 2020; van Moort, Koornneef, et al., 2020).

Individual variations

We explored whether the above findings were influenced by individual
differences; we specifically considered the degree to which the topic of a text was
novel to the reader, and the reader’s working-memory capacity.
With respect to the influence of novelty we found, as predicted, that the
processing difference between accurate and inaccurate targets – the amount
of conflict a reader experiences – diminished when readers had less knowledge of
the topic (i.e., the topic had greater novelty). This finding supports the premise
that validation against background knowledge indeed takes place routinely,
distinguishing accurate and inaccurate textual information. It also illustrates the
importance of topic-relevant or world knowledge in successful comprehension of texts
(Alexander & Jetton, 2000; Kintsch, 1988; Myers & O’Brien, 1998; Ozuru, Dempsey,
& McNamara, 2009; Samuelstuen & Bråten, 2005; Shapiro, 2004). Interestingly,
novelty interacted with accuracy in that increasing novelty resulted in slower reading
of accurate but not of inaccurate information. Although one should be cautious with
this subtle interaction, one can speculate that it signifies that having knowledge about
a topic primarily facilitates processing of textual information that converges with that
knowledge, rather than hinders processing of conflicting information.
Furthermore, the effect of novelty on spill-over sentence reading times was
modulated by reading goal: When the amount of novel information in a text increased,
readers tended to slow down on the post-target sentence when they read for study,
but not when they read for comprehension. These results suggest that readers
engage in deeper or more effortful processing of novel information when the reading
goal requires a deep understanding of the text.
With respect to the role of working memory in validation, we considered
scenarios in which working-memory capacity would affect the coherence-detection
phase and/or the epistemic elaboration phase. Our results did not signal any main
effects of differences in working-memory capacity on processing of the target
sentences. We did observe an effect of working memory on spill-over sentences as
part of a complex (four-way) interaction. When reading for comprehension, the spill-
over patterns of higher-capacity readers differed from the spill-over patterns of lower-
capacity readers – i.e., arguably more prominent spill-over effects for higher-capacity
readers. When reading for study, however, the spill-over patterns for higher- and
lower-capacity readers showed no differences. A possible, speculative, explanation
for this pattern of results is that when higher-capacity readers are reading for
comprehension, they adopt a more lenient processing approach (where processing
is allowed to spill-over to the next sentence) than lower-capacity readers. This
difference between higher- and lower-capacity readers disappears when people are
reading for study: Reading for study may trigger a more stringent processing
approach for higher-capacity readers that allows more validation processes to be
completed before proceeding to the next sentence. Interpreted as such, these results
may also have interesting implications for the coherence threshold of the RI-Val model
(Cook & O’Brien, 2014; O’Brien & Cook, 2016a, 2016b), as they suggest that this
threshold varies depending on readers’ working-memory capacity: Because higher-
capacity readers have the capacity to process more information simultaneously they
may set a lower coherence threshold than lower-capacity readers, resulting in more
‘delayed’ processing. The observation that spill-over effects become weaker when
higher-capacity individuals read for study also fits this explanation: When reading for
study these individuals may set a higher threshold that allows more validation
processes to be completed before proceeding to the next sentence. This account
cannot provide a perfect explanation for our results, but it raises interesting points for
future research.

Conclusions and future directions

The current study investigated whether readers’ purpose for reading affects
online validation processes and the translation of these processes into the offline
mental representation. Results suggest that coherence-detection is a routine aspect
of comprehension that is not affected by reading goals (Cook & O’Brien, 2014;
Isberner & Richter, 2014; Singer, 2019; van den Broek & Helder, 2017). The
interpretation of the results is less straightforward for the epistemic elaboration
component of validation (Isberner & Richter, 2014; Richter, 2015; Schroeder et al.,
2008); our results are incompatible with the idea that reading goals modulate the early
phases of epistemic elaboration, yet do not rule out that late epistemic phases
(including possibly post-text validation processes) are affected by reading goal
manipulations. Because reading goals did affect readers’ memory for target
information, the most parsimonious conclusion is that reading goal influences take
place after the initial detection of the inconsistency and also after initial repair
processes activated by epistemic elaboration. Determining precisely which after-
detection processes are influenced by reading goals and whether the effects we
observed are unique to the particular goals for reading used in this study would be
fruitful directions for further research. In addition, mapping the time course of such
reading goal influences requires more detailed examinations of when and how goals
exert their influence (e.g., by assessing the mental representation during reading, 5
immediately after reading a text, and at later points in time).
In addition, reading goal effects depend on the quality of the text, as reading
for study improves memory for congruent, but not for incongruent target information.
This has important consequences for the interpretation of results from studies
investigating the effects of reading purpose, because these studies predominantly use
coherent and accurate texts. Moreover, these results raise interesting questions for
future research, for example whether an incongruency in a text only affects memory
for the incongruent information itself or whether it also affects memory for other
(related) elements in the mental representation.
The current results replicate earlier findings that the processes involved in
coherence monitoring depend on validation against both contextual information and
background knowledge (van Moort, Jolles, et al., 2020; van Moort et al., 2018; van
Moort, Koornneef, et al., 2020). Furthermore, they suggest that reading goals
differentially affect processing of text and knowledge violations, given
that reading for study results in longer reading times on both types of violations but
only improves memory for the latter. To further examine these differential effects a
more detailed overview of these processes is needed. Research methods that have
high temporal resolution (e.g., eye tracking, EEG) and research methods that provide
more qualitative data (e.g., think-aloud procedures) may be useful in mapping
potential differences between text-based and knowledge-based validation. In addition,
statistical methods such as mediation analyses can further enlighten us about how
online comprehension and validation processes translate into offline memory
products. Moreover, to gain insight into the effects of readers’ background knowledge
(and the extent of this knowledge) on knowledge-based validation processes, future
studies could include more extensive assessments of readers’ knowledge on a text
topic. Finally, the current study focused on text-based and knowledge-based
validation processes in the context of reading single texts, but future studies could
extend this work by examining when and how readers use these informational sources
– and perhaps other informational sources (e.g., readers’ prior beliefs; Gilead et al.,
2018) – to construct a coherent and adequate mental model when reading multiple
texts (e.g., when reading on the web to make an informed decision on a controversial
topic; Rouet & Britt, 2011).
We observed minimal effects of working memory on either online processing
or offline representation. These results are only partly in agreement with earlier
findings from studies using a similar paradigm (van Moort et al., 2020; van Moort et
al., 2018). The mixed effects across studies may be attributed to differences between
the groups that were tested, or it may illustrate that the role of working memory in
validation processes is more complex than initially thought. Including working-
memory capacity as a covariate seems insufficient to determine which of these possibilities
is accurate. Therefore, future studies may include direct manipulations of working
memory load during processing (cf. de Bruïne et al., 2021).
There has been a longstanding acknowledgement by reading researchers
that one’s purpose for reading plays an important role in reading, but a challenge for
theories of reading has been to describe when and how reading processes are
influenced by reading goals. To deepen our understanding of this issue a more
detailed examination of how readers’ goals affect component processes of
comprehension is needed. Building on the strong tradition of research on goal effects
on online comprehension processes and offline products of comprehension, the
current study has taken the first step by examining how reading goals affect validation
processes. Although reading goals affect readers’ general processing, we observed
no evidence that they affect the coherence-detection phase of validation. They did
influence post-detection processes, differentially affecting readers’ memory for
incongruent and inaccurate targets. To develop a comprehensive model of reading
goal effects, future studies may extend this work by going beyond the impact of
reading goals on general comprehension and focus on their effects on specific
component processes.

6
Summary and General Discussion
General Discussion
The overarching goal of this dissertation was to examine the complex
interactions between contextual information and information from the readers’
background knowledge in building coherent and accurate mental representations of
text. More specifically, the experimental studies in this dissertation focused on how
these two sources of information influence both validation processes that take place
during reading (all Chapters) and the resulting long-term memory representation after
reading (Chapter 4 and 5). They examined the nature and time course of text-based
and knowledge-based validation processes to ascertain whether the influence of
contextual information and background knowledge in validation processes can be
distinguished and whether we should assign separate roles for the two sources in the
cognitive architecture of validation.
To investigate possible processing differences between text-based and
knowledge-based validation processes all studies in this dissertation employed a
contradiction paradigm that contrasts validation against background knowledge and
validation against prior text in a single design. In this paradigm, participants read
versions of expository texts about well-known historical topics that varied
systematically in (in)consistency with prior text or background knowledge. Each text
contained a target sentence that was either true (e.g., the Statue of Liberty was
delivered to the United States) or false (e.g., the Statue of Liberty was not delivered
to the United States) relative to the reader’s background knowledge and that was
either congruent (i.e., supported by the preceding context) or incongruent (i.e., called
into question by the preceding context) relative to the information from the preceding
text (e.g., context that described that the construction of the statue went according to
plan vs. context that described problems that occurred during construction of the
statue). Processing differences between knowledge-based and text-based validation
processes were examined by comparing the processing of true versus false targets
with the processing of congruent versus incongruent targets, respectively.
The empirical studies in this dissertation employed complementary research
methods. In Chapter 2 I employed self-paced sentence-by-sentence reading to
investigate the unique influences of information from the text or readers’ background
knowledge on processing. To further explicate the specific roles of the two sources of
information in the cognitive architecture of validation, I used fMRI to examine the
neural correlates of text-based and knowledge-based validation and eye-tracking to
investigate the time course of these processes (Chapters 3 and 4, respectively). In
addition to these measures of online validation processes (i.e., processes that take
place during reading), I included a recognition memory task in studies two and four
(described in Chapters 3 and 5) to investigate the potential effects of online text-based
and knowledge-based validation processes on what readers remember from a text.
Moreover, I investigated whether the task (Chapter 2) or the readers’ purpose for
reading (Chapter 5) affects validation processes. Finally, I investigated the role of
individual differences in readers’ working memory capacity (Chapters 2, 4 and 5) and
in how familiar they were with the text topic (Chapter 5) in validation.
As the empirical studies in this dissertation employed the same contradiction
paradigm but different research methods they provide a comprehensive and in-depth
overview of the processes involved in text-based and knowledge-based validation.
The combined results of these studies address several core questions with respect to
the time course and neurocognitive architecture of validation. In the remainder of this
chapter, I will discuss these core architectural questions and reflect on the broader
implications of the results discussed in this dissertation. Following this discussion, I
will present a neurocognitive model of validation. To conclude, I will reflect on the
broader implications of the results discussed in this dissertation, as they bear
relevance beyond the context of validation models.

The neurocognitive architecture of validation

One for all, all for one - Common versus separate validation
processes

The main aim of the studies in this dissertation was to investigate whether
validating against background knowledge and validating against prior text involve a
common mechanism or (partially) separate mechanisms, and whether separate roles
for contextual information and background knowledge should be assigned when
describing the cognitive architecture of validation. To investigate this issue, first we
must establish that the processes involved in coherence monitoring depend on
validation against both sources. Across the studies in this dissertation, we routinely
observed inconsistency effects for both text incongruencies and knowledge
inaccuracies, indicating that both sources impact validation processes. Thus, in line
with prior studies (e.g., Cook & Myers, 2004; Kintsch, 1988; O’Brien & Cook, 2016;
Richter et al., 2009; Rizzella & O’Brien, 2002; Schroeder et al., 2008; Singer, 2013;
van den Broek & Helder, 2017) the results of the experimental studies in this
dissertation show that incoming information is routinely validated against elements of
the current situation model and the readers’ background knowledge. Although this
illustrates that both sources have a profound impact on validation processes, the
underlying architecture of text-based and knowledge-based validation processes may
be modeled in different ways. Validating against background knowledge and
validating against prior text may involve a common mechanism (i.e., the mechanisms
of validating against background knowledge are the same as those for validating against
prior text) or they may involve separate, fundamentally different, text-based and
knowledge-based validation mechanisms. The results of the experimental studies
described in this dissertation provide insight into whether validation should be
modeled as a single validation system or multiple validation systems (i.e., separate
text-based and knowledge-based validation systems). In deciding between a single-
system or a multiple-systems account the eye movement results (Chapter 4) and the
neuroimaging results (Chapter 3) are particularly relevant, as they provide information
on the time course of text-based and knowledge-based validation processes and to
what extent these processes call on the same underlying brain systems or (partly)
different brain systems.
Results of both studies provide some evidence for a single system account.
First, the eye movement results discussed in Chapter 4 show that both sources
affect early processing and that post-detection processes for both types of
inconsistencies seem to involve a similar pallet of actions and sources (i.e., readers
were more likely to re-read targets and displayed both longer regressions and longer
re-reading when they encountered inconsistent targets than consistent targets
– regardless of the source of the inconsistency). Moreover, the neuroimaging results
discussed in Chapter 3 show similarities in activation patterns between the two
conditions, as well as interaction effects (e.g., in left IFG and precuneus), suggesting
that readers integrate information from both sources to construct a coherent mental
representation. These similarities could be taken as evidence that, in line with a single
system account, text incongruencies and knowledge inaccuracies affect processing
in similar ways. However, across both studies we also observed evidence for a
multiple-systems account. First, the neuroimaging results described in Chapter 3
showed a neurocognitive ‘division of labor’ for validation processes: whereas some
brain regions are mostly involved in either knowledge-based processing or text-based
processing, others are affected by a combination of the two sources of information.
Moreover, the neuroimaging results showed that all regions that were involved in text-
based validation were sensitive to coherence rather than incoherence of information
with the text, whereas knowledge-based validation involved both regions that were
more active for true information and regions that were more active for false
information. Taken together, these results suggest that readers process text-based
and knowledge-based information separately, at least to some extent.
Further evidence for this multiple-systems account was provided by the
results of the eye tracking study described in Chapter 4. Although contextual
incongruencies and knowledge inaccuracies are not completely independent (i.e.,
contextual incongruencies must involve, at the very least, some violation of logic that
exist in the reader’s background knowledge), results show that they seem to trigger
distinct processes in the very early stages of the processing of incoming information:
Whereas knowledge-based validation influences all early processes considered in
Chapter 4, validation against earlier text also influences these processes but in
qualitatively different ways depending on the presence or absence of knowledge
violations. If incoming text information is inconsistent with both earlier text and the
reader’s knowledge, then reading slows even further, but if the incoming information
is inconsistent only with earlier text, then it is more likely to be reread. Finally, across
both studies text incongruencies and knowledge inaccuracies differed in the strength
of the disruption they caused. Knowledge inaccuracies appeared to induce a more
intensive, prolonged disruption of the reading process than did text incongruencies,
as reflected in spill-over effects (i.e., effects on first-pass reading of the spill-over
sentence) for background-knowledge but not for context contradictions. In addition,
knowledge-based validation recruits a larger network of brain regions than text-based
validation. These patterns are also consistent with a multiple systems account
– although they do not explicitly exclude the possibility of a single system account.
Taken together, the processing differences we observed across studies may
reflect readers responding differently to specific violations (regardless of the source
of those violations), but a more likely explanation is that these violations trigger
different processing strategies because they are violations of a particular type (i.e.,
text- or knowledge violations). These findings expand considerably current models of
validation (e.g., the RI-Val model; Cook & O’Brien, 2014; O’Brien & Cook, 2016b,
2016a; and the two-step model of validation; Isberner & Richter, 2014), as they
provide compelling evidence that information is not only routinely validated against
these two sources of information, but that it is validated against information from
background knowledge and prior text in functionally dissociable text-based and
knowledge-based validation processes that involve (partially) different neurocognitive
mechanisms. Thus, these results suggest that separate roles for contextual
information and background knowledge may be assigned when describing the
cognitive architecture of validation.

The devil is in the details – Unraveling the cognitive architecture of validation

Positing distinct text-based and knowledge-based validation components
raises important questions on the architecture and sequence of these two component
processes (i.e., how and when the different language components communicate with
each other). The results discussed in this dissertation offer a window into the
architecture of validation processes and, more broadly, into the architecture of the
human language system, as they have implications for hypotheses about the
fundamental architecture of language processing stages. Bearing this in mind, it may
be particularly interesting to draw an analogy to the sentence-processing literature, as
the fundamental characteristics of the cognitive architecture of our language system
(i.e., how and when the different language components communicate with each other)
continue to be the subject of vigorous debate in that literature. Core architectural
issues in this debate are whether information from different sources is processed
sequentially or in parallel and whether component processes take place
autonomously or interactively. Although these two issues are not independent, they
do emphasize different aspects of the cognitive architecture.
The serial-parallel debate focuses on whether the processing of information
from different sources is constrained by an inherent serial order of processing or,
alternatively, whether information from the two sources is processed simultaneously.
Within the serial-parallel processing debate models differ in their assumptions on the
temporal relations between component processes. Parallel processing models
assume that semantic and syntactic information are processed simultaneously (e.g.,
Bates & MacWhinney, 1989; Fazio & Marsh, 2008; Hagoort, 2005; Jackendoff, 1999;
Kuperberg, 2007; MacDonald et al., 1994; Marslen-Wilson, 1973; Tanenhaus &
Trueswell, 1995; Van de Meerendonk et al., 2009; Van Herten et al., 2006).
For example, constraint-based models describe a parallel approach in which all
available syntactic and semantic information is used to activate (or construct) multiple
(often competing) interpretations of the given sentential input that are weighted
probabilistically (e.g., Bates & MacWhinney, 1989; MacDonald et al., 1994; Marslen-
Wilson, 1973; Tanenhaus & Trueswell, 1995). In contrast, serial models assume that
semantic and syntactic information are processed sequentially. For example,
single-stream, syntax-first models describe a serial approach in which a syntactic
parse is performed first, based on syntactic principles only, before other kinds of
information (such as information derived from semantics and pragmatics) are brought
to bear on the comprehension process (Ferreira & Clifton, 1986; Frazier, 1987; Frazier
& Fodor, 1978; Rayner et al., 1983, 1992).
A serial account fits well with the finding discussed in Chapter 4 that
contextual incongruence increases first-pass reading time only if a knowledge-based
inaccuracy is detected, as this suggests that incoming information is validated against
background knowledge first and that contextual incongruencies are detected later
in processing. However, in so far as first-pass regressions reflect early processes,
the finding that contextual incongruencies increase the probability of a first-pass
regression in the absence of a knowledge inaccuracy suggests that both sources are
processed in a similar time frame and, thus, that they may be processed in parallel.
Taken together, these results do not paint a clear picture of the serial or parallel nature
of validation processes: Some of the results are compatible with a serial model, but
the observation that information is processed in a similar time frame would be more
in line with a parallel processing approach. However, the observation that contextual
information is not always utilized but that the utilization of contextual information
depends on the presence or absence of a knowledge violation would be less
compatible with the general assumption of parallel processing models that all information
immediately influences the comprehension process (i.e., as soon as the relevant
pieces of information are available) (Jackendoff & Jackendoff, 2002; Marslen-Wilson,
1989; Zwitserlood, 1989).
The autonomous-interactive debate focuses on another important aspect
of defining the cognitive architecture of processing: to what extent different sources
of information contribute independently or interactively to comprehension and,
in the context of validation, to what extent the functionally distinct text-based
and knowledge-based component processes take place autonomously or
interactively.13 In the sentence-processing literature two main classes of models can
be distinguished: modular accounts and interactive accounts. Modular accounts
assume that information from different sources is processed in separate autonomous
subcomponents (i.e., information from each source is processed in a separate module
that has no access to what is happening in other modules). For example, the Garden
path model proposed by Frazier & Rayner (1982) argues that sentence processing
involves the analysis of each individual unit or module of a sentence, with little or no
feedback, thus inhibiting correction. Interactive accounts, on the other hand, assume
that different sources of information can influence one another in an interactive
fashion (e.g., Kuperberg, 2007; MacDonald et al., 1994; Marslen-Wilson, 1975;
Tanenhaus & Trueswell, 1995). Such interactions could take place early in processing,
for example in a fully interactional system where the different informational sources
immediately and constantly influence each other (e.g., interactive constraint-based
model; Bates & MacWhinney, 1989; MacDonald et al., 1994; Marslen-Wilson, 1973;
Tanenhaus & Trueswell, 1995). But autonomy and interaction may also pertain to
different processing phases during language comprehension (i.e., early versus late).
In the tradition of serial syntax-first models, Friederici (2002) proposed a
comprehensive framework in which she suggested that syntactic processes precede,
and are initially independent of, semantic processes but interact during later
processing. Hence, although in both classes of models syntactic and semantic
information are integrated during language perception to achieve understanding,
interaction takes place at different points during processing: interactive, constraint-
based models predict early interaction, whereas syntax-first models predict
interaction during a later stage of processing.
13 Note that the autonomous versus interactive processing debate is not independent of the serial-parallel processing debate, as serial models often assume autonomous processing components.

The current data are compatible with both modular and interactive models.
On the one hand, our results show that several brain regions are involved in either
text-based or knowledge-based validation, suggesting that readers process text-
based and knowledge-based information separately, at least to some extent. On the
other hand, we observe brain regions that process text-based and knowledge-based
information interactively (e.g., in left IFG and precuneus), suggesting that readers
integrate information from both sources to construct a coherent mental representation
(Chapter 3). In addition, the eye movement results (Chapter 4) show that context and
background knowledge interact very early in the processing of incoming information.
Such early interactions would be at variance with the notion of completely
autonomous processing and more in line with interactive models. Taken together,
these results suggest that text-based and knowledge-based validation are not
completely autonomous or interactive. Rather, the results discussed in this
dissertation suggest a hybrid model, in which information is processed autonomously
to some extent, but during certain processing stages the two components also
interact.
In addition, across studies we observed that inaccuracy with world
knowledge elicited stronger and longer effects than incongruency with context. This
observed dominance of world knowledge over contextual information can be
modelled as a structural property of the system (i.e., world knowledge always plays a
more dominant role than context). If so, one would assume a knowledge-driven
architecture (e.g., Garrod & Terras, 2000; Kintsch, 1988; Sanford & Garrod, 1989), in
which validation always occurs first or primarily against the reader’s background
knowledge. However,
another possibility is that the observed dominance of world knowledge is not an
inherent property of the system but emerged due to other factors. For example, the
current findings could be attributed to differences in the relative strength of the
violation, as knowledge violations tended to be stronger than the text violations in
these studies (as the former were outright errors and the latter merely unlikely). If so,
this would suggest an architecture in which the two sources compete for initial
influence on processing (e.g., Cook et al., 1998a; Gerrig & McKoon, 1998; Myers &
O’Brien, 1998; O’Brien & Cook, 2016a, 2016b; O’Brien & Myers, 1999; Rizzella &
O’Brien, 2002) and that the dominance of one informational source over the other
may depend on the strength of the reader’s text-relevant general world knowledge
(Cook & O’Brien, 2014) versus the strength of the contextual information (e.g., Cook
& Guéraud, 2005; Myers et al., 2000; O’Brien & Albrecht, 1991). To determine whether
world knowledge is structurally dominant, future studies could systematically vary the
strength of background knowledge, similar to studies that have varied the strength of the
context (e.g., Creer et al., 2018; Walsh et al., 2018).
Thus, the combined results of this dissertation suggest that information is
processed in parallel in a (partially) interactive architecture in which context and
background knowledge interact very early in the processing of incoming information
and together constrain validation. Such conclusion would be in line with spread-of-

170
activation mechanisms posited in the discourse comprehension literature, such as the
memory-based processing view (e.g., Cook et al., 1998; Gerrig & McKoon, 1998;
Myers & O’Brien, 1998; O’Brien & Myers, 1999; Rizzella & O’Brien, 2002) and cohort
activation within the Landscape model view (van den Broek et al., 1999; van den Broek
& Helder, 2017), as it suggests that all information that is activated through fast,
autonomous, passive, memory-based processing is immediately available for the
comprehension process. However, results also show that the two sources of
information only interface when readers encounter knowledge inaccuracies,
suggesting that readers do not always use information from both sources but rely
primarily on their background knowledge.
The idea that readers focus on particular aspects of incoming information –
and do not always use all available information—would fit well with accounts that
assume that readers do not always use algorithmic processing to compute detailed,
fully specified, representations, but often tend to use heuristic processing to generate
shallow or superficial representations that are not necessarily exact, but often simply
“good enough” (Christianson et al., 2001; Ferreira, 2003; Ferreira et al., 2002; Ferreira
& Patson, 2007; Henderson et al., 2016; Karimi & Ferreira, 2016; Sanford & Sturt,
2002). What is “good enough” in a particular reading situation may depend on the
readers’ standards of coherence – the (often implicit) criteria readers have for what
constitutes adequate comprehension and coherence in a particular reading situation
(van den Broek et al., 2011, 2015; Van den Broek et al., 1995). In the context of
validation, such standards may affect the extent to which readers engage in
elaborative processing (as described in the two-step model of validation; Isberner &
Richter, 2014a; Richter, 2015) or they may (also) affect the passive processes involved
in detecting the inconsistency (as described in the RI-Val model; O’Brien & Cook,
2016a, 2016b). For example, within the RI-Val model (O’Brien & Cook, 2016a, 2016b)
the assumption that readers will move on in the text without engaging in strategic
processing if comprehension is deemed “good enough” to meet their standards is
elegantly operationalized in the form of a coherence threshold. This threshold reflects
a point in time at which processing is deemed ‘good enough’ for the reader to move
on in a text. It is viewed as a point on a continuum of processing that is below the
reader’s conscious awareness and is assumed to be flexible: readers may wait for
more or less information to accrue before moving on in the text depending on
variables associated with the reader, the task and the text (O’Brien & Cook, 2016b).
Such a good-enough processing framework may also aid in explaining the
world knowledge dominance we observed across studies. Within this framework, such
world knowledge dominance may be explained by a knowledge-based processing
stream that is more heuristic in nature (c.f., Kuperberg, 2007; Van Herten et al., 2006).
Thinking about validation in terms of strictly algorithmic and heuristic routes may be
a bridge too far, but it provides an interesting perspective. If knowledge-based
information is processed more heuristically than text-based information this could
explain the observed world knowledge dominance: It may be that text-based and
knowledge-based processes start simultaneously, but that knowledge-based
information is processed in a more superficial and economic way (e.g., readers may
employ a knowledge-based plausibility heuristic rather than computing a fully
specified representation) and, thus, dominates processing (similar to the dual-route
model; Kuperberg, 2007). Computing a heuristic, more economical – but not always
adequate – representation of a text may be the default processing approach or it may
be that readers construct either more detailed algorithmic representations or more
heuristic “good-enough’ representations, for example depending on their standards
of coherence, task demands and/or their available processing capacity.
Thus, the results discussed in this dissertation provide interesting insights
into the cognitive architecture of validation, but the discussion above illustrates that
they are hard to capture in a single type of model. Similar challenges in describing
when and how different sources of information (e.g., syntax, semantics) interact have
been present in the sentence-processing literature for decades. As readers validate
incoming information on various levels of language processing and against various
sources of information, validation research may be crucial in unraveling the complex
interactions between the different sources of information. Combining insights from
validation research on various levels of language processing may not only aid our
understanding of the cognitive architecture of validation, but also our understanding
of the cognitive architecture of the language system in general.

Timing is everything - Early and late processes in validation

Another important issue that has been discussed throughout this dissertation
is the time course of text-based and knowledge-based validation processes.
Theoretical models of validation assume distinct components to validation: a
coherence-detection component and a post-detection processing component (Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). Models such as the RI–Val model (Cook & O’Brien, 2014;
O’Brien & Cook, 2016a, 2016b) focus on the passive, memory-based processes that
are presumed to be involved in the initial detection of an inconsistency. Once
detected, inconsistencies may trigger further processes, for example, processes
aimed at repairing the inconsistency (as described in the two-step model of validation;
Isberner & Richter, 2014a; Richter, 2011; Richter et al., 2009; Schroeder et al., 2008).
The models are not specific with respect to the relation between these components
(e.g., does the detection component finish before possible repair processes, do the
two components overlap, do detection processes interact with post-detection
processes by triggering renewed detection processes?) but generally agree that, as
processing proceeds, the balance gradually shifts from detection to post-detection
(repair) processes. Although all eye movements may be influenced by both
components of validation, early eye-tracking measures such as first-pass reading
times are considered to reflect early processing (e.g., Clifton Jr et al., 2007; Rayner &
Liversedge, 2012) and therefore are relatively close to the detection processes.
Conversely, later eye-tracking measures such as rereads and spill-over effects on
subsequent sentences reflect later processing and are relatively more sensitive to
reader-initiated (including possible repair) processes.
The results of the eye movement study discussed in Chapter 4 show that text-
based and knowledge-based validation processes follow distinct trajectories in the
very early stages of the processing of incoming information. Whereas knowledge-
based validation influences all early processes considered in Chapter 4, validation
against prior text also influences these processes but in qualitatively different ways
depending on the presence or absence of knowledge violations. If the textual
information is incongruent with the preceding text but fits the reader’s background
knowledge, then the reader is likely to reinspect the textual information. In contrast, if
the textual information is incongruent with prior text and also violates the reader’s
background knowledge, then the combined inconsistencies lead to longer reading
time (over and above the already longer time due to the background knowledge
inaccuracy), possibly reflecting more pervasive checking of textual input with
background knowledge.
Interestingly, whereas initial text-based and knowledge-based validation
processes show different processing patterns, later text-based and knowledge-based
validation processes (e.g., regression path duration, re-reading probability, second-
pass reading time, and several measures on the spill-over sentences) seem relatively
similar. Insofar as later processing measures reflect repair processes, these results
suggest that repair processes for both types of inconsistencies involve a similar palette
of actions and sources. This may reflect that the final, adjusted mental representation
of readers must fit with both contextual information and the existing knowledgebase.
Thus, both types of inconsistencies are detected early in processing with
each triggering different processes. In comparison, in later processing the toolbox of
(repair) processes for text-based and knowledge-based inconsistencies seems rather
similar. In all, these results provide compelling evidence that both sources exert their
influence very early in the processing of new text information and they do so in distinct
ways. These conclusions are consistent with current models of validation (e.g., RI-Val;
Cook & O’Brien, 2014; O’Brien & Cook, 2016a, 2016b; The two-step model of
validation; Isberner & Richter, 2014), as they illustrate the processes involved in
coherence monitoring depend on validation against both contextual information and
background knowledge. However, they also expand these models considerably, as
they provide compelling evidence that the source of the incoherence influences
processing and that text-based and knowledge-based validation processes follow
distinct trajectories from a very early stage. Assuming that first-pass measures reflect
early processing, our results suggest that readers detect the source of the
inconsistency very early in processing and adapt their processing accordingly.
However, as it is difficult to pinpoint when processing shifts from detection to
post-detection (repair) processes, our results cannot provide conclusive evidence
whether the differences we observed in the first-pass measures are the result of
differences in early repair processes or whether the passive processes involved in
detecting an inconsistency are also affected by the source of the incoherence. But
that the two sources should be distinguished when describing the time course and
component processes of validation is evident.

But the memory remains… or does it? - Effects of reading inconsistencies on readers’ memory

As validation processes not only impact how readers process a text but also
what they remember (or learn) from a text (Cook & Myers, 2004; Ferretti et al., 2008;
Schroeder et al., 2008), the current dissertation extended prior research by
investigating the effect of inaccuracy and incongruency on memory for the target
information (Chapter 3 and 5). In line with earlier work (e.g., Anderson, 1983; Johnson-
Laird, 1983; Zwaan & Radvansky, 1998), behavioral results of both studies show that
memory for information that is consistent with pre-existing memory representations
(i.e., the readers’ background knowledge or the mental representation of the text) is
stronger than memory for inconsistent information, regardless of reading goal. These
findings support the notion that the (in)coherence of a text affects memory for that
text (Cook & Myers, 2004; Loxterman et al., 1994; McKeown et al., 1992). Also, they
show that the common assumption that allocating more attention to text information
results in a stronger memory representation for that information (e.g., Van den Broek
et al., 1996) applies to accurate and congruent information, but not to inaccurate or
incongruent information. Whereas readers allocate more attention to processing
inaccurate or incongruent information than accurate or congruent information (as
reflected in increased reading times), this results in weaker memory for that
information. Such weaker memory representations could emerge because readers
are unable to retrieve the incongruent or inaccurate information from memory due to
interference of information from the readers’ existing knowledgebase and/or the
mental representation of the text itself. Another possibility is that readers cannot
encode the information in memory during reading, despite allocating additional
attention. Readers strive for coherence; information that is not validated
successfully may therefore not be integrated into the mental representation but
instead may not be encoded at all, or be encoded only as an isolated node in the
representation. From the
current results it is not clear whether these findings solely reflect a retrieval problem
or whether they (also) reflect an encoding problem. Future research could explore
encoding differences, for example by comparing the online processing (e.g., brain
activation, think-aloud responses or eye tracking measures) of items that were
remembered with forgotten items.
In addition, results of the studies in this dissertation illustrate that the general
notion that having relevant knowledge facilitates the integration of newly read
information into the representation and, thus, aids in remembering text information
(e.g., Chiesi et al., 1979; Dochy et al., 1999; Means & Voss, 1985; Recht & Leslie,
1988), applies to texts containing accurate and congruent information, but not to
inaccurate or incongruent texts. These results suggest that the facilitative effects of
background knowledge depend on the consistency of the presented text information
with the readers’ existing knowledge base. Having (accurate) relevant knowledge
seems to facilitate memory for accurate text information and protect the memory
representation against inaccurate information, as memory for inaccurate information
is weaker than memory for accurate information (Chapter 3 and 5). John Locke once
said: “The only defense against the world is a thorough knowledge of it” and the
results of our experimental studies seem to prove him right. This is good news, as
these results suggest that a readers’ existing knowledgebase is not revised
immediately when they encounter false information in a text. Moreover, readers’
existing knowledge on the text topic seems to not only facilitate the acquisition of
new (accurate) information, but also protect the memory representation against false
information -and given the increasing amount of disinformation and fake news on the
internet, that sounds like a good thing.
Unfortunately, there is always a catch. As discussed throughout this
dissertation, not everything we read is accurate, but neither is everything we know.
Many studies have shown that misconceived knowledge (e.g., misconceptions,
inaccurate beliefs, misinformation, myths) is difficult to revise (Chi et al., 1994;
Ecker et al., 2015; Isberner & Richter, 2014a; Özdemir & Clark, 2007; Rapp & Braasch,
2014; Turcotte, 2012; Vosniadou, 1994). In line with the notion that processes that
contribute to adaptive responding may also produce distortions in memory (e.g.,
Bartlett, 1932; Brainerd & Reyna, 2005; Howe, 2011; Howe et al., 2011; Newman &
Lindsay, 2009; Schacter, 1999; Schacter & Dodson, 2001), it may be that the same
memory mechanism that protects our mental representation against inaccurate
information – by not integrating or encoding information that is inconsistent with the
existing knowledgebase – is actually counterproductive when the existing knowledge
of the reader is incorrect (e.g., in the case of misconceptions).
Taken together, the results of the studies described in this dissertation
illustrate the crucial role of readers’ background knowledge in processing textual
information and, more specifically, in validating text information. However, they
suggest a more complex role of background knowledge in learning from texts: Having
relevant background knowledge may (1) facilitate learning of accurate text information
and (2) both protect the memory representation against inaccuracies (if the readers
knowledge is accurate) and impede learning of accurate text information when the
readers’ knowledge is inaccurate. Based on these results, a slight modification of John
Locke’s statement may be in order: it seems that the only defense against the world
is a thorough, accurate knowledge of it.

No two persons ever read the same text - Individual differences and validation

A secondary aim of this dissertation was to explore whether the above
findings were influenced by individual differences, such as the readers’ purpose for
reading (Chapters 2 and 5), their working-memory capacity (Chapters 2, 3 and 5) and topic
novelty (i.e., the degree to which the topic of a text was novel to the reader; Chapter
5).

Reading purpose

“Without goals, and plans to reach them, you are like a ship
that has set sail with no destination.” - Fitzhugh Dodson

As described in the Introduction of this thesis, reading is a purposeful activity.
Readers can have different goals for reading and they process texts differently
depending on these reading goals. In the context of validation, readers’ purpose for
reading may determine how extensively they validate incoming information (Singer,
2019). To examine whether and to what extent reading goals affect validation
processes during reading and the translation of these processes into the offline
memory representation, the studies in this dissertation investigated whether readers’
purpose for reading affects online validation processes (Chapters 2 and 5) and the
offline mental representation (Chapter 5). In line with prior findings (Lorch et al., 1993,
1995; van den Broek et al., 2001; Yeari et al., 2015), we observed general effects of
reading goal on comprehension. Readers were slower when they were instructed to
focus on the text (by thinking of a summary and writing it down) than when they were
instructed to focus on background knowledge (by writing down one thing they already
knew about the topic) (Chapter 2). In addition, reading for study (readers were
instructed to memorize the text information as their memory for the text contents
would be tested – a commonly used high-effort reading purpose) resulted in slower
reading and in better memory than reading for general comprehension (readers were
instructed to read for general comprehension and unaware of the memory test – the
most commonly investigated purpose and the default assumption for reading
comprehension models; Kendeou, Bohn-Gettler, & Fulton, 2011) (Chapter 5).
However, in both studies we found no clear evidence that reading goals influence
online validation processes. In contrast, we did observe distinct reading goal effects
on readers’ offline memory for (in)consistent target information of the texts (Chapter
5). Specifically, the stronger memory observed for participants who read to study
applied when the targets in the reading task contained information that was congruent
with the preceding text but not when they contained incongruent information. In the
latter case, reading for study did not result in a stronger memory representation. I
elaborate on each of these findings below.
Results of both studies showed no evidence that reading goals affect
validation processes that occur while readers are processing a text – neither in the
target sentences nor in the spill-over sentences. These findings are consistent with
the idea that the coherence-detection (or epistemic monitoring) component of
validation, as described in the RI-Val model (Cook & O’Brien, 2014; O’Brien & Cook,
2016a, 2016b) and the two-step model of validation (Isberner & Richter, 2014a;
Richter, 2015; Schroeder et al., 2008), is a passive and routine process that takes
place regardless of people’s goals for reading a text. With regard to the repair
processes posited by validation models – most explicitly described in the two-step
model of validation (Isberner & Richter, 2014a; Richter, 2015; Schroeder et al., 2008)
– the interpretation of our data is less straightforward. As discussed throughout this
dissertation, elaborative processes to repair and resolve an inconsistent section of a
text are under strategic control of the reader and may take place during or after
reading a text (Maier & Richter, 2013; Richter, 2011). Our reading time data seem
incompatible with the idea that reading goals modulate epistemic elaboration during
reading, yet that does not rule out that post-reading repair and reflective processes
vary as a function of reading goal – e.g., these validation processes may become more
intensive when people read to study.
There are several possible explanations for these observations. One
possibility is that we did not observe interactions between reading goals and online
validation because sentence-by-sentence reading times are not sensitive enough to
detect changes in validation processes elicited by reading goal manipulations.
However, sentence-by-sentence reading time measures have been used in other
studies investigating the influence of task demands on online validation processes
(e.g., Williams et al., 2018). Williams et al. (2018) used changes in task demands
(i.e., varying the number of comprehension questions participants had to answer after
reading each text) rather than explicit instructions (as in the current study) to
manipulate readers’ coherence threshold, observing that these subtle changes
affected reading times for the target sentences. Thus, sentence reading times in
principle are sensitive enough to pick up validation effects. The absence of reading
goal effects in Chapters 2 and 5 therefore suggests that variations in global goals for
reading the texts do not affect (or affect less strongly) validation processes in comparison
to properties of the immediate learning context, such as the task demands used by
Williams et al. (2018). Another possibility is that the reading goals used in these
studies did not affect online validation processes because they did not focus on the
critical evaluation of information. Rather, the reading goals used in this dissertation
focused on the memorization of information (Chapter 5) or on the offline product of
reading (write a summary) and the activation of relevant knowledge prior to reading
the text (write down two things you know about the topic), respectively (Chapter 2).
Reading purposes that focus more on critical evaluation of the information (for
examples of such evaluative reading goals see Rapp et al., 2014; Richter, 2003; Wiley
& Voss, 1999) may result in stronger online validation effects.
Considering the online and offline results discussed in Chapter 5 together
yields an interesting contrast: reading for study led to more careful processing of all
target types; it also led to stronger memory for all textual information except for
incongruent information. This information was processed more extensively, just like
the other portions of the texts, but in contrast was not remembered better. Given that
readers did detect all violations – including those involving text incongruencies – as
all violations resulted in increased reading times, this pattern suggests that
incongruency with the text is dealt with differently than inaccuracy with respect to
readers’ background knowledge. Because readers who read for study are more likely to put
effort into building a comprehensive, coherent representation of the text than are
readers with a simple comprehension goal (e.g., Britt et al., 2018; Lorch et al., 1995),
they are more likely to try to resolve incongruencies. Indeed, they took more time to
read the texts than their counterparts who read for comprehension, thus providing
support for this prediction. As noted earlier, this added processing time did not result
in better memory for incongruent target information, suggesting that the effort
generally either did not lead to successful resolution, or attained resolution by adjusting the
representation of the target information to fit the context (i.e., making it congruent) –
thus lowering memory for the precise target sentence.
Taken together, results of the two studies suggest that coherence-detection
is a routine aspect of comprehension that is not affected by reading goals (e.g., Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). The interpretation of the results is less straightforward for the
epistemic elaboration component of validation (Isberner & Richter, 2014a; Richter,
2015; Schroeder et al., 2008); our results are incompatible with the idea that reading
goals modulate the early phases of epistemic elaboration, yet do not rule out that late
epistemic phases (including possibly post-text validation processes) are affected by
reading goal manipulations. Because the reading goals used in Chapter 5 did affect
readers’ memory for target information, the most parsimonious conclusion is that
reading goal influences take place after the initial detection of the inconsistency and
also after initial repair processes activated by epistemic elaboration. For example,
reading goals may influence offline, deliberate epistemic elaboration processes
(Richter, 2011). These processes may range from reasoning about the conflicting
information (plausible reasoning; e.g., Collins & Michalski, 1989) to attempts
to ascertain its validity with the help of external sources (e.g., looking up information
in a textbook or encyclopedia or searching the Internet). Another possibility is that
they affect the processes involved in encoding – or perhaps even consolidating – the
newly read information into memory. Determining precisely which after-detection
processes are influenced by reading goals would be a fruitful direction for further
research.

Working memory

We observed mixed results across the different studies with respect to the
role of working memory in text-based and knowledge-based validation processes. On
the one hand, results suggest no role for working memory in validation, as we
observed no effects of working memory on processing of inaccurate or incongruent
target information (Chapters 4 and 5) and no effects of working memory on later
processing of inconsistent information (Chapter 4). These observations are consistent
with the idea that the coherence-detection (or epistemic monitoring) component of
validation, as described in the RI-Val model (Cook & O’Brien, 2014; O’Brien & Cook,
2016a, 2016b) and the two-step model of validation (Isberner & Richter, 2014a;
Richter, 2015; Schroeder et al., 2008), is a passive and routine process that does not
depend on the amount of resources readers have available for processing.
However, we also observed evidence that suggests that working memory
does play a role in validation (Chapters 2 and 5), but the observed effects differed
across the two studies. In Chapter 2 we observed that having a larger working memory
capacity decreased, but did not eliminate, the inconsistency effect
for the target sentences. Although a smaller inconsistency effect could be taken
to reflect less or even inferior validation, this interpretation seems unlikely in
the current situation, given the assumption that higher-capacity readers are proficient validators.
Hence, a more plausible explanation of the attenuation of the inaccuracy effect for
higher-capacity readers may be that they validated the inaccurate information more
efficiently and/or that they repaired the mental representation more efficiently, for
example because they are more likely to have the required information available,
because they are able to generate inferences that help integrate new information with
previous information (e.g., Calvo, 2001) and because they may be more efficient in
suppressing irrelevant information.
Results of Chapter 5 also support the general notion that working memory
plays a role in validation, but show a different pattern: we observed no effects of
working memory on the processing of targets, but we did observe working memory
effects on spill-over sentences as part of a complex (four-way) interaction: When
reading for comprehension, the spill-over patterns of higher-capacity readers differed
from the spill-over patterns of lower-capacity readers – i.e., arguably more prominent
spill-over effects for higher-capacity readers. When reading for study, however, the
spill-over patterns for higher- and lower-capacity readers showed no differences. A
possible, speculative, explanation for this pattern of results is that when higher-
capacity readers are reading for comprehension, they adopt a more lenient
processing approach (where processing is allowed to spill-over to the next sentence)
than lower-capacity readers. This difference between higher- and lower-capacity
readers disappears when people are reading for study: Reading for study may trigger
a more stringent processing approach for higher-capacity readers that allows more
validation processes to be completed before proceeding to the next sentence.
These observations seem in line with prior studies showing that lower-
capacity readers are more likely to engage in less resource-demanding cognitive
processes and strategies than higher-capacity readers (e.g., breaking texts up into
smaller conceptual units and allocating more processing resources to the integration
of information as it is introduced in text) to avoid a resource overload of their working
memory system (see e.g., Budd et al., 1995; Rayner et al., 2006; Swets et al., 2007;
Whitney et al., 1991). Interpreted as such, these results may have interesting
implications for the coherence threshold of the RI-Val model (Cook & O’Brien, 2014;
O’Brien & Cook, 2016a, 2016b), as they suggest that this threshold varies depending
on readers’ working-memory capacity: Because higher-capacity readers have the
capacity to process more information simultaneously they may set a lower coherence
threshold than lower-capacity readers, resulting in more ‘delayed’ processing. The
observation that spill-over effects become weaker when higher-capacity individuals
read for study also fits this explanation: When reading for study these individuals may
set a higher threshold that allows more validation processes to be completed before
proceeding to the next sentence. In addition, the observation that higher-capacity
readers show decreased inconsistency effects when they encounter knowledge
violations (Chapter 2) would also be compatible with this interpretation: Although we
observed no differences in ‘delayed’ processing for higher- and lower-capacity
readers (i.e., we did not observe increased spill-over effects for higher-capacity
readers), higher-capacity readers seem quicker to reach their coherence threshold
and continue to the next sentence than lower-capacity readers.
Although these accounts provide interesting points for future research, they
cannot provide a perfect explanation for our combined results. Thus, as we do not
observe a clear pattern of results across studies, these studies cannot provide
conclusive evidence on the role of working memory in validation.
There are several possible explanations for the mixed effects we observed
across studies. First, they may be attributed to differences between research
methodologies, the groups that were tested and subtle variations in instructions. For
example, all studies used the same materials but differed in presentation mode
(sentence-by-sentence vs. texts presented in their entirety). It may be that constraints
imposed by presentation mode account for the different patterns of results. During
sentence-by-sentence presentation readers cannot look back to related information
to resolve an inconsistency. Therefore, they may attempt to validate information for
each sentence immediately and meticulously before proceeding in the text (Chung-
Fat-Yim et al., 2017; Koornneef et al., 2019; Koornneef & Van Berkum, 2006) and,
also, may need to rely more on their memory representation to conduct the validation
(Gordon et al., 2006). As a result, sentence-by-sentence reading may elicit a greater
effect of differences in working-memory capacity than reading of a text presented in
its entirety.
Second, the mixed effects may be related to the working memory span
measure we used in these studies. As comprehension requires the combined
processing and storage resources of working memory (e.g., Baddeley, 1992;
Duff & Logie, 2001; Frank & Badre, 2015; Oberauer et al., 2003), the Swanson
Sentence Span Task is designed to tap into these combined resources. Although this
makes it a better predictor of comprehension than measures that tap only the storage
capacity (e.g., word span, digit span) (e.g., Daneman & Merikle, 1996), it also makes
it difficult to distinguish the unique contributions of the processing component and the
temporary storage component. Investigating how these working memory components
are involved in the different phases of validation may require more fine-grained
measures that can distinguish between the respective influences of the specific
components of working memory. For example, future studies could employ a variation
of the sentence span used by Duff and Logie (2001) in which the two components of
the working memory span task were assessed separately before they were combined
in a single task.
Finally, it may be that the mixed effects illustrate that the role of working
memory in validation processes is more complex than initially thought. Including
working-memory capacity as a covariate may be insufficient to gain insight into
how different components of working memory may affect the different phases of
validation. Including direct manipulations of working memory load during processing
(cf. de Bruïne et al., 2021) may be required to determine the conditions under which
working memory does or does not play a role.

Novelty

The results described in Chapter 5 show that the degree to which the topic
of a text was novel to the reader affects online validation processes. With respect to
the influence of novelty we found, as predicted, that the processing difference
between accurate and inaccurate targets – the amount of conflict a reader
experiences – diminished when readers had less knowledge of the topic (i.e., the topic
had greater novelty). This finding supports the premise that validation against
background knowledge indeed takes place routinely, distinguishing accurate and
inaccurate textual information. It also illustrates the importance of topic-relevant or
world knowledge in successful comprehension of texts (Alexander & Jetton, 2000;
Kintsch, 1988; Myers & O’Brien, 1998; Ozuru et al., 2009; Samuelstuen & Bråten,
2005; Shapiro, 2004). Interestingly, novelty interacted with accuracy in that increasing
novelty resulted in slower reading of accurate but not of inaccurate information.
Although one should be cautious in interpreting this subtle interaction, one can speculate
that it signifies that having knowledge about a topic primarily facilitates processing of
textual information that converges with that knowledge, rather than hindering processing
of conflicting information.
Furthermore, the effect of novelty on spill-over sentence reading times was
modulated by reading goal: When the amount of novel information in a text increased,
readers tended to slow down on the post-target sentence when they read for study,
but not when they read for comprehension. These results suggest that readers
engage in deeper or more effortful processing of novel information when the reading
goal requires a deep understanding of the text.
Although the current dissertation can provide limited insight into the general
role of novelty, as it was optimized to investigate validation processes, the results do
provide a fruitful starting point for future investigations. First, the fact that we
consistently observed novelty effects on all sentences and that these effects varied
across sentences illustrates that the novelty measure provides a valid indication of the
amount of knowledge readers have on the topic of the text. However, the novelty
measure used in the current study is relatively coarse. Novelty is a property that is
likely to vary not only per text, but also within a text, or even within a sentence.
Therefore, a deeper understanding of the role of novelty in comprehension requires
the use of more fine-grained novelty scores, or even direct manipulations of novelty,
to investigate its effects on moment-by-moment processing (e.g., whether the effect
of novelty fluctuates over time and whether novelty affects processing of the text as a
whole or only specific parts of the text). Second, the construct of novelty consists
of different components, including a text-related component (texts vary in how much
knowledge individuals generally have on the topic of the text or in how much
knowledge is required to comprehend the text) and a reader-related component
(readers vary in the amount of general knowledge they possess). Disentangling these
components can provide insight into novelty effects on processing.

My theory of everything: a neurocognitive model of validation

When I started this endeavor, my ultimate goal was to develop a
comprehensive neurocognitive approach to validation that not only provides a specific
time course of processing, but also describes how these processes are instantiated
in, and supported by, the organization of the human brain (the neural architecture).
As the results are hard to capture in a single type of model, it is difficult to provide
such a detailed framework but I do want to propose a tentative model describing the
cognitive architecture of text-based and knowledge-based validation. To inspire such
a model I draw on frameworks that focus on processing individual sentences or very
short discourse, as they describe similar issues, but provide more detailed
descriptions of the cognitive architecture of processing, and specifically, of when and
how various sources of information (e.g., syntax, semantics) influence processing
(e.g., Ferreira & Clifton, 1986; Friederici, 2002; Hagoort, 2005; Hagoort et al., 2004;
Hagoort & van Berkum, 2007; Jackendoff, 2007; van Berkum et al., 1999). For
example, Van Herten et al. (2005, 2006) proposed a cognitive architecture describing
the interplay between syntax and semantics in which an algorithmic, syntax-driven
stream works in parallel to a heuristic stream driven by world knowledge (Kolk et al.,
2003; Kolk & Chwilla, 2007; van de Meerendonk et al., 2009, 2010; Van Herten et
al., 2005; Ye & Zhou, 2008). Moreover, in addition to sentence
processing models that focus on providing a detailed cognitive framework, other
sentence processing models focus on the neural correlates underlying these
cognitive processes. For example, the neurocognitive model of sentence processing
by Friederici (2002) proposes a time course of the functional subprocesses involved
in sentence understanding and describes their underlying neural correlates based on
data from multiple studies using various research methods (e.g., ERP, fMRI and PET).
Although I do not claim that these models per se are completely accurate, as there
are many alternative frameworks (e.g., Ferreira et al., 2002; Ferreira & Clifton, 1986;
Ferreira & Patson, 2007; Hagoort, 2003; Karimi & Ferreira, 2016; Kuperberg, 2007),
they exemplify the kind of detail on both cognitive aspects (e.g., Van Herten et al.,
2005, 2006) and neural aspects (e.g., Friederici, 2002) of how sources of information
interface that one would want to achieve for neurocognitive models of text validation and
comprehension as well.
As discussed earlier, providing such a comprehensive and detailed
framework based on the results of this dissertation is a bridge too far – at least for now.
But combining the results of the studies in this dissertation and the findings and
theorizing in the sentence comprehension literature allows me to propose a tentative
basic framework describing the cognitive architecture of text-based and knowledge-
based validation. In this framework text-based and knowledge-based information are
processed in two parallel, interactive, processing streams (possibly with an
asynchronous onset) that are combined into a single integrated mental
representation. The knowledge-based processing stream evaluates the validity of
incoming information based on the readers’ background knowledge stored in long-term
memory, while the text-based stream evaluates the validity of incoming
information based on the information from the mental representation of the text thus
far. Which informational source is more dominant depends on the strength of the
reader’s text-relevant general world knowledge (Cook & O’Brien, 2014) versus the
strength of the contextual information (e.g., Cook & Guéraud, 2005; Myers et al., 2000;
O’Brien & Albrecht, 1991). In line with theoretical models of validation, I assume
distinct components to validation within these two processing streams: a passive
coherence-detection component and a post-detection processing component (Cook
& O’Brien, 2014; Isberner & Richter, 2014a; Richter, 2015; Singer, 2019; van den
Broek & Helder, 2017). During the initial phases of validation (i.e., the detection of the
inconsistency) text-based and knowledge-based processes interface. After the initial
detection of the inconsistency qualitatively different follow-up processes are
employed depending on the type of violation the reader encounters in the text (e.g.,
checking information against the existing knowledgebase in the case of knowledge
violations and reprocessing the preceding text in case of text violations). Whether and
to what extent readers engage in such elaborative processing may depend on their
standards of coherence (i.e., a higher standard of coherence is likely to increase the
chance that a reader will engage in elaborative processing) and the coherence
threshold they set (cf. O’Brien & Cook, 2016b). Later repair processes for both types
of inconsistencies involve a similar palette of actions and sources, as the final, adjusted
mental representation of readers must fit with both contextual information and the
existing knowledgebase. Which later processing strategies readers employ may
depend on the resources they have available (i.e., their working memory capacity).
Of course, a neurocognitive model would not be complete without a
description of the neural architecture (i.e., how these cognitive processes are
instantiated in, and supported by, the organization of the human brain). Therefore,
based on the neuroimaging results discussed in Chapter 3 as well as prior findings, I
want to extend this model by proposing the following neural division of labor between
key brain regions in text-based and knowledge-based validation: The dmPFC seems
mostly oriented towards knowledge-based processing, whereas the right IFG seems
mostly involved in text-based processing. Furthermore, the two streams of information
affect the precuneus and left IFG interactively. Based on the pattern of results
described in Chapter 3, the dmPFC and the left IFG both seem involved in
initial inconsistency detection processes. However, they appear to subserve different
aspects of inconsistency detection, as the dmPFC seems to detect erroneous world-
knowledge information and signal to the hippocampus (HC) that existing knowledge structures should
not be updated, while the left IFG evaluates world knowledge violations in the context
of the text. Later repair processes may involve the precuneus, as it becomes
deactivated either when there is nothing to repair (entirely congruent) or when the
target makes little sense and is perhaps impossible to repair (entirely incongruent).
It goes without saying that this is still a relatively simple neurocognitive model
and that the assumptions discussed in this model need more extensive testing, but I
believe that it provides a fruitful basis for constructing and testing more specific
hypotheses about text-based and knowledge-based validation processes and the
interaction between the two systems.

Whatever begins, also ends – Concluding remarks and future directions

By combining different research methods and theories from different
research domains, the studies in this dissertation have contributed to our
understanding of how readers monitor and validate textual information against two of
the main informational sources – the text itself and their own background knowledge.
Results provide evidence that the processes involved in coherence monitoring
depend on validation against both contextual information and background knowledge.
Moreover, they illustrate that information is validated against these two sources in
dissociable, (partially) interactive, text-based and knowledge-based validation
processes. In addition, the studies described in this dissertation extend prior work by
investigating how inaccurate or incoherent information may affect readers’ memory
for text information. Most work on readers’ memory for text information to date
involved examinations of how people learn valid, accurate information that we hope
they will encode into their knowledgebase (e.g., Bohn-Gettler & Kendeou, 2014; van
den Broek et al., 2001). But as I illustrated throughout this dissertation, people are
not always presented with accurate information; they often encounter ideas and
concepts that are instead inaccurate and invalid, representing misinformation.
Understanding when and how readers are influenced by inaccurate information is an
important first step in understanding the facilitative and interfering (or protective)
effects of background knowledge. Paradigms and models such as those discussed in
this dissertation may provide a fruitful starting point for investigations of people’s
susceptibility to false information, but also for investigations of how inaccurate
knowledge can be revised. Finally, results of the current dissertation provide insight
into the complex interplay between recently acquired knowledge (from the text) and
long-term knowledge (from memory) in constructing meaning from language. As
such, they are relevant for models of sentence and discourse processing and,
moreover, for our understanding of how we construct the meaning of a message in a
broader context (e.g., from spoken or visual input) and how we monitor incoming
information in general.
To conclude, the results discussed in this dissertation may bear relevance
beyond the context of theoretical models of validation and even beyond the context
of theoretical models of comprehension. Voltaire once said: “Let us read and
let us dance; these two amusements will never do any harm to the world.” In 1764 this
statement may have been true, but not anymore. I believe that dancing is still a
(relatively) safe activity, but unfortunately the same cannot be said about reading. In
times marked by unprecedented access to information, misinformation proliferates
on social media faster than COVID-19 spreads, and learning
false information from the web can have dire consequences for decision making (e.g.,
false information on the safety of vaccines may affect one’s decision to vaccinate). So,
reading information online has become a rather risky activity. In view of these
developments, it is crucial that we understand how readers evaluate written materials
and protect their emerging mental representations from being contaminated by
inaccuracies or incongruencies. The studies in this dissertation have contributed to
our understanding of the complex interplay between what we read and what we know
in validation and provide novel input on the pervasive effects of false information on
comprehension and memory. Consequently, they provide a fruitful starting point for
developing further hypotheses on people’s susceptibility to (un)reliable information
and on effectively combatting misinformation. As such, in the long run, they have
(hopefully) contributed to making reading harmless again.

7
Nederlandse Samenvatting
Wat je leest versus wat je weet
Het construeren en valideren van mentale
representaties

To understand a text, but also to understand the world around us, we try to construct a mental picture (i.e., a mental representation) of the content and meaning of a text or, more broadly, of the information we take in, for example when we read a book, watch a movie, or have a conversation (Kintsch, 1988; for a Dutch-language chapter see Helder et al., 2020). Building such a mental representation is a dynamic process: with every new piece of information we encounter, the representation is adjusted and/or updated (e.g., Graesser et al., 1994; Kintsch & van Dijk, 1978; Trabasso et al., 1984; van den Broek et al., 1999). For each new piece of information, we try to make connections with our own background knowledge and with the information we encountered earlier in the text in order to understand the new information. To arrive at a coherent and accurate mental representation, it is essential that every new piece of information is checked against the information encountered earlier (the mental representation so far) and against one's own knowledge before it is incorporated into the mental model. This checking process is called validation (e.g., Isberner & Richter, 2014a; O'Brien & Cook, 2016a; Richter & Rapp, 2014; Singer, 2013, 2019; Singer et al., 1992; Singer & Doering, 2014). Once a piece of information has been found congruent with the mental representation so far and accurate with respect to one's own knowledge, it is incorporated into the mental model and ultimately (if all goes well) stored in long-term memory. Validation processes thus play a role not only in how we understand, but also in what we ultimately remember or learn from the presented information.
The studies in this dissertation focus on validation processes in the context of reading comprehension. With the rise of digital technology, we have quick and easy access to an unprecedented amount of (textual) information. This offers enormous opportunities for acquiring new knowledge, but it requires a vigilant and critical reader: nowadays anyone can distribute information, with the result that the texts we encounter vary not only in quality but also in accuracy and reliability. In view of these developments, it is important that research addresses how readers deal with texts that are not (entirely) accurate, and that we gain insight into how readers validate textual information against different sources of information. The main sources against which readers validate textual information are their own background knowledge (what they know) and the information they have encountered earlier in the text itself (what they have read). Theoretical models of validation provide a global description of the cognitive architecture of validation processes, but they do not specify in detail when and how these two sources of information play a role. Although they assume that readers validate information against both sources, they do not specify how and when readers use these two sources, or whether the two sources influence validation processes in similar ways or exert their influence differently.
To gain more insight into how readers use these two sources of information to validate what they read, this dissertation examines the cognitive processes involved in validating against the preceding text (text-based validation) and validating against background knowledge (knowledge-based validation). Because validation also plays a role in learning from texts, I also examined how these validation processes influence what readers ultimately remember of a text. The experimental studies in this dissertation focus on how these two sources of information influence the validation processes that take place during reading (all chapters) and how these processes, in turn, influence the final mental representation as it is stored in long-term memory after reading (Chapters 4 and 5). In these studies, I examined the nature and timing of these text-based and knowledge-based validation processes to determine whether a distinction can be made between the influence of information from the text itself and information from the reader's background knowledge on validation processes, and whether we should assign different roles to these two sources in the cognitive architecture of validation.

Individual differences between readers and validation

Because every reader is different, the validation processes evoked by a particular text may differ from reader to reader or, for the same reader, from one reading situation to another. In this dissertation I therefore also aim to map out the extent to which validation is influenced by individual differences between readers in the goal with which they read, how much working memory capacity they have, and how (un)familiar they are with the information in the texts.

Reading goals

People almost never read a text just for the sake of reading. They usually have a specific reading goal: they may read for pleasure, to study for school, to follow instructions, and so on. It seems plausible that how readers process texts depends on the goal with which they read. Reading in preparation for an exam or test (also called reading for study), for example, intuitively requires different cognitive processes and strategies than reading a magazine or novel for pleasure. Many empirical findings support the idea that reading goals influence the cognitive processes and strategies readers use while reading, but also that they influence what readers remember of a text (Britt et al., 2018; Linderholm et al., 2004; Narvaez et al., 1999; van den Broek et al., 2001). For example, research shows that readers who are asked to read a text as if studying for a test (i.e., reading for study) read more slowly (e.g., Linderholm & van den Broek, 2002; Lorch et al., 1993; Narvaez et al., 1999; van den Broek et al., 2001; Yeari et al., 2015) and remember more of the text (e.g., Lorch et al., 1993, 1995; van den Broek et al., 2001; Yeari et al., 2015) than readers who are asked to read the text as if reading for pleasure. Such findings suggest that readers indeed adapt their cognitive processes and strategies to their reading goal, and that these adaptations also influence what they learn or remember from the text (Britt et al., 2018; Linderholm et al., 2004; van den Broek et al., 1999, 2001). In the context of validation, the reading goal may determine how thoroughly readers validate incoming information (Singer, 2019). Previous research has shown, for example, that readers' sensitivity to inaccurate or implausible information differs depending on the goal with which they read (e.g., Rapp et al., 2014). In this dissertation I therefore investigate whether and to what extent reading goals influence validation processes during reading (Chapters 2 and 5) and what readers remember of the text after reading (Chapter 5).

Working memory capacity

Working memory is an important component of human memory in which information is processed and temporarily stored (e.g., Baddeley, 1998, 2000; Baddeley & Hitch, 1974; Cowan, 2017; Daneman & Carpenter, 1980; Just & Carpenter, 1992). Working memory has a limited capacity (e.g., Miller, 1956; Simon, 1974) that, during reading, must be divided between processing the newly read text and keeping active the relevant information from the preceding text and the reader's background knowledge (Graesser et al., 1997; Kintsch, 1998; van den Broek, 2010). In the context of validation, working memory limits the amount of information available to the validation process (e.g., Hannon & Daneman, 2001; Singer, 2006), which may interfere with the ability to detect and resolve incongruencies or inaccuracies in a text during reading. To validate new information successfully, both the newly read information and the relevant information from the preceding text and/or from the reader's background knowledge must be available in memory. Whether and how much of this information can be active simultaneously may depend on the reader's working memory capacity. To gain more insight into the role of working memory in validation, this dissertation examines whether and to what extent validation is influenced by differences between readers in their working memory capacity.

(Un)familiarity of the information

Having relevant knowledge plays an important role in many comprehension processes that take place during reading (e.g., Cain et al., 2001; Richter, 2015; Singer, 2013, 2019). In general, having relevant knowledge helps readers process and remember the information they read (e.g., Alexander et al., 1994; Royer et al., 1996; Schneider et al., 1989; Voss & Bisanz, 1985). To obtain an indication of how much knowledge readers have about the text topics, Chapter 5 includes how (un)familiar the information in the texts was to the readers as an indirect indication of the readers' knowledge of those topics. The general idea is that readers who encounter more new, and thus unfamiliar, information in a text probably also have less relevant knowledge about the topic than readers who encounter little new information. Readers who are familiar with the topic of the text (and thus possess relevant, accurate background knowledge) can probably process and remember the textual information more easily. But when a text contains inaccurate information, readers who are more familiar with the information will experience a stronger conflict between the (inaccurate) information they read and their own (accurate) knowledge. Conversely, when readers are less familiar with the information, and thus have little or no background knowledge about the topic, they will experience less or no conflict between the information they read and their own knowledge. When readers lack sufficient knowledge, they will probably not be able to recognize inaccuracies in a text.
My research methods

All studies in this dissertation use a contradiction paradigm. In this paradigm, participants read different versions of short texts about historical topics that vary systematically in accuracy and congruency (see Table 7.1 for an example text). There were four versions of each text, and participants read one version of each text. Every text contained a so-called target sentence (sentence 8) describing a historical event. This target sentence contained information that was either accurate (e.g., the Statue of Liberty arrived in the United States) or inaccurate (e.g., the Statue of Liberty did not arrive in the United States) with respect to the reader's background knowledge. The target sentence was preceded by several context sentences (sentences 3-7). These context sentences could be congruent or incongruent with the target sentence: they contained information that made either the accurate or the inaccurate version of the target sentence more likely (e.g., a context describing that the construction of the Statue of Liberty went entirely according to plan vs. a context describing problems during construction). For each text there were thus four possible combinations of target sentence and context, depending on which version of the target sentence (accurate/inaccurate) and which version of the context (biasing the accurate or the inaccurate target) was presented. Each participant read one version of each text. To examine knowledge-based validation, we compared the processing of accurate and inaccurate target sentences. To examine text-based validation, we compared the processing of target sentences that were congruent with the context and targets that were incongruent with the context. The congruency of the target with the preceding text thus depended on the combination of target and context (e.g., for the accurate target sentence, the context biasing the accurate target is congruent and the context biasing the inaccurate target is incongruent; for the inaccurate target sentence, the reverse holds). To examine differences between text-based and knowledge-based validation, we compared the processing of accurate versus inaccurate targets (the effect of the accuracy of the target information) with the processing of congruent versus incongruent targets (the effect of the congruency of the target information).
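The logic of the design, in which congruency is not manipulated directly but follows from the combination of target accuracy and context bias, can be sketched as follows. This is a minimal illustration; the variable and function names are my own and do not come from the study materials:

```python
from itertools import product

def congruency(target_accurate: bool, context_biases_accurate: bool) -> str:
    """A target is congruent with the preceding text when the context
    biases exactly the version of the target that is shown."""
    if target_accurate == context_biases_accurate:
        return "congruent"
    return "incongruent"

# The four versions of each text: 2 (target accuracy) x 2 (context bias).
versions = [
    {
        "target": "accurate" if t else "inaccurate",
        "context_bias": "accurate" if c else "inaccurate",
        "congruency": congruency(t, c),
    }
    for t, c in product([True, False], repeat=2)
]
```

Note how the same context sentence counts as congruent for one target version and incongruent for the other, which is what allows accuracy and congruency effects to be separated.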
To obtain a complete and detailed picture of text-based and knowledge-based validation, the studies in this dissertation use the same paradigm but different research methods. I used methods that provide insight into the cognitive processes that take place during reading, including sentence-by-sentence reading (Chapters 2 and 5), functional magnetic resonance imaging (fMRI; Chapter 3), and eye tracking (Chapter 4), as well as methods that provide insight into the final mental representation as it is stored in long-term memory after reading (Chapters 4 and 5).
In Chapters 2 and 5, I used sentence-by-sentence reading to examine the unique influence of information from the text and from the reader's background knowledge on the cognitive processes that take place during reading. With this method, participants read the texts sentence by sentence on a computer screen while their reading time per sentence is measured. These reading times indicate how much difficulty readers have integrating the information they have just read into the mental representation during reading and, in the context of this dissertation, how much difficulty readers have processing inaccurate or incongruent information in a text (e.g., Albrecht & O'Brien, 1993; Cook et al., 1998b; Rapp et al., 2001).

Table 7.1. Example text with the four versions of the text

Each text crosses two factors: the accuracy of the target sentence with respect to the reader's knowledge (target accurate vs. target inaccurate) and the congruency of the target with the preceding context (congruent vs. incongruent). All four versions share the same introduction and closing.

[Introduction, all versions]
In 1865, the Frenchman Laboulaye wanted to honor the democratic developments in the United States. Together with the artist Auguste Bartholdi, he designed a gigantic statue.

[Context biasing the accurate target]
Building their Statue of Liberty would require a great deal of fundraising. They organized a public lottery to collect donations for the statue. American businessmen contributed financially to build the statue's pedestal. Although they fell somewhat behind schedule, the statue was nevertheless completed. The statue's pedestal was also finished and ready for assembly.

[Context biasing the inaccurate target]
Building their Statue of Liberty would require an enormous amount of fundraising. Raising the capital for the statue proved to be a major challenge. Because of financial problems, France could not afford to give the statue. Fundraising went slowly and the plans quickly fell behind schedule. Because of these problems, the completion of the statue seemed doomed to fail.

[Accurate target]
The Statue of Liberty was presented to the United States.

[Inaccurate target]
The Statue of Liberty was not presented to the United States.

[Closing, all versions]
The intended location for the statue was a harbor in the New York harbor area. This location was the first stop for many immigrants coming to the United States.

In the congruent versions, the accurate target follows the context biasing the accurate target, or the inaccurate target follows the context biasing the inaccurate target. In the incongruent versions, each target follows the opposite context.

In Chapter 3, I examined the neural architecture of text-based and knowledge-based validation using fMRI, a non-invasive method that measures brain activity during reading. In this study, I compared brain activity while reading accurate target sentences with brain activity while reading inaccurate target sentences to gain insight into the brain regions involved in knowledge-based validation. In addition, I compared brain activity while reading congruent and incongruent target sentences to gain insight into the regions involved in text-based validation. Comparing these conditions (accurate/inaccurate and congruent/incongruent) reveals which brain regions are more active in one condition than in the other and, thus, which brain regions are (more) involved in text-based and/or knowledge-based validation. These insights into the brain regions involved in text-based and knowledge-based validation can be used to make more specific claims about the cognitive architecture of the two processes. Specifically, the brain regions involved can indicate whether or not separate roles should be assigned to the two sources in the cognitive architecture of validation. If the two validation processes engage the same brain regions, this suggests a single validation process that does not distinguish between the two sources; if (partly) different regions are involved in the two validation processes, this suggests that (partly) different cognitive mechanisms, engaging different brain regions, underlie text-based and knowledge-based validation (e.g., Frank & Badre, 2015; Hagoort, 2017).
In Chapter 4, I used eye tracking to follow readers' eye movements during reading and gain insight into the timing of validation processes. This method can provide a detailed picture of when and how information from the preceding text and from the reader's background knowledge influences the different subprocesses of validation: detection processes (involved in detecting inconsistencies) and elaboration processes (involved in resolving or repairing inconsistencies). Measuring eye movements makes it possible to distinguish between measures of early processing (e.g., the initial reading of a sentence and how often readers look back during this initial reading, known as first-pass reading times and first-pass regressions) and measures of later processing (e.g., the extent to which readers reread a sentence or look back within the sentence while reading). If we assume that detection processes occur relatively early in the processing of information and that elaboration processes occur later (an inconsistency must first be detected before any elaboration can take place), these early and later measures can give us insight into these two subprocesses and into when and how the different sources influence them.
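The mapping between eye-movement measures and validation subprocesses that this reasoning relies on can be made explicit in a small sketch. The measure names are the standard terms from the eye-tracking literature; the grouping into stages is the assumption described above, not output of the study's own analysis code:

```python
# Assumed mapping: early measures index detection, later measures
# index elaboration/repair (the working assumption of Chapter 4).
EARLY_MEASURES = {"first_pass_reading_time", "first_pass_regressions_out"}
LATE_MEASURES = {"regression_path_duration", "rereading_probability",
                 "rereading_time"}

def processing_stage(measure: str) -> str:
    """Return the validation subprocess a measure is assumed to index."""
    if measure in EARLY_MEASURES:
        return "detection (early)"
    if measure in LATE_MEASURES:
        return "elaboration/repair (late)"
    raise ValueError(f"unknown measure: {measure}")
```

Under this assumption, a condition difference on first-pass reading time is read as a difference in detection, while a difference in rereading time is read as a difference in repair.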

In Chapters 3 and 5, I used a memory task to examine the effects of reading inaccuracies or textual incongruencies on what readers remember of a text. In these two studies, participants' memory was tested one day after they had read the texts. In this test, participants saw sentences containing information that either did or did not match the information they had read. For example, if they had read during the experiment that the Statue of Liberty was delivered to the US, then in the memory task they saw either that the Statue of Liberty was delivered to the US or that it was not delivered to the US. For each sentence, participants had to indicate whether or not they recognized the information from the previous day and how confident they were in their answer (see Chapters 3 and 5 for a more detailed description of the task). By examining how well participants score on the memory task for accurate versus inaccurate and congruent versus incongruent sentences, we obtain an indication of how difficult it is to remember inaccurate or incongruent information and whether participants remember this information better or worse than accurate and congruent information.
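The scoring logic of this recognition task can be sketched as follows: a response counts as correct when the participant says "old" exactly when the test sentence matches the version read the day before. This is a minimal illustration with hypothetical field names, not the study's actual data format:

```python
def score_trial(presented_version: str, tested_version: str,
                said_old: bool) -> bool:
    """Correct if 'old' was said exactly when the tested sentence
    matched the version presented the day before."""
    matches = presented_version == tested_version
    return said_old == matches

# Two example trials: one match correctly recognized, one mismatch
# incorrectly endorsed as 'old' (a false alarm).
trials = [
    {"presented": "delivered", "tested": "delivered", "said_old": True},
    {"presented": "delivered", "tested": "not delivered", "said_old": True},
]
accuracy = sum(score_trial(t["presented"], t["tested"], t["said_old"])
               for t in trials) / len(trials)
```

Comparing such accuracy scores across the accuracy and congruency conditions is what yields the memory effects reported in Chapters 3 and 5.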
By combining measures of readers' 'offline' memory for the text (i.e., what readers ultimately remember of the text) with 'online' measures, we can gain insight into how the patterns observed during reading translate into the final (offline) memory representation and, thereby, into what readers remember or learn from a text. Each of the research methods described above naturally has its own strengths and weaknesses, but together they are stronger than each alone. Precisely by combining the insights these different methods offer, we can paint a more complete and detailed picture of the timing and the neurocognitive architecture of text-based and knowledge-based validation. In the following sections, I discuss the main findings of this dissertation and sketch a picture of the timing and the neurocognitive architecture of text-based and knowledge-based validation based on the combined findings.

The main findings of this dissertation
The results of the study described in Chapter 2 show that both inaccuracies and incongruencies in the target sentence affect the reading process (both result in longer reading times on the target sentence), but only for inaccuracies does this effect carry over to the sentence after the target sentence (i.e., the spill-over sentence). For inaccuracies we also see longer reading times there, suggesting that inaccuracies have a stronger and/or longer-lasting effect on the reading process than incongruencies. We found a general effect of reading goal on the reading process (i.e., participants read more slowly when given an instruction that focused them more on the text itself than when given an instruction that focused them more on their background knowledge), but no effect of reading goal on the validation process. Regarding the role of working memory in validation, we found that working memory capacity influenced the processing of inaccuracies but not the processing of incongruencies. Specifically, readers with a larger capacity showed a smaller effect of inaccuracies on the target sentences than readers with a smaller capacity.
In line with the behavioral results of Chapter 2, the behavioral results of the fMRI study described in Chapter 3 show that both types of inconsistencies result in longer reading times on the target sentences, but here we found no spill-over effects for inaccuracies. Regarding the effects of inaccuracies and incongruencies on what readers remember of a text, the results of Chapter 3 show that information that is inconsistent with the preceding text and/or with the reader's background knowledge is remembered less well than information that is consistent with these sources. In analyzing the fMRI data, we examined activation in specific brain regions, including the inferior frontal gyrus (IFG), the dorsomedial prefrontal cortex (dmPFC), and the precuneus, using region-of-interest (ROI) analyses. These analyses showed that both shared and unique brain regions appear to be involved in text-based and knowledge-based validation. Specifically, the dmPFC appears to be involved mainly in knowledge-based validation, the right IFG mainly in text-based validation, and the precuneus and left IFG appear to be involved in both. Additional analyses of activation across the whole brain (i.e., whole-brain analyses) support these results. These analyses reveal a large network of brain regions involved in knowledge-based validation (including the mPFC and the right hippocampus), a smaller network involved in text-based validation (including the anterior cingulate cortex, or ACC), and several regions where the two sources are processed together (including the left IFG, the right angular gyrus, and the bilateral medial frontal gyrus). Finally, we found that knowledge-based validation engages a larger network of brain regions than text-based validation.
The eye-tracking results described in Chapter 4 show that both incongruencies and inaccuracies disrupt the reading process, but in different ways. These differences appear to emerge early in processing: whereas inaccuracies consistently disrupt early processing, as reflected in a longer initial reading of the sentence (first-pass reading times) and more looking back during this initial reading (first-pass regressions), the way incongruencies disrupt early processing depends on whether the target sentence is accurate or inaccurate with respect to the reader's knowledge. When the information in the target sentence was both incongruent with the preceding text and inaccurate, this resulted in an even slower initial or first-pass reading, but when the target sentence was incongruent yet contained accurate information, this resulted in more first-pass regressions, that is, more looking back during the initial reading. In later measures of processing (e.g., how long readers look back, how likely they were to reread the sentence, and how long they spent rereading it), we see similar patterns for incongruencies and inaccuracies. When a target sentence is inaccurate or incongruent, readers are more inclined to reread the sentence, look back longer in the text, and spend longer rereading the sentence than with an accurate or congruent target sentence. Only for inaccuracies did we find stronger and longer-lasting effects that were also visible on the spill-over sentence: readers took longer to process spill-over sentences following inaccurate target sentences than spill-over sentences following accurate target sentences. We found no effects of readers' working memory capacity on the processing of inaccuracies and incongruencies, but we did find a general effect on the measures of later processing: readers with larger working memory capacity were less inclined to look back in the text or reread the target sentence, and if they did reread the target sentence, they did so faster.
Results described in Chapter 5 show that reading goals have a general effect on the reading process: participants read more slowly when their goal was to remember the content of the texts well (i.e., reading for study) than when they simply had to read the texts attentively (i.e., reading for general comprehension). But we found no effects of reading goal on the processing of inaccurate or incongruent information. We did find effects on readers' memory for inaccurate and incongruent target information (i.e., the information in the target sentences). Specifically, participants who read for study remembered congruent target information better than participants who read for general comprehension, but when the information was incongruent with the preceding text, reading for study did not result in better memory. Regarding the effects of inaccuracies and incongruencies on processing, we again found longer reading times for both types of inconsistencies, and for inaccuracies this effect carried over to the spill-over sentences. For incongruencies we also saw spill-over effects, but only under specific circumstances, namely when readers with larger working memory capacity read for general comprehension and the target sentence contained accurate information. Readers also remembered inaccurate or incongruent information less well than accurate or congruent information, regardless of reading goal.
Regarding the (un)familiarity of the information, we found that readers read target sentences about more unfamiliar topics more slowly, provided the information was accurate. We also found that readers read more slowly as the texts contained more unfamiliar information, but only when reading for study, not when reading for general comprehension. We found only a small effect of working memory capacity under very specific circumstances: when reading for general comprehension, the patterns of spill-over effects differed between readers with a larger capacity and readers with a smaller capacity.

The cognitive architecture of validation

The main aim of this dissertation was to examine whether validating against background knowledge (i.e., knowledge-based validation) and validating against the preceding text (i.e., text-based validation) engage a common (neuro)cognitive mechanism or (partly) different (neuro)cognitive mechanisms, and whether separate roles should be assigned to the two sources of information in the cognitive architecture of validation. A first step in answering this question is to establish that both sources influence validation processes. In all studies in this dissertation, we consistently see effects of inaccuracies with respect to background knowledge (i.e., knowledge violations) and incongruencies with the preceding text (i.e., text violations) on measures of processes that take place during reading and on readers' memory, suggesting that incoming information is indeed validated against the two sources. That the two sources exert influence thus seems clear. But the results show more: there appears to be a neurocognitive division of labor for validation processes. Some brain regions appear to be involved mainly in text-based or knowledge-based validation, while other brain regions appear to be involved in both (Chapter 3). Inaccuracies and incongruencies also appear to evoke different processing strategies (Chapter 4). These findings extend existing models of validation (e.g., the RI-Val model; Cook & O'Brien, 2014; O'Brien & Cook, 2016a, 2016b; and the two-step model of validation; Isberner & Richter, 2014). They provide evidence that information is not only routinely validated against these two sources, but that information is validated against background knowledge and the preceding text through different validation processes that engage (partly) different neurocognitive mechanisms. These results suggest that a distinction can be made between these two sources within validation and that, in describing the cognitive architecture of validation, separate roles should be assigned to textual information and background knowledge.
Regarding the timing of text-based and knowledge-based validation, the results of the eye-tracking study discussed in Chapter 4 suggest that inaccuracies and incongruencies are detected early in processing, but that the two types of inconsistencies evoke different processing routes. In the measures of later processing we see a different pattern: there, inaccuracies and incongruencies appear to evoke comparable (repair) processes. Taken together, these results may mean that initially different processes are deployed, but that the final mental representation must nevertheless be consistent with both the information from the preceding text and the reader's background knowledge. They suggest that both sources exert influence early in the processing of new textual information, and that they do so in different ways. These conclusions are consistent with current theoretical models of validation (e.g., Cook & O'Brien, 2014; O'Brien & Cook, 2016a, 2016b; Isberner & Richter, 2014) in that they show that readers validate incoming information against both sources. But based on the results of this dissertation we can go a step further, because the results show that the processing of text and knowledge violations diverges early on. If we assume that the measures of initial processing used in this dissertation indeed capture early processing, the results suggest that readers detect the source of an inconsistency early in processing and adjust their processing accordingly. But exactly which processes are affected during this early processing is difficult to pinpoint, because it is hard to identify a specific moment at which detection processes transition into elaboration or repair processes. Based on our results, we therefore cannot say with certainty whether the differences we see in the measures of early processing result solely from differences in early elaboration processes, or whether the passive processes involved in detecting an inconsistency are also influenced by the source of the
inconsistentie. Maar dat er een onderscheid gemaakt moet worden tussen de twee
bronnen bij het beschrijven van de timing en de componentprocessen van validatie is
duidelijk.
The results of this dissertation offer interesting insights into the cognitive architecture of validation, but they also raise many new questions. They are compatible with several 'types' of cognitive models and therefore do not yield a single, unambiguous picture of that architecture. Based on the combined results, however, we can speculate about a possible cognitive architecture of validation. The results suggest that both sources of information are processed in parallel within a (partly) interactive architecture, in which textual information and background knowledge interact early in the processing of new information and jointly shape the validation process. In line with existing theoretical models (e.g., the memory-based processing view; Cook et al., 1998; Gerrig & McKoon, 1998; Myers & O'Brien, 1998; O'Brien & Myers, 1999; Rizzella & O'Brien, 2002; van den Broek et al., 1999; van den Broek & Helder, 2017), they suggest that all information activated by fast, automatic memory processes is available for validation at an early stage of processing. But they also suggest that the two sources interact only when readers encounter information that conflicts with their knowledge, which fits the general idea that readers do not always use all the information available from the two sources, but rely primarily on their background knowledge.

Effects of reading inconsistencies on readers' memory

In line with previous research (e.g., Anderson, 1983; Johnson-Laird, 1983; Zwaan & Radvansky, 1998), the results from Chapters 3 and 5 show that readers remember information that is consistent with their existing memory representations (i.e., their own background knowledge or their mental representation of the text) better than inconsistent information, regardless of the goal with which they read. These findings support the idea that the (in)coherence of a text affects what readers remember of it (Cook & Myers, 2004; Loxterman et al., 1994; McKeown et al., 1992). They also show that the common assumption that textual information readers spend more time on is remembered better (e.g., van den Broek et al., 1996) holds only for accurate and congruent information, not for inaccurate and incongruent information. Although the reading times show that readers spend more time processing inaccurate or incongruent information than accurate or congruent information, this results in poorer memory for that information. Based on the current results it is difficult to say whether the poorer memory for inaccurate/incongruent information stems from problems retrieving the information from memory or from problems encoding it (or both). It could be, for example, that the reader's (correct) background knowledge and/or the information in the mental representation of the text itself makes it harder to retrieve the inaccurate or incongruent information from memory. Alternatively, information that cannot be successfully validated may not be (well) integrated into the mental representation, but instead be stored as an isolated piece of information, or not stored at all, making it harder for readers to recall that information later.
The results of this dissertation also show that the common assumption that having relevant knowledge helps readers integrate what they read into their mental representation and, thereby, remember it (e.g., Chiesi et al., 1979; Dochy et al., 1999; Means & Voss, 1985; Recht & Leslie, 1988) applies only to texts containing accurate and congruent information, not to texts containing inaccurate or incongruent information. The results described in this dissertation suggest that the facilitating effect of knowledge depends on whether the information in the text matches the reader's existing knowledge. We found that readers remember inaccurate information less well than accurate information (see Chapters 3 and 5). In itself this is good news, as it implies that readers do not immediately revise their existing knowledge when they read information that is incorrect. Moreover, it suggests that a reader's existing knowledge not only facilitates the acquisition of new (accurate) information, but also protects memory against inaccurate information; given the amount of misinformation and fake news available online today, this seems like good news.
Unfortunately, there is a catch. As discussed earlier in this dissertation, not everything we read is correct, but unfortunately neither is everything we know. Everyone brings along knowledge or ideas shaped by earlier (reading) experiences. Many studies have shown that incorrect knowledge (e.g., misconceptions, misinformation, myths) is hard to change (Chi et al., 1994; Ecker et al., 2015; Isberner & Richter, 2014a; Özdemir & Clark, 2007; Rapp & Braasch, 2014; Turcotte, 2012; Vosniadou, 1994). The results of this dissertation may help explain this difficulty. The very memory mechanism that protects our mental representation against inaccurate information, by not integrating or storing information that conflicts with our own knowledge, may be counterproductive precisely when that knowledge itself is incorrect (e.g., in the case of misconceptions). In that case, the reader's own knowledge interferes with revising that knowledge, making incorrect knowledge harder to change.
Taken together, the results in this dissertation illustrate the crucial role of readers' knowledge in processing textual information and, more specifically, in validating it. The results suggest, however, that knowledge does not simply facilitate learning from texts; its role is more complex: having relevant knowledge can (1) facilitate learning from new (accurate) texts and (2) both protect the memory representation against inaccuracies in a text (when the reader's knowledge is correct) and hinder the learning of accurate textual information when the reader's knowledge is incorrect.

Individual differences and validation

A secondary aim of this dissertation was to examine whether the findings above were influenced by individual differences between readers in the goal with which they read (Chapters 2 and 5), their working memory capacity (Chapters 2, 3, and 5), and their (un)familiarity with the textual information (Chapter 5).
Reading goal

In line with previous studies (Lorch et al., 1993, 1995; van den Broek et al., 2001; Yeari et al., 2015), we observed general effects of reading goal on processing and memory: for example, participants read more slowly and remembered the textual information better when they were told to memorize the content of the texts (i.e., reading for study) than when they were told simply to read the texts attentively (i.e., reading for general comprehension) (Chapter 5). In neither study, however, did we find evidence that reading goals influence validation processes during reading. We did find clear reading-goal effects on what readers remembered of the inaccurate and incongruent information in the texts (Chapter 5). Specifically, participants with a study goal remembered the information from the texts better when it was congruent with the preceding text, but not when it was incongruent with the preceding text.
The results of both studies suggest that validation processes during reading are not influenced by the reader's goal. They do not rule out, however, that later validation processes (including any validation processes that take place after reading) are influenced by reading goal. Because the study goal used in Chapter 5 did affect what readers remembered of the texts, the most plausible conclusion seems to be that reading-goal effects arise after the inconsistency has been detected and after initial elaboration or repair processes have been carried out. The reader's goal may influence repair or elaboration processes that occur after reading (Richter, 2011), such as reflecting on the inconsistent information (e.g., Collins & Michalski, 1989) or consulting other sources (e.g., a textbook or the internet) to establish the accuracy of the information read earlier. Alternatively, reading goals may not influence validation processes during or after reading at all, but instead affect the processes involved in remembering the information.

Working memory

Regarding the role of working memory in text-based and knowledge-based validation, the studies yielded mixed results. On the one hand, the results suggest that working memory plays no role in validation, as we found no effects of working memory on the processing of inaccurate or incongruent information (Chapters 4 and 5). On the other hand, we also found some evidence that working memory does play a role in validation (Chapters 2 and 5), although these two studies do not show a consistent pattern. Because the results across studies do not form a clear pattern, it is difficult to arrive at a satisfactory explanation for the combined findings. This dissertation therefore cannot provide conclusive evidence about the role of working memory in validation.
There are several possible explanations for the mixed effects we observed. They may result from differences between the research methods used or the groups studied. Another possibility is that they relate to the measure of working memory we used. Reading comprehension draws on both the processing component and the storage component of working memory (e.g., Baddeley, 1992; Duff & Logie, 2001; Frank & Badre, 2015; Oberauer et al., 2003), and our task therefore measures these components combined. Although this makes the task a better predictor of comprehension, it also makes it difficult to isolate the unique contribution of each component. Tasks that can separate the two components may provide more insight into the role of working memory in validation. Finally, the mixed effects may indicate that the role of working memory in validation is more complex than assumed. Including working memory as a covariate may not suffice to reveal how its different components influence the different phases of validation; future studies may need to manipulate working memory load directly during processing (cf. de Bruïne et al., 2021) to determine when and under which circumstances working memory does or does not play a role in validation.

(Un)familiarity of information

Although the study described in Chapter 5 was only a first step in charting the role of the (un)familiarity of textual information in validation, the results offer interesting insights. First, we found that readers who are less familiar with the information in a text experience less conflict when they read inaccurate information, suggesting that the degree to which the information in the text is (un)familiar to the reader affects validation processes during reading. This finding shows that readers' background knowledge directly influences validation processes as they unfold during reading. The results also suggest that readers invest more effort in understanding less familiar information when the reading goal requires a deeper understanding of the text (for example, reading for study versus reading for general comprehension). These results offer fruitful starting points for future research on the role of (un)familiarity of information in validation, and in reading comprehension more generally, which are discussed in more detail in Chapter 6.
Conclusions

By combining research methods and theories from different research domains, the studies in this dissertation have contributed to our understanding of how readers monitor and validate what they read against two of the most important sources of information: the text itself and the reader's background knowledge. The results show that readers validate textual information against information from the text itself and against their own knowledge. Moreover, based on the results described in this dissertation, validation against these two sources appears to take place in (partly) interactive yet distinguishable cognitive processes. The studies in this dissertation deepen existing knowledge about validation by examining how inaccurate or incoherent information in a text affects readers' memory for that textual information. Much research on what readers remember from texts has focused on coherent, accurate texts, information we hope readers will store in memory (e.g., Bohn-Gettler & Kendeou, 2014; van den Broek et al., 2001). But people are not always presented with accurate information: they are often confronted with misinformation, ideas or concepts that are incorrect. Understanding how readers are affected by inaccurate information is an important first step toward understanding how and under which circumstances readers deploy their (relevant) background knowledge to protect their mental representation against inaccuracies, as well as to facilitate learning from (accurate) texts. The paradigms and models discussed in this dissertation can be a fruitful starting point for further research on people's susceptibility to inaccurate information, and on how readers' own incorrect knowledge can be revised. Finally, the results provide insight into the complex interaction between recently acquired knowledge (from the text itself) and knowledge from the reader's long-term memory in making sense of language. These results are relevant to models of sentence processing and text comprehension, but also to our understanding of how we make sense of messages and how we monitor information more broadly (e.g., from spoken or visual input).
References
Albrecht, J. E., & O’Brien, E. J. (1993). Updating a Mental Model: Maintaining Both Local and Global
Coherence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19(5),
1061–1070.
Alexander, P. A., & Jetton, T. L. (2000). Learning from text: A multidimensional and developmental
perspective.
Alexander, P. A., Kulikowich, J. M., & Schulze, S. K. (1994). How subject-matter knowledge affects
recall and interest. American Educational Research Journal, 31(2), 313–337.
Anderson, J. R. (1983). A spreading activation theory of memory. Journal of Verbal Learning and
Verbal Behavior, 22(3), 261–295.
Anderson, R. C., Pichert, J. W., & Shirey, L. L. (1983). Effects of the reader’s schema at different points
in time. Journal of Educational Psychology, 75(2), 271.
Andrews-Hanna, J. R., Reidler, J. S., Sepulcre, J., Poulin, R., & Buckner, R. L. (2010). Functional-
anatomic fractionation of the brain’s default network. Neuron, 65(4), 550–562.
Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge
University Press.
Baddeley, A. D. (1992). Working memory. Science, 255(5044), 556–559.
Baddeley, A. D. (1998). Recent developments in working memory. Current Opinion in Neurobiology,
8(2), 234–238.
Baddeley, A. D. (2000). Short-term and working memory. The Oxford Handbook of Memory, 4, 77–
92.
Baddeley, A. D., & Hitch, G. (1974). Working memory. In G. H. Bower (Ed.), Psychology of Learning and Motivation (Vol. 8, pp. 47–89). Academic Press.
Baillet, S. D., & Keenan, J. M. (1986). The role of encoding and retrieval processes in the recall of text.
Discourse Processes, 9(3), 247–268.
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory
hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3).
https://doi.org/10.1016/j.jml.2012.11.001
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. In Remembering:
A study in experimental and social psychology. Cambridge University Press.
Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting Linear Mixed-Effects Models Using
lme4. Journal of Statistical Software, 67(1), 1–48.
Bates, E., & MacWhinney, B. (1989). Functionalism and the competition model. The Crosslinguistic
Study of Sentence Processing, 3, 73–112.
Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. In Trends in Cognitive
Sciences (2011/10/18, Vol. 15, Issue 11, pp. 527–536).
Bohn-Gettler, C. M., & Kendeou, P. (2014). The Interplay of Reader Goals, Working Memory, and Text
Structure During Reading. Contemporary Educational Psychology, 39.
Boland, J. E. (2004). Linking eye movements to sentence comprehension in reading and listening.
The On-Line Study of Sentence Comprehension: Eyetracking, ERP, and Beyond, 51–76.
Brainerd, C. J., & Reyna, V. F. (2005). The science of false memory. Oxford University Press.

Bråten, I., & Samuelstuen, M. S. (2004). Does the influence of reading purpose on reports of strategic
text processing depend on students’ topic knowledge? Journal of Educational Psychology,
96(2), 324.
Britt, M. A., Rouet, J. F., & Durik, A. (2018). Literacy beyond text comprehension: A theory of purposeful reading.
Budd, D., Whitney, P., & Turley, K. J. (1995). Individual differences in working memory strategies for
reading expository text. Memory & Cognition, 23(6), 735–748.
Cain, K. (1999). Ways of reading: How knowledge and use of strategies are related to reading
comprehension. British Journal of Developmental Psychology, 17(2), 293–309.
Cain, K., Oakhill, J. V, Barnes, M. A., & Bryant, P. E. (2001). Comprehension skill, inference-making
ability, and their relation to knowledge. Memory & Cognition, 29(6), 850–859.
Calvo, M. G. (2001). Working memory and inferences: Evidence from eye fixations during reading.
Memory, 9(4–6), 365–381.
Carpenter, P. A., & Just, M. A. (1975). Sentence comprehension: a psycholinguistic processing model
of verification. Psychological Review, 82(1), 45.
Cerdán, R., & Vidal-Abarca, E. (2008). The effects of tasks on integrating information from multiple
documents. Journal of Educational Psychology, 100(1), 209.
Chi, M. T. H., Slotta, J. D., & De Leeuw, N. (1994). From things to processes: A theory of conceptual
change for learning science concepts. Learning and Instruction, 4(1), 27–43.
Chiesi, H. L., Spilich, G. J., & Voss, J. F. (1979). Acquisition of domain-related information in relation
to high and low domain knowledge. Journal of Verbal Learning and Verbal Behavior, 18(3),
257–273.
Christianson, K., Hollingworth, A., Halliwell, J. F., & Ferreira, F. (2001). Thematic roles assigned along
the garden path linger. Cognitive Psychology, 42(4), 368–407.
Chung-Fat-Yim, A., Peterson, J. B., & Mar, R. A. (2017). Validating self-paced sentence-by-sentence
reading: story comprehension, recall, and narrative transportation. Reading and Writing, 30(4),
857–869.
Clifton Jr, C., Staub, A., & Rayner, K. (2007). Eye movements in reading words and sentences. In Eye
Movements (pp. 341–371). Elsevier.
Colbert-Getz, J., & Cook, A. E. (2013). Revisiting effects of contextual strength on the subordinate bias
effect: Evidence from eye movements. Memory & Cognition, 41(8), 1172–1184.
Collins, A., & Michalski, R. (1989). The logic of plausible reasoning: A core theory. Cognitive Science,
13(1), 1–49.
Cook, A. E. (2014). Processing anomalous anaphors. Memory & Cognition, 42(7), 1171–1185.
Cook, A. E., & Guéraud, S. (2005). What Have We Been Missing? The Role of General World
Knowledge in Discourse Processing. Discourse Processes, 39(2–3), 265–278.
Cook, A. E., Halleran, J. G., & O’Brien, E. J. (1998). What is readily available during reading? A memory-based view of text processing. Discourse Processes, 26(2–3), 109–129.
Cook, A. E., & Myers, J. L. (2004). Processing discourse roles in scripted narratives: The influences of
context and world knowledge. Journal of Memory and Language, 50(3), 268–288.
Cook, A. E., & O’Brien, E. J. (2014). Knowledge activation, integration, and validation during narrative
text comprehension. Discourse Processes, 51(1–2), 26–49.

Cook, A. E., & Wei, W. (2017). Using Eye Movements to Study Reading Processes: Methodological
Considerations. In W. Christopher, S. Frank, & M. Bradley (Eds.), Eye-Tracking Technology
Applications in Educational Research (pp. 27–47). IGI Global.
Cowan, N. (1988). Evolving conceptions of memory storage, selective attention, and their mutual
constraints within the human information-processing system. Psychological Bulletin, 104(2),
163.
Cowan, N. (2017). The many faces of working memory and short-term storage. Psychonomic Bulletin
& Review, 24(4), 1158–1170.
Creer, S. D., Cook, A. E., & O’Brien, E. J. (2018). Competing activation during fantasy text
comprehension. Scientific Studies of Reading, 22(4), 308–320.
Daneman, M., & Carpenter, P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19(4), 450–466.
Daneman, M., & Merikle, P. M. (1996). Working memory and language comprehension: A meta-
analysis. Psychonomic Bulletin and Review, 3(4), 422–433.
de Bruïne, A., Jolles, D., & van den Broek, P. (2021). Minding the load or loading the mind: The effect
of manipulating working memory on coherence monitoring. Journal of Memory and Language,
118, 104212.
Deno, S. L. (1985). Curriculum-Based Measurement: The Emerging Alternative. Exceptional Children,
52(3), 219–232.
Dochy, F., Segers, M., & Buehl, M. M. (1999). The relation between assessment practices and
outcomes of studies: The case of research on prior knowledge. Review of Educational
Research, 69(2), 145–186.
Duff, S. C., & Logie, R. H. (2001). Processing and Storage in Working Memory Span. The Quarterly
Journal of Experimental Psychology Section A, 54(1), 31–48.
Duffy, S. A., Morris, R. K., & Rayner, K. (1988). Lexical ambiguity and fixation times in reading. Journal
of Memory and Language, 27(4), 429–446.
Ecker, U. K. H., Lewandowsky, S., Cheung, C. S. C., & Maybery, M. T. (2015). He did it! She did it! No,
she did not! Multiple causal explanations and the continued influence of misinformation. Journal
of Memory and Language, 85, 101–115.
Egidi, G., & Caramazza, A. (2013). Cortical systems for local and global integration in discourse
comprehension. NeuroImage, 71, 59–74.
Egidi, G., & Caramazza, A. (2016). Integration processes compared: Cortical differences for
consistency evaluation and passive comprehension in local and global coherence. Journal of
Cognitive Neuroscience, 28(10), 1568–1583.
Eslick, A. N., Fazio, L. K., & Marsh, E. J. (2011). Ironic effects of drawing attention to story errors.
Memory, 19(2), 184–191.
Espin, C. A., & Foegen, A. (1996). Validity of General Outcome Measures for Predicting Secondary
Students Performance on Content-Area Tasks. Exceptional Children, 62(6), 497–514.
Fazio, L. K., Barber, S. J., Rajaram, S., Ornstein, P. A., & Marsh, E. J. (2013). Creating illusions of
knowledge: Learning errors that contradict prior knowledge. Journal of Experimental
Psychology: General, 142(1), 1–5.
Fazio, L. K., & Marsh, E. J. (2008a). Older, not younger, children learn more false facts from stories.
Cognition, 106(2), 1081–1089.

Fazio, L. K., & Marsh, E. J. (2008b). Slowing presentation speed increases illusions of knowledge.
Psychonomic Bulletin & Review, 15(1), 180–185.
Ferreira, F. (2003). The misinterpretation of noncanonical sentences. Cognitive Psychology, 47(2),
164–203.
Ferreira, F., Bailey, K. G. D. D., & Ferraro, V. (2002). Good-enough representations in language
comprehension. Current Directions in Psychological Science, 11(1), 11–15.
Ferreira, F., & Clifton, C. (1986). The independence of syntactic processing. Journal of Memory and
Language, 25(3), 348–368.
Ferreira, F., & Patson, N. D. (2007). The ‘good enough’approach to language comprehension.
Language and Linguistics Compass, 1(1‐2), 71–83.
Ferretti, T. R., Singer, M., & Patterson, C. (2008). Electrophysiological evidence for the time-course of
verifying text ideas. Cognition, 108(3), 881–888.
Ferstl, E. C., Neumann, J., Bogler, C., & Von Cramon, D. Y. (2008). The extended language network:
A meta-analysis of neuroimaging studies on text comprehension. Human Brain Mapping, 29(5),
581–593.
Ferstl, E. C., Rinck, M., & Von Cramon, D. Y. (2005). Emotional and temporal aspects of situation model
processing during text comprehension: An event-related fMRI study. Journal of Cognitive
Neuroscience, 17(5), 724–739.
Ferstl, E. C., & Von Cramon, D. Y. (2001). The role of coherence and cohesion in text comprehension:
An event-related fMRI study. Cognitive Brain Research, 11(3), 325–340.
Ferstl, E. C., & Von Cramon, D. Y. (2002). What does the frontomedian cortex contribute to language
processing: Coherence or theory of mind? NeuroImage, 17(3), 1599–1612.
Fletcher, C. R., & Bloom, C. P. (1988). Causal reasoning in the comprehension of simple narrative
texts. Journal of Memory and Language, 27(3), 235–244.
Fodor, J. A. (1983). The Modularity of Mind (Vol. 94). MIT Press.
Fox, M. D., & Raichle, M. E. (2007). Spontaneous fluctuations in brain activity observed with functional
magnetic resonance imaging. Nature Reviews Neuroscience, 8(9), 700–711.
Frank, M. J., & Badre, D. (2015). How cognitive theory guides neuroscience. Cognition, 135, 14–20.
Frazier, L. (1987). Sentence processing: A tutorial review. In M. Coltheart (Ed.), Attention and
performance 12: The psychology of reading (pp. 559–586). Lawrence Erlbaum Associates, Inc.
Frazier, L., & Fodor, J. D. (1978). The sausage machine: A new two-stage parsing model. Cognition,
6(4), 291–325.
Frazier, L., & Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye
movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14(2),
178–210.
Friederici, A. D. (2002). Towards a neural basis of auditory sentence processing. Trends in Cognitive
Sciences, 6(2), 78–84.
Fuchs, L. S., & Fuchs, D. (1992). Identifying a measure for monitoring student reading progress. School Psychology Review, 21(1), 45–58.
Garnham, A. (2001). Mental models and the interpretation of anaphora. In Mental models and the
interpretation of anaphora. Psychology Press.
Garnsey, S. M., Pearlmutter, N. J., Myers, E., & Lotocky, M. A. (1997). The contributions of verb bias
and plausibility to the comprehension of temporarily ambiguous sentences. Journal of Memory
and Language, 37(1), 58–93.

Garrod, S., & Terras, M. (2000). The contribution of lexical and situational knowledge to resolving
discourse roles: Bonding and resolution. Journal of Memory and Language, 42(4), 526–544.
Gerrig, R. J., & McKoon, G. (1998). The readiness is all: The functionality of memory‐based text
processing. Discourse Processes, 26(2–3), 67–86.
Gerrig, R. J., O’Brien, E. J., & O’Brien, E. J. (2005). The Scope of Memory-Based Processing.
Discourse Processes, 39(2–3), 225–242.
Gerrig, R. J., & Prentice, D. A. (1991). The Representation of Fictional Information. Psychological
Science, 2(5), 336–340.
Gibson, E. (1998). Linguistic complexity: Locality of syntactic dependencies. Cognition, 68(1), 1–76.
Gilead, M., Sela, M., & Maril, A. (2018). That’s My Truth: Evidence for Involuntary Opinion
Confirmation. Social Psychological and Personality Science, 10(3), 393–401.
Goetz, E. T., Schallert, D. L., Reynolds, R. E., & Radin, D. I. (1983). Reading in perspective: What real
cops and pretend burglars look for in a story. Journal of Educational Psychology, 75(4), 500.
Goldman, S. R., & Varma, S. (1995). CAPping the construction-integration model of discourse
comprehension. Discourse Comprehension: Essays in Honor of Walter Kintsch, 337–358.
Gordon, P. C., Hendrick, R., Johnson, M., & Lee, Y. (2006). Similarity-based interference during
language comprehension: Evidence from eye tracking during reading. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 32(6), 1304.
Graesser, A. C., Millis, K. K., & Zwaan, R. A. (1997). Discourse comprehension. Annual Review of
Psychology, 48(1), 163–189.
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing Inferences During Narrative Text
Comprehension. Psychological Review, 101(3), 371–395.
Guerin, S. A., & Miller, M. B. (2011). Parietal cortex tracks the amount of information retrieved even
when it is not the basis of a memory decision. NeuroImage, 55(2), 801–807.
Hagoort, P. (2003). How the brain solves the binding problem for language: a neurocomputational
model of syntactic processing. NeuroImage, 20, S18–S29.
Hagoort, P. (2005). On Broca, brain, and binding: a new framework. Trends in Cognitive Sciences,
9(9), 416–423.
Hagoort, P. (2017). The core and beyond in the language-ready brain. Neuroscience & Biobehavioral
Reviews, 81, 194–204.
Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004). Integration of word meaning and
world knowledge in language comprehension. Science, 304(5669), 438.
Hagoort, P., & van Berkum, J. (2007). Beyond the sentence given. Philosophical Transactions of the
Royal Society B: Biological Sciences, 362(1481), 801–811.
Hakala, C. M., & O’Brien, E. J. (1995). Strategies for resolving coherence breaks in reading. Discourse
Processes, 20(2), 167–185.
Hannon, B. (2012). Understanding the relative contributions of lower‐level word processes, higher‐
level processes, and working memory to reading comprehension performance in proficient
adult readers. Reading Research Quarterly, 47(2), 125–152.
Hannon, B., & Daneman, M. (2001). Susceptibility to semantic illusions: An individual-differences
perspective. Memory and Cognition, 29(3), 449–461.
Hasson, U., & Giora, R. (2007). Experimental methods for studying the mental representation of
language. Methods in Cognitive Linguistics, 302–322.
Hasson, U., Nusbaum, H. C., & Small, S. L. (2007). Brain networks subserving the extraction of
sentence information and its encoding to memory. Cerebral Cortex, 17(12), 2899–2913.
Heeger, D. J., & Ress, D. (2002). What does fMRI tell us about neuronal activity? Nature Reviews
Neuroscience, 3(2), 142–151.
Helder, A., van den Broek, P., Karlsson, J., & Van Leijenhorst, L. (2017). Neural Correlates of
Coherence-Break Detection During Reading of Narratives. Scientific Studies of Reading, 21(6),
463–479.
Henderson, J. M., Choi, W., Lowder, M. W., & Ferreira, F. (2016). Language structure in the brain: A
fixation-related fMRI study of syntactic surprisal in reading. Neuroimage, 132, 293–300.
Hess, D. J., Foss, D. J., & Carroll, P. (1995). Effects of global and local context on lexical processing
during language comprehension. Journal of Experimental Psychology: General, 124(1), 62.
Howe, M. L. (2011). The adaptive nature of memory and its illusions. Current Directions in
Psychological Science, 20(5), 312–315.
Howe, M. L., Garner, S. R., Charlesworth, M., & Knott, L. (2011). A brighter side to memory illusions:
False memories prime children’s and adults’ insight-based problem solving. Journal of
Experimental Child Psychology, 108(2), 383–393.
Hyönä, J., & Lorch, R. (2004). Effects of topic headings on text processing: Evidence from adult
readers’ eye fixation patterns. Learning and Instruction, 14(2), 131–152.
Hyönä, J., Lorch, R. F., Jr., & Kaakinen, J. K. (2002). Individual differences in reading to summarize expository text: Evidence from eye fixation patterns. Journal of Educational Psychology, 94(1), 44–55.
Hyönä, J., Lorch, R. F., & Rinck, M. (2003). Chapter 16 - Eye Movement Measures to Study Global
Text Processing. In J Hyönä, R. Radach, & H. Deubel (Eds.), The Mind’s Eye (pp. 313–334).
North-Holland.
Isberner, M.-B., & Richter, T. (2013). Can readers ignore implausibility? Evidence for nonstrategic
monitoring of event-based plausibility in language comprehension. Acta Psychologica, 142(1),
15–22.
Isberner, M.-B., & Richter, T. (2014a). Comprehension and Validation: Separable Stages of Information
Processing? A Case for Epistemic Monitoring in Language Comprehension. In D. N. Rapp & J.
L. G. Braasch (Eds.), Processing inaccurate information: Theoretical and applied perspectives
from cognitive science and the educational sciences (pp. 245–276). MIT Press.
Isberner, M.-B., & Richter, T. (2014b). Does Validation During Language Comprehension Depend on
an Evaluative Mindset? Discourse Processes.
Jackendoff, R. (1999). Parallel constraint-based generative theories of language. Trends in Cognitive
Sciences, 3(10), 393–400.
Jackendoff, R. (2007). A Parallel Architecture perspective on language processing. Brain Research,
1146, 2–22.
Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press.
Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved Optimization for the Robust and
Accurate Linear Registration and Motion Correction of Brain Images. NeuroImage, 17(2), 825–
841.
Johnson-Laird, P. N. (1983). Mental Models: Towards a Cognitive Science of Language, Inference,
and Consciousness. Harvard University Press.
Judd, C. M., Westfall, J., & Kenny, D. A. (2017). Experiments with More Than One Random Factor:
Designs, Analytic Models, and Statistical Power. Annual Review of Psychology, 68(1), 601–625.
Just, M. A., & Carpenter, P. A. (1992). A capacity theory of comprehension: Individual differences in
working memory. Psychological Review, 99(1), 122–149.
Kaakinen, J. K., & Hyönä, J. (2005). Perspective effects on expository text comprehension: Evidence
from think-aloud protocols, eyetracking, and recall. Discourse Processes, 40(3), 239–257.
Kaakinen, J. K., & Hyönä, J. (2010). Task effects on eye movements during reading. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 36(6), 1561.
Kaakinen, J. K., Hyönä, J., & Keenan, J. M. (2002). Perspective effects on online text processing.
Discourse Processes, 33(2), 159–173.
Kaiser, E. (2013). Experimental paradigms in psycholinguistics. Research Methods in Linguistics, 135–
168.
Kaiser, E., & Trueswell, J. C. (2004). The role of discourse context in the processing of a flexible word-
order language. Cognition, 94(2), 113–147.
Karimi, H., & Ferreira, F. (2016). Good-enough linguistic representations and online cognitive
equilibrium in language processing. Quarterly Journal of Experimental Psychology, 69(5),
1013–1040.
Kendeou, P. (2014). Validation and Comprehension: An Integrated Overview. Discourse Processes,
51(1–2), 189–200.
Kendeou, P., Bohn-Gettler, C., & Fulton, S. (2011). What we have been missing: The role of goals in
reading comprehension. In Text relevance and learning from text. (pp. 375–394).
Kendeou, P., & van den Broek, P. (2007). The effects of prior knowledge and text structure on
comprehension processes during reading of scientific texts. Memory & Cognition, 35(7), 1567–
1577.
Kintsch, W. (1988). The Role of Knowledge in Discourse Comprehension: A Construction-Integration
Model. Psychological Review, 95(2), 163–182.
Kintsch, W. (1998). Comprehension: A Paradigm for Cognition. Cambridge University Press.
Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production.
Psychological Review, 85(5), 363–394. https://doi.org/10.1037/0033-295X.85.5.363
Koechlin, E., Ody, C., & Kouneiher, F. (2003). The architecture of cognitive control in the human
prefrontal cortex. Science, 302(5648), 1181–1185.
Kolk, H. J., & Chwilla, D. (2007). Late positivities in unusual situations. Brain and Language, 100(3), 257–261.
Kolk, H. J., Chwilla, D. J., Van Herten, M., & Oor, P. J. W. (2003). Structure and limited capacity in
verbal working memory: A study with event-related potentials. Brain and Language, 85(1), 1–
36.
Koornneef, A., Kraal, A., & Danel, M. (2019). Beginning readers might benefit from digital texts
presented in a sentence-by-sentence fashion. But why? Computers in Human Behavior, 92,
328–343.
Koornneef, A., & Van Berkum, J. A. (2006). On the use of verb-based implicit causality in sentence
comprehension: Evidence from self-paced reading and eye tracking. Journal of Memory and
Language, 54(4), 445–465.
Kormi-Nouri, R., Nilsson, L. G., & Ohta, N. (2005). The novelty effect: Support for the novelty-encoding
hypothesis. Scandinavian Journal of Psychology, 46(2), 133–143.
Kuperberg, G. R. (2007). Neural mechanisms of language comprehension: Challenges to syntax. Brain
Research, 1146, 23–49.
Kuperberg, G. R., Lakshmanan, B. M., Caplan, D. N., & Holcomb, P. J. (2006). Making sense of
discourse: an fMRI study of causal inferencing across sentences. NeuroImage, 33(1), 343–361.
Leinenger, M., & Rayner, K. (2013). Eye movements while reading biased homographs: Effects of prior
encounter and biasing context on reducing the subordinate bias effect. Journal of Cognitive
Psychology, 25(6), 665–681.
Linden, D. E. J., Prvulovic, D., Formisano, E., Völlinger, M., Zanella, F. E., Goebel, R., & Dierks, T.
(1999). The functional neuroanatomy of target detection: an fMRI study of visual and auditory
oddball tasks. Cerebral Cortex, 9(8), 815–823.
Linderholm, T., & van den Broek, P. (2002). The effects of reading purpose and working memory
capacity on the processing of expository text. Journal of Educational Psychology, 94(4), 778–
784.
Linderholm, T., Virtue, S., Tzeng, Y., & van den Broek, P. (2004). Fluctuations in the availability of
information during reading: Capturing cognitive processes using the landscape model.
Discourse Processes, 37(2), 165–186.
Long, D. L., & Lea, R. B. (2005). Have We Been Searching for Meaning in All the Wrong Places?
Defining the “Search After Meaning” Principle in Comprehension. Discourse Processes, 39(2–
3), 279–298.
Lorch, R. F., Klusewitz, M. A., & Lorch, E. P. (1995). Distinctions among reading situations. In R. F. Lorch, Jr. & E. J. O’Brien (Eds.), Sources of coherence in reading (pp. 375–398). Erlbaum.
Lorch, R. F., Lorch, E. P., & Klusewitz, M. A. (1993). College students’ conditional knowledge about
reading. Journal of Educational Psychology, 85(2), 239.
Loxterman, J. A., Beck, I. L., & McKeown, M. G. (1994). The effects of thinking aloud during reading
on students’ comprehension of more or less coherent text. Reading Research Quarterly, 353–
367.
Ma, W., & Zhang, S. (2020). Research Methods in Linguistics. In Australian Journal of Linguistics (Vol.
40, Issue 2). Cambridge University Press.
MacDonald, M. C., Pearlmutter, N. J., & Seidenberg, M. S. (1994). The lexical nature of syntactic
ambiguity resolution. Psychological Review, 101(4), 676.
Magliano, J. P., Trabasso, T., & Graesser, A. C. (1999). Strategic processing during comprehension.
Journal of Educational Psychology, 91(4), 615.
Maguire, E. A., Frith, C. D., & Morris, R. G. M. (1999). The functional neuroanatomy of comprehension and memory: the importance of prior knowledge. Brain, 122(10), 1839–1850.
Maier, J., & Richter, T. (2013). Text Belief Consistency Effects in the Comprehension of Multiple Texts
With Conflicting Information. Cognition and Instruction, 31(2), 151–175.
Marsh, E. J., & Fazio, L. K. (2006). Learning errors from fiction: Difficulties in reducing reliance on
fictional stories. Memory & Cognition, 34(5), 1140–1149.
Marsh, E. J., Meade, M. L., & Roediger Iii, H. L. (2003). Learning facts from fiction. Journal of Memory
and Language, 49(4), 519–536.
Marslen-Wilson, W. (1973). Linguistic structure and speech shadowing at very short latencies. Nature,
244(5417), 522.
Marslen-Wilson, W. (1975). Sentence Perception as an Interactive Parallel Process. Science, 189(4198), 226–228. https://doi.org/10.1126/science.189.4198.226
Marslen-Wilson, W. (1989). Access and integration: Projecting sound onto meaning.
Marslen-Wilson, W., & Tyler, L. K. (1980). The temporal structure of spoken language understanding.
Cognition, 8(1), 1–71.
Mason, R. A., & Just, M. A. (2006). Neuroimaging contributions to the understanding of discourse
processes. In M. J. Traxler & M. A. Gernsbacher (Eds.), Handbook of Psycholinguistics (pp.
765–799). Elsevier.
McCrudden, M. T., Magliano, J. P., & Schraw, G. (2010). Exploring how relevance instructions affect
personal reading intentions, reading goals and text processing: A mixed methods study.
Contemporary Educational Psychology, 35(4), 229–241.
McCrudden, M. T., & Schraw, G. (2007). Relevance and goal-focusing in text processing. Educational
Psychology Review, 19(2), 113–139.
McCrudden, M. T., Schraw, G., & Kambe, G. (2005). The Effect of Relevance Instructions on Reading
Time and Learning. Journal of Educational Psychology, 97(1), 88.
McCrudden, M. T., Magliano, J. P., & Schraw, G. (2011). Text relevance and learning from text. In Text
relevance and learning from text. IAP.
McKeown, M. G., Beck, I. L., Sinatra, G. M., & Loxterman, J. A. (1992). The contribution of prior
knowledge and coherent text to comprehension. Reading Research Quarterly, 79–93.
McKoon, G., & Ratcliff, R. (1992). Inference during reading. Psychological Review, 99(3), 440–466.
McKoon, G., & Ratcliff, R. (1995). How should implicit memory phenomena be modeled? Journal of
Experimental Psychology: Learning, Memory, and Cognition, 21(3), 777–784.
McKoon, G., & Ratcliff, R. (1998). Memory-based language processing: Psycholinguistic research in
the 1990s. Annual Review of Psychology, 49(1), 25–42.
McNamara, D. S., & Magliano, J. P. (2009). Towards a comprehensive model of comprehension. In B.
Ross (Ed.), The psychology of learning and motivation (Vol. 51, pp. 284–297). Elsevier.
McNamara, D. S., & McDaniel, M. A. (2004). Suppressing Irrelevant Information: Knowledge Activation
or Inhibition? Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2),
465–482.
Means, M. L., & Voss, J. F. (1985). Star Wars: A developmental study of expert and novice knowledge
structures. Journal of Memory and Language, 24(6), 746–757.
Mellet, E., Bricogne, S., Crivello, F., Mazoyer, B., Denis, M., & Tzourio-Mazoyer, N. (2002). Neural Basis
of Mental Scanning of a Topographic Representation Built from a Text. Cerebral Cortex, 12(12),
1322–1330.
Menenti, L., Petersson, K. M., Scheeringa, R., & Hagoort, P. (2009). When elephants fly: Differential
sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of
Cognitive Neuroscience, 21(12), 2358–2368.
Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for
processing information. Psychological Review, 63(2), 81–97.
Millis, K. K., & Just, M. A. (1994). The Influence of Connectives on Sentence Comprehension. Journal
of Memory and Language, 33(1), 128–147.
Moss, J., & Schunn, C. D. (2015). Comprehension through explanation as the interaction of the brain’s coherence and cognitive control networks. Frontiers in Human Neuroscience, 9, 562.
Muthén, L. K., & Muthén, B. O. (1998). Mplus: The comprehensive modeling program for applied
researchers: User’s guide. Muthén & Muthén.
Myers, J. L., Cook, A. E., Kambe, G., Mason, R. A., & O’Brien, E. J. (2000). Semantic and Episodic
Effects on Bridging Inferences. Discourse Processes, 29(3), 179–199.
Myers, J. L., & O’Brien, E. J. (1998). Accessing the discourse representation during reading. Discourse
Processes, 26(2–3), 131–157.
Myers, J. L., O’Brien, E. J., Albrecht, J. E., & Mason, R. A. (1994). Maintaining global coherence during
reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4), 876.
Narvaez, D., van den Broek, P., & Ruiz, A. B. (1999). The influence of reading purpose on inference
generation and comprehension in reading. Journal of Educational Psychology, 91(3), 488–496.
Newman, E. J., & Lindsay, D. S. (2009). False memories: What the hell are they for? Applied Cognitive
Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition,
23(8), 1105–1121.
Nieuwland, M. S., & Kuperberg, G. R. (2008). When the truth is not too hard to handle: An event-
related potential study on the pragmatics of negation. Psychological Science, 19(12), 1213–
1218.
Nieuwland, M. S., & Van Berkum, J. J. A. (2006). When Peanuts Fall in Love: N400 Evidence for the
Power of Discourse. Journal of Cognitive Neuroscience, 18(7), 1098–1111.
O’Brien, E. J. (1987). Antecedent search processes and the structure of text. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 13(2), 278–290.
O’Brien, E. J., & Albrecht, J. E. (1991). The role of context in accessing antecedents in text. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 17(1), 94–102.
O’Brien, E. J., & Albrecht, J. E. (1992). Comprehension strategies in the development of a mental
model. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(4), 777–784.
O’Brien, E. J., Albrecht, J. E., Hakala, C. M., & Rizzella, M. L. (1995). Activation and suppression of
antecedents during reinstatement. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 21(3), 626–634.
O’Brien, E. J., & Cook, A. E. (2016a). Separating the Activation, Integration, and Validation Components of Reading. In B. H. Ross (Ed.), Psychology of Learning and Motivation (Vol. 65, pp. 249–276). Academic Press.
O’Brien, E. J., & Cook, A. E. (2016b). Coherence Threshold and the Continuity of Processing: The RI-
Val Model of Comprehension. Discourse Processes, 53(5–6), 326–338.
O’Brien, E. J., Cook, A. E., & Guéraud, S. (2010). Accessibility of outdated information. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 36(4), 979–991.
O’Brien, E. J., Cook, A. E., & Lorch, R. F., Jr. (Eds.). (2015). Inferences during Reading. Cambridge University Press.
O’Brien, E. J., Cook, A. E., & Peracchi, K. A. (2004). Updating Situation Models: Reply to Zwaan and
Madden (2004). Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(1),
289–291.
O’Brien, E. J., & Myers, J. L. (1999). Text comprehension: A view from the bottom up. In S. R. Goldman,
A. C. Graesser, & P. W. van den Broek (Eds.), Narrative Comprehension, Causality, and
Coherence: Essays in Honor of Tom Trabasso (pp. 35–53). Erlbaum.
O’Brien, E. J., Plewes, P. S., & Albrecht, J. E. (1990). Antecedent retrieval processes. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 16(2), 241–249.
O’Brien, E. J., Rizzella, M. L., Albrecht, J. E., & Halleran, J. G. (1998). Updating a situation model: a
memory-based text processing view. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 24(5), 1200.
Oberauer, K., Süß, H.-M., Wilhelm, O., & Wittman, W. W. (2003). The multiple faces of working
memory: Storage, processing, supervision, and coordination. Intelligence, 31(2), 167–193.
Özdemir, G., & Clark, D. B. (2007). An overview of conceptual change theories. Eurasia Journal of
Mathematics, Science and Technology Education, 3(4), 351–361.
Ozuru, Y., Dempsey, K., & McNamara, D. S. (2009). Prior knowledge, reading skill, and text cohesion
in the comprehension of science texts. Learning and Instruction, 19(3), 228–242.
Perfetti, C. A., & Frishkoff, G. A. (2008). The neural bases of text and discourse processing. Handbook
of the Neuroscience of Language, 165–174.
Pichert, J. W., & Anderson, R. C. (1977). Taking different perspectives on a story. Journal of
Educational Psychology, 69(4), 309.
Poldrack, R. A., Mumford, J. A., & Nichols, T. E. (2011). Handbook of functional MRI data analysis.
Cambridge University Press.
Preacher, K. J., Zyphur, M. J., & Zhang, Z. (2010). A general multilevel SEM framework for assessing
multilevel mediation. Psychological Methods, 15(3), 209.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., & Shulman, G. L. (2001).
A default mode of brain function. Proceedings of the National Academy of Sciences, 98(2),
676–682.
Rapp, D. N. (2008). How do readers handle incorrect information during reading? Memory and
Cognition, 36(3), 688–701.
Rapp, D. N., & Braasch, J. L. G. (2014). Processing Inaccurate Information: Theoretical and Applied
Perspectives from Cognitive Science and the Educational Sciences. MIT Press.
Rapp, D. N., Gerrig, R. J., & Prentice, D. A. (2001). Readers’ trait-based models of characters in
narrative comprehension. Journal of Memory and Language, 45(4), 737–750.
Rapp, D. N., Hinze, S. R., Kohlhepp, K., & Ryskin, R. A. (2014). Reducing reliance on inaccurate
information. Memory and Cognition, 42(1), 11–26.
Rayner, K. (1997). Understanding Eye Movements in Reading. Scientific Studies of Reading, 1(4),
317–339.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research.
Psychological Bulletin, 124(3), 372–422.
Rayner, K., Carlson, M., & Frazier, L. (1983). The interaction of syntax and semantics during sentence
processing: Eye movements in the analysis of semantically biased sentences. Journal of Verbal
Learning and Verbal Behavior, 22(3), 358–374.
Rayner, K., Chace, K. H., Slattery, T. J., & Ashby, J. (2006). Eye Movements as Reflections of
Comprehension Processes in Reading. Scientific Studies of Reading, 10(3), 241–255.
Rayner, K., & Duffy, S. A. (1986). Lexical complexity and fixation times in reading: Effects of word
frequency, verb complexity, and lexical ambiguity. Memory & Cognition, 14(3), 191–201.
Rayner, K., Garrod, S., & Perfetti, C. A. (1992). Discourse influences during parsing are delayed.
Cognition, 45(2), 109–139.
Rayner, K., Sereno, S., Morris, R. K., Schmauder, A., & Clifton, C. (1989). Eye Movements and On-line
Language Comprehension Processes. Language and Cognitive Processes, 4.
Rayner, K., & Slattery, T. J. (2009). Eye movements and moment-to-moment comprehension
processes in reading. In Beyond decoding: The behavioral and biological foundations of
reading comprehension. (pp. 27–45). Guilford Press.
Rayner, K., Warren, T., Juhasz, B. J., & Liversedge, S. P. (2004). The effect of plausibility on eye
movements in reading. Journal of Experimental Psychology: Learning, Memory, and Cognition,
30(6), 1290–1301.
Rayner, K., & Liversedge, S. P. (2012). Linguistic and cognitive influences on eye movements during
reading. In The Oxford Handbook of Eye Movements.
Recht, D. R., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers’ memory of text.
Journal of Educational Psychology, 80(1), 16.
Richter, T. (2003). Epistemologische Einschätzungen beim Textverstehen [Epistemological evaluations in text comprehension].
Richter, T. (2011). Cognitive Flexibility and Epistemic Validation in Learning from Multiple Texts. In J.
Elen, E. Stahl, R. Bromme, & G. Clarebout (Eds.), Links Between Beliefs and Cognitive
Flexibility: Lessons Learned (pp. 125–140). Springer Netherlands.
Richter, T. (2015). Validation and Comprehension of Text Information: Two Sides of the Same Coin.
Discourse Processes, 52(5–6), 337–355.
Richter, T., & Rapp, D. N. (2014). Comprehension and Validation of Text Information: Introduction to
the Special Issue. Discourse Processes, 51(1–2), 1–6.
Richter, T., Schroeder, S., & Wöhrmann, B. (2009). You don’t have to believe everything you read:
Background knowledge permits fast and efficient validation of information. Journal of
Personality and Social Psychology, 96(3), 538–558. https://doi.org/10.1037/a0014038
Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The role of the medial
frontal cortex in cognitive control. Science, 306(5695), 443–447.
Rinck, M., Gámez, E., Díaz, J. M., & De Vega, M. (2003). Processing of temporal information: Evidence
from eye movements. Memory & Cognition, 31(1), 77–86.
Rizzella, M. L., & O’Brien, E. J. (1996). Accessing global causes during reading. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 22(5), 1208.
Rizzella, M. L., & O’Brien, E. J. (2002). Retrieval of concepts in script-based texts and narratives: The
influence of general world knowledge. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 28(4), 780–790.
Rodd, J. M., Cai, Z. G., Betts, H. N., Hanby, B., Hutchinson, C., & Adler, A. (2016). The impact of recent
and long-term experience on access to word meanings: Evidence from large-scale internet-
based experiments. Journal of Memory and Language, 87, 16–37.
Rouet, J.-F., & Britt, M. A. (2011). Relevance processes in multiple document comprehension. In Text relevance and learning from text (pp. 19–52).
Rouet, J.-F., Vidal-Abarca, E., Erboul, A. B., & Millogo, V. (2001). Effects of information search tasks
on the comprehension of instructional text. Discourse Processes, 31(2), 163–186.
Royer, J. M., Carlo, M. S., Dufresne, R., & Mestre, J. (1996). The assessment of levels of domain
expertise while reading. Cognition and Instruction, 14(3), 373–408.
Salmerón, L., Kintsch, W., & Kintsch, E. (2010). Self-Regulation and Link Selection Strategies in
Hypertext. Discourse Processes, 47(3), 175–211.
Samuelstuen, M. S., & Bråten, I. (2005). Decoding, knowledge, and strategies in comprehension of
expository text. Scandinavian Journal of Psychology, 46(2), 107–117.
Sanford, A. J., & Garrod, S. C. (1989). What, when, and how?: Questions of immediacy in anaphoric
reference resolution. Language and Cognitive Processes, 4(3–4), SI235–SI262.
Sanford, A. J., & Sturt, P. (2002). Depth of processing in language comprehension: Not noticing the
evidence. Trends in Cognitive Sciences, 6(9), 382–386.
Schacter, D. L. (1999). The seven sins of memory: insights from psychology and cognitive
neuroscience. American Psychologist, 54(3), 182.
Schacter, D. L., & Dodson, C. S. (2001). Misattribution, false recognition and the sins of memory.
Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences,
356(1413), 1385–1393.
Schneider, W., Körkel, J., & Weinert, F. E. (1989). Domain-specific knowledge and memory
performance: A comparison of high-and low-aptitude children. Journal of Educational
Psychology, 81(3), 306.
Schotter, E. R., Tran, R., & Rayner, K. (2014). Don’t believe what you read (only once): comprehension
is supported by regressions during reading. Psychological Science, 25(6), 1218–1226.
Schroeder, S., Richter, T., & Hoever, I. (2008). Getting a picture that is both accurate and stable:
Situation models and epistemic validation. Journal of Memory and Language, 59(3), 237–255.
Shapiro, A. M. (2004). How including prior knowledge as a subject variable may change outcomes of
learning research. American Educational Research Journal, 41(1), 159–189.
Siebörger, F. T., Ferstl, E. C., & von Cramon, D. Y. (2007). Making sense of nonsense: An fMRI study
of task induced inference processes during discourse comprehension. Brain Research,
1166(1), 77–91.
Simon, H. A. (1974). How Big Is a Chunk?: By combining data from several experiments, a basic
human memory unit can be identified and measured. Science, 183(4124), 482–488.
Singer, M. (2006). Verification of text ideas during reading. Journal of Memory and Language, 54(4),
574–591.
Singer, M. (2013). Validation in Reading Comprehension. Current Directions in Psychological Science,
22(5), 361–366.
Singer, M. (2019). Challenges in Processes of Validation and Comprehension. Discourse Processes,
56(5–6), 465–483.
Singer, M., & Doering, J. C. (2014). Exploring Individual Differences in Language Validation. Discourse
Processes, 51(1–2), 167–188.
Singer, M., Graesser, A. C., & Trabasso, T. (1994). Minimal or global inference during reading. Journal
of Memory and Language, 33(4), 421–441.
Singer, M., & Halldorson, M. (1996). Constructing and validating motive bridging inferences. Cognitive
Psychology, 30, 1–38.
Singer, M., Halldorson, M., Lear, J. C., & Andrusiak, P. (1992). Validation of causal bridging inferences.
Journal of Memory and Language, 31, 507–524.
Smith, S. M. (2002). Fast robust automated brain extraction. Human Brain Mapping, 17(3), 143–155.
Speer, N. K., Reynolds, J. R., Swallow, K. M., & Zacks, J. M. (2009). Reading stories activates neural
representations of visual and motor experiences. Psychological Science, 20(8), 989–999.
Stewart, A. J., Pickering, M. J., & Sturt, P. (2004). Using eye movements during reading as an
implicit measure of the acceptability of brand extensions. Applied Cognitive
Psychology, 18(6), 697–709.
Swanson, H. L., Cochran, K. F., & Ewers, C. A. (1989). Working memory in skilled and less
skilled readers. Journal of Abnormal Child Psychology, 17(2), 145–156.
Swets, B., Desmet, T., Clifton, C., & Ferreira, F. (2008). Underspecification of syntactic ambiguities:
Evidence from self-paced reading. Memory & Cognition, 36(1), 201–216.
Swets, B., Desmet, T., Hambrick, D. Z., & Ferreira, F. (2007). The role of working memory in syntactic ambiguity resolution: A psychometric approach. Journal of Experimental Psychology: General, 136(1), 64–81.
Tanenhaus, M. K., & Trueswell, J. C. (1995). Sentence comprehension.
Trabasso, T., Secco, T., & van den Broek, P. (1984). Causal cohesion and story
coherence. In H. Mandl, N. L. Stein, & T. Trabasso (Eds.), Learning and Comprehension
of Text (pp. 83–111). Lawrence Erlbaum Associates.
Trabasso, T., & Suh, S. (1993). Understanding text: Achieving explanatory coherence through
on‐line inferences and mental operations in working memory. Discourse Processes, 16(1–2),
3–34.
Turcotte, S. (2012). Computer-supported collaborative inquiry on buoyancy: A discourse analysis
supporting the “pieces” position on conceptual change. Journal of Science Education and
Technology, 21(6), 808–825.
van Berkum, J. A., Hagoort, P., & Brown, C. M. (1999). Semantic integration in sentences and
discourse: Evidence from the N400. Journal of Cognitive Neuroscience, 11(6), 657–671.
van de Meerendonk, N., Kolk, H. J., Chwilla, D. J., & Vissers, C. T. W. M. (2009). Monitoring in language
perception. Linguistics and Language Compass, 3(5), 1211–1224.
van de Meerendonk, N., Kolk, H. J., Vissers, C. T. W. M., & Chwilla, D. J. (2010). Monitoring in language perception: Mild and strong conflicts elicit different ERP patterns. Journal of Cognitive Neuroscience, 22(1), 67–82.
van den Broek, P. (1988). The effects of causal relations and hierarchical position on the importance
of story statements. Journal of Memory and Language, 27(1), 1–22.
van den Broek, P. (1994). Comprehension and memory of narrative texts: Inferences and coherence.
In Handbook of psycholinguistics. (pp. 539–588). Academic Press.
van den Broek, P. (2010). Using Texts in Science Education: Cognitive Processes and Knowledge Representation. Science, 328, 453–456.
van den Broek, P., Beker, K., & Oudega, M. (2015). Inference generation in
text comprehension: Automatic and strategic processes in the construction of a
mental representation. In Inferences during reading. (pp. 94–121). Cambridge University
Press.
van den Broek, P., Bohn-Gettler, C., Kendeou, P., Carlson, S., & White, M. J. (2011). When a reader
meets a text: The role of standards of coherence in reading comprehension. In M. T.
McCrudden, J. P. Magliano, & G. Schraw (Eds.), Text relevance and learning from text. (pp.
123–140). Information Age Publishing.
van den Broek, P., Fletcher, C. R., & Risden, K. (1993). Investigations of inferential processes in
reading: A theoretical and methodological integration. Discourse Processes, 16(1–2), 169–180.
van den Broek, P., & Helder, A. (2017). Cognitive Processes in Discourse Comprehension: Passive
Processes, Reader-Initiated Processes, and Evolving Mental Representations. Discourse
Processes, 54, 1–13.
van den Broek, P., & Kendeou, P. (2008). Cognitive processes in comprehension of science texts: the
role of co-activation in confronting misconceptions. Applied Cognitive Psychology, 22(3), 335–
351.
van den Broek, P., Lorch, R. F. J., Linderholm, T., & Gustafson, M. (2001). The effects of readers’ goals
on inference generation and memory for texts. Memory & Cognition, 29(8), 1081–1087.
van den Broek, P., Risden, K., Fletcher, C. R., & Thurlow, R. (1996). A “landscape” view of reading:
Fluctuating patterns of activation and the construction of a stable memory representation.
Models of Understanding Text, 165–187.
van den Broek, P., Risden, K., & Husebye-Hartman, E. (1995). The role of readers’ standards for
coherence in the generation of inferences during reading. In R. F. Lorch, Jr. & E. J. O’Brien (Eds.), Sources of coherence in text comprehension (pp. 353–373). Erlbaum.
van den Broek, P., Young, P. M., Tzeng, Y., & Linderholm, T. (1999). The Landscape Model of Reading.
In H. van Oostendorp & S. R. Goldman (Eds.), The Construction of Mental Representations
During Reading (pp. 71–98). Lawrence Erlbaum.
van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. Academic
Press.
van Herten, M., Chwilla, D. J., & Kolk, H. J. (2006). When heuristics clash with parsing routines: ERP
evidence for conflict monitoring in sentence perception. Journal of Cognitive Neuroscience,
18(7), 1181–1197.
van Herten, M., Kolk, H. H. J., & Chwilla, D. J. (2005). An ERP study of P600 effects elicited by semantic
anomalies. Cognitive Brain Research, 22(2), 241–255.
van Kesteren, M. T., Beul, S. F., Takashima, A., Henson, R. N., Ruiter, D. J., & Fernandez, G. (2013).
Differential roles for medial prefrontal and medial temporal cortices in schema-dependent
encoding: from congruent to incongruent. Neuropsychologia, 51(12), 2352–2359.
van Kesteren, M. T., Fernandez, G., Norris, D. G., & Hermans, E. J. (2010). Persistent
schema-dependent hippocampal-neocortical connectivity during memory encoding and
postencoding rest in humans. Proceedings of the National Academy of Sciences of the United
States of America, 107(16), 7550–7555.
van Kesteren, M. T., Ruiter, D. J., Fernández, G., & Henson, R. N. (2012). How schema and novelty
augment memory formation. Trends in Neurosciences, 35(4), 211–219.
van Kesteren, M. T., Ruiter, D., & Fernandez, G. (2017). How neuroscience can inform education: A
case for prior knowledge effects on memory. In E. Segers & P. van den Broek (Eds.),
Developmental Perspectives in Written Language and Literacy. John Benjamins.
van Moort, M. L., Jolles, D. D., Koornneef, A., & van den Broek, P. (2020). What you read vs what you
know: Neural correlates of accessing context information and prior knowledge in constructing
a mental representation during reading. Journal of Experimental Psychology: General, 149(11),
2084–2101.
van Moort, M. L., Koornneef, A., & van den Broek, P. (2018). Validation: Knowledge- and Text-Based
Monitoring During Reading. Discourse Processes, 55(5–6), 480–496.
van Moort, M. L., Koornneef, A., & van den Broek, P. W. (2021). Differentiating Text-Based and
Knowledge-Based Validation Processes during Reading: Evidence from Eye Movements.
Discourse Processes, 58(1), 22–41.

224
Vincent, J. L., Kahn, I., Snyder, A. Z., Raichle, M. E., & Buckner, R. L. (2008). Evidence for a
frontoparietal control system revealed by intrinsic functional connectivity. Journal of
Neurophysiology, 100(6), 3328–3342.
Virtue, S., Haberman, J., Clancy, Z., Parrish, T., & Jung Beeman, M. (2006). Neural activity of
inferences during story comprehension. Brain Research, 1084(1), 104–114.
Vosniadou, S. (1994). Capturing and modeling the process of conceptual change. Learning and
Instruction, 4(1), 45–69.
Voss, J. F., & Bisanz, G. I. (1985). Knowledge and the processing of narrative and expository texts. In
B. K. Britton & J. B. Black (Eds.), Understanding expository text. Erlbaum.
Walsh, E. K., Cook, A. E., & O’Brien, E. J. (2018). Processing real-world violations embedded within a
fantasy-world narrative. Quarterly Journal of Experimental Psychology, 71(11), 2282–2294.
Westfall, J., Kenny, D. A., & Judd, C. M. (2014). Statistical power and optimal design
in experiments in which samples of participants respond to samples of
stimuli. In Journal of Experimental Psychology: General (Vol. 143, Issue 5, pp. 2020–2045).
American Psychological Association.
Whitney, P., Clark, M. B., & Whitney, P. (1991). Working-Memory Capacity and the Use of Elaborative
Inferences in Text Comprehension. Discourse Processes, 14(2), 133–145.
Wiley, J., George, T., & Rayner, K. (2018). Baseball fans don’t like lumpy batters: Influence of domain
knowledge on the access of subordinate meanings. Quarterly Journal of Experimental
Psychology, 71(1), 93–102.
Wiley, J., & Voss, J. F. (1999). Constructing arguments from multiple sources: Tasks that promote
understanding and not just memory for text. Journal of Educational Psychology, 91(2), 301.
Williams, C. R., Cook, A. E., & O’Brien, E. J. (2018). Validating semantic illusions: Competition between
context and general world knowledge. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 44(9), 1414.
Woolrich, M. W., Behrens, T. E., Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2004). Multilevel
linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4), 1732–
1747.
Worsley, K. J. (1997). An overview and some new developments in the statistical analysis of PET and
fMRI data. Human Brain Mapping, 5(4), 254–258.
Worsley, K. J., & Friston, K. J. (1995). Analysis of fMRI time-series revisited—again. Neuroimage, 2(3),
173–181.
Yarkoni, T., Speer, N. K., & Zacks, J. M. (2008). Neural substrates of narrative comprehension and
memory. NeuroImage, 41(4), 1408–1425.
Ye, Z., & Zhou, X. (2008). Involvement of cognitive control in sentence comprehension: Evidence from
ERPs. Brain Research, 1203, 103–115.
Ye, Z., & Zhou, X. (2009). Executive control in language processing. Neuroscience and Biobehavioral
Reviews, 33(8), 1168–1177.
Yeari, M., van den Broek, P., & Oudega, M. (2015). Processing and memory of central versus
peripheral information as a function of reading goals: evidence from eye-movements. Reading
and Writing, 28(8), 1071–1097.
Yuill, N., & Oakhill, J. (1991). Children’s problems in text comprehension: An experimental
investigation. In Children’s problems in text comprehension: An experimental investigation.
Cambridge University Press.

225
Zabrucky, K., & Ratner, H. H. (1986). Children’s Comprehension Monitoring and Recall of Inconsistent
Stories. Child Development, 57(6), 1401–1418.
Zacks, J. M., & Ferstl, E. C. (2015). Discourse Comprehension. In G. Hickok & S. L. Small (Eds.),
Neurobiology of Language (pp. 661–673). Academic Press.
Zwaan, R. A., Langston, M. C., & Graesser, A. C. (1995). The construction of situation models in
narrative comprehension: An event-indexing model. Psychological Science, 6, 292–297.
Zwaan, R. A., & Radvansky, G. A. (1998). Situation models in language comprehension and memory.
Psychological Bulletin, 123(2), 162–185.
Zwaan, R. A., & Singer, M. (2003). Text comprehension. In A. C. Graesser, M. A. Gernsbacher, & S.
R. Goldman (Eds.), Handbook of discourse processes (pp. 83–121). Erlbaum.
Zwitserlood, P. (1989). The locus of the effects of sentential-semantic context in spoken-word
processing. Cognition, 32(1), 25–64.

226
227
Acknowledgments
It's the friends we meet along life's road who help us appreciate the journey.
And what a journey this dissertation has been! It was not always easy, but looking back on this important part of my life I can say that I have enjoyed every moment (some admittedly more than others). This is the last and, as far as I am concerned, the most difficult part of this dissertation to write. For how do I put into words my gratitude to all the wonderful people who were part of this journey? Still, I will give it a try.
First of all, I would like to thank my supervisor and co-supervisor, Paul van den Broek and Arnout Koornneef. You have been indispensable in the making of this dissertation and in my development as a scientist. I could not have wished for better supervisors, and I have learned an enormous amount from you both, as a scientist and as a person.
Paul, thank you for your support, your patience, your unfailing confidence, and your involvement. You have always encouraged me to look further and to think further (and thanks to you I can never look at a text again without keeping "the reader" in mind). You are a wonderful person and your enthusiasm for science is inspiring. I therefore hope that we may continue our collaboration for a long time to come.
Arnout, thank you for your support and your confidence, your enthusiasm, your sharp comments, and your directness. I have learned more from you than you probably realize. Although we sometimes nearly drove each other to despair, I enjoyed all the Wednesdays on which we heatedly discussed my research. I already miss them, so I hope we can still fit in a session in Utrecht every now and then!
All my Leiden colleagues, including all (former) colleagues of the Department of Educational Science and my fellow PhD students: Astrid, Amy, Katinka, Marja, Jolien, Marcella, Mirjam, Anne, Karin, and Josefine, thank you for your support and good company. I could not have wished for better colleagues! In particular my roomies Katinka, Astrid, Amy, Marja, and Dianne: thank you for always being there for me! My fellow board members of the PhD Platform were also indispensable on this journey, in particular Bianca, Friederike, and Amy. What a great team we were (and are!).
My paranymphs Amy and Reini deserve an extra honorable mention. Amy, my science sister and the other half of the "Leiden ladies", I am glad that Paul wanted to adopt you, because I would not have wanted to miss you for the world. Thank you for your mental support, your sense of perspective, your humor, and for being the best partner-in-crime ever. But above all, thank you for the fact that I could always count on you and that you threw yourself, together with me, into everything for the full 100%. It's been one hell of a ride, but I'll see you on the other side! Reini, my science sister-in-law, I am honored that you will stand by me during my defense, and I could not have wished for a better paranymph.
I would also like to thank my colleagues and friends in Pittsburgh. In particular, I would like to thank Charles Perfetti for enabling me to stay at the LRDC at the University of Pittsburgh. Thank you for our inspiring conversations and the opportunities you gave me at the Perfetti lab. I have learned a lot during these three months, both as a scientist and as a human being.
My thanks also go to everyone who contributed to the studies described in this dissertation: all the students who helped with the data collection, all the participants for reading my (secretly rather boring) texts, and the support staff of the LIBC for their help in carrying out my fMRI study. Without you this dissertation would never have existed!
My dear friends and family, who probably still do not entirely understand what my research is actually about... but who always support me unconditionally in everything I do: thank you for everything!
Mom and Dad, thank you for your confidence in my choices and for encouraging my stubbornness, for your unconditional support, and for your rock-solid belief that everything would turn out fine. My dearest brother Frank, we are so alike and yet so different, but always as thick as thieves. Without you I could not have done it.
Jayce, thank you for your unconditional support, your confidence, and for giving me the space and freedom to do my own thing. But also for patiently listening to all my stories (and there are many!) and for all the adventures we have shared together over the past 16 years. There is no one I would rather have grown up with, and there is no one I would rather continue to grow with. To infinity and beyond!
Curriculum Vitae
Marloes van Moort was born on the 23rd of January 1989 in Zoetermeer, the Netherlands. After graduating from secondary school (Alfrink College, Zoetermeer), Marloes obtained her Bachelor's degree in Psychology in 2010 and her Research Master's degree in Psychology, specializing in Cognitive Neuroscience, in 2013, both at Leiden University.
In October 2015, Marloes started her PhD project at the Department of Educational Science at the Institute of Education and Child Studies. Under the supervision of Prof. Dr. Paul van den Broek and Dr. Arnout Koornneef, she worked on several research projects, including those reported in this dissertation. In addition, Marloes taught courses at both the bachelor and master level and supervised numerous bachelor and master students. In 2017, together with Dr. Arnout Koornneef, Marloes obtained a Grassroots grant for innovation in education to redesign a course on designing digital tools for educational settings. She obtained her University Teaching Qualification in 2020.
During her time as a PhD student, Marloes presented her research at numerous national and international conferences. In 2021 she was awarded the Best Graduate Student Research Award by the Society for Text and Discourse. Furthermore, Marloes spent three months as a visiting researcher at the Learning Research and Development Center (LRDC) at the University of Pittsburgh. In collaboration with Prof. Dr. Charles Perfetti, she set up a research project to examine the facilitative and interfering (or protective) effects of background knowledge when learning new information, using electroencephalography (EEG).
In August 2020 Marloes started working as a lecturer at both the Institute of Education and Child Studies at Leiden University and the Department of Language, Literature and Communication at Utrecht University. In September 2021, Marloes started working as an Assistant Professor in Language and Communication in the Department of Language, Literature and Communication at Utrecht University. There she continues her research on the complex interactions between text information and readers' existing knowledge base.
List of publications
Van Moort, M. L., Koornneef, A., & van den Broek, P. (under revision). Purposeful validation: Do reading goals affect monitoring processes during reading and the construction of a mental representation? Journal of Educational Science.

Van Moort, M. L., Jolles, D. D., Koornneef, A., & van den Broek, P. (2020). What you read vs what you know: Neural correlates of accessing context information and prior knowledge in constructing a mental representation during reading. Journal of Experimental Psychology: General, 149(11), 2084–2101.

Van Moort, M. L., Koornneef, A., & van den Broek, P. (2020). Differentiating text-based and knowledge-based validation processes during reading: Evidence from eye movements. Discourse Processes, 1–20.

Van Moort, M. L., Koornneef, A., & van den Broek, P. (2018). Validation: Knowledge- and text-based monitoring during reading. Discourse Processes, 55(5–6), 480–496.

Murphy, P. R., van Moort, M. L., & Nieuwenhuis, S. (2016). The pupillary orienting response predicts adaptive behavioral adjustment after errors. PLoS One, 11(3), e0151763.

Van Moort, M. L., Helder, A., & van den Broek, P. (in press). Werk aan het opbouwen van kennis. In R. C. M. van Steensel & T. Houtveen (Eds.), Hoe ziet effectief onderwijs in begrijpend lezen eruit?

Helder, A., van den Broek, P., van Moort, M. L., van den Bosch, L. J., & De Bruine, A. (2020). Begrijpend lezen vanuit een cognitief perspectief: Hoe leerlingen coherente mentale representaties opbouwen tijdens het lezen van teksten. Digitaal vakdidactisch Handboek Nederlands. https://didactieknederlands.nl/handboek/2020/08/begrijpend-lezen-deel-1/