Article

Theory & Psychology
2022, Vol. 32(2) 326–343
© The Author(s) 2021
Article reuse guidelines: sagepub.com/journals-permissions
DOI: 10.1177/09593543211043805
journals.sagepub.com/home/tap

The role of inferences in reading comprehension: A critical analysis

Gilberto Gauche
Hochschule für Künste Bremen

Eileen Pfeiffer Flores
University of Brasilia

Abstract
The central role attributed to inferences in reading comprehension can be traced back to the
Construction-Integration (CI) model, and many of its theoretical assumptions are still shared
by current models. This article analyses recent research in terms of how inferences have been
conceived, how they relate to comprehension, and how the CI model’s theoretical legacy has been
articulated. The main issues found are that the way inferences are currently conceived doesn't
satisfactorily distinguish them from ordinary comprehension, and that a series of assumptions which
plausibly apply to computational models have often been mistakenly attributed to interpersonal
processes. This, added to the widespread use of lab-created texts in experiments, hinders the
faithful capture of personal comprehension processes. Finally, we propose recommendations
for future research based on conceptual clarity, metatheoretical awareness, and a meaning-based
approach to language, so as to improve interresearcher communication, theoretical consistency,
and ecological validity.

Keywords
computational models, conceptual analysis, construction-integration model, inferences, reading
comprehension

The ability to make inferences is seen as closely linked to reading comprehension (e.g.,
Bos et al., 2016; Cain & Oakhill, 1999; Nash & Heath, 2011). Inferences have been con-
sidered central to cognitive research on reading comprehension ever since the field’s first
significant developments (see Goldman et al., 2007; McNamara & Magliano, 2009, for
a comprehensive account of cognitive theories and computational models of reading
comprehension). The centrality of inferences can be traced back to the research developed
by Walter Kintsch in collaboration with Teun van Dijk (e.g., Kintsch, 1988; Kintsch
& van Dijk, 1978). The general claim is that inferences are vital in bringing together
elements of the text and transforming them into a coherent and unified mental
representation.

Corresponding author:
Gilberto Gauche, Hochschule für Künste Bremen, Dechanatstrasse 13-15, Bremen, 28195, Germany.
Email: gilbertogauche@gmail.com
Kintsch and van Dijk’s work, which culminated in the development of the Construction-
Integration (CI) model (Kintsch, 1988), quickly acquired historical prominence (see van
Dijk, 1995, for a history of the model). The CI model was subsequently used as a foun-
dation for many later comprehension models that share its most basic presupposi-
tions, even when attempting to overcome its problems (Goldman et al., 2007; McNamara
& Magliano, 2009).
Inferences were already central in early versions of the CI model. Kintsch and van
Dijk (1978), for example, posited a text-base of propositions (microstructure) from
which higher order, more abstract structures would be constructed through a rule-based
process based on deductive inference (i.e., logical entailment). In later versions (e.g.,
Kintsch, 1988) inference processes persisted as the primary postulated mechanism sup-
porting text coherence and model construction.
Various systems for classifying inferences have been proposed (see Kispal, 2008, for
a thorough review of these proposals). However, as our analysis below shows, these
classifications usually lack the solid theoretical foundations that would give researchers
a common frame of reference. Kintsch (1993) had already pointed out that very distinct
processes were being summarised under the global term "inference" as if they were the
same. He listed existing classifications, emphasised the lack of a unifying framework,
and drafted his own proposal for a more unified approach. Despite the pertinence of
Kintsch's remarks, his call for a theoretically solid common ground for inference
research doesn't appear to have been heard. Important
theoretical issues have ceased to be discussed, as if they had been resolved and could
comfortably recede to the background as well-established principles. A sign of this is the
increasing scarcity, in the last 10 to 15 years, of new, explicit attempts to define or clas-
sify inferences, or to explain precisely how comprehension cashes out into inference
processes.
The present work aims to show the ways in which this confusion is still prevalent and
why it must be unravelled in order to improve interresearcher dialogue while preserving
the integrity of the phenomena under investigation. This is an exemplary review (Rubin
et al., 2009); in other words, we offer a representative but critical account of how infer-
ences have been conceived and how they have been connected to comprehension in the
empirical literature, rather than a detailed description of published work. We look at
some examples taken from comprehension research in some detail, in order to analyse in
depth how theoretical distinctions (e.g., between the inferential and the literal) are actu-
ally used and operationalised. This is important because it is one thing to state a theoreti-
cal commitment and another to actually apply it, for example, by transforming it into
performance measures or independent variables. Far from being a detail, this cashing out
of theory in concrete research reveals how the distinctions are in fact being used. It is in
this use that inconsistencies and confusions often creep in, to the point of changing the
concrete phenomenon that ends up being studied. On the other hand, they often serve to
highlight gaps and the need for perfecting background theory, but for them to serve this
purpose, we need to look at this moment of translation from theory to variables as a key
moment in research.
This article is structured as follows: First, we discuss how, when using computational
models of comprehension (including elaborations on the CI model and derived models
that rely on a computational approach to the mind), it is often forgotten that they are, so
to speak, maps of the mind, not the mind itself. We aim to show that the translation from
theoretical models to lab or real-life situations is itself an important theoretical move, not
an automatic application of principles. Second, we highlight pivotal, yet little-investi-
gated problems regarding how readers use their knowledge in reading comprehension,
the main context in which inferences are investigated. We finish our analytical section by
detailing other minor issues that uncover the need for higher standards of theoretical
grounding in comprehension research. In the final section of our work, we adopt a more
positive approach, presenting solutions to the problems we described and pointing to
current, promising trends that are arising from the investigation of literary texts and may
be seen as positive inspiration for researching comprehension in a broader sense.
As a rule, we will build our analysis upon a broad conception of the CI model, for it
is a paradigmatic example of a computational view of reading comprehension which
most later cognitive models have built upon and modified, while sharing its theoretical
premises to a great extent (Goldman et al., 2007; McNamara & Magliano, 2009). Most
importantly, we discuss the way the CI model’s assumptions have been appropriated and
operationalized in later research. Thus, our analysis applies to reading comprehension
research in a broad sense and doesn’t restrict itself to the CI model.

A paradigmatic case: Ernie the hamster


We shall begin by illustrating the centrality and difficulty of defining inferences and their
role in comprehension with an example from Nash and Heath’s (2011) distinction
between inferential and literal questions in their experiment about reading comprehen-
sion in children with Down syndrome. The example will be used as a prototypical case
of conceptual difficulties when defining and classifying inferences, not as a specific criti-
cism of Nash and Heath’s empirical study.
Participants were asked to read a text about a hamster called Ernie and to answer
questions that were classified either as inferential or literal. A representative example of
a supposedly inferential question was based on this part of the text: “One day the cage
door was left open. When no one was looking Ernie decided to escape and headed straight
for Sarah’s bag” (Nash & Heath, 2011, p. 1790). The question was, “How was Ernie able
to escape?” (p. 1791). But if, as the story explicitly states, the cage door was left open, in
what sense does answering this question require an inference? Considering that the door
was open, it being the only usual obstacle to Ernie's escape, there seems to be no
need for elaborating beyond what's written. We realise that this would in fact be a typical
example of an inference in comprehension research, because the text does not explicitly
say that Ernie escaped through the door, but bear with us, as what we want to argue is that expand-
ing the notion of inference too far can lead to letting go of precisely what makes infer-
ences important and distinctive.
Before attempting to answer the question regarding the escape of Ernie, let us care-
fully consider the meaning in ordinary language of the main concept at stake: inference.
A concept’s common usage is usually a good starting point for the mapping of its mean-
ing, so that the specific characteristics of its scientific usage can be defined against the
background of ordinary language.
In ordinary language, we usually say that an inference is necessary when there is some
kind of gap in discourse that creates uncertainty and asks for a conjecture. This gap might
either be because two or more possibilities for understanding are given (as would be the
case, for example, if, without any explanation about the cage's door, Ernie's owner came
upon an empty cage) or because the discourse contains unknown elements (e.g.,
if one is unfamiliar with the English language and doesn’t know the meaning of the word
“cage” or “door”). A third possibility refers to when we draw conclusions about open
possibilities in the future.
A crucial point here is that inference implies the existence of a noninferential dimension
of language, a common ground of certain understanding in relation to which inference is
defined as a special case. Every inference arises from an uncer-
tainty, but every uncertainty can only be faced against a background of some certainty.
By overgeneralising the concept of inference, we cease to distinguish between those
degrees of certainty, thus threatening the usefulness of the concept.
A further distinction must still be made: there is no equivalence between the opposites
certain/uncertain and implicit/explicit. There seems to be an assumption in most research
on reading comprehension that all implicitness creates uncertainty and calls for an infer-
ence. We propose that these are different things and should not be conflated. For exam-
ple: a story starts with a husband and wife sitting in their car in a garage, tensely staring
ahead of them, and the husband says to his wife, “can you please start the car?” It is
certain, however implicit, that he refers to their car, and not to the neighbour's (or the
Pope's, or Elvis Presley's, and so on ad infinitum). We would not say that the reader must
infer that the husband is referring to the couple’s car.
Thus, the common ground of certain understanding includes both explicitness and
implicitness. That is why to say that someone is “taking things literally” is not (as would
be expected if implicitness were equal to the need for inferences) to describe someone
who fails to see that inferences are needed, but, on the contrary, who fails to capture what
is obvious and, because of that, insists on seeing the need for inferences where none are
needed. So even if it is true that inferences always presuppose some sort of implicitness,
the inverse is not true. The implicit is everything that is left out, and everything that is
left out is infinite. That is why the implicit cannot, logically, be equated to that which
needs inferencing. To put it bluntly, prescribing the need for an additional inference in
every such case would mean an enormous cognitive burden—an infinite one, in fact.
Of course, it could be argued that nobody needs to care about what inferencing means
in ordinary language if we find it useful to change this usage in scientific theories and
treat every case of implicitness as calling for an inference. However, this is a radical revi-
sion of both the concepts of inference and of implicitness. In good scientific practice,
such a change will ideally be explicit (this is no place to let the reader fill in the gap with
inferences!) and be accompanied by a theoretical justification as to what advantages are
brought about by obliterating an existing distinction. To the best of our knowledge, this
clarification was never carried out in reading comprehension research, suggesting that
the identification of inference with implicitness was assumed, rather than deliberated.
Even if there were a justification for claiming that implicitness entails the need for
inferences, we hope to show the problems that arise from it, in terms of conceptual confu-
sion and undesirable outcomes. One such undesirable result is that we would never be
able to explain reading comprehension, because the inferencing process would be caught
in an infinite regress. Let us go back to Ernie for a moment to show this. Let us assume
that the original sentence had in fact left implicit how Ernie escaped. We could then
rewrite the sentence in a more explicit way: Ernie escaped by walking through the door.
Someone could still consider that this is not explicit enough: how, after all, did Ernie walk
through the door? Ernie escaped by walking through the door with his paws. But this may
even not be explicit enough, leading us to write: Ernie escaped by moving from where he
was towards the door, and then beyond it, with the help of movements of his paws, which
pull him forward. This could keep going forever. Thus, it is always possible to assume that
the text was not explicit enough and that an inference had to be made. What is left unsaid
is, by definition, unending. Thus, a decision to conflate the inferential with the implicit
would make the inferencing process impossible, as every inference would demand, in
turn, another inference, and so on to infinity. No further explanations are needed on how
much this would hamper any coherent and useful account of comprehension.
Of course, an inference, even if unnecessary for most readers, might be needed for a
particular reader, for example, one who is not familiar with the language or the specific
vocabulary. Although this may have been the case with some of Nash and Heath’s
research participants, who had language delays, presupposing their lack of knowledge a
priori would go against the commonly accepted principle of seeking a phenomenon's
simplest explanation. Thus, it makes much more sense that, even in the case of
struggling comprehenders, we consider such sentences to be in principle understandable
without the need for inferences.
The authors’ claim that Ernie’s escape requires an inference seems to be supported by
the reasoning that the cage door being left open and Ernie’s escape are two different
events, which must be connected by a bridging inference. Although the reasoning has a
logically sound structure, consistent with the computational approach to local coherence
proposed by Kintsch, our previous analysis showed that it is incompatible with how real
readers comprehend.
In fact, Kintsch seems to be aware that his model is not conflatable with the personal
level of explanation and that the bridging between computational models and human
readers is itself a complex task. While it is sometimes claimed that his model presup-
poses knowledge to be represented in the form of propositions in the human mind (e.g.,
McNamara & Magliano, 2009), Kintsch (1988) states that “the decision to use a propo-
sitional representation does not imply that all other forms of knowledge are to be consid-
ered unimportant or nonexistent” (p. 166). In later works, Kintsch (1993, 1998) describes
how representing text propositionally leads to mistakenly categorising as inferential
information which is actually directly given by the situation model (cf. Kintsch, 1998,
pp. 191–192), which applies quite exactly to the case of Ernie’s cage door. In the follow-
ing section, we will look more closely at how failure to distinguish computational mod-
els from what they model can lead to conceptual dead ends.
Text content and reader's knowledge: Confusion between two levels of explanation

The CI model (Kintsch, 1988) distinguishes between information internal to the text and
information external to it (stemming from the reader’s knowledge). This classification
has been carried over wholesale to applied reading comprehension research (e.g., Bos
et al., 2016; Carlson et al., 2014; Elbro & Buch-Iversen, 2013; Nash & Heath, 2011;
Williams, 2014, 2015). In our view, this is a representative example of the direct transfer
of vocabulary from the computational level of explanation to explanations offered at the
level of person-to-person and person-to-text interaction (and of the ensuing confusions).
Let us explain.
The neat separation between internal and external knowledge is one of the require-
ments for building a computational model in which it is possible to discriminate, without
overlap, between text-based bits of information and “inferred” propositions (which need
to be additionally inserted for the text to be adequately processed). The direct application
of this distinction to readers at the personal level, however, raises serious difficulties. As
we said at the beginning of this paper, bridging between theoretical concepts and their
application to concrete research or intervention is in itself a theoretical endeavour. Failure
to see this leads, as in this case, to transporting concepts conceived for the computational
level of information processing, to the reader–text (interpersonal) level. This cannot
work, for the two levels of explanation have different aims. The computational level
seeks to understand the computational mechanisms that make possible the reader strate-
gies at the reader–text level. At the personal level, the concepts will be intentional and
will make reference to meaning, sense, strategies, and so forth. At the computational
level, processes will be nonintentional and will make reference to input, information,
states, and the like. Computational explanatory models do not deal with meanings and
strive to explain comprehension at a subpersonal, syntactical level (for the distinction
between subpersonal and personal levels of explanation, see Dennett, 1969). For exam-
ple, the sentence “the bliaber quickly turned into a fainble” can be readily processed into
the macrostructure by the CI model, but would be an obstacle for a real reader, because
these words are not part of the English language and, thus, meaningless in principle.
While computational models “make sense” of texts through the application of logical
rules to a purely symbolic system enclosed within itself, real readers make sense of texts
primarily by making use of words’ semantic content (i.e., of words’ usage in real situa-
tions, and not enclosed within a purely symbolic system), and not by sheer application of
logical rules (Searle, 2003). Although logical reasoning may take place, its validity
doesn’t come from logic rules, but from the meaning of words—precisely their meaning,
which is not processed by a computational model like the CI model.
Wallot (2014) explains that a purely symbolic system—such as a computational mod-
el’s representation of a text—cannot contain meaning within itself; it must relate to
something else—readers, thus, bring the meaning to the otherwise empty symbolic
matrix that is a text. A metaphor from literature can help us understand this. We think of
Borges’ Library of Babel (Bloch, 2008), an infinite library containing all that can be and
will ever be written. The library works as a closed symbolic system: the content of books
can only be validated by the contents of other books, so that without any extratextual
reference, a reader has no way of knowing whether there is any true meaning in what
they have read. Borges reinforces that the library's collection comprises infinite
randomness out of which, by coincidence, crumbs of apparent wisdom emerge: “For
every rational line or forthright statement there are leagues of senseless cacophony, ver-
bal nonsense, and incoherency” (Bloch, 2008, p. 5). Fascinated with the infinite grandeur
of the available oeuvre, credulously hunting for bits of sense scattered across a sea of
nonsense, many inhabitants of the library lose their lives or go mad in their fruitless
quest, unaware of the inescapable meaninglessness of their enterprise.
Although readers' knowledge is so essential to bottom-up processes in humans that
discussing its application seems of little utility, it can be conceived, in a top-down
approach, as referring to specific types of knowledge that the average reader doesn't
possess, such as a
specific cultural background or domain knowledge (e.g., Goldman et al., 2016; McCarthy
& Goldman, 2019). A musicologist, unlike the average person, sees neumes and knows
what they mean—but still, this approach has little to do with the externally inserted
propositions that Kintsch (1988) originally referred to as reader’s knowledge. The author
is even more explicit in his later book, saying that describing readers’ knowledge use in
terms of inferences is “a regrettable terminology that has caused a great deal of confu-
sion” (Kintsch, 1998, p. 189).
Bos et al.’s (2016) operationalisation of text-based versus knowledge-based inferenc-
ing strategies can be particularly useful for our analysis, since it illustrates some difficul-
ties that arise when applying the internal–external knowledge separation to real-life
readers, while also presenting a way to make this distinction useful when not taken too
literally. Rather than discussing fixed categories of inferences, the authors talk about
text- and knowledge-based inferencing strategies. They exemplify what they mean by
text-based strategy with the following example: the sentence “Pedro put the cake with
candles on the table” could be the basis for inferences about the next sentence: “He won-
dered what gifts his little sister would get" (p. 6). The first sentence, which gives informa-
tion about candles and cakes, potentially supports the inference that this is a birthday
party, so this is an example of a text-based inference. The authors recognise, however,
the overlapping of categories: one needs to know that cakes, candles, and gifts are things
that are often around when birthdays are celebrated. In this sense, the readers are also
making use of knowledge-based strategies.
Theoretically sharp distinctions between external and internal information, when
transferred thoughtlessly from the computational level, where they make perfect sense,
to the personal level, result in confusions that are closely related to the ones we consid-
ered when discussing the literal and inferential. Indeed, the two confusions overlap—at
the personal level, it is impossible to separate precisely what is “really there in the text”
from what isn’t, because it is only when the text is read by someone that reading compre-
hension happens, so what is “internal” to the text cannot be defined independently of a
reader who actively interacts with it. This is not the case at the computational level,
where it is perfectly possible to determine exactly what was fed into the system, indepen-
dently of how the system processed it. At the personal level, the receiver will partly
determine what the input is, making it logically impossible to separate “internal” from
“external” information. The issue is therefore more serious than the mere creation of a
“grey area” between the internal and the external, which has been recently pointed out
(e.g., Williams, 2014, 2015). The world-based versus text-based knowledge distinction
does not merely produce a grey area or a fuzzy dichotomy; it is senseless at the personal
level. Unless one is an S–R behaviorist, at the personal level, everything that affects us
is determined by how we receive it—we are active in the process of determining our
environment. Therefore, there is no part of the environment (“the text”) that is “exterior”
to us, while another is “inside” us. Again, unreflectively transporting subpersonal mecha-
nistic processes to the social level has led us into theoretical muddles.

Anaphor resolution as inference


Anaphor resolution is often cited as evidence of the pervasiveness of inferencing in the
reading comprehension process. Models of natural language processing do in fact often
incorporate inference rules for anaphora (some of them now learn regularities through
supervised reinforcement learning—in that case it becomes more difficult, even at the
subpersonal level, to speak of inference processes). But does that mean that we should
transpose this framework to the personal level and assume not only that the computational
processes that underlie reading comprehension are governed by inference-like rules, but
also that readers themselves are making (very rapid?) inferences every time they get an
anaphor right? This assumption is evident when researchers test for this at the personal
level: they create artificial situations in which inference is in fact necessary, for example,
customised texts in which many anaphors are ambiguous. But this should not mislead us
into supposing that the same strategies are necessary for perfectly obvious cases.
Let us illustrate with the following example from Kintsch (1988): “The lawyer dis-
cussed the case with the judge. He said: ‘I shall send the defendant to prison’” (p. 166).
The author points out a challenge for text-based construction here (i.e., for a computation
system that manipulates syntax), which consists in deciding who the pronoun “he” refers
to in the second sentence. The pronoun in the second sentence could refer either to the
judge or the lawyer in the first sentence, which means there is an ambiguity that must be
solved. In that case, either a search for previous sentences already stored in long-term
memory or a bridging inference would need to take place.
At the personal, reader–text level, however, the pronoun “he” is not automatically per-
ceived as having an ambiguous referent. Instead, the rule of parallelism (cf. Leffa, 2003)
makes interpreting “he” as the lawyer the most natural choice. “The lawyer” is the subject
in the first sentence, and “he” is the subject of the second sentence, so we tend to auto-
matically understand “lawyer” to be the referent of “he.” In fact, it is precisely this imme-
diate, noninferential interpretation that gives rise to the real ambiguity: the anaphor, which
does not require inferences by the reader at all, reveals an incongruency with what the
reader knows about lawyers and judges, and it is this incongruity that prompts an inferen-
tial process, not a mere pregiven referential ambiguity. This, to repeat, applies to the
reader–text level of explanation. Inferential rules can be hypothesised to govern what
happens automatically at the subpersonal level (the first, noninferential anaphor resolu-
tion). But assuming that automatic syntax rules of inference are being applied at the sub-
personal level does not automatically lead us to conclude that the reader is also doing
that—on the contrary! It would help explain precisely why no inferences are needed here!
In other words, the rule of parallelism doesn’t clash with the CI model’s inner logic.
It could easily be formulated like this: a pronoun is automatically understood as referring
to the word which, in the previous sentence, occupies the same slot (e.g., agent or object).
This would mean improving the model by prescribing a rule of high cognitive efficiency
which, as shown by Leffa (2003), avoids the need for inferences in most cases. This can
be shown by the example: “The lawyer hastily closed his briefcase after he met with the
judge. He then remembered that there was something else he wanted to discuss.” In the
latter case, the reader will tend to understand that “he” refers to the lawyer, and not to the
judge, although, just like in Kintsch’s (1988) example, there are theoretically two possi-
ble referents to the pronoun. No world knowledge about judges or lawyers comes into
question: it is plausible for either of them to have the intention of discussing something,
allowing us to rule out the hypothesis that any inferences were necessary in that regard.
Despite their relatively weak theoretical grounding, pronominal inferences at the per-
sonal level are widely assumed in comprehension research (e.g., Elbro & Buch-Iversen,
2013; Nash & Heath, 2011; Williams, 2014, 2015). The notion of ambiguity is overex-
tended, as if every pronoun demanded an inferential process (e.g., Singer, 1994). A com-
plementary source of this difficulty could be the widespread use of lab-customised texts,
resulting in a form of experimenter bias (cf. McNerney et al., 2011). If researchers use
texts in which many anaphors are ambiguous, we will find readers frequently resorting
to inferences. However, such ambiguities are the exception and not the rule (Leffa,
2003). As McCarthy (2015) and others (e.g., Bortolussi & Dixon, 2013) highlight,
research on inference-making and reading comprehension often sacrifices ecological
validity by using easy-to-manipulate, lab-created textoids instead of materials produced
in and for actual reading contexts. Next, we approach another minor issue in comprehen-
sion research, pointing to further problematic assumptions about text comprehension
derived from thoughtless transposition from computational to personal levels of
explanation.

Local and global inferences


The last categorisation proposal to be analysed is that between local and global infer-
ences (e.g., Barth et al., 2015; Carlson et al., 2014; Clinton, 2015; Reed & Lynn, 2016;
Williams, 2014, 2015). Kintsch and collaborators (Kintsch, 1988; Kintsch & van Dijk,
1978) first used these same terms not to classify inferences, but to refer respectively to
the macro and micro levels of text representation.
Kintsch’s characterisation of a text’s holistic meaning (i.e., the text’s theme or point)
as “global” aligns itself with a research tradition that is not exclusive to cognitive science
(e.g., Svensson, 1987, who approaches reading comprehension from the point of view of
narrative theory). It is, therefore, supported by different theoretical backgrounds and
unrelated to the role of inferences. We were unable to trace when the local/global distinc-
tion began to be applied to inferences, and it is not certain that the researchers themselves
were aware of this shift.
In some papers, global inferences are those that refer to the holistic meaning of a text
(e.g., Williams, 2014, 2015). In other cases, it is unclear whether the global/local
dichotomy alludes to a local versus holistic sense of the text or to the connection of sentences
that are distant (global) or close (local) to one another (e.g., Barth et al., 2015; Carlson
et al., 2014; Reed & Lynn, 2016). McNamara and Magliano (2009) indicate that, as a
rule, the terms refer to the text’s holistic properties and that a shift towards a criterion of
distance was the exception. We would say that, rather, ambiguity is the rule. We believe
the distance criterion to be intrinsically problematic; thus, we will focus our upcoming
analysis on it.
The first question that comes up is whether the global/local distinction is dichotomous
or continuous. It could be quantified as continuous, for example, by counting the number
of propositions to be bridged (Kintsch & van Dijk, 1978). Many authors, however, trans-
form it into a categorical variable. Barth et al. (2015), for example, did this by using a
five-sentence, lab-created text, with the number of intervening propositions being either
zero (local) or four (global). However, without parametric comparisons, it is difficult to
determine what should count as local or global. Consider a Russian novel, for example,
where the distance between to-be-bridged propositions can vary from zero to tens of
thousands!
A more critical issue is that the distance between propositions is not something that
can be manipulated experimentally without tampering with the text’s rhetorical structure.
The position of a word or sentence is not only about the distance between two pieces of
information; it carries meaning. Returning to the experiment of Barth et al. (2015):
participants had to read a text and judge the plausibility of a given continuation. The distance
between the critical sentence and the continuation was manipulated (zero or four propo-
sitions). The logic was that making a bridging inference between propositions separated
by a longer distance would demand more cognitive resources. The text used by Barth
et al. (2015) was as follows, with the two possible positions of the key sentence in square
brackets: “[Alan does not like getting in trouble with his teacher.] Alan sits in the back
row of his fourth-grade classroom. He sits beside two other boys who tell very funny
jokes. Alan heard one of them tell a very funny joke. The two boys giggled. [Alan does
not like getting in trouble with his teacher].” After reading this, the participants had to
judge how good the continuations “Alan kept quiet” or “Alan laughed loudly” would be
(Barth et al., 2015, p. 594).
As the example shows, however, changing the key sentence’s position has conse-
quences not only for processing load but also for meaning. Different structures imply
different connections. In the example, when the key sentence is near the one about Alan
sitting in the back, a reader may justifiably link Alan’s avoiding trouble and his sitting far
away from the teacher. The connection might make us suppose that Alan is shy in a broad
sense, or that he is wary of the teacher. Thus, the connection might reinforce the plausi-
bility of the key sentence by giving more profound meaning to it, even though it is farther
away. Of course, readers might or might not make these specific associations,
but the point is that sentence order influences not merely information-processing time, as
Barth et al. (2015) suppose, but how all the surrounding sentences and, ultimately, the
whole text are understood. As in the case of pronominal inferences, the use of lab-created
text did not provide the intended gain in variable control. Changing the position of
one sentence, rather than being a simple variable change, opens a plethora of semantic
possibilities, the mapping of which is not necessarily less complicated than it would be
in the case of a naturalistic text.
Barthes (Barthes & Duisit, 1975) and Eco (1994) show that the semantic dimension is
not constituted merely by what is said, but mainly by how it is said. Structural properties
are not only devices for organising units of meaning but are themselves pregnant with
meaning. Once again, however widespread this difficulty has become, a close reading
shows that it cannot be attributed to Kintsch. Kintsch (1988) is quite clear that
meaning is generated online in a process that critically depends on immediately preceding
sentences, so that meaning changes deriving from sentence order are to be accounted for
by the CI model. It can hardly be said that Kintsch's model is refined enough to really
make sense of such nuances, but its theoretical foundations cannot be blamed for this
unhappy development. In conclusion, if we base our hypotheses and data interpretation
exclusively on an abstraction of the information in the text and overlook (a) how
the text is written (form) and (b) how readers make sense of it, we lose sight of essential
features of how inference-making and reading comprehension occur.

Final remarks
In these final remarks, we discuss the contributions our analysis brings to the field of
inference research and, more widely, to reading comprehension research. From our previous
criticism, we draw recommendations for improving interresearcher communication,
conceptual clarity, scientific integrity, and ecological validity.

Logic and concept usage


The first point we developed thoroughly was the need for explicit, well-thought-out
definitions of the concepts of inference and comprehension. Starting from their common
usage, we showed that, contrary to a widespread belief, inference and implicitness are not two
sides of the same coin. We used the example of Ernie the hamster to show that not everything
that makes sense in a computational model works for human cognition. Besides
recommending more conceptual clarity, we argued against presupposing a priori a lack of
reader knowledge that would prompt an inferencing process. This latter point requires
further elaboration.
There is, so to speak, a habit in reading comprehension research of interpreting
comprehension as evidence for certain cognitive operations. It goes as follows:
from the premise that "if the reader makes an inference (A), they will comprehend (B),"
one concludes that, if the reader comprehends, they must have made an
inference. This, however, is the fallacy known as affirming the consequent. Given B, A
could only be presumed if the premise were "B if, and only if, A." This is, however,
seldom the case. Affirming the consequent means, in inference research, assuming that
an inference is made in every single case where an inference might be made. A simple
reductio ad absurdum, such as the one we performed, shows that this would lead to the
absolute impossibility of understanding.
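Schematically, in our own notation, the invalid pattern and the only premise under which the conclusion would actually follow can be contrasted as:

```latex
% Affirming the consequent (invalid)   vs.   inference licensed by a biconditional (valid)
\[
\frac{A \rightarrow B \qquad B}{\therefore\; A}
\qquad \text{vs.} \qquad
\frac{A \leftrightarrow B \qquad B}{\therefore\; A}
\]
```

Only the right-hand schema, in which comprehension occurs if and only if an inference is made, licenses the conclusion that the reader inferred; and, as argued above, that biconditional premise is seldom warranted.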
As we have shown, the mere application of the rule of parallelism suffices for most
anaphor cases, so that inferences are not the only possible explanation for anaphor
resolution. And, given that the rule of parallelism is a simpler explanation, it would be
better scientific practice to go for it, unless it can’t satisfactorily explain the observed
phenomenon. In the same fashion, an inference must not simply be assumed in the case
of Ernie, given that the simpler explanation, namely that the reader understood the meaning
of the words, usually suffices. Researchers working with readers' knowledge as a variable
should pay special attention to this fallacy, so that no inferences are assumed in
cases where word meaning is a sufficient explanation. We strongly suggest that future
researchers cultivate what may be called metatheoretical awareness. The term does not
seem well established within scientific practice, but Kettermann and Marko
(2004) defined it in a way that we invite our readers to consider closely. One aspect
particularly relevant to this case is that of critical awareness ("sensitivity to and conscious
awareness of the potential fallacy of our own approach or that of others," p. 185).
Logical weaknesses in science are normal and expected, but they must be properly
accounted for.
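As a toy illustration (our own simplification, not an implementation from Leffa, 2003), the rule of parallelism can be sketched as a knowledge-free heuristic: a pronoun is mapped onto the antecedent that fills the same grammatical role in the preceding clause, with no inferential or world-knowledge component involved.

```python
def resolve_by_parallelism(prev_clause_roles, pronoun_role):
    """Resolve a pronoun to the antecedent occupying the same grammatical
    role in the previous clause (no world knowledge involved).

    prev_clause_roles: dict mapping role -> noun phrase,
    e.g. {"subject": "Mary", "object": "Ann"}.
    Returns None if the previous clause has no filler for that role.
    """
    return prev_clause_roles.get(pronoun_role)

# "Mary phoned Ann. She was worried." -> "she" is in subject position,
# so parallelism selects the previous subject, "Mary".
antecedent = resolve_by_parallelism({"subject": "Mary", "object": "Ann"}, "subject")
print(antecedent)  # Mary
```

The point of the sketch is not linguistic adequacy but parsimony: wherever such a rule suffices, positing an inference adds explanatory machinery without explanatory gain.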
At the outset, we used ordinary language as a frame of reference for understanding
the concepts of inference and comprehension. One might say that our definition was still
vague and loose, but it was enough to resolve the case of Ernie: rather than trying to
create rigid definitions regarding what is or isn't an inference, we conceived "inference"
as simply naming a case of comprehension that goes beyond what would usually be
understood as obvious. It may seem outrageous that, after long elaborations on the need
for clarity, we do not provide our readers with more solid definitions. However, our view
is precisely that, in the absence of a general, reliable taxonomy, scientific concepts
should be defined pragmatically, at least well enough to work effectively as experimental
tools. Inference classification proposals should be understood as scientific tools
useful within certain theories, rather than as ontological statements about the nature of
cognitive processes.
So, instead of simply adopting an existing taxonomy, we encourage researchers to
analyse their working concepts minutely to check whether they adequately capture the
observed phenomenon. It is no problem if, rather than one single aspect, a spectrum of
similar but diverse phenomena is under the microscope: we can then adopt categories that
are broad enough, yet still offer well-thought-out, theoretically grounded criteria enabling
the inclusion or exclusion of given phenomena, instead of the grey area of little use offered,
for example, by the text-based versus knowledge-based classification. When citing research
that uses the same terminology as ours, we should carefully check whether we are not
using the same word to refer to completely different things. It is an absolute necessity
that all definitions be explicit and available to whoever reads the final paper,
for obscure concepts amount to nothing.

Human minds and computational models


Multiple times throughout this article, we discussed cases where certain notions of the CI
model, well conceived and well placed within the context of a subpersonal computational
approach, were interpreted as descriptions of personal processes, which, as we showed,
works only at the expense of terminological precision. This point could be subsumed
under a call for awareness of the frame of reference ("sensitivity to and conscious
awareness of what we are making statements about," Kettermann & Marko, 2004,
p. 185). We believe the field of comprehension research has much to gain
from a closer reading of Kintsch (1988) in order to more adequately separate the original
model from its later interpretations.
A noteworthy observation is that, although many authors cite Kintsch's works as an
important part of their theoretical foundation (e.g., Bos et al., 2016; Elbro & Buch-
Iversen, 2013; Goldman et al., 2016; McCarthy & Goldman, 2015), what their interventions
actually involve, namely explicitly teaching children that there are two ways of solving
coherence breaks, has little or nothing to do with the bottom-up processes referred to by
the CI model; they focus instead on intentionally driven reading strategies and the
"top-down" search for meaning (cf. McNamara & Magliano, 2009; see also the difference
between automatic and controlled inferences proposed by Kintsch, 1998). This is
not meant as a criticism of the cited experiments themselves, since we found many
excellent, efficient works that adopt this "upside-down" CI model.
Our criticism of this aspect may also be applied to the seminal constructionist model
proposed by Graesser et al. (1994), which may lie at the roots of this widespread
conflation of fundamentally different approaches. The authors were aware of
the difference between so-called top-down and bottom-up strategies but refrained from
making use of this distinction in their theoretical elaborations, while at the same time
making constant reference to Kintsch's works, which refer only to bottom-up
processes. This is not necessarily problematic, but it seems at least strange that the CI
model keeps being cited where it does not belong. A possible explanation is that
Kintsch's (1988) theory has terminological affinities with other conceptions of reading
comprehension (e.g., Eco, 1994), such as the focus on the role of the reader, which makes
its technical terms easy to transport into altogether different theoretical frameworks, such
as that of an educational approach. Regarding this point, we refer to Kettermann and
Marko's (2004) awareness of differences in approaches ("sensitivity to and conscious
awareness of how problems can be approached differently," p. 185).

An ecological approach is a meaning-based approach


As our final point, we discuss an assumption that underlies the widespread usage
of textoids in experimental practice and list a few alternatives to it. Usually, researchers
write their own texts so that units relevant for comprehension can be more easily isolated
and manipulated. As Wallot (2014) points out, however, changing one element in isolation
often affects how readers understand the whole text, so that the scientific search for
stable basic textual units may be seen as a doomed enterprise. The author also points to
the little-examined issue of whether lab-written texts prompt reading behaviour similar
to real-life reading, which often seems not to be the case (cf. McNerney et al., 2011).
Wallot (2014) then argues, as we have throughout this article, that the pursuit
of what is universal to all reading comprehension requires a meaning-centred conception
of reading, and that meaning can only be properly understood in terms
of how language is contextually used, that is, in terms of the "reading games" specific to
a text and its context. Here we can also mention Gahrn-Andersen's (2019) remarks on the
necessity of studying language in its living, material-bound usage, instead of conceiving
of it as stemming from a general, abstract representational system, out of which, as
Wallot (2014) states, no meaning can arise.
Indeed, there are indications that a mere variation in readers' expectations is sufficient
to significantly alter their reading practices (e.g., Altmann et al., 2014; Rouet
et al., 2017; Teng et al., 2016). Until now, reader expectations have been conceived as a
variable that stems from within the reader and affects their reading strategies in a one-
way fashion. From a stance centred on language use, readers' expectations would be
more appropriately described as a fundamentally social aspect of language, with indications
of text genre functioning as cues to the appropriate reading strategies
given a text's content and social function. The reader who approaches a text most
appropriately is duly rewarded: a reader may gain a lot by memorising Shakespeare's
sonnets, but would hardly benefit from doing the same with dish soap labels. Thus,
expectations work in a two-way fashion, concerning the text–reader interaction rather
than the reader alone (cf. Flores, de Oliveira-Castro, & de Souza, 2020).
An important distinction between what we are proposing and what is usually done in
comprehension research is that, in our view, concrete reading practices should not be
deemed extratextual factors cast from outside upon an otherwise complete textual
material, but rather conditions so fundamental as to determine how text decoding
itself takes place. This approach is not in principle incompatible with the traditional
theoretical framework for comprehension studies and would, in fact, represent a great
advancement for cognitive theory within its own terms. It does, however, require a radical
shift from what has commonly been done and calls for the continuous pursuit of awareness
of one's metatheoretical stance, necessarily involving the reevaluation of many assumptions
that were, until now, taken for granted. We shall not dwell further on this
issue, but we strongly encourage the reading of the cited literature.
In terms of the actual assessment of comprehension, a promising approach can be found
in the field of literary text comprehension. Noticing that literary texts are fundamentally
different from the texts commonly used in comprehension research, researchers
of literature comprehension have been at the forefront of developing strategies for
increasing ecological validity. Measurements of "online" interactions have had
promising results (e.g., Goldman et al., 2015; Levine & Horton, 2013; McNerney
et al., 2011; Teng et al., 2016). In our own work on the comprehension of literary
texts, we used the laborious but interesting approach of recording and analysing in
detail how individual experimental participants interact with authentic texts (for a
summary of our methodology and the reasoning behind it, see Flores, da Nóbrega
Rogoski, & Nolasco, 2020). From another exciting perspective, recent studies have used
eye tracking as a measure of reading behaviour (e.g., Fechino et al., 2020; Xue
et al., 2019). The development of corpus analysis techniques (e.g., Jacobs et al., 2020)
also looks promising. To name just two possibilities, such techniques could be used to
source material for the elaboration of experiments, or to check whether the material
chosen for an experiment adequately represents the average text within a given genre,
which would mean a significant gain in ecological validity.
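As a hypothetical illustration of this second possibility (our own sketch; none of the cited works proposes this specific procedure), one could compare a simple surface feature of a candidate experimental text against a genre corpus and flag texts that deviate strongly from the corpus mean:

```python
import re
import statistics

def mean_sentence_length(text):
    """Mean number of words per sentence (naive split on ., !, ?)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.mean(len(s.split()) for s in sentences)

def z_score_in_corpus(candidate, corpus_texts):
    """How many standard deviations the candidate's feature value lies
    from the mean of the genre corpus."""
    values = [mean_sentence_length(t) for t in corpus_texts]
    mu, sigma = statistics.mean(values), statistics.stdev(values)
    return (mean_sentence_length(candidate) - mu) / sigma

# Tiny mock "genre corpus" and a candidate experimental text:
corpus = [
    "The fox ran. It hid. The dog barked loudly at the gate.",
    "Rain fell. The river rose fast. People watched.",
    "She opened the letter and read the good news to everyone. They celebrated.",
]
candidate = ("A man walked into the quiet town one cold grey morning "
             "in late autumn, carrying nothing.")
print(round(z_score_in_corpus(candidate, corpus), 2))  # a large |z| flags an atypical text
```

In practice, one would compute many features (sentence length, lexical frequency, cohesion indices) over a genuine corpus, but the logic of checking representativeness remains the same.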
We hope to have made the case for important changes in the way reading comprehension
research has been conducted so far. We feel that the theory underlying reading
processes has not kept pace with the rapid progress achieved in the form of massive
amounts of gathered data, of which science can only make sense with the aid of proper
interpretative frameworks.

Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/
or publication of this article: the authors thank the Brazilian National Council for Scientific and
Technological Development (CNPq) (Research Scholarship for GG) and the Foundation for the
Support of Research – Federal District, Brazil (FAP-DF) (Post-Doctoral grant for EPF).

ORCID iDs
Gilberto Gauche https://orcid.org/0000-0003-4331-6163
Eileen Pfeiffer Flores https://orcid.org/0000-0002-7440-8872

Note
1. From now on, we shall use the term “CI model” somewhat freely when discussing features
present in Kintsch (1988), even if they were first discussed before 1988.

References
Altmann, U., Bohrn, I. C., Lubrich, O., Menninghaus, W., & Jacobs, A. M. (2014). Fact vs fic-
tion—how paratextual information shapes our reading processes. Social Cognitive and
Affective Neuroscience, 9(1), 22–29. https://doi.org/10.1093/scan/nss098
Barth, A. E., Barnes, M., Francis, D., Vaughn, S., & York, M. (2015). Inferential processing among
adequate and struggling adolescent comprehenders and relations to reading comprehension.
Reading and Writing, 28(5), 587–609. https://doi.org/10.1007/s11145-014-9540-1
Barthes, R., & Duisit, L. (1975). An introduction to the structural analysis of narrative. New
Literary History, 6(2), 237–272. https://doi.org/10.2307/468419
Bloch, W. G. (2008). The unimaginable mathematics of Borges’ Library of Babel. Oxford
University Press.
Bortolussi, M., & Dixon, P. (2013). Minding the text: Memory for literary narrative. In L. Bernaerts,
L. Herman, B. Vervaeck, & D. de Geest (Eds.), Stories and minds: Cognitive approaches to liter-
ary narrative (pp. 23–37). University of Nebraska Press. https://doi.org/10.2307/j.ctt1ddr7zh.5
Bos, L. T., De Koning, B. B., Wassenburg, S. I., & van der Schoot, M. (2016). Training inference
making skills using a situation model approach improves reading comprehension. Frontiers
in Psychology, 7, Article 116. https://doi.org/10.3389/fpsyg.2016.00116
Cain, K., & Oakhill, J. V. (1999). Inference making ability and its relation to comprehen-
sion failure in young children. Reading and Writing, 11(5–6), 489–503. https://doi.
org/10.1023/A:1008084120205
Carlson, S. E., van den Broek, P., McMaster, K., Rapp, D. N., Bohn-Gettler, C. M., Kendeou, P., &
White, M. J. (2014). Effects of comprehension skill on inference generation during reading.
International Journal of Disability, Development and Education, 61(3), 258–274. https://doi.
org/10.1080/1034912x.2014.934004
Clinton, V. (2015). Examining associations between reading motivation and inference generation
beyond reading comprehension skill. Reading Psychology, 36(6), 473–498. https://doi.org/1
0.1080/02702711.2014.892040
Dennett, D. C. (1969). Content and consciousness. Routledge & Kegan Paul.
Eco, U. (1994). Six walks in the fictional woods. Harvard University Press. https://doi.org/10.2307/j.
ctvjhzps3
Elbro, C., & Buch-Iversen, I. (2013). Activation of background knowledge for inference making:
Effects on reading comprehension. Scientific Studies of Reading, 17(6), 435–452. https://doi.
org/10.1080/10888438.2013.774005
Fechino, M., Jacobs, A. M., & Lüdtke, J. (2020). Following in Jakobson and Lévi-Strauss’
footsteps: A neurocognitive poetics investigation of eye movements during the reading of
Baudelaire’s “Les Chats”. Journal of Eye Movement Research, 13(3), Article 4. https://doi.
org/10.16910/jemr.13.3.4
Flores, E. P., da Nóbrega Rogoski, B., & Nolasco, A. C. G. (2020). Comprensión narrativa:
Análisis del concepto y una propuesta metodológica [Narrative comprehension: Concept
analysis and a methodological proposal]. Psicologia: Teoria e Pesquisa, 36, Article e3635.
https://doi.org/10.1590/0102.3772e3635
Flores, E. P., de Oliveira-Castro, J. M., & de Souza, C. B. A. (2020). How to do things with
texts: A functional account of reading comprehension. The Analysis of Verbal Behavior, 36,
273–294. https://doi.org/10.1007/s40616-020-00135-0
Gahrn-Andersen, R. (2019). But language too is material! Phenomenology and the Cognitive
Sciences, 18(1), 169–183. https://doi.org/10.1007/s11097-017-9540-0
Goldman, S. R., Britt, M. A., Brown, W., Cribb, G., George, M., Greenleaf, C., Lee, C. D., &
Shanahan, C., & Project READI. (2016). Disciplinary literacies and learning to read for
understanding: A conceptual framework for disciplinary literacy. Educational Psychologist,
51(2), 219–246. https://doi.org/10.1080/00461520.2016.1168741
Goldman, S. R., Golden, R., & van den Broek, P. (2007). Why are computational models of text
comprehension useful? In F. Schmalhofer & C. Perfetti (Eds.), Higher-Level language pro-
cesses in the brain (pp. 27–51). Erlbaum. https://doi.org/10.4324/9780203936443
Goldman, S. R., McCarthy, K. S., & Burkett, C. (2015). Interpretive inferences in literature. In E.
O’Brien, A. Cook, & R. Lorch (Eds.), Inferences during reading (pp. 386–415). Cambridge
University Press. https://doi.org/10.1017/CBO9781107279186.018
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative
text comprehension. Psychological Review, 101(3), 371–395. https://doi.org/10.1037/0033-
295x.101.3.371
Jacobs, A. M., Hermann, B., Lauer, G., Lüdtke, J., & Schroeder, S. (2020). Sentiment analysis of
children and youth literature: Is there a Pollyanna effect? Frontiers in Psychology, 11, Article
574746. https://doi.org/10.3389/fpsyg.2020.574746
Kettermann, B., & Marko, G. (2004). Can the L in TaLC stand for literature? In G. Aston, S.
Bernardini, & D. Stewart (Eds.), Corpora and language learners (pp. 169–193). John
Benjamins.
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration
model. Psychological Review, 95(2), 163–182. https://doi.org/10.1037/0033-295x.95.2.163
Kintsch, W. (1993). Information accretion and reduction in text processing: Inferences. Discourse
Processes, 16(1–2), 193–202. https://doi.org/10.1080/01638539309544837
Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge University Press.
Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production.
Psychological Review, 85(5), 363–394. https://doi.org/10.1037/0033-295x.85.5.363
Kispal, A. (2008). Effective teaching of inference skills for reading: Literature review [Research
report no. DCSF-RR031]. National Foundation for Educational Research. https://www.nfer.
ac.uk/publications/edr01/edr01.pdf
Leffa, V. J. (2003). Anaphora resolution without world knowledge. DELTA: Documentação de
Estudos em Lingüística Teórica e Aplicada, 19(1), 181–200. https://dx.doi.org/10.1590/
S0102-44502003000100007
Levine, S., & Horton, W. S. (2013). Using affective appraisal to help readers construct liter-
ary interpretations. Scientific Study of Literature, 3(1), 105–136. https://doi.org/10.1075/
ssol.3.1.10lev
McCarthy, K. S. (2015). Reading beyond the lines: A critical review of cognitive approaches to
literary interpretation and comprehension. Scientific Study of Literature, 5(1), 99–128. https://
doi.org/10.1075/ssol.5.1.05mcc
McCarthy, K. S., & Goldman, S. R. (2015). Comprehension of short stories: Effects of task instruc-
tions on literary interpretation. Discourse Processes, 52(7), 585–608. https://doi.org/10.1080/
0163853X.2014.967610
McCarthy, K. S., & Goldman, S. R. (2019). Constructing interpretive inferences about literary
text: The role of domain-specific knowledge. Learning and Instruction, 60, 245–251. https://
doi.org/10.1016/j.learninstruc.2017.12.004
McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. In
B. H. Ross (Ed.), The psychology of learning and motivation: Vol. 51. The psychology of
learning and motivation (pp. 297–384). Elsevier Academic Press. https://doi.org/10.1016/
S0079-7421(09)51009-2
McNerney, M. W., Goodwin, K. A., & Radvansky, G. A. (2011). A novel study: A situation model
analysis of reading times. Discourse Processes, 48(7), 453–474. https://doi.org/10.1080/016
3853x.2011.582348
Nash, H., & Heath, J. (2011). The role of vocabulary, working memory and inference making abil-
ity in reading comprehension in Down syndrome. Research in Developmental Disabilities,
32(5), 1782–1791. https://doi.org/10.1016/j.ridd.2011.03.007
Reed, D. K., & Lynn, D. (2016). The effects of an inference-making strategy taught with
and without goal setting. Learning Disability Quarterly, 39(3), 133–145. https://doi.
org/10.1177/0731948715615557
Rouet, J. F., Britt, M. A., & Durik, A. M. (2017). RESOLV: Readers’ representation of reading
contexts and tasks. Educational Psychologist, 52(3), 200–215. https://doi.org/10.1080/0046
1520.2017.1329015
Rubin, R. B., Rubin, A. M., & Haridakis, P. M. (2009). Communication research: Strategies and
sources. Wadsworth Cengage Learning.
Searle, J. R. (2003). Rationality in action. MIT Press.
Singer, M. (1994). Discourse inference processes. In M. A. Gernsbacher (Ed.), Handbook of psy-
cholinguistics (pp. 479–515). Academic Press.
Svensson, C. (1987). The construction of poetic meaning: A developmental study of symbolic and
non-symbolic strategies in the interpretation of contemporary poetry. Poetics, 16(6), 471–
503. https://doi.org/10.1016/0304-422x(87)90014-3
Teng, D. W., Wallot, S., & Kelty-Stephen, D. G. (2016). Single-word recognition need not depend
on single-word features: Narrative coherence counteracts effects of single-word features
that lexical decision emphasizes. Journal of Psycholinguistic Research, 45(6), 1451–1472.
https://doi.org/10.1007/s10936-016-9416-4
van Dijk, T. A. (1995). On macrostructures, mental models, and other inventions: A brief per-
sonal history of the Kintsch-van Dijk theory. In C. A. Weaver III, S. Mannes, & C. R.
Fletcher (Eds.), Discourse comprehension: Essays in honor of Walter Kintsch (pp. 383–410).
Lawrence Erlbaum Associates.
Wallot, S. (2014). From “cracking the orthographic code” to “playing with language”: Toward
a usage-based foundation of the reading process. Frontiers in Psychology, 5, Article 891.
https://doi.org/10.3389/fpsyg.2014.00891
Williams, J. C. (2014). Recent official policy and concepts of reading comprehension and infer-
ence: The case of England’s primary curriculum. Literacy, 48(2), 95–102. https://doi.
org/10.1111/lit.12012
Williams, J. C. (2015). The new Salford Sentence Reading Test (2012) and the Diagnostic Reading
Analysis (2008) assess “inference”—but what forms of inference do they test? English in
Education, 49(1), 25–40. https://doi.org/10.1111/eie.12055
Xue, S., Lüdtke, J., Sylvester, T., & Jacobs, A. M. (2019). Reading Shakespeare sonnets:
Combining quantitative narrative analysis and predictive modeling—an eye tracking study.
Journal of Eye Movement Research, 12(5), Article 2. https://doi.org/10.16910/jemr.12.5.2

Author biographies
Gilberto Gauche holds a degree in psychology from the University of Brasilia. He cowrote this
paper while studying music at the University of the Arts Bremen, after which he went to study
cognitive science at the University of Osnabrück. Currently, he researches semiotic, aesthetic, and
philosophic issues in models of text comprehension.
Eileen Pfeiffer Flores is adjunct professor in the Department of Psychology at the University of
Brasilia. While cowriting this paper, she was a visiting professor at the Philosophy Department at
King’s College, London. Her current research interests include conceptual and empirical issues
related to narrative comprehension, shared reading and interactions between literary and social
empathy. Recent publications include: (with J. M. de Oliveira-Castro & C. B. A. de Souza), “How
to do Things With Texts: A Functional Account of Reading Comprehension” in The Analysis of
Verbal Behavior (2020).