
Human Information Interaction and the Cognitive Predicting Theory of Trust

Lauren Fell, Queensland University of Technology (QUT), Brisbane, Australia, l3.fell@qut.edu.au
Andrew Gibson, Queensland University of Technology (QUT), Brisbane, Australia, andrew.gibson@qut.edu.au
Peter Bruza, Queensland University of Technology (QUT), Brisbane, Australia, p.bruza@qut.edu.au
Pamela Hoyte, Queensland University of Technology (QUT), Brisbane, Australia, pamela.hoyte@qut.edu.au

ABSTRACT
This perspectives paper proposes a conceptualization of trust that does not require a predefined feature space, but rather is dynamically formed at the point of information interaction through a cognitive predicting mechanism.

Trust is a significant issue in the current information context due to fake news, echo chambers, filter bubbles, and confirmation biases which can result in a disconnect between human trust expectations and information trustworthiness, making it difficult to establish a feature space within which trust might be modeled.

In response to this, we present our Cognitive Predicting Theory of Trust (CPTT) which allows trust to be modeled without the requirement of a predefined feature space. Drawn from the cognitive theory of Predictive Processing, CPTT describes how people form trust judgments based on cognitive predictions within a system of information interactions. We outline how this CPTT view of trust might be modeled using complex systems and provide examples showing how curation of the information interaction environment can affect the trust associated with the system.

We conclude by proposing that our perspective opens up two avenues for exploration in Computer Human Information Interaction and Retrieval: (1) the need for alternative models, and (2) the value of curating the information environment.

CCS CONCEPTS
• Human-centered computing → Interaction design theory, concepts and paradigms; • Information systems → Users and interactive retrieval; • Applied computing → Psychology.

KEYWORDS
trust, cognitive predicting theory of trust, information interaction, predictive processing, curating information, fake news, cognitive bias

ACM Reference Format:
Lauren Fell, Andrew Gibson, Peter Bruza, and Pamela Hoyte. 2020. Human Information Interaction and the Cognitive Predicting Theory of Trust. In 2020 Conference on Human Information Interaction and Retrieval (CHIIR '20), March 14–18, 2020, Vancouver, BC, Canada. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3343413.3377981

1 INTRODUCTION
Echo chambers, filter bubbles, alternative facts, and fake news are terms characterizing an information environment where trust is increasingly difficult to establish. Yet in arguably the most information saturated time of human history, trust has never been more important. In times long gone, human trust expectations were satisfied with the trustworthiness of the information and its source. This nexus between the trusting person and the trustworthy information is now fractured, resulting in significant social issues like increased polarization in socio-political discourse and widespread misinformation on well established health practices. Many information seekers are ill equipped to navigate this environment, and calls are frequently made for search and social media companies to take action, with regulators litigating companies when they fail to maintain public trust [22].

However, despite the resources being levied against the problem, there is little evidence of progress in stemming the erosion of trust, and few fresh ideas on how to move forward. Frequently, attention is turned to the quality of information and how features like authenticity might allow the discrimination between trustworthy and untrustworthy information. Yet with the rise of deep fake technology [7], the volume of information available, and the velocity with which new information is created, it is an open question as to whether specific information features can be associated reliably with levels of trustworthiness.

The dependence on human judgment to discern trustworthy information also highlights the importance of the reliability of the general public in making these judgments. Educational institutions are regularly called on to improve the 'data literacy' of our children to address a perceived deficit in this area [3].
Implicit in this burden is that problems can be solved through better education. However, the very heuristics that help people make judgments depend on cognitive factors that are manipulated by the same information interaction environments in which the judgments are made. Social media companies exploit human factors (such as emotions and cognitive biases) to create conduits for advertisers, and build businesses on minimizing the friction between information giver and receiver, largely without concern for the nature of that information, and rarely with consideration of the effect that it may have on trust. In this context the relationship between reality and trust is distorted at best and completely destroyed in the worst case, with information consumers living in bubbles of information that resemble their own desired reality, echoing their own perspectives and views, irrespective of how far they are from being beneficial to society as a whole. This scenario is a trust illusion for the individual and a trust breakdown for society. Given this context, it is pertinent to examine existing conceptualizations of trust and their fitness for application in this volatile, dynamic environment.

1.1 Trust between people
Most conceptualizations of trust have centered around people, both individuals and groups. Trust has been viewed as a significant issue for businesses in relation to their customers and employees, for politics in terms of persuading constituents, and for people engaging in relational tasks that require the trust of others. This has resulted in definitions of trust such as "an expectancy held by an individual or a group that the word, promise, verbal or written statement of another individual or group can be relied upon" [30, p 651]; or "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [21, p 712].

Popular definitions such as these, along with the theoretical frameworks that rely on them, treat trust as a relationship between statically defined parties with specific attributes. This static relationship-centric understanding of trust is problematic when considering human interaction with information. Do we consider the relationship between the person and the information? If so, can characteristics of this relationship be seen as similar to those of relationships between people? If not, how are we to understand trust in the context of information interactions?

1.2 Trust in information
Although theories of trust have largely been constructed in a world where relationships predominantly had a physical manifestation, they continue to be drawn upon in an environment where many relationships are virtual and mediated by information interaction rather than physically situated dialogue. While once a consumer's relationship with a company was based on physical engagement with that company, such as talking to a sales assistant on the shop floor, it is now just as common for there to be no physical encounter at all.

Yet, despite these changes in society, views on trust have remained firmly rooted in relationships, where trustworthiness of information is conceptualized as the trustworthiness of the other party. For example, an article in the New York Times is seen as representing the media organization and the author from that organization. From this traditional perspective, trusting the article is not so much trusting the information itself, but more trusting the source of that information based on a relationship between the reader and the author/organization. As such, trustworthiness of the information is something of a proxy for trustworthiness of one or other of the parties.

In the following section, we will show that in the information interaction context this kind of trust is founded on two pillars: people who are trusting, and information which is trustworthy. However, at the same time we recognize that traditional views on trust are limited with respect to information in many contemporary situations, and that frequently information is dynamic and plays an active role in information interactions, shaping the trust relationship between interacting parties. We return to this issue later in section 3 as a motivation for our new perspective on trust in relation to information interactions.

2 TWO PILLARS OF TRUST
As described above, dominant theories on trust center around relationships between people and groups of people. In these there are always two necessary components to the trust relationship: one party needs to trust the other, and the other needs to behave in a trustworthy manner. These theories have also tended to dominate thinking with respect to information interactions. That is, information is seen as a proxy for either or both sides of the trust relationship. This results in information either conveying a party's propensity to trust, or representing the extent to which a party may be trusted. Like trust in the physical sphere, the information sphere imports the significance of truth, vulnerability, expectations, consistency, and other common trust concepts.

Similarly, trust within human information interactions rests on two pillars: (1) the characteristics of the person doing the trusting, such as their personal expectations with respect to the information and their propensity to trust; and (2) the features of the information itself, such as its truthfulness, authenticity, reliability, and the integrity of its source. In this section we examine these two pillars in detail, before showing how they contribute to the way trust can be modeled in information interactions.

2.1 Trusting people
There are situations where the barriers between trust and distrust are clear: for example, we may completely trust our parents as children, and fully distrust a snake we encounter on a hike. On the other hand, we may only partially trust that friend who sometimes lets us down, that dog who only once bit us, or that company who sells us products that only sometimes malfunction.

Definitions of trust often involve the expectations a person has of a certain outcome [9, 21, 26, 31]. This could be the outcome of a friend keeping a secret, a dog not attacking a child, or a piece of news being truthful and accurate. Wherever expectations are being made, a person must draw on a complex range of memories, assumptions, perceptions, and heuristics in an attempt to predict an outcome with a certain amount of confidence. The more uncertainty present in the environment, the less confident a person can be about
their predictions, as they inevitably have less relevant information to draw on in order to make them.

The effect of uncertainty on human decision-making has been well documented in cognition, neuroscience and behavioral economics. For example, the theory of bounded rationality [32] posits that humans have a limited capacity for considering the outcomes and probabilities necessary to make completely accurate predictions and decisions about the world, so must make the best decisions based on the information and cognitive resources available. In a similar vein, dual process theories of reasoning [10] assume two separate processes: (1) fast, intuitive, mostly automatic processes which tend to operate when uncertainty is high, time is short and cognitive resources are few, and (2) slow, considered, controlled processes in the opposite cases.

Slow, considered, controlled processes aim to maximize the utility of information and transact decisions in order to achieve this. That is, the quality of the information, the reputation of the source etc. are carefully and logically evaluated. Such rational decision making has been well studied in 'top-down' models of cognition. However, there is an alternative 'bottom-up' view in which humans employ heuristics, short-cuts and cognitive biases because of bounded rationality. Heuristic decision making and cognitive biases tend to be triggered in situations where there is 'too much information', 'not enough meaning', or 'the need to act fast'. Such heuristics and associated biases often have their roots in satisficing, rather than 'getting it right', which is cognitively more demanding. The number of cognitive biases identified continues to increase, and they range from remembering only emotionally salient information, to finding patterns in pattern-less data, to weighting risk and reward differently, and relying on stereotypes. Whilst these are generally believed to be side effects of a mostly functional and adaptive system that preserves our cognitive resourcing and allows for fast decision making, they contribute to the unpredictability of how people may judge trust.

One may think that, knowing the heuristics that appear to govern human decision-making under uncertainty, as well as certain features of a piece of information, one might be in a position to reliably predict a person's decisions with regard to trust. This simply is not the case, as, aside from some cardinal examples used to demonstrate specific biases, it is often difficult to know which combination of biases may be used in any given situation. This is because of the many individual differences present in the various styles of decision-making employed. For example, Kahneman and Frederick [15] describe heuristics in terms of 'attribute substitution': in place of a difficult decision where information, probabilities or reasoning are difficult to access, we substitute a simpler decision, which may be based on a heuristic such as a prototype or peripheral information. Although this explanation seems to bring many cognitive biases and heuristics under one umbrella, it does not make human decision-making any more predictable - in many cases it is impossible to know which simpler decision a person might substitute. In short, it is challenging to model how human beings transact decisions of trust.

2.2 Trustworthy information
The second pillar of trust is the extent to which the information can be seen to be trustworthy. It is reasonable to assume that trust is strongly related to qualities of the information in question. For example, Briggs, Simpson and de Angeli [4] found three factors strongly correlated with the formation of trust: source credibility, personalization, and predictability. These factors can be influenced through modifications to the information itself. That is, the information could be modified to improve its credibility, to be more suitable to the searcher, and more in line with expectations. In related work, Harris, Sillence and Briggs [14] found that four factors were significant in a model of general web trust, but that in respect to health web sites, two factors (personalization and credibility) were mediated through impartiality and information quality. Regardless of how identified factors play a role in trust formation, they are nevertheless features of the information.

However, in a 2013 study on beliefs and biases in health related search [35], White found that users were more likely to select answers based on the positivity of the answer and its confirmation of existing ideas, rather than on the truthfulness of the information. Assuming that searchers are more likely to trust responses that they select, this work raises an important question in relation to trustworthy information: To what extent is trustworthiness based on features of the information itself, and to what extent is it a function of the person making the trust judgment? This question highlights the inter-relationship between human factors and information features, and it is this inter-relationship and how it is modeled that we examine further in the following sections.

3 FEATURE SPACE VIEW OF TRUST
Imagine you wanted to build a search engine that was effective in not only retrieving information that was topically related to a query but trustworthy as well. In fact, the need to incorporate trust and provenance in information retrieval was presaged almost two decades ago [19]. Recent reflections from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL) suggest not much has changed in the intervening period: "Current evaluation metrics and methods do not adequately capture notions of users' satisfaction, confidence, and trust or the quality of the outcomes or decisions made on the search process" [1].

As there is a human interacting with the information, it is natural to view the search system in terms of the two pillars described above - the human and the information.

One pillar is the human judging the trustworthiness of the information. As mentioned in the previous section, decisions of trust involve assessments of factors such as the reputation of the source and the subjective assessment of the validity of the information.

The human pillar is challenging. Given that the human is a heuristic decision maker subject to cognitive bias, it does not suffice to assume the human is a rational decision maker. However, this doesn't necessarily preclude the development of cognitively motivated user models which could, in principle, be embedded into technology. The field of cognitive science has recently proposed Bayesian models that are sensitive to bounded rationality, and consequently account for both cognitive biases and rational decision making within a single formal framework [18].
In such a model, we could assume that the function f(A1, ..., Am) is a high level abstraction of the user's decision making of trustworthiness, where A1, ..., Am represent cognitive features that bear on decisions of trust.
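To make the idea of such a cognitively motivated user model concrete, here is a minimal sketch of one way f might be realized. The paper leaves f abstract, so the Bayesian form, the odds-tempering 'bias' parameter, and all the numbers below are our own illustrative assumptions, loosely in the spirit of the resource-rational models of [18]:

```python
def posterior_trust(prior, likelihood_ratio, bias=1.0):
    """Bayesian update of the probability that a source is trustworthy.
    bias=1.0 is the fully rational update; bias < 1 under-weights the
    evidence relative to the prior, a crude stand-in for
    confirmation-bias-like behavior within the same formalism."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** bias
    return posterior_odds / (1 + posterior_odds)

# A user who starts out trusting (p=0.8) meets evidence that is 9x
# more likely if the source is NOT trustworthy (likelihood ratio 1/9):
print(posterior_trust(0.8, 1 / 9, bias=1.0))  # rational agent: ~0.31
print(posterior_trust(0.8, 1 / 9, bias=0.3))  # biased agent stays ~0.67
```

A single dial spans rational and biased updating, which is the appeal of this family of models.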
The other pillar is the information, which can also be abstractly characterized by means of the function g(B1, ..., Bn). This function involves features Bi that represent data-driven proxies of conceptual aspects of trust. A famous example is PageRank, which in its original conception was a data-driven feature that computed the 'importance' of a web page based on its back links, which "provide a kind of peer review" [23]. The better the 'peer review', the more important the information and hence the higher the probability of its trustworthiness. Much of the research in systems IR has proceeded in this way: features are identified that promote retrieval effectiveness. Such features are usually instantiated numerically, e.g., by corpus-based statistics, click logs etc. The function g(B1, ..., Bn) represents a high level abstraction of information features that are optimized by the system in order to retrieve trustworthy information (this function can be equated with the 'laboratory approach' in IR).
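The back-link intuition can be made concrete with a short power-iteration sketch. The three-page graph, damping factor and tolerance below are illustrative choices only, not part of the original formulation in [23]:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-9):
    """Power-iteration PageRank; adj[i][j] = 1 means page i links to j."""
    n = adj.shape[0]
    out = adj.sum(axis=1)
    # Each page splits its vote over its out-links; a page with no
    # out-links spreads its vote evenly over all pages.
    M = np.where(out[:, None] > 0,
                 adj / np.maximum(out, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * M @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Toy web: pages 0 and 1 'peer review' page 2; page 2 links back to 0.
links = np.array([[0, 0, 1],
                  [0, 0, 1],
                  [1, 0, 0]])
print(pagerank(links))  # page 2 accumulates the most importance
```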
Our trust retrieval system can now be conceptualized as an interactive retrieval system [16]. A user enters a query into the search system and interacts with it in the usual ways, e.g., the user reformulates their query based on what information is retrieved. While this is going on, heuristic decision making and cognitive biases are influencing the user's interactions. All the while, the system attempts to retrieve trustworthy information. In other words, we have a composite system where trust is reduced to a computation based on the corresponding composite function: f(A1, ..., Am) ⊙ g(B1, ..., Bn). More specifically, trust would be computed from some vector of user and system features where the weights would be iteratively updated according to the interactions between the user and the system.
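A minimal sketch of this composite computation, assuming linear forms for f and g, a product for ⊙, and a simple error-driven weight update; none of these choices are prescribed by the formulation above, which deliberately leaves the functions abstract:

```python
import numpy as np

# Illustrative user-side (A) and information-side (B) features in [0, 1].
A = {"propensity_to_trust": 0.7, "topic_familiarity": 0.4}
B = {"source_reputation": 0.8, "click_through_rate": 0.6}

w_user = np.ones(len(A)) / len(A)   # weights inside f(A1, ..., Am)
w_info = np.ones(len(B)) / len(B)   # weights inside g(B1, ..., Bn)

def trust_score(a, b):
    """Composite f(...) ⊙ g(...), taking ⊙ as a simple product."""
    f = w_user @ np.array(list(a.values()))
    g = w_info @ np.array(list(b.values()))
    return f * g

def update(a, b, feedback, lr=0.1):
    """Nudge both weight vectors toward interactions the user treated
    as trustworthy (feedback=1.0) and away from those they rejected
    (feedback=0.0)."""
    global w_user, w_info
    err = feedback - trust_score(a, b)
    w_user = w_user + lr * err * np.array(list(a.values()))
    w_info = w_info + lr * err * np.array(list(b.values()))

print(trust_score(A, B))        # initial composite trust estimate
update(A, B, feedback=1.0)      # e.g. the user dwelt on / saved the result
print(trust_score(A, B))        # estimate moves toward the feedback
```

The point of the sketch is only to show where the predefined feature space enters: A, B and the update rule must all be fixed before any interaction takes place.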
A significant consequence of this line of thinking is that trust is always defined within a given space of features. Once the feature space has been defined, decisions of trust are reduced to features with corresponding weights and the given function that computes with them. The point is that trust is therefore dependent on a predefined feature space. The question is whether an adequate notion of trust can be expressed in this way.

4 THE COGNITIVE PREDICTING THEORY OF TRUST
In contrast to the feature space view, we provide an alternate view of trust that accounts for a more holistic understanding of both the human and information pillars, and does not rely on a pre-defined feature space. To do so, we draw on the predictive processing theory of mind [5] (also related are active inference and predictive coding [25]), which describes all of cognition as continuous prediction and error minimisation.

4.1 Prediction and Cognition
Predictive processing is thought to play an essential role at all levels of the cortical hierarchy. At the lowest levels, information does not simply enter the cognitive system through a bottom-up process of feature extraction and eventual interpretation. Instead, the processing of information is moderated by the cognitive system's active predictions based on expectations about the world [2, 11]. One suggested example of this is the blind spot in our visual system. This refers to a section of the retina that is lacking in receptor cells due to the exit of nerve fibers through the back of the eye. Rather than perceiving a black spot in our visual field, our brains instead actively fill this space with what they expect to see based on activation coming from the surrounding visual receptors [27]. In other words, our perception of the visual field (in this case, the part of it that falls within our 'blind spot') is mediated by top-down predictions in ways of which we are often not consciously aware. An example at a higher level is in the way we perceive bodily motion, moderated by what we expect based on our own muscle movements. The predictive mechanism underlying this not only avoids the world appearing to shake around us as our eyes saccade, but also allows us to distinguish between movements generated by ourselves and external forces at a higher level in the cortical hierarchy [37].

4.2 Trust and Prediction
As predicting is a fundamental mechanism involved in thought and action, it is clearly also significant with regard to trust. We (humans) like predictability; we find safety in it. It is the security of the comfort zone, the safe feeling of home, and the contentment found in the familiar. In other words, a predictable environment fosters our subjective feeling of trust.

Risk judgments are the inverse - they deal with the likelihood that a safety prediction is false - which in turn threatens the feeling of trust. This can be seen in the lack of trust that is fostered when a piece of technology fails to perform consistently. This can also play a role in the trust perception of virtual characters, as is seen in the famous uncanny valley effect (negative reactions to a character, avatar or robot that bears a very close but 'not quite right' resemblance to a human) [8, 20], which has been shown to be moderated by the predictive processing system [33].

The predicting of an outcome, current or future, is a fundamentally interactive exercise. Based on the predictive processing theory in cognition [5], we propose that trust arises out of a dynamic exchange between expectation and environment that uses a cognitive mechanism that is fundamentally predictive. In this view, expectation is informed by a range of prior beliefs, current information, values, affect and biases. This view is particularly useful in situations where prior features of the information space may be difficult or impossible to determine, such as in complex information environments. Our observation of the importance of predictability as central to trust does not deny the importance of other factors. Rather, it is an attempt to strip down the concept to its fundamental element. This factor in particular is important in understanding the interactive component of trust, as it involves a dynamic level of trust based on the inseparable components of environment and expectation.

Hence, we define trust as a dynamic predictive interaction between expectation and environment, where expectation is mediated by prior beliefs, biases, affect, and available information.
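One loose quantitative gloss on this definition (our own, not a formula from the paper): if at each interaction step t the environment delivers an observation O_t against a prediction, trust can be pictured as tracking accumulated prediction error,

$\epsilon_t = \lvert O_t - \hat{O}_t \rvert, \qquad T_t = \exp\!\Big(-\lambda \sum_{s \le t} \epsilon_s\Big)$

so that a predictable environment (consistently small $\epsilon$) sustains trust, while surprise erodes it at a rate set by $\lambda$. Nothing in CPTT commits to this particular functional form; it merely illustrates how trust can emerge from the interaction itself rather than from a predefined feature space.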
This view of a predictive mechanism as central to the notion of trust is evidenced in much of the trust literature. For example, PytlikZillig & Kimbrough's [26] meta-review on conceptualizations and definitions of trust revealed common themes, with the majority including the words expectation and uncertainty in their definitions. Further, Rousseau, Sitkin, Burt & Camerer [31] identify "confident expectations and a willingness to be vulnerable [as] critical components" (p 394) of all definitions of trust they reviewed.

The word 'expectation' implies prediction to a certain level of confidence in the behavior of the object of trust. In fact, Rempel, Holmes & Zanna [28] consider the predictability component of trust to be the first stage of trust in interpersonal interactions. Rather than consider predictability as simply one component, however, we consider trust to hinge on this cognitive process of prediction.

When we encounter a new situation, the rules of probability, or heuristics, or any of the other means by which we may make general predictions about the world around us, can be considered as standards informing our expectations, from which a judgment of trust emerges. We internalize these standards, or rather, these standards arise dynamically as we interact and learn from our changing environment. However, regardless of how much consensus there is between these standards, or how 'correct' they may be, they still form the basis through which we make predictions. For example, a general prediction about tomorrow might be that the sun will rise in the morning. One might think that this is an objective truth; however, thought of in terms of subjective experience, one cannot know the probability that some catastrophic event will befall the sun sometime during the night, and thus the standard that is set for the sun coming up tomorrow is not based on an objective probability. Instead, the subjective expectation is based on the fact that the sun has been witnessed to rise every day up until this point, and this has formed a standard (or rule) that the sun will definitely rise tomorrow.
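This sunrise example has a classical formal counterpart that the paper does not invoke but which makes the point precise: under Laplace's rule of succession (a uniform prior over the sun's unknown tendency to rise), the subjective expectation after n observed sunrises is

$P(\text{rise tomorrow} \mid n \text{ observed sunrises}) = \frac{n+1}{n+2}$

After a lifetime of roughly 10,000 sunrises this gives 10001/10002 ≈ 0.9999: a standard that behaves like 'the sun will definitely rise tomorrow' while never reaching certainty, and while saying nothing about the objective probability of a catastrophic event.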
Likewise, consider the situation where you might encounter a person in a dark alley, who appears to be dangerous, looking at you menacingly. You might compare this to the set of rules you have established based on all encounters with other people across your life, and find that this person does not meet the general prediction you have for people, namely, to not look at you menacingly or appear dangerous, so you might then find that person untrustworthy.

One might ask at this point: how can trust be applied as simply a probability judgment in cases where we may see someone or something as predictably 'untrustworthy'? If someone or something predictably acts in a way that produces harmful or unsatisfactory outcomes, can they not be considered to be predictably 'untrustworthy', and therefore, under our definition, 'trustworthy' to act in a way that produces harm? To answer this, we must place 'prediction' in the wider context of not just the object of prediction, but of the expectations we place on our environment. At the point of interaction with information or an agent in a system, what we consider is the expectations that are formed through this interaction, and how well the information or agent is perceived to be predictable based on those expectations.

Expanding on the previous example, you may have a friend who is consistently deceptive. Whenever you interact with them in a context where they have an opportunity to be deceptive, at the point at which you invest your time in listening to what they have to say, you form an expectation that what they are about to tell you should be truthful. Your expectation could be due to a range of factors such as social norms (people should generally tell the truth), internal motivations (you really want to believe them this time), affect (you're in a positive mood, and your cynicism is low), or an unknowable combination of cognitive biases. For example, you might have an overgeneralized sense of the honesty of your friend based on some extraneous aspect that places them in a positive light (the halo effect), or what they are saying in the present interaction may agree with your world view (confirmation bias), or it may be that something about the interaction reminds you of someone else whom you deem to be a particularly honest person and you anchor your prediction of their honesty on this other person. The latter is an example of something that can arise out of an interaction with no way of anticipating its presence or effect on the decision in advance or in absence of the interaction.

We contend that predictions are made based on numerous expectations about the world and objects within it, which cannot be anticipated prior to interaction. In the example above, an objective observer of your relationship might see that this friend is predictably deceptive, and that a decision based purely on an ability to predict might be a poor means of informing whether or not you should trust what this friend says. However, an objective measure of truthfulness is just one aspect of this friend that contributes to your decision to trust them. In addition, your friend is just one aspect of your environment at the point of this interaction that contributes to the trust that emerges out of the whole situation. Rather than considering the features of a human system and the features of an information system as static and separate, we instead consider them to be holistically and dynamically related to the interaction context. Humans make predictions about the context as a whole, which inextricably affects their assessments of each individual part, such as the features of the information system, or the system itself. According to the predictive processing view [5], the predictive mechanism permeates multiple levels of cognition, from perception to complex decisions, and is thus able to account for the many factors that play a role in trust.

4.3 Modeling Trust in CPTT
We suggest that complex systems models provide a basis for modeling trust in a way that accords with CPTT. The system is an information interaction system where components of the system (human factors and information features) are interacting with each other within an extended cognitive space. At various points in time these components can form networks which correspond to trust judgments (see figure 1). A network of one set of components may correspond to a strong prediction of trust whereas another network may correspond to a prediction of mistrust. A lack of networked components may indicate ambiguity or an inability to form a trust judgment.
Figure 1: Modeling CPTT. Components of a cognitive system (human factors and information features) form networks corresponding to trust judgments. Changing the system boundaries affects trust judgment formation.

In this complex system model, characteristics are known about information aspects of the system, but specific values of individual features may not be known. Similarly, an understanding of human factors at the system level is possible, but details on individual human factors remain unpredictable. Therefore the two pillars remain in the model, but are considered holistically, without a requirement that a feature space must be mapped, so the model embraces the dynamism and ambiguity associated with individual components of the system. Individual components of the system can be given characteristics that dynamically change in interaction with other components. For example, a component representing a human factor of a particular emotion might increase in emotional valence in response to a component representing an information feature of strong opinion. The same human factor component may decrease its tendency to network with other components when emotional valence is high.
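The paper does not commit to a specific formalism, but a toy realization helps fix ideas. In the sketch below, components carry a dynamic state, links form between components whose states are similar, the density of the resulting network is read as the trust judgment, and curating the environment is literally a change of system boundary. Every name, number and rule is our own illustrative assumption, and the final step previews the boundary-change mechanism discussed next:

```python
import itertools

class Component:
    """A human factor or information feature with a dynamic state."""
    def __init__(self, name, valence=0.5):
        self.name, self.valence = name, valence

def links(components, threshold=0.75):
    """Components whose states are similar enough link up; a densely
    linked system is read as a prediction of trust, a sparse one as
    ambiguity or mistrust."""
    return [(a.name, b.name)
            for a, b in itertools.combinations(components, 2)
            if 1.0 - abs(a.valence - b.valence) > threshold]

def interact(components):
    """One interaction step: a 'strong opinion' information feature
    drives the emotion component's valence up, which in turn lowers
    its tendency to network (as in the example above)."""
    names = {c.name: c for c in components}
    if "strong_opinion" in names:
        names["emotion"].valence = min(1.0, names["emotion"].valence + 0.4)

# A toy extended cognitive system; all names and values are illustrative.
system = [Component("propensity_to_trust", 0.5),
          Component("emotion", 0.5),
          Component("source_reputation", 0.6),
          Component("strong_opinion", 0.9)]

print(len(links(system)))   # 3 links: a well-networked, 'trusting' system
interact(system)
print(len(links(system)))   # emotion decouples; the network thins

# Curating the environment = changing the system boundary: dropping the
# strong-opinion component lets the emotion component settle again.
system = [c for c in system if c.name != "strong_opinion"]
system[1].valence = 0.5
print(len(links(system)))   # networks re-form, facilitating trust
```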
Whereas the feature space view obtains a trust measure based on continuous updating of weights across features that are predefined, the complex systems model provides a mechanism for influencing trust through curating the environment and effectively changing the boundary of the system (see figure 1). For example, when the system comprises networks that correspond strongly to trust, a boundary change might erode this trust, corresponding to a reduction in the ability of the components to form networks. In contrast, if an environment is curated that allows the optimum opportunity for these networks to form, then such an environment might be seen as facilitating trust.

5 CURATING TRUST
A feature based model of trust spans information-based features and features related to human cognition. It is important to note that a pre-existing feature space is required. Much hinges on getting this 'right enough' to be effective. Big challenges are involved in identifying a suitable set of features and finding appropriate means to weight them. An additional challenge is developing methods to update weights in the light of information interactions from the user.

Another possibility is presented by the holistic approach of CPTT, which is best introduced with an analogy. An artist sometimes designs a work in order to engender a certain experience for the viewer. To this end, one of the important aspects of curating artwork is to mount the work in such a way that it enhances the possibility of the desired experience being actualized in the human viewer. Note that in this setting, the viewer, artwork and its mounting form an extended cognition system that need not be predicated on a pre-existing feature space.

We argue that trust is an experience in an extended cognition system that can be analogously curated. To illustrate this point, in the following sections we consider three information interaction examples: (1) Google's use of their knowledge panel, (2) Facebook's response to live streaming of terrorist video material, and (3) Cambridge Analytica's approach to influencing political decision-making.

5.1 Google knowledge panel
The volume and inconsistency of health information on the internet have been reported as two of the most significant barriers to effectively interacting with health information online [17]. These barriers, when coupled with cognitive biases that result in a tendency to accept favorable results [35], can result in searchers trusting information which, for their own health, should not be trusted.

Information and human factors could be addressed separately by educating people and controlling information; however, a different approach would be to curate the information environment by utilizing CPTT. Using this perspective, objective features of the information do not necessarily exist, and therefore cannot be independently assessed by Google. Likewise, human factors such as cognitive biases do not exist independently of the interaction, and interaction implicitly involves the predictive processing mechanism. A solution therefore could be to curate the interaction space rather than the independent combination of the two pillars of information and user.

Google has adopted an approach which resembles information curation [12]. Using knowledge panels, Google enhances the information interactions with the search system by 'mounting' information from expertly determined trusted sources at the top of the results page. Users hold certain expectations of the general layout of a results page, and base predictions for what they see on these expectations. By playing into these predictions, an experience of trust is promoted in the extended cognition system.

5.2 Facebook live
Facebook's news post on "Protecting Facebook Live from abuse" [29] can be seen as an attempt to respond to community calls for the company to step up as Curator and to establish an environment which reduces the likelihood of racist and violent content going viral. Their response followed two months after the live streaming on its platform of a gunman's massacre of people worshiping in two New Zealand mosques.
However, Facebook was also simultaneously involved in continuing to curate an information interaction environment which favours the spread of this kind of information. One commentator remarked that "The act dovetailed grimly with the business model of major social media platforms (Christchurch, mosque shootings, 2019) which were not able or willing to stop the shooter's material spreading online." [24]

We argue that Facebook operates on a model where trust has implicitly been cultivated through predictability. Suggested content algorithms and the structure of the friend/follow list and news feed promote predictability. A list of friends on Facebook would tend to consist of people with whom a user perceives some similarity. The robust Similarity Attraction effect [6] in interpersonal interaction shows that people tend to surround themselves with others who share similar attributes such as opinions, attitudes and interests. This similarity is therefore likely to foster some level of homogeneity in a network (i.e. creating a filter bubble), which increases the predictability of information coming through the social platform's feed and thus, according to CPTT, reinforces the perception of trust. This effect is already being exploited to promote products through social recommendations [34]. In addition, algorithms are employed which present material (including advertising) based on past user behavior, narrowing diversity, and primarily reflecting the users' own interests and attitudes. Therefore, the predictability and trust cultivated in the feed are extended to third party advertisements presented in a space where friend-created content is ordinarily expected. These examples demonstrate an environment that Facebook has curated, where an optimized level of predictability in the platform's environment fosters trust, increasing the potential success of both continued engagement with the platform as well as other business goals, whether or not this benefits its users or society as a whole.

For Facebook to curate an environment which prevents the propagation of content such as the live footage of the Christchurch mosque shooting, they would need to curate the information environment toward a different experience than the current one, revealing a friction between Facebook's current business model and what may be considered acceptable content moderation. A tension exists between censorship (which may erode the predictability and trust in the platform) and the free flow of information (which may impact societal trust). We suggest that CPTT might indicate a way forward in the form of adjusting the predictability of certain factors within the information environment towards an agreed social good. Solving these ethical dilemmas is beyond the scope of this paper, but we suggest our CPTT model provides a basis on which questions on this issue might be addressed in future work.

5.3 Cambridge Analytica
While the Cambridge Analytica scandal [36] was mostly (and rightly) focused on the misuse of Facebook users' personal information, here we suggest that the company's analytics allowed for political influence in a manner that was essentially the curation of an information experience in order to achieve a particular desired trust outcome: trust in one politician and/or mistrust of another.

Polarized political views effectively prey on the confirmation bias of those people whose views are located at the poles, such that they make trust judgments based on that particular polar view. This also promotes dialogue with others of the same view (i.e. echo chambers), and rejects those with differing views (i.e. filter bubbles). These phenomena are strongly associated with the predictive nature of trust, as the affected person predicts that information will be trustworthy based on their political stance and is rewarded with an almost zero error result, receiving their view echoed back to them.

In this example, the extended cognition system comprises an individual and the information environment offered by Facebook, and so information interactions were not confined to purely the individual nor the information presented to them, but were influenced by the network effect of the platform, effectively amplifying the interactions on which trust judgments are made.

What is striking is how Cambridge Analytica's work allowed for the curation of an information environment to maximize the possibility of certain experiences for groups of individuals. For example, in relation to the US presidential election, the information environment of Democratic voters was curated in such a way as to create a false sense of trust that the election was all but won, thereby facilitating a behavioral response where individuals did not feel compelled to vote. In contrast, the information environment of Republican voters was curated to foster an experience of anger which was amplified by the echo chamber (also a curated experience), causing greater numbers to vote than if that experience of anger was not present. In both cases, small amounts of information available at the individual level (like personality questionnaire answers) were used to obtain highly accurate system level information about groups of individuals, which in turn allowed for the curation of targeted information environments designed to affect individual experiences and ultimately behavior.

6 CONCLUSION
In the current information saturated environment of polarized opinion and high quality faking of images, video and text, trust has become a highly significant topic that is related to arguably all information interactions. The character of trust also makes it difficult to cater for adequately with the core technologies that underpin modern information retrieval systems. The world's largest search and social media companies have found that they need to reach beyond the dominant technological solutions in order to build trust with their users.

Traditionally, approaches to modeling trust have focused on forming a pre-defined feature space between the two pillars of human factors and information features, in which feature weightings are interactively updated. However, this approach is untenable in contexts where it is not practically feasible to form the feature space.

In this paper we have suggested that alternative models of trust are required that do not rest on the interactive updating of a pre-defined feature space. We suggest one such alternative can be found in our Cognitive Predicting Theory of Trust (CPTT) and a corresponding complex systems model.
We believe that our perspective opens up several new areas of exploration for human information interaction. Firstly, we believe that there is a need for alternative models that can operate in contexts where pre-defined features are not accessible or where feature space formation is not feasible. Secondly, we suggest future avenues in the areas of neuroscience, cognition, and social psychology in exploring how major trust theories within these fields might relate to CPTT. Third, we note a benefit of CPTT is its foundations in neuroscience. There has been a recognition of the value of neuroscience in the human information interaction space in recent years [13] and CPTT may help direct further research in this area. Finally, we propose that it would profit the field to undertake further research in the curation of information environments, taking advantage of CPTT in order to design curated, trusting information interaction experiences for users. Specifically, investigations may focus on determining what curation of the information environment might look like in terms of information interaction and the implications for search and retrieval. It is our hope that this paper will help provoke discussion and debate on these areas.

ACKNOWLEDGMENTS
This research was supported by the Asian Office of Aerospace Research and Development (AOARD) grant FA2386-17-1-4016.

REFERENCES
[1] James Allan, Jaime Arguello, Leif Azzopardi, Peter Bailey, Tim Baldwin, Krisztian Balog, Hannah Bast, Nick Belkin, Klaus Berberich, Bodo von Billerbeck, Jamie Callan, Rob Capra, Mark Carman, Ben Carterette, Charles L. A. Clarke, Kevyn Collins-Thompson, Nick Craswell, W. Bruce Croft, J. Shane Culpepper, Jeff Dalton, Gianluca Demartini, Fernado Diaz, Laura Dietz, Susan Dumais, Carsten Eickhoff, Nicola Ferro, Norbert Fuhr, Shlomo Geva, Claudia Hauff, David Hawking, Hideo Joho, Gareth Jones, Jaap Kamps, Noriko Kando, Diane Kelly, Jaewon Kim, Julia Kiseleva, Yiqun Liu, Xiaolu Lu, Stefano Mizzaro, Alistair Moffat, Jian-Yun Nie, Alexandra Olteanu, Iadh Ounis, Filip Radlinski, Maarten de Rijke, Mark Sanderson, Falk Scholer, Laurianne Sitbon, Mark Smucker, Ian Soboroff, Damiano Spina, Torsten Suel, James Thom, Paul Thomas, Andrew Trotman, Ellen Voorhees, Arjen P. de Vries, Emine Yilmaz, and Guido Zuccon. 2018. Research Frontiers in Information Retrieval: Report from the Third Strategic Workshop on Information Retrieval in Lorne (SWIRL 2018). SIGIR Forum 52, 1 (Aug. 2018), 34–90. https://doi.org/10.1145/3274784.3274788
[2] Lisa Feldman Barrett and W Kyle Simmons. 2015. Interoceptive Predictions in the Brain. Nature Reviews Neuroscience 16, 7 (2015), 419.
[3] Patricia Senn Breivik. 2005. 21st Century Learning and Information Literacy. Change: The Magazine of Higher Learning 37, 2 (2005), 21–27.
[4] Pamela Briggs, Brad Simpson, and Antonella De Angeli. 2004. Personalisation and Trust: A Reciprocal Relationship? In Designing Personalized User Experiences in eCommerce. Springer, 39–55.
[5] Andreja Bubic, D Yves Von Cramon, and Ricarda I Schubotz. 2010. Prediction, Cognition and the Brain. Frontiers in Human Neuroscience 4 (2010), 25.
[6] Donn Byrne. 1997. An Overview (and Underview) of Research and Theory within the Attraction Paradigm. Journal of Social and Personal Relationships 14, 3 (1997), 417–431.
[7] Robert Chesney and Danielle Keats Citron. 2018. Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. (2018).
[8] Claude Draude. 2011. Intermediaries: Reflections on Virtual Humans, Gender, and the Uncanny Valley. AI & Society 26, 4 (2011), 319–327.
[9] Timothy C Earle and Michael Siegrist. 2006. Morality Information, Performance Information, and the Distinction Between Trust and Confidence. Journal of Applied Social Psychology 36, 2 (2006), 383–416.
[10] Jonathan St BT Evans. 2003. In Two Minds: Dual-Process Accounts of Reasoning. Trends in Cognitive Sciences 7, 10 (2003), 454–459.
[11] Karl Friston and Stefan Kiebel. 2009. Predictive Coding under the Free-Energy Principle. Philosophical Transactions of the Royal Society B: Biological Sciences 364, 1521 (2009), 1211–1221.
[12] Google. [n.d.]. Search for Medical Information on Google. https://support.google.com/websearch/answer/2364942?p=medical_conditions.
[13] Jacek Gwizdka, Yashar Moshfeghi, and Max L Wilson. 2019. Introduction to the Special Issue on Neuro-Information Science. Journal of the Association for Information Science and Technology 70, 9 (2019), 911–916.
[14] Peter R Harris, Elizabeth Sillence, and Pam Briggs. 2011. Perceived Threat and Corroboration: Key Factors That Improve a Predictive Model of Trust in Internet-Based Health Information and Advice. Journal of Medical Internet Research 13, 3 (2011), e51.
[15] Daniel Kahneman and Shane Frederick. 2002. Representativeness Revisited: Attribute Substitution in Intuitive Judgment. Heuristics and Biases: The Psychology of Intuitive Judgment 49 (2002), 81.
[16] Diane Kelly. 2009. Methods for Evaluating Interactive Information Retrieval Systems with Users. Foundations and Trends in Information Retrieval 3, 1-2 (2009), 1–224.
[17] Kenneth Lee, Kreshnik Hoti, Jeffery David Hughes, and Lynne Emmerton. 2017. Dr Google Is Here to Stay but Health Care Professionals Are Still Valued: An Analysis of Health Care Consumers' Internet Navigation Support Preferences. Journal of Medical Internet Research 19, 6 (2017), e210.
[18] Falk Lieder and Thomas L. Griffiths. 2019. Resource-Rational Analysis: Understanding Human Cognition as the Optimal Use of Limited Computational Resources. Behavioral and Brain Sciences (2019), 1–85. https://doi.org/10.1017/S0140525X1900061X
[19] C. Lynch. 2001. When Documents Deceive: Trust and Provenance as New Factors for Information Retrieval in a Tangled Web. Journal of the American Society for Information Science and Technology 52, 1 (2001), 12–17.
[20] Maya B Mathur and David B Reichling. 2016. Navigating a Social World with Robot Partners: A Quantitative Cartography of the Uncanny Valley. Cognition 146 (2016), 22–32.
[21] Roger C Mayer, James H Davis, and F David Schoorman. 1995. An Integrative Model of Organizational Trust. Academy of Management Review 20, 3 (1995), 709–734.
[22] Evan Osnos. 2018. How Much Trust Can Facebook Afford to Lose? https://www.newyorker.com/news/daily-comment/how-much-trust-can-facebook-afford-to-lose.
[23] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking: Bringing Order to the Web. Technical Report. Stanford InfoLab.
[24] Colin Peacock. 2019. The New Zealand Mosque Massacre: 2. 'End of Innocence' for Media and Nation. Pacific Journalism Review: Te Koakoa 25, 1&2 (2019), 18–28.
[25] Giovanni Pezzulo. 2012. An Active Inference View of Cognitive Control. Frontiers in Psychology 3 (2012), 478.
[26] Lisa M PytlikZillig and Christopher D Kimbrough. 2016. Consensus on Conceptualizations and Definitions of Trust: Are We There Yet? In Interdisciplinary Perspectives on Trust. Springer, 17–47.
[27] Rajani Raman and Sandip Sarkar. 2016. Predictive Coding: A Possible Explanation of Filling-in at the Blind Spot. PLoS ONE 11, 3 (2016), e0151194.
[28] John K Rempel, John G Holmes, and Mark P Zanna. 1985. Trust in Close Relationships. Journal of Personality and Social Psychology 49, 1 (1985), 95.
[29] Guy Rosen. 2019. Protecting Facebook Live From Abuse and Investing in Manipulated Media Research. https://newsroom.fb.com/news/2019/05/protecting-live-from-abuse/.
[30] Julian B Rotter. 1967. A New Scale for the Measurement of Interpersonal Trust. Journal of Personality 35, 4 (1967), 651–665.
[31] Denise M Rousseau, Sim B Sitkin, Ronald S Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-Discipline View of Trust. Academy of Management Review 23, 3 (1998), 393–404.
[32] Herbert A Simon. 1972. Theories of Bounded Rationality. Decision and Organization 1, 1 (1972), 161–176.
[33] Burcu A Urgen, Marta Kutas, and Ayse P Saygin. 2018. Uncanny Valley as a Window into Predictive Processing in the Social Brain. Neuropsychologia 114 (2018), 181–185.
[34] Dong Wei, Tao Zhou, Giulio Cimini, Pei Wu, Weiping Liu, and Yi-Cheng Zhang. 2011. Effective Mechanism for Social Recommendation of News. Physica A: Statistical Mechanics and its Applications 390, 11 (2011), 2117–2126.
[35] Ryen White. 2013. Beliefs and Biases in Web Search. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 3–12.
[36] Wikipedia. [n.d.]. Facebook–Cambridge Analytica Data Scandal. https://en.wikipedia.org/wiki/Facebook–Cambridge_Analytica_data_scandal.
[37] Yukihito Yomogida, Motoaki Sugiura, Yuko Sassa, Keisuke Wakusawa, Atsushi Sekiguchi, Ai Fukushima, Hikaru Takeuchi, Kaoru Horie, Shigeru Sato, and Ryuta Kawashima. 2010. The Neural Basis of Agency: An fMRI Study. NeuroImage 50, 1 (2010), 198–207.
