
Judgment and Decision Making (2024), Vol. 19:e13, 1–23
doi:10.1017/jdm.2024.9

THEORY ARTICLE

A psychological model of collective risk perceptions


Sergio Pirla
Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
E-mail: sergio.pirla@psy.au.dk
Received: 28 May 2023; Revised: 8 January 2024; Accepted: 12 March 2024
Keywords: risk perception; information transmission; agent-based models

Abstract
Decades of research have sought to understand how people form perceptions of risk by modeling either individual-level
psychological processes or broader social and organizational mechanisms. Yet, little formal theoretical work has
focused on the interaction of these 2 sets of factors. In this paper, I contribute to closing this gap by modifying
a psychologically rich individual model of probabilistic reasoning to account for the transmission and collective
emergence of risk perceptions. Using data from 357 individuals, I present experimental evidence in support of
my main building assumptions and demonstrate the empirical validity of my model. Incorporating these results
into an agent-based setting, I simulate over 1.5 billion social interactions to analyze the emergence of risk
perceptions within organizations under different information frictions (i.e., limits on the availability and precision
of social information). My results show that by focusing on information quality (rather than availability), groups
and organizations can more effectively boost the accuracy of their emergent risk perceptions. This work offers
researchers a formal framework to analyze the relationship between psychological and organizational factors in
shaping risk perceptions.

1. Introduction
Human decision-making frequently hinges on our perceptions of risk, the subjective probabilities we
attribute to uncertain outcomes. Whether it’s an individual deciding to invest in a start-up, guided by
their calculated risk of financial reward, a patient contemplating a novel medical treatment, swayed by
their grasp of potential health risks, or a daily commuter choosing to cycle rather than drive, steered
by their safety assessment, risk perceptions infiltrate all aspects of our everyday lives.
Considering the critical importance of risk perceptions in shaping our day-to-day experiences, a
vast scholarly literature has attempted to dissect their complex nature (see Siegrist and Árvai, 2020 for a
review). This exploration encompasses the emotional, cognitive, and social factors that influence our
view of risk in a variety of contexts and circumstances (Holzmeister et al., 2020; Kim et al., 2016; Slovic
and Peters, 2006; Slovic et al., 1980; Tompkins et al., 2018). Within the scope of this literature, several
authors have put forth formal mathematical models with the intent to elucidate the process by which
individuals form risk perceptions (Bordalo et al., 2012; Busby et al., 2016; Einhorn and Hogarth, 1985,
1986; Moussaïd, 2013). These formal models hold a number of advantages over verbal psychological
theories (Borsboom et al., 2021; Fried, 2020; Robinaugh et al., 2021). For one, formal models offer
a mathematically rigorous framework that provides a systematic and structured understanding of the
underlying mechanisms driving the formation of risk perceptions. This mathematical modeling allows
for more precise predictions, making it easier to validate or refute the proposed mechanisms. Second,
compared to verbal theories, formal models have the capacity to incorporate and quantify complex
interactions between variables, which can be difficult to disentangle or express in verbal theories. Third,
by explicitly outlining the mathematical relationships among different variables, formal models bridge
the divide between psychological theories and the application of statistical models to empirical data
(Borsboom et al., 2021; Fried, 2020).

© The Author(s), 2024. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European
Association for Decision Making. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence
(https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is
properly cited.

https://doi.org/10.1017/jdm.2024.9 Published online by Cambridge University Press
Despite the undeniable strides made by these formalization efforts, there remains an important gap in
the existing literature. Prior work on risk perception has modeled either individual-level psychological
processes or broader social and organizational phenomena, with scant attention paid to the interaction
of these 2 sets of factors. In this paper, I contribute to closing this gap by modifying a psychologically
rich individual model of probabilistic reasoning to account for the social transmission and collective
emergence of risk perceptions.

1.1. Modeling risk perceptions: The role of psychological processes


In various everyday situations, individuals often deal with limited or imprecise information regarding
the actual probabilities associated with different outcomes. This lack of accessible and precise
probabilistic information presents a scientific challenge when modeling the psychological processes
behind the formation of subjective probabilistic assessments.
An early body of literature proposed modeling individual risk perceptions as emerging from
Bayesian updating, providing a theoretically sound framework for understanding how individuals
should ideally update their probabilistic beliefs in response to new information. Examples of these
mathematically tractable models can be found in settings where individuals were presented with nuclear
risks (Smith and Michaels, 1987), dangerous chemicals (Smith and Johnson, 1988), workplace hazards
(Viscusi and O’Connor, 1984), or overall mortality assessments (Hakes and Viscusi, 1997). However,
empirical evidence has since accumulated showing that individuals often fail to adhere to the principles
of Bayesian updating when assessing risks (Kahneman et al., 1982; Siegrist and Árvai, 2020; Slovic,
2016; Tversky and Kahneman, 1974; Viscusi, 1985).
Responding to this discrepancy between normative models and actual psychological processes,
several authors departed from rational models of risk perceptions to focus on the role of biases and
heuristics (Kahneman et al., 1982; Tversky and Kahneman, 1974). Notably, Einhorn and Hogarth
(1985, 1986) suggested a model in which subjective risk perceptions emerge from a process of
anchoring and adjustment. In this framework, an individual initially sets a probability estimate, termed
an anchor, and then mentally explores probabilities both above and below this anchor, leading to a final
adjustment away from the initial estimate.
This model, which offers a descriptive view of the formation of subjective risk perceptions, has
been shown to account for a significant number of empirical regularities and exhibits a good empirical
fit in experimental settings (Einhorn and Hogarth, 1985, 1986; Hogarth and Einhorn, 1990; Hogarth
and Kunreuther, 1985, 1989). Moreover, many of the ideas introduced in Einhorn and Hogarth’s
model have significantly influenced the risk literature in the fields of economics and psychology.
For instance, a recent paper by Jaspersen and Ragin (2021) developed an anchoring and adjustment
model of decision-making under risk, demonstrating that such a model can elucidate various choice
anomalies. Similarly, Johnson and Busemeyer (2016) employ an anchoring and adjustment model to
capture the process of mental simulation of probabilities by an individual facing a choice under risk.
Other noteworthy contributions in the economics literature have either adapted the insights of Einhorn
and Hogarth’s risk perception model to a revealed-preference context (Abdellaoui et al., 2011) or
incorporated some of its insights to analyze additional psychological processes like salience (Bordalo
et al., 2012) or cognitive uncertainty (Enke and Graeber, 2019).
The rich body of theoretical work has been instrumental in directing and enriching an expanding
empirical literature on risk perceptions and preferences in economics and psychology (Barberis, 2013;
Siegrist and Árvai, 2020). In particular, it has shed light on the complexities of individual decision-
making processes under conditions of uncertainty. However, a common assumption these models
often make is that individuals confront risky choices in isolation. In taking this approach, these
models inadvertently overlook the social dynamics that frequently characterize the formation of risk
perceptions.

1.2. Modeling risk perceptions: The role of social and organizational processes
A distinct body of literature has concentrated on exploring the social and organizational determinants
of risk perception, introducing several models to clarify how these elements shape subjective risk
evaluations. This literature has been notably influenced by network models of attitudinal contagion,
wherein individuals are portrayed as nodes connected within a network to study the formation and
evolution of different attitudes and perceptions within groups. Popular contagion models include the
threshold model (Granovetter, 1978; Watts, 2002, 2004), which postulates that an individual will adopt a
new attitude or behavior when a specified proportion of their network connections have done so, and the
independent cascade model (Kempe et al., 2003; Watts, 2004), which postulates that when an individual
adopts a new attitude, each of their connections in the network has a certain probability of also adopting
that attitude.
Beyond attitudinal contagion models, a few authors have attempted to formalize the social
amplification of risk framework (SARF, Kasperson et al., 1988, 2022). This framework describes
risk perceptions as emerging from the interaction of complex psychological, social, institutional, and
cultural processes. Various societal mechanisms such as media coverage, cultural beliefs, interpersonal
communication, or people’s experiences can attenuate or amplify these perceptions of risk. While
formalizing all aspects of the SARF is unfeasible, past work has focused on important specific aspects
of this theoretical framework (Busby et al., 2016; Moussaïd, 2013). For example, Moussaïd (2013)
modifies an opinion-dynamics model (i.e., a framework used to understand how individuals’ opinions
evolve and influence each other within a social network through persuasion and consensus building) to
study the social evolution of risk perceptions. Specifically, Moussaïd focuses on the role of the media
environment, showing that the propensity of individuals to seek independent information can play a
key role in the formation of risk perceptions. Similarly, Busby et al. (2016) employ an agent-based
model to formalize some concepts presented in the SARF. Specifically, these authors show that the
availability of different information sources and the way risk communications are crafted can largely
impact collective risk perceptions.
Despite the substantial advancements these models have made in uncovering the nature of risk
perceptions, their focus on social and organizational aspects has resulted in a somewhat circumscribed
view, incorporating only a limited set of psychological factors. Moreover, by failing to explicitly
incorporate descriptive psychological mechanisms, these models cannot be easily modified to further
study the role of different psychological processes in shaping collective risk perceptions. The absence
of this psychological modeling in existing formal models of the social emergence of risk perceptions
underscores the need for more integrative approaches that can bridge the gap between social and
psychological processes in the formation of risk perceptions.

1.3. The present paper


The goal of this paper is to synthesize these 2 separate fields of research. To do so, I present a modified
individual model of probabilistic reasoning to account for the social propagation of subjective risk
perceptions. In doing so, this paper presents a framework to study how subjective perceptions of risk
(i.e., subjective probabilities assigned to uncertain events) evolve in groups.
As typically assumed in information transmission models (Banerjee, 1992; Banerjee and Fudenberg,
2004; Kasperson et al., 1988; Watts, 2002, 2004), in my theoretical approach, agents can discern the
subjective probabilities attached to an uncertain event by members of their social network. Utilizing
this social information, agents generate subjective risk perceptions by adhering to an anchoring-and-
adjustment process, as proposed by Einhorn and Hogarth (1985, 1986). Specifically, my framework
presumes that agents pay attention to 2 aspects of their social information—the mean and the dispersion
of subjective probabilities. Agents employ the standard deviation of subjective probabilities associated
with an event to infer the level of ambiguity or precision in their social information, capturing the
notion that a higher level of consensus among individuals implies a lower degree of ambiguity about
an event’s true probabilities. Given an estimate of the ambiguity in their social information, the agents
anchor their subjective probability assessments on the average subjective probability assigned to an
event by their network members. Following this, the agents undergo an adjustment process to account
for the ambiguity in their social information. This adjustment process represents the mental simulation
of probabilities above and below the anchoring estimate. Given the individual’s uncertainty about
the actual probability distribution of an event, the agent mentally explores and tests various levels
of likelihood, resulting in an adjustment away from the anchor that is contingent on the anchoring
probability itself, the degree of ambiguity in the agent’s social information, and the agents’ attitudes
toward the risky event.
Empirical support for my primary building assumptions and the validity of my model are obtained
using data from 357 participants in an online study. Furthermore, I utilize this experimental data to
calibrate the model and implement it in an agent-based setting. By simulating over 1.5 billion social
interactions, I investigate the emergence of risk perceptions within organizations. To exemplify how my
model can yield testable predictions, I examine how different information frictions (i.e., constraints on
the precision and availability of social information) influence a group’s emergent risk perception. My
findings reveal that constraining the quality of socially shared information results in more imprecise
and ambiguous risk perceptions. Conversely, limiting the quantity of socially shared information leads
to a more accurate (albeit slower) emergence of risk perceptions and a higher degree of ambiguity
resolution. My results suggest that groups and organizations can more effectively enhance the accuracy
of their emergent risk perceptions by prioritizing information quality over availability.
Beyond specific results, this work offers researchers a versatile framework to formally study the
interplay of a broad range of organizational, psychological, and social phenomena in shaping collective
risk perceptions. By doing so, this paper paves the way for the development of novel research lines on
the construction of collective risk perceptions.

2. Theoretical model
In this section, I first offer an overview of Einhorn and Hogarth’s (1985, 1986) model of probabilistic
reasoning (Section 2.1). Then, I adapt this model to a social setting by endogenizing some of its key
ingredients (Section 2.2).

2.1. Theoretical foundation: A model of probabilistic reasoning


My theoretical framework is an adaptation to a social setting of Einhorn and Hogarth’s model of
probabilistic reasoning under ambiguity (Einhorn and Hogarth, 1985, 1986). In essence, this model
outlines a cognitive process by which agents assign a subjective probability to an uncertain event
in situations where probabilistic information is either lacking or vague. In this model, the agents
follow a process of anchoring and adjustment when deriving the subjective probability of an event.
Specifically, the subjective probability assigned to an outcome x—namely SP(x)—is given by the
following equation:

SP(x) = p + k, (1)

where p (ranging from 0 to 1) is the anchoring probability and k the adjustment made to account
for the ambiguity in the available probabilistic information. The authors provide a rationale for this
adjustment process—it represents a process of mental simulation of probabilities above and below the
anchor (see Johnson and Busemeyer, 2016 for a different approach to modeling the mental simulation
of probabilities in risky choice). As no precise probabilistic information exists, an agent tests and
evaluates the plausibility of different probability levels above and below the anchor. This leads to an
adjustment away from the anchor that depends on the total amount of ambiguity, the agent’s attitudes
toward the uncertain event (i.e., the agent’s level of optimism/pessimism), and the anchoring probability
itself. Specifically, the total adjustment (k) is the difference between the adjustment toward greater
probabilities (k_g) and the adjustment toward smaller probabilities (k_s):

k = k_g − k_s. (2)

As highlighted in Einhorn and Hogarth (1985, 1986), the maximum value of k_g is (1 − p). Similarly,
k_s cannot exceed p—violating either of these conditions could result in subjective probabilities that
are below 0 or above 1. The authors assume that k_s and k_g are captured by the maximum adjustments
multiplied by a constant of proportionality θ (θ ∈ [0, 1]). This constant of proportionality represents the
amount of ambiguity in the probabilistic information, reflecting the idea that the mental simulation of
probabilities is more extensive and leads to a greater adjustment away from the anchor under conditions
of increased ambiguity (i.e., when the agent holds less precise probabilistic information). Finally, to
capture the agent’s attitudes toward the uncertain event, the authors include a parameter (β) representing
the agent’s degree of optimism or pessimism—the extent to which probabilities above (as opposed to
below) the anchor are considered. To do so, the authors weight the maximum adjustment toward smaller
probabilities by β (β ∈ [0, ∞)). The final adjustment functions are given in the following equations:

k_g = θ(1 − p), (3)
k_s = θp^β. (4)

For values of β below 1, an agent gives more weight to probabilities below (as opposed to above) the
anchor, with the opposite effect taking place for values of β above 1. If β is equal to 1, the agent gives
the same weight to probabilities above and below the anchor. Substituting these expressions into
Equation (1) yields the following functional form:

SP(x) = p + θ(1 − p − p^β). (5)
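As a minimal sketch, Equation (5) translates directly into code. The function and parameter names here are mine, not from the original papers.

```python
def subjective_probability(p, theta, beta):
    """Einhorn-Hogarth anchoring-and-adjustment estimate, Equation (5).

    p     : anchoring probability in [0, 1]
    theta : amount of ambiguity in [0, 1] (length of the mental simulation)
    beta  : optimism/pessimism attitude in [0, inf)
    """
    return p + theta * (1.0 - p - p ** beta)

# With no ambiguity (theta = 0), the agent keeps the anchor unchanged.
print(subjective_probability(0.3, 0.0, 1.0))  # 0.3
# With beta = 1 and positive ambiguity, the estimate moves toward 0.5.
print(subjective_probability(0.3, 0.5, 1.0))
```

Note how the functional form enforces the bounds discussed above: because the adjustment is a convex combination of the maximum upward and downward moves, the result always stays in [0, 1].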

2.2. A social model of probabilistic reasoning


People routinely use information gathered from their social interactions to build their risk perceptions.
This is especially important in contexts where access to objective information is either limited or costly.
In my model, I assume that any given individual can recover the subjective probabilities assigned to
an ambiguous or uncertain event by those in their social network—an assumption commonly made
in models of information transmission and social learning (Banerjee, 1992; Banerjee and Fudenberg,
2004; Kasperson et al., 1988; Watts, 2002, 2004). Using this social information, each agent anchors
its probability judgment on the average probability level observed in its social network. That is, in my
model, the anchoring probability used by agent j is defined as

p_j = (1/n) ∑_{i=1}^{n} SP_i(x), (6)

where SP_i(x) (ranging from 0 to 1) represents the observed subjective probability assigned to the
event by agent i in the social network of agent j.
Beyond average probability levels, the distribution of probability judgments in a network conveys
important information. Figure 1 shows the distribution of probability judgments across 2 similar
networks of agents. In this setting, the anchoring probabilities of agents A and B would be the same.
Figure 1. Example of networks that convey similar anchoring probabilities with different degrees of
ambiguity.

Nevertheless, there is a higher level of social consensus in the network of agent A than in the network
of agent B. The average probability level is a more precise signal of the probability assigned to the
uncertain event in the network of agent A than in the network of agent B. That is, the socially transmitted
information carries a lower degree of ambiguity in the network of agent A than in the network of
agent B. To capture this insight, I assume that individuals pay attention to both the average and the
standard deviation of probability judgments within a network. Specifically, I assume that the dispersion
in probability judgments allows an agent to capture the precision or ambiguity in its social information.
Hence, in my model, the amount of ambiguity in the probabilistic information of agent j—namely γ_j—
is implicitly defined by the standard deviation of probability judgments in its social network:

γ_j = √( ∑_{i=1}^{n} (SP_i(x) − p_j)² / (n − 1) ). (7)

The length of the mental simulation process performed by agent j (θ_j) is then defined as the amount of
ambiguity in the probabilistic information of agent j (γ_j) multiplied by a constant of proportionality (α):

θ_j = γ_j · α. (8)

The inclusion of this constant of proportionality (α) is theoretically motivated. Specifically,
it has been shown that individuals differ in their ability to discriminate between different levels of
likelihood in the absence of precise probabilistic information (Abdellaoui et al., 2011; Dimmock et al.,
2016; Li et al., 2018). That is, while some people are readily able to discern between different levels of
likelihood under ambiguity, others tend to see ambiguous uncertain events as having a 50% chance.
This cognitive process—termed ambiguity-generated likelihood insensitivity or a-insensitivity—is
commonly portrayed as a measure of people’s understanding of ambiguous situations. To account
for a-insensitivity, I describe the total length of the mental simulation process (θ) as resulting from
the interaction of the amount of ambiguity in a specific scenario (γ) and a measure of the agents’
a-insensitivity (α). Given that a-insensitivity reflects the understanding of the ambiguous situation, a
higher level of a-insensitivity (i.e., a poorer understanding of the ambiguous context) should increase
the length of the process of mental simulation of probabilities above and below the anchor (especially
in contexts with high ambiguity). Given an anchor, and under non-extreme values of β, the increased
length of the mental simulation process will lead to final probability estimates that are closer to 50%
(compared with the anchor). Hence, by including this constant, this model accommodates a measure of
a-insensitivity.


Summing up, agent j’s subjective probability assigned to an uncertain event x can be defined as

SP_j(x) = p_j + (γ_j α)(1 − p_j − p_j^β), (9)

where p_j and γ_j represent the average and standard deviation of the subjective probabilities assigned to
event x by those in the network of agent j, α represents the agents’ a-insensitivity, and β represents the
agents’ attitude toward the uncertain event (i.e., the degree of optimism/pessimism). For consistency
with the empirical analyses and simulations, α and β are population-level parameters. That is, they are
assumed to be fixed across individuals.
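The full pipeline of Equations (6)-(9) can be sketched as a single function: anchor on the network mean, read the ambiguity off the sample standard deviation, scale it by α, and adjust. This is a minimal sketch under my own naming; the original paper specifies only the equations, not an implementation.

```python
import statistics

def social_subjective_probability(network_sps, alpha, beta):
    """Agent j's risk perception from its network's estimates, Equations (6)-(9).

    network_sps : subjective probabilities observed in j's network (each in [0, 1])
    alpha       : population-level a-insensitivity parameter
    beta        : population-level optimism/pessimism parameter
    """
    p_j = statistics.mean(network_sps)        # anchor, Equation (6)
    gamma_j = statistics.stdev(network_sps)   # ambiguity as sample SD (n - 1), Equation (7)
    theta_j = gamma_j * alpha                 # length of mental simulation, Equation (8)
    return p_j + theta_j * (1.0 - p_j - p_j ** beta)  # Equation (9)

# Full consensus in the network implies zero ambiguity, so the anchor is kept.
print(social_subjective_probability([0.4, 0.4, 0.4, 0.4], alpha=1.0, beta=1.0))
```

A dispersed network with the same mean (as in Figure 1) yields a larger θ_j and hence a larger adjustment away from the shared anchor.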
In the following sections, I first estimate this model and test its main building assumptions using
data from an online study (Section 3). Then, I incorporate the empirically calibrated version of the
model into an agent-based framework to analyze the evolution of group risk perceptions under varying
information frictions (Section 4).

3. Experimental evidence
In this section, I present my experimental design (Section 3.1), an overview of the statistical procedures
used to analyze the experimental data (Section 3.2), and the results of these analyses (Section 3.3).

3.1. Experimental setting


I used data from an online experiment to empirically calibrate this model and test its main building
assumptions. Specifically, I used hypothetical choice scenarios to analyze how individuals form risk
perceptions (i.e., subjective probability assessments) about an investment with an unknown success
probability when presented with risk estimates (i.e., probabilities) from 4 hypothetical co-workers.
A total of 695 participants were recruited on Prolific for an online study on risk perceptions (see
SM Note 2 of the Supplementary Material for an overview of the main demographic characteristics of
the sample). The study had a median completion time of 9 min, and the participants received a fixed
payment of £1. I preregistered my sample size, exclusion criteria, and main analyses. Data and code are
openly available in an OSF repository.1
The study involved 20 hypothetical scenarios. In each scenario, the participants had to choose
between 2 investments that only differed in their probability of success. Specifically, the participants
had to choose between an investment with a known probability of success (Investment 1) and an
investment with an unknown probability of success (Investment 2). Although Investment 2 had an
unknown success probability, the participants had access to 4 estimates of this probability obtained
by hypothetical co-workers. Keeping these 4 estimates constant for each scenario, the participants had
to choose between Investment 1 and Investment 2 for different probabilities of Investment 1 being
successful. In each scenario, the participants made 11 choices, corresponding to probability levels
of Investment 1 being successful that ranged from 0 to 1 (in 0.1 increments). Across scenarios, the
4 probability estimates reflecting the likelihood of Investment 2 being successful (i.e., the estimates
given by the hypothetical co-workers) varied in their average probability estimate (with estimates
averaging at approximately 0.2, 0.4, 0.6, and 0.8 probability of success) and their dispersion (with
standard deviations in probability estimates of approximately 0.05, 0.1, 0.15, 0.2, and 0.25). Figure 2
presents an example of the hypothetical scenarios and the subjective probability elicitation task used
in this study. Although this elicitation method has some limitations compared to simpler alternatives
(as detailed in the Discussion section), it captures the role of key cognitive and affective mechanisms
that influence risk perceptions in daily life. Rather than simply choosing a number, participants in this
task are guided to consider various outcomes and compare different alternatives, much like they would
in real-life situations. In doing so, this task incorporates key emotional and cognitive factors that are
difficult to capture in a purely numerical estimation task.

1 OSF repository.

Figure 2. Example of hypothetical scenario and subjective probability elicitation task used in the
experiment. Additional study materials (consent form and detailed instructions) are presented in SM
Note 1 of the Supplementary Material. Across scenarios, the participants were presented with different
sets of co-workers’ probability estimates.
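One illustrative way to construct co-worker estimate sets with a target mean and dispersion is a symmetric spread rescaled to the desired standard deviation. This is my own sketch of how such stimuli could be generated; the paper states only that the actual sets are approximately at the target values.

```python
import statistics

def coworker_estimates(mean, sd):
    """Four symmetric probability estimates with a given mean and sample SD
    (one hypothetical way to build scenario stimuli; not the paper's actual sets)."""
    base = [-3.0, -1.0, 1.0, 3.0]           # evenly spaced, zero-mean template
    scale = sd / statistics.stdev(base)     # rescale so the sample SD matches sd
    return [mean + b * scale for b in base]

# E.g., a scenario with mean 0.4 and SD 0.1 in the co-workers' estimates.
print(coworker_estimates(0.4, 0.1))
```

Crossing the four mean levels (0.2, 0.4, 0.6, 0.8) with the five SD levels (0.05 to 0.25) in this way would reproduce the 20-scenario design described above.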
To ensure that the participants had carefully read the instructions given at the beginning of the
study (“Study Instructions” on SM Note 1 of the Supplementary Material), they were presented
with a multiple choice attention check (see “Attention Check” on SM Note 1 of the Supplementary
Material). A total of 93 participants failed this attention check and were not allowed to continue with
the study. Of the remaining 602 participants, a total of 245 participants chose—at some point in the
experiment—an investment with a known probability of success of 0 or failed to select an investment
with a known probability of success of 1. I excluded these responses from my analyses, resulting in a
final sample of 357 individuals. In SM Note 5 of the Supplementary Material, I show that the model
estimates remain qualitatively similar when applying more relaxed inclusion standards. In SM Note 6
of the Supplementary Material, I demonstrate that this experimental design provides good parameter
recoverability.

3.2. Analyses and statistical procedures


For each scenario and participant, I obtained a measure of the subjective probability of success assigned
to Investment 2 (my main dependent variable) by taking the middle point between (1) the greatest
probability of Investment 1 being successful for which the participant chose Investment 2, and (2)
the smallest probability of Investment 1 being successful for which the participant chose Investment
1. A unique switching point between Investments 1 and 2 was not imposed on the participants, nor
were they prompted to reconsider their choice if more than 1 switching point was provided. In cases
with multiple switching points, subjective probabilities were derived using the same method described
above: the middle point between (1) the greatest probability of Investment 1 being successful for which
the participant chose Investment 2, and (2) the smallest probability of Investment 1 being successful
for which the participant chose Investment 1. As stated in the previous paragraph, note that I focus
on participants who started the elicitation task by selecting Investment 2 and finished it by selecting
Investment 1.
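The midpoint rule above can be sketched as follows (function and variable names are mine; the paper describes only the rule):

```python
def infer_subjective_probability(choices):
    """Midpoint estimate of P(Investment 2 succeeds) from the 11 binary choices.

    choices : dict mapping P(Investment 1 succeeds), 0.0-1.0 in 0.1 steps,
              to the chosen option (2 = ambiguous investment, 1 = known investment).
    """
    chose_2 = [p for p, c in choices.items() if c == 2]
    chose_1 = [p for p, c in choices.items() if c == 1]
    # Greatest level at which Investment 2 was still preferred, and smallest
    # level at which Investment 1 was preferred; the same rule covers
    # response patterns with multiple switching points.
    return (max(chose_2) + min(chose_1)) / 2

# A participant preferring Investment 2 up to 0.4 and Investment 1 from 0.5 on
# implies a subjective success probability of 0.45 for Investment 2.
choices = {round(p / 10, 1): (2 if p <= 4 else 1) for p in range(11)}
print(infer_subjective_probability(choices))  # 0.45
```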
Using this variable, I started by testing the 2 main building assumptions of my model. First, I tested
whether the participants’ risk perceptions could be approximated by the average of their co-workers’
subjective probabilities in a given scenario (anchoring assumption). To test this assumption, I simply
regressed my main dependent variable (i.e., the participant and scenario-specific measure of subjective
probability assigned to Investment 2) on the average probability estimate reported by the 4 co-workers
in a given scenario. Second, I tested whether the study participants deviated more extensively from the
average of their co-workers’ estimates under increased ambiguity (when the SD of their co-workers’
estimates was high, adjustment assumption). To test this assumption, I first obtained a participant and
scenario-specific measure of adjustment size by taking the absolute value of the difference between a
participant’s subjective success probability of Investment 2 and the average probability reported by the
4 co-workers in a given scenario. Then, I regressed this adjustment size on the amount of ambiguity
in a scenario (i.e., the SD of the probability estimates reported by the 4 co-workers). Both of these
models were estimated using linear mixed models and included participant-specific fixed effects.
Moving to my main analyses, I employed my participant and scenario-specific measure of subjective
probability to estimate the following model:
SPis = ps + 𝛼 𝛾s (1 − ps − ps^𝛽) + 𝜖is,   (Full Model)

where SPis is the subjective probability assigned to Investment 2 being successful by participant i in
scenario s, ps and 𝛾s are the average and standard deviation of the co-workers’ subjective probabilities
in scenario s, and 𝜖is is a zero-mean normally distributed error term with 𝜎 2 variance.
This model has 3 free parameters (𝛼, 𝛽, and 𝜎 2 ). I estimated these parameters by maximum
likelihood and followed a cluster bootstrap approach to estimate their standard errors (to account for
the nested nature of my data, see estimation details in SM Note 4 of the Supplementary Material).
Moreover, I compared the fit of this full model to the empirical fit of the following 2 models:

SPis = ps + 𝜖is,   (Null Model 1)

SPis = c + 𝜖is,   (Null Model 2)

where SPis, ps, and 𝜖is are defined as in my full model, and c represents a constant. Again, I estimated
these 2 null models by maximum likelihood, and obtained their parameters’ standard errors by a cluster
bootstrap procedure. To compare Null Model 1 against my full model, I performed a likelihood ratio
test. As Null Model 2 is not nested in my Full Model, I compared the empirical fit of these 2 models by
simply reporting their estimated likelihoods. For completeness, I also consider additional comparison
metrics. Specifically, I use both the Akaike information criterion (AIC) and the Bayesian information
criterion (BIC) to compare model fit.
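To make the estimation step concrete, here is a sketch that recovers the 3 free parameters from synthetic data generated by the model itself (the sample size, parameter grid, and noise level are invented for illustration; the cluster bootstrap and the real data's participant clustering are omitted). Because 𝛼 enters the mean linearly, it can be profiled out, reducing the Gaussian maximum-likelihood problem to a one-dimensional search over 𝛽:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic stand-in data: per-observation co-worker averages (p_bar) and
# SDs (gamma), with responses generated from the full model at the paper's
# point estimates (all values here are invented for illustration).
p_bar = rng.uniform(0.05, 0.95, size=n)
gamma = rng.uniform(0.02, 0.25, size=n)
alpha_true, beta_true, sigma_true = 0.674, 0.337, 0.05
sp = (p_bar + alpha_true * gamma * (1 - p_bar - p_bar**beta_true)
      + rng.normal(0, sigma_true, size=n))

# Profile MLE: for each candidate beta, alpha is the OLS-through-the-origin
# coefficient of (sp - p_bar) on gamma * (1 - p_bar - p_bar**beta).
best = None
for beta in np.linspace(0.05, 1.5, 400):
    x = gamma * (1 - p_bar - p_bar**beta)
    y = sp - p_bar
    alpha = (x @ y) / (x @ x)
    rss = np.sum((y - alpha * x) ** 2)
    if best is None or rss < best[0]:
        best = (rss, alpha, beta)
rss, alpha_hat, beta_hat = best
sigma2_hat = rss / n  # MLE of the error variance
```

With Gaussian errors, minimizing the residual sum of squares at each 𝛽 is equivalent to maximizing the likelihood, so the grid point with the smallest RSS is the (approximate) maximum-likelihood estimate.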
Finally, I compared the empirical fit of my model against 5 alternative specifications. These
specifications were versions of the full model that replaced the anchoring and amount of ambiguity
measures (i.e., the average and the standard deviation of co-workers’ estimates) by other centrality and
dispersion variables. Specifically, I considered all combinations of 2 measures of centrality (average
and median of co-workers’ estimates) to be used as anchors, and 3 measures of dispersion (standard
deviation, variance, and relative standard deviation2 of co-workers’ estimates) to be used as proxies for
the amount of ambiguity in the agents’ social information.

3.3. Results
My experimental results offer empirical evidence in support of the 2 main building assumptions of my
model. Specifically, the participants’ subjective probabilities could be well approximated by the average
probability reported by the co-workers in each scenario (𝛽p = 0.813, t = 156.86, p < 0.001), evidence
that is consistent with the anchoring assumption. Moreover, the participants deviated more extensively
from this anchor under conditions of increased ambiguity (𝛽SD = 0.135, t = 10.96, p < 0.001), evidence
that is consistent with the adjustment assumption (see SM Note 3 of the Supplementary Material for
detailed results). Overall, the experimental results provide evidence in support of these 2 main building
assumptions.

2 The relative standard deviation is a measure of dispersion that takes into account the confound between average and standard
deviation for bounded variables. Details on its estimation are presented in Mestdagh et al. (2018). In short, this measure results
from dividing the standard deviation of a bounded variable by its maximal standard deviation given its average.

Table 1. Estimated parameters (with standard errors in parentheses) for the full and the 2 null
models. All parameters are estimated by maximum likelihood. Standard errors are obtained using a
cluster bootstrap approach.

                    Full Model    Null Model 1    Null Model 2
𝛼                   0.674
                    (0.051)
𝛽                   0.337
                    (0.045)
𝜎²                  0.015         0.018           0.048
                    (0.001)       (0.002)         (0.001)
c                                                 0.470
                                                  (0.004)
No. of obs.         7,140         7,140           7,140
Log likelihood      4,777.84      4,242.68        683.20
AIC                 −9,549.69     −8,483.37       −1,362.40
BIC                 −9,529.07     −8,476.50       −1,348.65
Moving to my full model estimations, my main results are presented in Table 1. My analyses yield
an estimate of 𝛼 of 0.674 (SE = 0.051) and 𝛽 of 0.337 (SE = 0.045). These estimates have 3 direct
implications. First, the adjustment away from the anchoring probability is not negligible. In fact, my
estimate of 𝛼 is significantly greater than zero (t = 13.02, p < 0.001), suggesting that the participants
in my study did not simply assign subjective probabilities based on the average probability estimate
reported by their co-workers. On average, my model estimates predict an adjustment away from the
anchor with an absolute size of 0.077 for high-ambiguity contexts (i.e., when co-workers’ estimates
have a standard deviation of 0.25) and 0.015 for low-ambiguity contexts (i.e., when co-workers’
estimates have a standard deviation of 0.05). These deviations away from the anchoring probability
are greater in size for high probabilities. For example, when the anchoring probability is 0.8, my
model predicts a final subjective probability of 0.68 when ambiguity is high (SD co-workers’ estimate
= 0.25), and 0.78 when ambiguity is low (SD co-workers’ estimate = 0.05). Figure 3 presents the
resulting subjective probabilities predicted by my model for different anchoring probabilities and levels
of ambiguity (i.e., co-workers’ estimates with different averages and standard deviations).
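As a quick check (not part of the original analysis), plugging the Table 1 estimates into the model reproduces these worked numbers:

```python
def predicted_sp(anchor, sd, alpha=0.674, beta=0.337):
    """Predicted subjective probability given the co-workers' average
    (the anchor) and standard deviation (the amount of ambiguity)."""
    return anchor + alpha * sd * (1 - anchor - anchor**beta)

print(round(predicted_sp(0.8, 0.25), 2))  # 0.68 (high ambiguity)
print(round(predicted_sp(0.8, 0.05), 2))  # 0.78 (low ambiguity)
```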
Second, in line with past empirical evidence (Abdellaoui et al., 2011; Dimmock et al., 2016;
Li et al., 2018), my results demonstrate the existence of significant ambiguity-generated insensitivity
(a-insensitivity)—the tendency to perceive ambiguous events as having 50% probability. This inability
to correctly differentiate across probability levels is readily visible in Figure 3, where estimated
probabilities are biased toward 50% (resulting in the overweighting of small probabilities and the
underweighting of large ones). Consistent with past evidence (Henkel, 2022), a-insensitivity is more
prevalent in more ambiguous scenarios.
Figure 3. Estimated subjective probabilities for different anchoring probabilities (i.e., averages in
co-workers’ estimates) and levels of ambiguity (i.e., standard deviations in co-workers’ estimates).

Third, the study participants display significant levels of pessimism or ambiguity aversion. An
estimate of 𝛽 = 1 would imply ambiguity neutrality (i.e., that the participants put the same weight on
probabilities above and below the anchor). However, my 𝛽 estimate (𝛽 = 0.337) is significantly smaller
than 1 (t = −14.49, p < 0.001), suggesting that the participants gave more weight to probabilities below
the anchor. This finding is consistent with past evidence showing the prevalence of ambiguity aversion
in a broad range of settings (see Trautmann and Van De Kuilen, 2015 for a review). However, it is
important to note that attitudes toward ambiguous events have been shown to be source dependent
(Abdellaoui et al., 2011; Baillon et al., 2018). That is, they vary depending on the uncertainty source.
To be consistent with these findings, in my simulations (Section 4) rather than describing an event’s
true probabilities as high or low, I focus on their relationship to the agents’ attitudes (i.e., whether the
probabilities are consistent with the agents’ attitudes towards the event or not, see Section 4.2). For
instance, in the case of an event with a high degree of likelihood, if the agents’ attitudes are such that
the agent puts a higher weight on probabilities above the anchor when following the process of mental
simulation (𝛽 > 1), then these attitudes are described as consistent with the event’s true probabilities.
Beyond specific parameters, my estimations allowed me to test the empirical validity of my full
model against a null model. Specifically, I tested the empirical fit of my model against the empirical
fit of a model with no adjustment away from the anchor (Null Model 1). Using a likelihood ratio test,
my results demonstrate that my full model offers a significantly superior fit (𝜆RT = 1070.32, df = 2,
p < 0.001). Similarly, a direct comparison of likelihoods suggests that my full model outperforms a model
with a single constant (Full Model Log L = 4,777.84, Null Model 2 Log L = 683.20), a result that
holds when using different comparison metrics (see Table 1). For instance, my full model yields a root
mean square error that is 45% smaller than that of Null Model 2 (RMSE = 0.124 for the Full Model
vs. 0.220 for Null Model 2). While both the likelihood ratio test and the estimated parameters
demonstrate that the adjustment component of my model is non-negligible, it is important to remark that
the anchoring component of my model explains the largest share of the accounted error (RMSE = 0.134
for Null Model 1).
Finally, Table 2 presents the empirical fit (log likelihood) of 6 model specifications differing in
their operationalization of the anchor value and the amount of ambiguity. Again, as these models are
not nested within the full model, I compared their empirical fit by simply reporting their likelihoods.
Overall, my main specification—the one that uses the average and standard deviation of the co-workers’
estimates—yielded a superior empirical fit. However, differences across operationalizations are rather
small. These results demonstrate that, compared to alternative specifications, my preferred model
operationalization does not display a poorer empirical fit.

Table 2. Empirical fit for 6 different specifications of the main model. The specifications include
2 measures of centrality (to be used as anchors) and 3 measures of dispersion (to be used as proxies
for the amount of ambiguity).

Anchor     Amount of ambiguity     Log likelihood
Average    SD                      4,777.84
Average    Variance                4,668.88
Average    Rel. SD                 4,770.12
Median     SD                      4,751.77
Median     Variance                4,646.38
Median     Rel. SD                 4,742.31
Altogether, in this section, I (1) provide evidence in support of my main building assumptions, (2)
show that my model provides a significantly better empirical fit than simpler alternative models, and
(3) demonstrate that alternative measures of centrality and dispersion do not improve the empirical fit
of my model. These results demonstrate the empirical validity of my main model and offer important
insights on how we use social information to construct our subjective perceptions of risk.

4. Calibrated simulations
In this section, I further develop my model to demonstrate its theoretical relevance. Specifically, I first
expand the model to an agent-based setting (Section 4.1), and use it to simulate the evolution of risk
perceptions within an organization (Section 4.2). Using empirically calibrated agent-based simulations,
I conclude by studying the impact of information frictions (limits on the availability and quality of
social information) on the emergent risk perceptions (Section 4.3).

4.1. Agent-based model


Agent-based models are used across the social sciences to study the emergence of collective phenomena
from the interaction of individual agents. For instance, past work has used agent-based simulations
to model the emergence of risk perceptions in contexts such as floods (Haer et al., 2017), or water
reuse (Kandiah et al., 2017). In the field of organization science, past work has used agent-based
simulations to study topics that include organizational design (Clement and Puranam, 2018), division
of labor (Raveendran et al., 2022), or firm performance under different levels of complexity and market
turbulence (Siggelkow and Rivkin, 2005).
To adapt my model to an agent-based setting, I use a network of 16 agents distributed across a 4×4
grid to study the evolution of risk perceptions (i.e., subjective probabilities) assigned to an uncertain
event with a fixed true probability. While most agents will not have access to the true probability of
the event, each agent will have access to the subjective probabilities assigned to the event by those
individuals located in its 4 surrounding grid points (neighborhood, see Figure 4). The grid wraps around
the edges—that is, in Figure 4, agent A44 will be able to collect information from agents A14 and A41.
In everyday situations, precise probabilistic information is often difficult or costly to obtain.
To reflect the limited availability of precise probabilistic information in real life, throughout the
simulations, I assume that agent A11 is the only agent with access to the true precise probability
associated with a given event. Moreover, I assume that agent A11 is an informed “stubborn individual,”
an agent that maintains fixed beliefs or opinions, irrespective of surrounding influences. Hence, agent
A11 knows the precise true probability of the uncertain event and does not update its subjective
probability throughout the simulation. The presence of such stubborn agents is a common feature in
network models of information transmission (Acemoglu et al., 2010; Ghaderi and Srikant, 2014; Yildiz
et al., 2013). In the context of the present model, this stubborn agent symbolizes the existence of
objective information within the network. Without a stubborn agent, the network would lack access
to the true probability associated with the specific event. Consequently, the resultant risk perception
would be uniquely determined by the agents’ attitudes toward the uncertain event.

Figure 4. Network of agents employed in the simulations. Each individual has access to the risk
perceptions of their surrounding 4 agents. For instance, agent A33 will observe the subjective
probabilities assigned to an event by agents A32, A23, A43, and A34.
The rest of the individuals (referred to as non-informed or non-stubborn individuals) update their
subjective probabilities following the process of mental simulation outlined in my main model. That is, they
(1) anchor their probability assessments on the average probability assessment in their neighborhood,
and (2) follow a process of mental simulation of probabilities that results in an adjustment away from the
anchor to account for the amount of ambiguity in their probabilistic information. Specifically, in each
iteration, a random individual in the 4×4 grid is selected. If this agent is a “stubborn individual” (i.e.,
if it is A11), the iteration concludes, and a new agent is selected. If the randomly selected individual
is not an informed individual, the agent recovers the subjective probabilities of the 4 agents in its
neighborhood and derives a new subjective probability estimate. Given the subjective probabilities of
those in its neighborhood, a non-informed agent j will derive the following subjective probability:

SPj = pj + 0.674 · 𝛾j · (1 − pj − pj^0.337),   (10)

where pj and 𝛾j represent the average and the standard deviation of the observed subjective probabilities
in the neighborhood of agent j. Note that, in order to obtain empirically calibrated results, I have
replaced the parameters 𝛼 and 𝛽 of my model (Equation 9) by their empirical estimates from Section 3.3.
After updating the agent’s subjective probability, the model iteration concludes, and a new agent is
selected. I repeat this process until a steady state has been reached (e.g., after 50,000 model iterations).
To ensure that my results are not driven by random variation in the selection of individuals within the
network, all my results represent aggregations (i.e., averages) across 100 model repetitions. I perform
my simulations independently for 101 true-probability levels (ranging from 0 to 1 in 0.01 increments).
I initialized the non-informed agents’ subjective probabilities at 50% (so that under the absence of
information, uninformed agents give a 50% probability to any event occurring). Again, data and code
for the simulations are posted in an OSF repository.3

3 OSF repository.
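A stdlib-only sketch of this update loop (the grid indexing, the seed, the use of the population SD for 𝛾, and the single run rather than 100 averaged repetitions are illustrative simplifications):

```python
import random
import statistics

N = 4  # the 4x4 grid, wrapping around the edges (a torus)

def neighbors(i, j):
    """The 4 surrounding grid points of agent (i, j)."""
    return [((i - 1) % N, j), ((i + 1) % N, j),
            (i, (j - 1) % N), (i, (j + 1) % N)]

def simulate(true_p, iterations=50_000, alpha=0.674, beta=0.337, seed=1):
    # Non-informed agents start at 0.50; the stubborn agent at (0, 0) holds
    # the true probability and never updates.
    sp = {(i, j): 0.5 for i in range(N) for j in range(N)}
    sp[(0, 0)] = true_p
    rng = random.Random(seed)
    for _ in range(iterations):
        agent = (rng.randrange(N), rng.randrange(N))
        if agent == (0, 0):
            continue  # stubborn individual: the iteration simply concludes
        obs = [sp[n] for n in neighbors(*agent)]
        p_bar = statistics.mean(obs)
        gamma = statistics.pstdev(obs)  # SD of the neighborhood = ambiguity
        sp[agent] = p_bar + alpha * gamma * (1 - p_bar - p_bar**beta)
    # Group risk perception: average over the 15 non-informed agents
    return statistics.mean(v for a, v in sp.items() if a != (0, 0))
```

Sweeping `simulate` over a grid of true probabilities, and averaging over many seeds as in the paper, should qualitatively reproduce the patterns in Figure 5.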


Figure 5. Evolution of group subjective probabilities and amount of ambiguity across model iterations.

Figure 6. Evolution of group subjective probabilities and amount of ambiguity across the first 5,000
model iterations for high (i.e., 99%) and low (i.e., 1%) probability events.

4.2. Baseline simulations


To evaluate my simulations, I focused on 2 main outcomes—the average subjective probability and the
average perceived amount of ambiguity (i.e., the average of the SDs in the agents’ social information).
To estimate both measures, I excluded stubborn individuals (i.e., I excluded agent A11), focusing
exclusively on non-informed agents. Figure 5 presents the group’s subjective probabilities and amount
of ambiguity for different values of the true probability and model iterations. Figure 6 presents a


more detailed overview of the dynamics of the risk perception formation process for low- and high-
probability events (i.e., events with a true probability of 1% or 99%).
These results carry several implications. First, when true probabilities are aligned with the agents’
ambiguity attitudes, the subjective probabilities in the group converge to the true probabilities of the
ambiguous event. In this setting, when the agents follow a process of mental simulation to account for
the ambiguity in their social information, the agents put more weight on probabilities below their anchor
(i.e., the parameter 𝛽 of Equation 9 is below 1 in my simulations). This is not problematic when true
probabilities are small. In fact, when true probabilities are small, putting more weight on probabilities
below the anchor makes the adjustment process more likely to go in the direction of the true probability.
However, when true probabilities are large, putting more weight on probabilities below the anchor
means that the agents fail to sufficiently adjust in the direction of their social information. This tendency
to adjust insufficiently limits the extent to which the group subjective probabilities converge to the true
probabilities.
Second, when true probabilities are inconsistent with the agents’ ambiguity attitudes, the resulting
network of risk perceptions reflects a higher degree of unresolved ambiguity. That is, given that in
my setting the agents give more weight to probabilities below (as opposed to above) the anchor,
when true probabilities are large, the emergent probabilistic assessment in a group is not only
more likely to deviate from the true probability but also yields a higher degree of perceived ambiguity.
This could be understood as a collective form of motivated ambiguity by which individuals—by
adjusting insufficiently for information that is contrary to their attitudes—create a degree of ambiguity
in the network that allows them to hold risk perceptions that are more in line with their own
attitudes.
Third, the resulting pattern of risk perception is a non-monotonic function of an event’s true
probability. For instance, the resulting group subjective probability is 0.8 when the true probability of
the event is 0.8, and 0.75 when the true probability of the event is 0.99. Again, this can be explained
by the agents’ tendency to under-react to social information that is at odds with their own ambiguity
attitudes. As agents give more weight to probabilities below (as opposed to above) their anchor, they
underreact to information that goes against their own attitudes (i.e., they underreact to high probability
estimates). This underreaction leads to greater levels of ambiguity in the available social information.
As the size of this under-adjustment is larger for more extreme probabilities, the amount of unresolved
ambiguity is greater for extreme probability events. This increased ambiguity allows the agents to hold
risk perceptions that are more in line with their own attitudes, thus, creating a non-monotonic pattern
of risk perception.
It is important to acknowledge that this observed non-monotonic pattern of risk perceptions is
at odds with previous empirical evidence (Siegrist and Árvai, 2020). However, it is also important
to notice that this non-monotonicity emerges only under specific model conditions. For instance,
Figure 5 demonstrates that risk perceptions form a non-monotonic relationship with true probabil-
ities only after a large number of model iterations. In the initial phases of the simulation, risk
perceptions are a monotonic function of an event’s true probabilities. Moreover, as presented in the
following section, when agents do not accurately discern the subjective probabilities assigned to
an uncertain event by those in their social network, the resulting pattern of subjective probabilities
is a monotonic transformation of the event’s true probabilities. Therefore, while the existence of a
non-monotonic range in risk perceptions is theoretically possible, it is not a robust property of this
model.
Fourth, as depicted in Figure 6 (right panel), the amount of ambiguity evolves following a non-
monotonic trajectory—rapidly increasing and then collapsing in the early stages of the simulation. This
trajectory in ambiguity perceptions closely resembles the patterns of information seeking found in
previous empirical studies of collective attention (Crane and Sornette, 2008; Wu and Huberman, 2007).
My simulations, therefore, not only provide a theoretical explanation for the emergence of collective
risk perceptions but can also help explain collective patterns of attention and information seeking. Note that, in
the early stages of the simulation process, the amount of ambiguity evolves similarly for high and low


probability events (i.e., the trajectory in the amount of ambiguity is similar irrespective of whether the
event’s true probability aligns with the agents’ attitudes or not). It’s only after 100 model iterations
that large disparities in the amount of ambiguity appear for events with different true probabilities
(with greater ambiguity resolution for events whose true probabilities align with the agents’ attitudes).
This pattern of ambiguity resolution translates into a marginally decreasing rate of information
transmission. In other words, changes in the subjective probabilities of a group are concentrated in the
initial phase of the simulation process. For instance, Figure 6 (left panel) shows that about 75% of the
shift in subjective probabilities occurs within the first 2,000 model iterations.

4.3. Information frictions


Using agent-based simulations, I study how information-availability frictions (i.e., constraints on the
amount of information gathered by an agent) and information-quality frictions (i.e., constraints on the
precision of the information gathered by an agent) shape the collective emergence of risk perceptions.

4.3.1. Modeling information frictions


I carry out my agent-based simulations under 3 different conditions. These conditions represent (1) a
frictionless setting, (2) a setting with information-quality frictions, and (3) a setting with information-
availability frictions. In the frictionless condition, the agents can perfectly recover the subjective
probabilities assigned to the uncertain event by all other agents in the group. That is, in this condition,
the agents observe the exact risk perception of the remaining 15 agents in the group. In the quality-
frictions condition, the agents recover the subjective probabilities assigned to the uncertain event by all
other agents in the group, but they do so imprecisely. That is, in this condition, the agents observe the
risk perception of the remaining 15 agents in the group plus an error term. Throughout the simulations, I
assume that this error term is normally distributed with mean 0 and a standard deviation of 0.05. Finally,
in the availability-frictions condition, the agents recover the exact subjective probabilities assigned to
the uncertain event by a fraction of the agents in the group. As in the baseline simulations, in this
setting the agents can recover the subjective probabilities assigned to an ambiguous event by their 4
surrounding agents. This condition is equivalent to the baseline setting.
All the simulations are conducted on a network of 16 agents with agent A11 as a stubborn individual.
Each simulation consists of 50,000 model iterations and the results represent aggregates (i.e., averages)
across 100 model repetitions. Again, I perform these simulations independently for 101 true-probability
levels (from 0 to 1 in 0.01 increments).
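The 3 conditions differ only in which perceptions an agent observes and how precisely; a sketch (the function name and dictionary-based bookkeeping are illustrative):

```python
import random

def observed_perceptions(agent, perceptions, neighborhood, condition, rng,
                         noise_sd=0.05):
    """Subjective probabilities visible to `agent` under each condition.

    perceptions: dict mapping each agent to its current subjective probability.
    neighborhood: dict mapping each agent to its 4 surrounding grid agents.
    """
    others = [a for a in perceptions if a != agent]
    if condition == "frictionless":
        # Exact risk perceptions of the remaining 15 agents
        return [perceptions[a] for a in others]
    if condition == "quality":
        # All 15 agents, but each observation carries N(0, 0.05) noise
        return [perceptions[a] + rng.gauss(0, noise_sd) for a in others]
    if condition == "availability":
        # Exact perceptions, but only from the 4 surrounding agents
        return [perceptions[a] for a in neighborhood[agent]]
    raise ValueError(f"unknown condition: {condition}")

# Tiny usage example: a 4x4 torus of agents all currently reporting 0.5
perceptions = {(i, j): 0.5 for i in range(4) for j in range(4)}
neighborhood = {(i, j): [((i - 1) % 4, j), ((i + 1) % 4, j),
                         (i, (j - 1) % 4), (i, (j + 1) % 4)]
                for i in range(4) for j in range(4)}
rng = random.Random(0)
print(len(observed_perceptions((1, 1), perceptions, neighborhood,
                               "availability", rng)))  # 4
```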

4.3.2. Results
Again, to derive my main results, I focused on the average subjective probability and amount of
ambiguity perceived by non-stubborn individuals. Figure 7 presents the resulting patterns of risk
perception and ambiguity for each condition after 50,000 model iterations. Figure 8 presents the
dynamic evolution of risk perceptions across conditions for events with high (99%) and low (1%) true
probability.
As shown in Figure 7, limiting the precision of social information not only results in subjective
probabilities that deviate more extensively from the true probabilities, but also in a pattern of risk
perceptions that displays a higher degree of unresolved ambiguity. As one would expect, in the quality-
frictions condition, the amount of ambiguity perceived in the social information is never below 0.05 (the
standard deviation of the error term that is incorporated into an agent’s social information). Compared to
the frictionless condition, introducing quality frictions has a limited impact on the resulting subjective
probabilities at the extremes (i.e., when the true probability is either 0 or 1). However, the precision loss
can be large for non-extreme probability events. For instance, for events with a true probability of 0
or 1, the accuracy loss due to quality frictions is 0.060 and 0.019, respectively. On the other hand, for a
true probability of 0.6, introducing quality frictions increases the difference between the true probability
and the group’s resulting subjective probability by 0.104. Overall, introducing quality frictions leads to
more imprecise and ambiguous subjective probabilities.

Figure 7. Resulting group subjective probabilities and amount of ambiguity under different
information-friction conditions.
On the other hand, introducing availability frictions results in a more precise and less ambiguous
information transmission process. This is especially true in situations where the agents’ attitudes are
at odds with the true probabilities of the event (i.e., for high probabilities). To explain this result,
one needs to consider the impact that the stubborn (or informed) agent has on its surrounding agents.
Under the frictionless condition, the true probability is diluted in a set of 15 subjective
probabilities. However, for those agents located next to the stubborn individual, introducing availability
frictions means that the true probability is only diluted in a set of 4 subjective probabilities. Hence, for
those agents that directly surround the stubborn individual, introducing availability frictions increases
the impact that the stubborn agent has on their subjective probability estimates. While these frictions
will limit the direct impact that the stubborn agent has on those located further away in the network,
by shaping the risk perceptions of those in its neighborhood, the stubborn agent generates a cascade
effect—indirectly shaping the perceptions of agents located further away in the network by impacting
the perceptions of the agents in its neighborhood. This information cascade translates into a slower, but
more precise transmission of risk perceptions.
As information-availability frictions slow down the information transmission process, the group’s
risk perceptions under these frictions are more imprecise during the first stages of the simulation
(i.e., the first 500 model iterations). As depicted in Figure 8, when the
true probabilities are in line with the agents’ attitudes (i.e., for low probabilities), both the frictionless
condition and the quality-frictions condition outperform the availability-frictions condition during the
first 500 model iterations. This result is ultimately reversed, but it shows that beyond shaping the
precision of the emergent risk perceptions, availability frictions also shape the speed of this emergent
process.
Overall, limits on the quality of the social information led to more imprecise and ambiguous risk
perceptions. On the contrary, limits on the amount of social information slowed down the information
transmission process but resulted in a pattern of risk perceptions that reflected a greater precision and
ambiguity resolution.


Figure 8. Evolution of group subjective probabilities and amount of ambiguity under different
information-friction conditions. The graph presents the first 5,000 model iterations for high (i.e.,
99%) and low (i.e., 1%) probability events.

5. Discussion
By modifying an individual model of probabilistic reasoning (Einhorn and Hogarth, 1985, 1986), in this
paper, I present a psychologically founded model of the transmission and emergence of risk perceptions.
As typically assumed in models of information transmission under risk (Banerjee, 1992; Banerjee and
Fudenberg, 2004; Eyster and Rabin, 2010; Kasperson et al., 1988; Watts, 2002, 2004), agents in my
theoretical framework can recover the subjective probabilities assigned to an uncertain event by those
in their social network. In my model, agents use this information to (1) anchor their risk perceptions,
and (2) infer the amount of ambiguity in their social information. More specifically, my theoretical
framework assumes that the members of a group anchor their subjective risk perceptions on the average
subjective probability estimate in their social network. Then, they follow a process of mental simulation
of probabilities above and below the anchor to account for the amount of ambiguity in their social


information. This process of mental simulation leads to an adjustment away from the anchor that varies
in size and direction depending on an individual’s attitudes toward the uncertain event, the amount of
ambiguity, and the anchoring probability itself. Using an online study, I present evidence in support
of my main building assumptions and the empirical validity of my model. Applying my experimental
results to an agent-based setting, I simulate the evolution of risk perceptions within organizations under
different information frictions. While limiting the quality of the socially shared information leads to
more imprecise and ambiguous risk perceptions, limiting the amount of information socially shared
leads to more precise and less ambiguous risk perceptions.
This paper makes several important contributions. First, the present work provides researchers with
a framework to theoretically study the interplay between organizational, psychological, and social
phenomena in shaping collective risk perceptions. My model can be easily expanded to account
for psychological factors such as salience, cognitive load, limited attention, motivated cognition, or
costly mental effort. For example, past laboratory work has studied the impact of cognitive load on
the anchoring-and-adjustment heuristic, showing that cognitive load leads to a smaller adjustment
away from the anchor (Deck and Jahedi, 2015). In line with this evidence, one could argue that a
high cognitive load impairs people’s ability to mentally simulate and test the likelihood of different
probability estimates away from the anchor. As the length of the process of mental simulation of
probabilities in my model is determined by the amount of ambiguity multiplied by a constant of
proportionality (𝛼), one could simulate the evolution of risk perceptions in groups under different levels
of cognitive load by modifying this parameter (where a lower 𝛼 would represent a higher level of
cognitive load).
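As a toy illustration of this manipulation, consider a simulation stage whose number of steps is α times the amount of ambiguity. The step rule and candidate values below are hypothetical, but they show how shrinking α keeps the final estimate closer to the anchor:

```python
def simulate_adjustment(anchor, ambiguity, alpha, attitude_weight, step=0.05):
    """Hypothetical sketch: the agent mentally tests one candidate
    probability above and one below the anchor for
    n = round(alpha * ambiguity) steps, drifting toward a target that
    over-weights one side (attitude_weight > 1 favors the candidate
    below the anchor, i.e., ambiguity aversion)."""
    n_steps = max(1, round(alpha * ambiguity))
    above = min(anchor + ambiguity / 2, 1.0)
    below = max(anchor - ambiguity / 2, 0.0)
    target = (above + attitude_weight * below) / (1 + attitude_weight)
    estimate = anchor
    for _ in range(n_steps):
        estimate += step * (target - estimate)  # one mental-simulation step
    return estimate

# A lower alpha (modeling higher cognitive load) means fewer steps,
# hence a smaller adjustment away from the anchor.
relaxed = simulate_adjustment(0.5, ambiguity=0.4, alpha=50, attitude_weight=2)
loaded = simulate_adjustment(0.5, ambiguity=0.4, alpha=5, attitude_weight=2)
```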
Similarly, my model can be easily modified to study social factors such as status, social influence,
or the dynamic, endogenous evolution of social networks. For example, all my simulations assume
that every agent has the same number of connections and that these connections are bidirectional.
However, in real life, some agents occupy a public role, capable of influencing the perceptions of a large
crowd, but are unable to gather information from everyone. One could explore the role of these highly
influential individuals, and how their characteristics impact the resulting pattern of risk perceptions
at the group level by modifying the network used in the simulations. Specifically, by repeating my
simulations in a network where agents have access to only their 3 surrounding neighbors and 1 fixed
influential individual, one can study the role of status or social influence in the emergent patterns of risk
perceptions.
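The modified topology can be sketched as follows, under two assumptions not spelled out above: "surrounding neighbors" means the nearest positions on a ring, and links to the influencer are directed (everyone observes the influencer, but the influencer only listens locally):

```python
def influencer_network(n_agents, k_neighbors=3, influencer=0):
    """Hypothetical sketch of the modified topology: sources[i] lists
    the agents from whom agent i gathers information. Each agent
    observes its k nearest ring neighbors plus one fixed influential
    agent; the influencer reaches everyone but only listens locally."""
    sources = {}
    for i in range(n_agents):
        near = set()
        offset = 1
        while len(near) < k_neighbors:
            near.add((i - offset) % n_agents)
            if len(near) < k_neighbors:
                near.add((i + offset) % n_agents)
            offset += 1
        near.discard(i)
        if i != influencer:
            near.add(influencer)  # everyone gathers from the influencer
        sources[i] = sorted(near)
    return sources

net = influencer_network(8)
```

Re-running the simulations over `sources` instead of the symmetric baseline network would isolate the effect of the influential agent on the emergent pattern of risk perceptions.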
Moreover, my framework can be used to study the combined impact of psychological and social
processes in shaping collective perceptions of risk. For instance, by applying the 2 previously described
sets of changes, one can examine the role of highly influential agents in shaping risk perceptions across
groups experiencing different levels of cognitive load.
In more organizational contexts, my agent-based model can be used to investigate how different
design features such as firm structure (e.g., horizontal vs. vertical), incentives and information cost,
or the self-selection of individuals across tasks affect the collective emergence of risk perceptions
within organizations. By providing researchers with a framework to analyze the interaction of these
organizational, social, and psychological processes, my model paves the way to new research lines on
how we construct collective risk perceptions.
Second, my simulations and experimental results offer important insights into how we use social
information to construct subjective risk perceptions. In particular, my experimental results shed light on
the factors driving everyday choices in settings such as investments, insurance, or health-related decisions.
Beyond individual decisions, the results of my agent-based simulation have important implications
at the group and organizational level: they suggest that by focusing on information quality (rather than
availability), groups and organizations can more effectively boost the accuracy of emergent risk
perceptions.
Third, this paper contributes to several different literatures. For instance, my model adds to the
literature on the social amplification of risk framework (SARF; Kasperson et al., 1988). In fact, my model
can be understood as a mathematical formalization of some of the ideas presented in this literature
(see Kasperson et al., 2022 for a recent review), including the importance of considering the interaction
of social and psychological phenomena
in the formation of risk perceptions. Similarly, my results add to the literature on how social processes
such as attitudinal contagion (Fan and Pedrycz, 2016; Mason et al., 2007; Scherer and Cho, 2003; Watts,
2002, 2004), media influence (Moussaïd, 2013) or information cascades (Banerjee, 1992; Banerjee and
Fudenberg, 2004; Bikhchandani et al., 1992; Eyster and Rabin, 2010) shape collective perceptions of
risk. Specifically, this paper adds to this line of research by explicitly modeling a psychologically rich
individual process of risk perception formation. Finally, this paper adds to a growing body of evidence
that uses agent-based simulations to study the emergence of behaviors, preferences, and perceptions
in groups (Brown et al., 2022; Clement and Puranam, 2018; Gray et al., 2014; Gross and De Dreu,
2019; Haer et al., 2017; Kandiah et al., 2017; Moussaïd, 2013; Raveendran et al., 2022; Siggelkow and
Rivkin, 2005). The present work contributes to this literature by illustrating how existing individual
models of cognition and behavior can be easily adapted to an agent-based setting.
Despite these important contributions, this paper has a number of limitations that should be addressed
in future research. First, my theoretical framework prioritizes the modeling of certain mechanisms and
ignores several factors that can impact the emergence of collective risk perceptions. In particular, it
omits a number of cognitive mechanisms and biases that have been shown to play a key role
in the formation of individual risk perceptions (Kahneman et al., 1982). Future research should expand
my framework to consider how such biases impact the formation of collective risk assessments. It is
also important to keep in mind that the simulations are based on 3 arbitrary discrete states or conditions
(frictionless, availability frictions, and quality frictions). In SM Note 7 of the Supplementary Material,
I expand on these simulations by considering the combination of varying degrees of availability and
quality frictions and their impact on the formation of collective risk perceptions.
Second, while my experimental setting captures a static one-shot decision scenario, my agent-based
simulations are dynamic. This implies that my simulations rest upon additional implicit assumptions
that are not directly tested in this paper. For instance, my agent-based model assumes that individuals
have no direct access to feedback and that they do not take their previous risk perceptions into
consideration when generating new ones. Future research should test the impact of these assumptions
on the collective emergence of risk perceptions.
Third, in my simulations, the agents give the same weight to the social information gathered from all
their sources. This simplification does not hold in real life where individuals trust some sources more
than others. Future research should incorporate differing levels of trust within the agent’s network and
jointly model the dynamic evolution of risk perceptions and trust across groups and organizations.
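A minimal sketch of such an extension, assuming each agent holds a vector of trust weights over its sources and anchors on a trust-weighted rather than a plain average (the weighting scheme here is hypothetical):

```python
def weighted_anchor(estimates, trust):
    """Hypothetical trust extension: the anchor becomes a
    trust-weighted average of the neighbors' probability estimates,
    so highly trusted sources pull the anchor further."""
    total = sum(trust)
    return sum(t * p for t, p in zip(trust, estimates)) / total

# An agent that trusts its first source three times as much anchors
# closer to that source's estimate.
anchor = weighted_anchor([0.2, 0.8], trust=[3, 1])
```

Updating the trust weights themselves, for example by down-weighting sources whose past estimates proved inaccurate, would then let risk perceptions and trust co-evolve.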
Fourth, I restricted my simulations to groups of homogeneous agents over a fixed network topology
(i.e., a fixed map of agents and neighborhoods). Further work is needed to establish how heterogeneity
in the agents’ attitudes impacts the emerging risk perceptions. Similarly, future work needs to consider
how different network topologies shape the resulting patterns of risk perceptions.
My experimental design also has some limitations. While multiple price lists are widely used in
economics and psychology to recover subjective probabilities (see Charness et al., 2013 for a discussion
of experimental elicitation methods in risky choice), they have certain shortcomings. Specifically, my
method for eliciting subjective probabilities lacks perfect precision, as it only allows for probability
measurements in increments of 0.1. This could be especially problematic in contexts of low ambiguity.
Additionally, this method is prone to a central tendency bias, potentially skewing my subjective
probability measurements toward 0.5. The method’s complexity may also lead to decision errors
among some participants. Future research should aim to replicate my main experimental findings using
different methods of probability elicitation, such as direct probability matching.
Finally, past work has demonstrated that attitudes toward ambiguous uncertain events are source
dependent (Abdellaoui et al., 2011; Baillon et al., 2018). That is, individual attitudes vary depending
on the source that generates the uncertainty. Given my model's symmetry, this source-dependence has
a limited impact on the main results of my simulations (i.e., the effect of information frictions on the
emergence of risk perceptions). Yet, it is important to remark that in contexts where agents give more
weight to probabilities above the anchor, the resulting pattern of probabilities will be different. For
instance, in my baseline setting, and assuming that the agents put more weight on probabilities above
their anchor, the group probabilities would converge to the true probabilities for high-probability
events. On the contrary, for low true-probability events, the group's subjective probabilities would
diverge from the true probabilities of the event. To make more generalizable claims, throughout the
paper I have described the relationship of these probabilities with the agents' ambiguity attitudes
(i.e., whether the probabilities were consistent with the agents' attitudes or not) rather than focusing
on high or low probabilities.
It is important to note that, while several risk perception models have been proposed in the literature,
the present work focuses exclusively on adapting the Einhorn and Hogarth (1985, 1986) model of
probabilistic reasoning under ambiguity to an agent-based setting. Future work should adapt other
models of individual behavior to an agent-based framework to explore how risk perceptions form and
evolve within groups. Other anchoring and adjustment models of risk perception, for instance, could be
successfully translated into a social context using some of the simplifying assumptions introduced in
this paper. For instance, in the anchoring and adjustment model proposed by Johnson and Busemeyer
(2016), decision weights are obtained through a Markov process in which the decision maker first
anchors and then mentally transitions between outcomes until making a prediction. When adapting this
to an agent-based setting, one could develop a model in which agents gather different outcomes, along
with their prevalence, from their social network and use this information to mentally simulate the likelihood
of each outcome, anchoring on the outcomes experienced by those individuals nearest to them in the
social network.
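To make the adaptation concrete, here is a hypothetical sketch, not Johnson and Busemeyer's actual specification: the agent anchors on the outcome reported by its nearest neighbor and then transitions between socially observed outcomes with probabilities proportional to how often each outcome appears in the network.

```python
import random

def social_markov_prediction(outcomes, prevalence, n_transitions, seed=0):
    """Hypothetical sketch: anchor on outcomes[0] (the nearest
    neighbor's outcome), then perform a fixed number of Markov
    transitions between outcomes weighted by their prevalence in the
    social network; the final state visited is the prediction."""
    rng = random.Random(seed)
    state = outcomes[0]  # anchoring step
    for _ in range(n_transitions):
        # transition probabilities proportional to social prevalence
        state = rng.choices(outcomes, weights=prevalence)[0]
    return state

prediction = social_markov_prediction(
    outcomes=[100, 200, 500], prevalence=[1, 1, 8], n_transitions=10)
```

With zero transitions the prediction stays at the anchor, while longer chains converge toward the most prevalent outcome, mirroring the anchoring-and-adjustment trade-off in the original model.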
Moving beyond the anchoring and adjustment heuristic, there is promising potential in adapting
models that examine specific cognitive and affective processes in risky decision-making (Bordalo
et al., 2012; Enke and Graeber, 2019) to an agent-based setting. Such adaptations would broaden our
understanding of the dynamics of risk perception in group and organizational settings, offering a richer,
more diverse perspective on how individuals collectively navigate and interpret uncertainty.
Despite these limitations, this paper presents a flexible approach to theoretically study the interaction
of psychological, social, and organizational processes in shaping collective perceptions of risk. In doing
so, this work paves the way for future research on how we collectively form perceptions of risk.
Supplementary material. The supplementary material for this article can be found at https://doi.org/10.1017/jdm.2024.9.

Data availability statement. Data and code are available in the OSF repository at https://osf.io/bdgm3/?view_only=32d3dd1abc4b4a1e9ad5c3eccaa5dd8b.

Acknowledgments. I am very grateful to Robin M. Hogarth for insightful comments and feedback on an earlier version of this
manuscript.

Funding statement. This research received no specific grant funding from any funding agency, commercial or not-for-profit
sectors.

Competing interest. The author declares no competing interests.

References
Abdellaoui, M., Baillon, A., Placido, L., & Wakker, P. P. (2011). The rich domain of uncertainty: Source functions and their
experimental implementation. American Economic Review, 101(2), 695–723.
Acemoglu, D., Ozdaglar, A., & ParandehGheibi, A. (2010). Spread of (mis)information in social networks. Games and Economic
Behavior, 70(2), 194–227.
Baillon, A., Huang, Z., Selim, A., & Wakker, P. P. (2018). Measuring ambiguity attitudes for all (natural) events. Econometrica,
86(5), 1839–1858.
Banerjee, A. V. (1992). A simple model of herd behavior. Quarterly Journal of Economics, 107(3), 797–817.
Banerjee, A. V., & Fudenberg, D. (2004). Word-of-mouth learning. Games and Economic Behavior, 46(1), 1–22.
Barberis, N. C. (2013). Thirty years of prospect theory in economics: A review and assessment. Journal of Economic
Perspectives, 27(1), 173–196.
Bikhchandani, S., Hirshleifer, D., & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as informational
cascades. Journal of Political Economy, 100(5), 992–1026.
Bordalo, P., Gennaioli, N., & Shleifer, A. (2012). Salience theory of choice under risk. Quarterly Journal of Economics, 127(3),
1243–1285.
Borsboom, D., van der Maas, H. L., Dalege, J., Kievit, R. A., & Haig, B. D. (2021). Theory construction methodology: A practical
framework for building theories in psychology. Perspectives on Psychological Science, 16(4), 756–766.
Brown, G. D., Lewandowsky, S., & Huang, Z. (2022). Social sampling and expressed attitudes: Authenticity preference and
social extremeness aversion lead to social norm effects and polarization. Psychological Review, 129(1), 18.
Busby, J. S., Onggo, B. S., & Liu, Y. (2016). Agent-based computational modelling of social risk responses. European Journal
of Operational Research, 251(3), 1029–1042.
Charness, G., Gneezy, U., & Imas, A. (2013). Experimental methods: Eliciting risk preferences. Journal of Economic Behavior
& Organization, 87, 43–51.
Clement, J., & Puranam, P. (2018). Searching for structure: Formal organization design as a guide to network evolution.
Management Science, 64(8), 3879–3895.
Crane, R., & Sornette, D. (2008). Robust dynamic classes revealed by measuring the response function of a social system.
Proceedings of the National Academy of Sciences, 105(41), 15649–15653.
Deck, C., & Jahedi, S. (2015). The effect of cognitive load on economic decision making: A survey and new experiments.
European Economic Review, 78, 97–119.
Dimmock, S. G., Kouwenberg, R., & Wakker, P. P. (2016). Ambiguity attitudes in a large representative sample. Management
Science, 62(5), 1363–1380.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference. Psychological Review,
92(4), 433.
Einhorn, H. J., & Hogarth, R. M. (1986). Decision making under ambiguity. Journal of Business, 59(4), S225–S250.
Enke, B., & Graeber, T. (2019). Cognitive uncertainty (tech. rep.). National Bureau of Economic Research.
Eyster, E., & Rabin, M. (2010). Naive herding in rich-information settings. American Economic Journal: Microeconomics, 2(4),
221–243.
Fan, K., & Pedrycz, W. (2016). Opinion evolution influenced by informed agents. Physica A: Statistical Mechanics and Its
Applications, 462, 431–441.
Fried, E. I. (2020). Lack of theory building and testing impedes progress in the factor and network literature. Psychological
Inquiry, 31(4), 271–288.
Ghaderi, J., & Srikant, R. (2014). Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate.
Automatica, 50(12), 3209–3215.
Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83(6), 1420–1443.
Gray, K., Rand, D. G., Ert, E., Lewis, K., Hershman, S., & Norton, M. I. (2014). The emergence of “us and them” in 80 lines of
code: Modeling group genesis in homogeneous populations. Psychological Science, 25(4), 982–990.
Gross, J., & De Dreu, C. K. (2019). The rise and fall of cooperation through reputation and group polarization. Nature
Communications, 10(1), 1–10.
Haer, T., Botzen, W. W., de Moel, H., & Aerts, J. C. (2017). Integrating household risk mitigation behavior in flood risk analysis:
An agent-based model approach. Risk Analysis, 37(10), 1977–1992.
Hakes, J., & Viscusi, W. K. (1997). Mortality risk perceptions: A Bayesian reassessment. Journal of Risk and Uncertainty, 15,
135–150.
Henkel, L. (2022). Experimental evidence on the relationship between perceived ambiguity and likelihood insensitivity (tech.
rep.). ECONtribute Discussion Paper. University of Bonn and University of Cologne.
Hogarth, R. M., & Einhorn, H. J. (1990). Venture theory: A model of decision weights. Management Science, 36(7), 780–803.
Hogarth, R. M., & Kunreuther, H. (1985). Ambiguity and insurance decisions. American Economic Review, 75(2), 386–390.
Hogarth, R. M., & Kunreuther, H. (1989). Risk, ambiguity, and insurance. Journal of Risk and Uncertainty, 2(1), 5–35.
Holzmeister, F., Huber, J., Kirchler, M., Lindner, F., Weitzel, U., & Zeisberger, S. (2020). What drives risk perception? A global
survey with financial professionals and laypeople. Management Science, 66(9), 3977–4002.
Jaspersen, J. G., & Ragin, M. A. (2021). A model of anchoring and adjustment for decision-making under risk. Available at SSRN
3845633.
Johnson, J. G., & Busemeyer, J. R. (2016). A computational model of the attention process in risky choice. Decision, 3(4), 254.
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge:
Cambridge University Press.
Kandiah, V., Binder, A. R., & Berglund, E. Z. (2017). An empirical agent-based model to simulate the adoption of water reuse
using the social amplification of risk framework. Risk Analysis, 37(10), 2005–2022.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J. X., & Ratick, S. (1988). The social
amplification of risk: A conceptual framework. Risk Analysis, 8(2), 177–187.
Kasperson, R. E., Webler, T., Ram, B., & Sutton, J. (2022). The social amplification of risk framework: New perspectives. Risk
Analysis, 42(7), 1367–1380.
Kempe, D., Kleinberg, J., & Tardos, É. (2003). Maximizing the spread of influence through a social network. In Proceedings
of the ninth ACM SIGKDD international conference on knowledge discovery and data mining (pp. 137–146). New York,
NY: Association for Computing Machinery.
Kim, H., Schroeder, A., & Pennington-Gray, L. (2016). Does culture influence risk perceptions? Tourism Review International,
20(1), 11–28.
Li, Z., Müller, J., Wakker, P. P., & Wang, T. V. (2018). The rich domain of ambiguity explored. Management Science, 64(7),
3227–3240.
Mason, W. A., Conrey, F. R., & Smith, E. R. (2007). Situating social influence processes: Dynamic, multidirectional flows of
influence within social networks. Personality and Social Psychology Review, 11(3), 279–300.
Mestdagh, M., Pe, M., Pestman, W., Verdonck, S., Kuppens, P., & Tuerlinckx, F. (2018). Sidelining the mean: The relative
variability index as a generic mean-corrected variability measure for bounded variables. Psychological Methods, 23(4), 690.
Moussaïd, M. (2013). Opinion formation and the collective dynamics of risk perception. PLoS One, 8(12), e84592.
Raveendran, M., Puranam, P., & Warglien, M. (2022). Division of labor through self-selection. Organization Science, 33(2),
810–830.
Robinaugh, D. J., Haslbeck, J. M., Ryan, O., Fried, E. I., & Waldorp, L. J. (2021). Invisible hands and fine calipers: A call to use
formal theory as a toolkit for theory construction. Perspectives on Psychological Science, 16(4), 725–743.
Scherer, C. W., & Cho, H. (2003). A social network contagion theory of risk perception. Risk Analysis, 23(2), 261–267.
Siegrist, M., & Árvai, J. (2020). Risk perception: Reflections on 40 years of research. Risk Analysis, 40(S1), 2191–2206.
Siggelkow, N., & Rivkin, J. W. (2005). Speed and search: Designing organizations for turbulence and complexity. Organization
Science, 16(2), 101–122.
Slovic, P. (2016). The perception of risk. New York: Routledge.
Slovic, P., Fischhoff, B., & Lichtenstein, S. (1980). Facts and fears: Understanding perceived risk. Boston, MA: Springer.
Slovic, P., & Peters, E. (2006). Risk perception and affect. Current Directions in Psychological Science, 15(6), 322–325.
Smith, V. K., & Johnson, F. R. (1988). How do risk perceptions respond to information? The case of radon. Review of Economics
and Statistics, 70, 1–8.
Smith, V. K., & Michaels, R. G. (1987). How did households interpret chernobyl?: A Bayesian analysis of risk perceptions.
Economics Letters, 23(4), 359–364.
Tompkins, M. K., Bjälkebring, P., & Peters, E. (2018). Emotional aspects of risk perceptions. In M. Raue, E. Lermer, & B.
Streicher (Eds.), Psychological perspectives on risk and risk analysis: Theory, models, and applications (pp. 109–130).
Springer. https://doi.org/10.1007/978-3-319-92478-6
Trautmann, S. T., & Van De Kuilen, G. (2015). Ambiguity attitudes. In G. Keren & G. Wu (Eds.), The Wiley Blackwell
handbook of judgment and decision making (Vol. 2, pp. 89–116). Chichester: John Wiley & Sons.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Viscusi, W. K. (1985). Are individuals Bayesian decision makers? American Economic Review, 75(2), 381–385.
Viscusi, W. K., & O’Connor, C. J. (1984). Adaptive responses to chemical labeling: Are workers Bayesian decision makers?
American Economic Review, 74(5), 942–956.
Watts, D. J. (2002). A simple model of global cascades on random networks. Proceedings of the National Academy of Sciences,
99(9), 5766–5771.
Watts, D. J. (2004). The “new” science of networks. Annual Review of Sociology, 30, 243–270.
Wu, F., & Huberman, B. A. (2007). Novelty and collective attention. Proceedings of the National Academy of Sciences, 104(45),
17599–17601.
Yildiz, E., Ozdaglar, A., Acemoglu, D., Saberi, A., & Scaglione, A. (2013). Binary opinion dynamics with stubborn agents. ACM
Transactions on Economics and Computation (TEAC), 1(4), 1–30.

Cite this article: Pirla, S. (2024). A psychological model of collective risk perceptions. Judgment and Decision Making, e13.
https://doi.org/10.1017/jdm.2024.9
