Online Review Characteristics and Trust: A Cross-Country Examination
Volume 50 Number 3
June 2019
Mei Li
Department of Supply Chain Management, Eli Broad College of Business, Michigan State
University, N334 Business Complex, East Lansing, MI, 48824
K. Sivakumar
Department of Marketing, College of Business and Economics, Lehigh University, Bethlehem,
PA, 18015
ABSTRACT
Consumers and companies are increasingly concerned about fake reviews posted online.
To make appropriate managerial decisions about online reviews, it is crucial to under-
stand what drives consumer trust in online reviews. Research examining the linkage
between online reviews and trust is sporadic and lacks a comprehensive framework.
To address this gap, this research investigates the individual and joint impact of three
review attributes—valence, rationality, and source—on the benevolence, ability, and
integrity dimensions of trustworthiness of the reviewer, which further determines trust
in online reviews. Using behavioral experiments, the study finds that positive reviews,
factual reviews, and reviews appearing on social networks lead to perceptions of greater
benevolence, ability, and integrity than negative reviews, emotional reviews, and reviews
appearing on retailer sites, respectively. In addition, review rationality and review source
moderate the link between review valence and the dimensions of reviewer trustworthi-
ness: the effect of valence is stronger for emotional reviews than for factual reviews and
for retailer sites than for social networks. Furthermore, the study analyzes data from
the United States and China to examine generalizability and cultural nuances of the
proposed relationships. This study provides important insights into consumer trust in
online reviews and offers guidelines for decision making in communication strategy,
system design, and operations by manufacturers, retailers, service providers, and online
platforms to more effectively manage online reviews. [Submitted: November 14, 2017.
Revised: August 22, 2018. Accepted: August 25, 2018.]
We appreciate the constructive guidance of the editors-in-chief, senior editor, associate editor, and anonymous reviewers. We are grateful for the generous funding support from the Lehigh University Faculty Research Grant and CIBER at Michigan State University.
† Corresponding author
INTRODUCTION
With the burgeoning growth of the Internet, online consumer reviews about goods
and services have become a ubiquitous phenomenon (Hayne, Wang, & Wang, 2015;
Hong, Xu, Wang, & Fan, 2017). According to PowerReviews (O’Neil, 2015), 95%
of the people surveyed said they regularly read online reviews when researching
and purchasing consumer products. Indeed, the consumer decision journey has
dramatically changed with the advent of technology (Lemon & Verhoef, 2016),
and these consumer-to-consumer influences have become an important force in
shaping consumer decisions (Li, Choi, Rabinovich, & Crawford, 2013). Therefore,
a more detailed understanding of online reviews will enable firms to make better
decisions about managing online reviews with implications in communication,
systems design, and operations.
Despite the increasing importance of online reviews, criticisms about their
credibility exist (Weise, 2015). With trust being “the single most important variable
influencing interpersonal and inter-organizational behavior” (Zhang, Viswanathan,
& Henke, 2011, p. 318), understanding what factors drive consumer trust in online
reviews is crucial to arrive at appropriate decisions in leveraging online reviews
(Wolf & Muhanna, 2011). Yet academic research examining the link between
online reviews and trust is sporadic and lacks a comprehensive framework. To
address this research gap, this study examines three research questions: (1) How
do review characteristics (valence, rationality, and source) influence the dimensions
of reviewer trustworthiness (benevolence, ability, and integrity)? (2) Are there any
interaction effects among these review characteristics? And (3) do the proposed
relationships hold in the United States and China, two important markets with very
different cultures?
By addressing these research questions, we make several contributions to the
literature on online reviews and offer decision-making guidelines for managers.
First, research assessing online reviews often focuses on a single aspect of reviews
(e.g., review valence); this study adopts a multidimensional approach by examining
the independent and joint impacts of three review attributes. Second, this research
is the first to link review attributes to the three dimensions of trustworthiness to
gain a comprehensive understanding of the role of online reviews on trust. Third,
given the importance of globalization and technology, multinational studies are
particularly valuable; thus, we conduct the study in the United States and China, to
enhance the generalizability of our theoretical framework and to examine cultural
nuances.
i There are research papers focused on trust toward online retailers; however, the focus of our article is trust
in online reviews (rather than in online retailers). These are conceptually distinct constructs, and trust
formation toward online reviewers differs substantially from trust formation toward online retailers. Please
refer to Web Appendix 1 for a more detailed discussion of this issue.
[Table 1 about here.] Note: V = valence, R = rationality, S = source, O = other; N/A = not applicable; Contributions: INT = interaction of review characteristics, TR/TW = distinguishing trust and trustworthiness, 3-FAC = three-factor trustworthiness model, CCC = cross-country comparison; M = the construct was measured using various multipoint scales.
both the individual and joint effects of these attributes on trustworthiness to offer
a more complete and contingent view of trust in online reviews (see Table 1).
Fifth, research on online reviews has primarily been conducted in developed
countries (Yin et al., 2016; Banerjee, Bhattacharyya, & Bose, 2017); limited re-
search has examined the generalizability of the knowledge to other countries such
as China (Wu, Noorian, Vassileva, & Adaji, 2015). This study tests the proposed
framework in the United States and China, offering guidance to U.S. retailers
exploring the Chinese market (e.g., Costco, Amazon.com) and Chinese retailers
exploring the U.S. market (e.g., Alibaba) (Wolf & Muhanna, 2011).
CONCEPTUAL FRAMEWORK
Figure 1 depicts our conceptual framework. Building on the three-factor model by
Mayer et al. (1995), we examine how the three review attributes individually and
jointly influence the three dimensions of reviewer trustworthiness, which in turn
determine trust in reviews.
Extant literature suggests that the trustworthiness of a source depends directly
on how individuals perceive and respond to the message in the review (Grewal,
Gotlieb, & Marmorstein, 1994); the more trustworthy the reviewer, the lower
the uncertainty of the review (Racherla, Mandviwalla, & Connolly, 2012). Retailers
use multiple incentives to encourage promotional chat and manipulate online
reviews (Chevalier & Mayzlin, 2006), and the lack of nonverbal and social cues in
the online environment compounds the issue of reviewer genuineness (Ray, Ow, &
Kim, 2011). A particular challenge in online settings is that the party who writes
the review (review-writer) and the party who reads the review (review-reader) have
neither a history nor an expectancy of future interactions (Racherla et al., 2012).
This is similar to the communication between strangers, and thus research exam-
ining communication between strangers could provide insights for our research
(Bargh, McKenna, & Fitzsimons, 2002). Next we explain the key constructs in our
model.
Review Characteristics
Online consumers seek credible information to facilitate decision making (Xu,
2014), and they often employ both active and passive strategies to do so (Jacoby
et al., 1994). In the context of online reviews, active strategies involve evaluating
Dong, Li, and Sivakumar 543
message content to determine source expertise and bias (Filieri et al., 2015). Review
content has two dimensions: review valence, or the positive or negative orientation
of the information (Yin et al., 2016), and review rationality, or the extent to
which the argument is fact-based, objective, and verifiable (Cheung, Sia, & Kuan,
2012). Given the anonymous nature of online communication and the potential
for deception, consumers sometimes employ passive strategies by searching for
peripheral cues to assess the credibility of a review (Filieri et al., 2015). Review
source (i.e., where a review appears) is one such peripheral cue (Bosman, Boshoff,
& van Rooyen, 2013). Online reviews can appear on a retailer’s Web site (e.g.,
Amazon.com) or on social networks (e.g., Facebook, Twitter).
and (2) positive expectations (Colquitt et al., 2007). We follow the literature and
define trust in the review as the willingness of online consumers to believe the
written commentaries of reviewers and to rely on them with the expectation that
the reviewers are trustworthy (Mayer et al., 1995). This definition suggests that
trustworthiness of the reviewer is an antecedent of trust in the review, consistent
with the literature (Colquitt et al., 2007).
HYPOTHESES DEVELOPMENT
Effect of Review Valence on Reviewer Trustworthiness
We hypothesize how review valence influences the benevolence and integrity
dimensions of reviewer trustworthiness. First, in an online review context, a positive
review projects a greater sense of benevolence than a negative review (Duffy, 2017).
Research shows that a person’s online expressions, in the form of online reviews,
are reflections of his or her “true self” (Bargh et al., 2002). A review that is filled
with kind words and emphasizes what is good or laudable reflects internal kindness
of the reviewer and his or her willingness to do good to others (Griskevicius et al.,
2007). As both kindness and willingness to do good to others are distinguishing
features of benevolence (Mayer et al., 1995), it naturally follows that positive
statements lead to the perception of benevolence (Xu, 2014). Conversely, a review
that is demeaning and expresses criticism is likely to reflect an unkind reviewer and
works against the image of doing good to others (Banerjee et al., 2017). Therefore,
we propose that positive reviews garner stronger perceptions of benevolence than
negative reviews.
Second, review valence appeals to the integrity dimension of trustworthiness.
Recent research shows that consumers exhibit positive bias in receiving word of
mouth (WOM) (Martin, 2017)—they simply view positive product statements as
more trustworthy. The underlying logic can be explained by research on behavioral
cues to discern truth-tellers from liars (DePaulo et al., 2003). One behavioral cue
people often use to decide whether to trust a stranger is the valence of his or
her statement (DePaulo et al., 2003). Research shows that liars tend to give more
negative statements and complaints, sound less pleasant, and look less friendly than
truth-tellers (Zuckerman, Kernis, Driver, & Koestner, 1984). Given that reviewers
and readers interact more like strangers (Bargh et al., 2002), readers are likely to
rely on this cue to evaluate reviewers’ integrity (Bilgihan et al., 2016). Thus, we
propose that people tend to view reviewers who express positive reviews as more
honest than those who publish negative reviews.
H1: Review valence influences reviewer trustworthiness, such that a positive review
leads to perceptions of (a) greater benevolence and (b) greater integrity than a
negative review.
1996). Interpersonal relationships often operate in different circles and social ties
(Gao, Ballantyne, & Knight, 2010). Strangers providing reviews on a retailer’s
Web site and acquaintances on social networks have different levels of relationship
familiarity (Burgoon, Buller, Floyd, & Grandpre, 1996) and belong to different
circles or social ties (Racherla et al., 2012). Research indicates that relationship
familiarity between the review-writer and review-reader can improve readers’
ability to detect deception (Burgoon et al., 1996) and influences the interpretation
of a reviewer’s motive for posting a review and thus the credibility of the review
(Buller & Burgoon, 1996). An inner circle is often represented by close ties to
family members and friends (Granovetter, 1983), whereas an outer circle often
reflects instrumental ties that include strangers. Research shows that compared
to inner-circle members, people tend to perceive outer-circle members as less
honest (McAllister, 1995). When sharing product reviews, people often view those
who are in their social networks as more sincere, honest, and less self-interested
than strangers who post reviews on a retailer’s Web site (Buller & Aune, 1987).
Moreover, reviews posted on retail sites are subject to the influence and even
manipulation of the retailers and thus are less independent and have a greater
possibility of conflicts of interest (Bosman et al., 2013). This presence and/or
absence of self-interest in crafting a review also influences the inference of reviewer
integrity.
Review source also influences the perception of benevolence. Membership
in a social network is often by self-selection: people keep in their inner circle
family members and friends who exhibit similar and desirable characteristics
(Buller & Aune, 1987). Given the closer relationships with people on social networks than with
strangers on retailing sites, people tend to believe that the former are more likely
to have good motives to share their candid views (Buller & Burgoon, 1996) and
help them make better decisions (Mackiewicz et al., 2016).
H3: Review source influences reviewer trustworthiness, such that a review posted
on one’s social network leads to perceptions of (a) greater benevolence and (b)
greater integrity than a review posted on a retailer site.
other words, reviews packed with factual information, even when negative, help
alleviate concerns that the reviewer is hostile, disagreeable, or untruthful (DePaulo
et al., 2003) and thus reduce the disadvantage of negative reviews (Sparks,
So, & Bradley, 2016). Therefore, consumers rely less on review valence to project
trustworthiness, resulting in a weakened effect of review valence.
H4: Review rationality moderates the effect of valence on perceptions of (a) benev-
olence and (b) integrity such that the effect of valence is stronger for emotional
reviews than for factual reviews.
H5: Review source moderates the effect of valence on perceptions of (a) benevo-
lence and (b) integrity such that the effect of review valence is stronger for retailer
sites than for social networks.
METHODOLOGY
Research Design and Context
We used a scenario-based experiment to collect data in two countries. Given
that most research on consumer reviews uses secondary data when examining
their impact on sales and firm performance (Sun, 2012), use of experiments im-
proves conceptualization accuracy and establishes causal relationships among key
constructs (Bendoly, Bachrach, & Perry-Smith, 2011). We followed the exper-
imental methods used in behavioral operations research (Bendoly et al., 2011;
Bendoly, 2014) and research on trust in online reviews (e.g., Sparks et al., 2013;
Xu, 2014) to examine the psychological process of how consumers trust online
reviews.
We use online clothing purchases as the research context. Statistics show
that clothing is the leading online retail category in both the United States and China
(www.statista.com). This context allows us to explore a type of product that is
significant to online retailers, familiar to online consumers, and comparably
popular in both countries. We reviewed more than 200 real online clothing
reviews from Amazon.com and Taobao before preparing our experimental
scenarios. A 2 (valence) × 2 (rationality) × 2 (source) between-subjects exper-
imental design was used. The scenario featured the product review of a jacket.
We chose this product because it is gender neutral and has comparable shopping
patterns across the two countries. Respondents were asked to imagine that they
were planning to purchase a denim outfit and were then randomly assigned to
one of the eight conditions to read a product review. After reviewing myriad real
consumer reviews, we identified four major decision attributes that appear most
frequently in consumer clothes reviews: material, shipping, design, and fit. In-
corporating these decision factors, we manipulated review valence at two levels:
positive reviews (with all four attributes positive) and negative reviews (with all
four attributes negative). We manipulated review rationality at two levels: factual
review (e.g., describing facts) and emotional review (e.g., venting emotion and
feelings). We also manipulated review source at two levels: a review appearing on
an online retail site and a social network site. The manipulation included using two
made-up names to differentiate the sites—eRetail.com and Friendbook—along
with an explanation of the sites. Detailed scenario descriptions appear in Web
Appendix 2.
and the remainder had a high school degree or lower. Regarding ethnicity, 8.9%
were African American, 73.2% Caucasian, 5.8% Hispanic, 8.3% Asian, and 3.8%
other.
We employed a similar sampling approach in China and used Sojump, one of
the most prominent crowdsourcing companies for online surveys in China. Sojump
provides researchers with greater control of sample representativeness and response
quality. A total of 1,790 participants were solicited, of whom 577 accepted the
invitation and completed the survey; 346 responses passed Sojump's established
screening criteria and were considered valid. Among
them, 55.2% of respondents were women, and ages ranged from 20 to 63 years,
with a mean of 32. Respondents came from 24 of 34 provinces, and 96.8% were
from nonrural areas. Finally, 79% had a college degree, 7% had a graduate degree,
and the remaining 14% had a high school degree or lower. Following the strategies
recommended by existing research (e.g., Goodman & Paolacci, 2017; Hulland &
Miller, 2018), we adopted various procedures in our research design prior to data
collection and data quality screening post data collection to ensure reliability and
validity of the data we collected from both MTurk and Sojump. Additional details
about these procedures are elaborated in Web Appendix 3.
gender, income, and education. We developed the original script and questionnaire
in English. Two native Chinese speakers then translated the questionnaire into
Chinese, and two other native Chinese speakers translated it back into English;
this translation/back-translation approach ensured language equivalence.
Analytical Procedure
We followed a systematic approach to compile the data, to ensure measurement
appropriateness, to test the hypotheses, and to examine nuances between the two
country samples. Specifically, we followed six steps: (1) we conducted manipu-
lation checks to ensure the scenarios were appropriate for the hypotheses (using
comparison of means across groups); (2) we performed confirmatory factor anal-
ysis (CFA) to ensure reliability and convergent and discriminant validity of all
measures (using EQS 6.2 for Windows); (3) we established measurement equiva-
lence between data collected from the two countries (using χ 2 comparisons to test
for full factor structure equivalence in EQS 6.2); (4) we used analysis of covari-
ance (ANCOVA) to examine the individual and joint effects of the three review
attributes on the three review trustworthiness dimensions for both countries in
SPSS version 24; (5) we tested the relationship between trustworthiness and trust
using linear multiple regression analysis in SPSS version 24; and (6) we com-
bined the two country data to statistically assess the differences between the two
countries.
RESULTS
Manipulation Checks
All manipulation checks were conducted with seven-point scales. We assessed
the manipulation of review valence using two items: “The review I just read has
a favorable opinion of the product” and “The review reflects negatively on the
product.” The score was higher for positive than negative reviews (U.S.: Mpos =
6.56 vs. Mneg = 1.64, p < .001; China: Mpos = 6.30 vs. Mneg = 1.49, p < .001).
We used two items to evaluate the manipulation of review rationality, which
assessed whether the review focused on describing facts or emotions. Respondents
considered the review more rational in the factual than emotional review condition
(U.S.: Memo = 2.94 vs. Mfact = 5.24, p < .001; China: Memo = 4.68 vs. Mfact
= 3.69, p < .001). The emotion literature suggests that emotional statements
can trigger different levels of arousal, so we used two items (unpleasant/pleasant
and calm/excited) to assess the arousal levels across the factual and emotional
conditions; we found no significant difference in arousal levels between the two
(U.S.: Arousalemo = 4.24 vs. Arousalfact = 3.99, p = .129; China: Arousalemo = 4.11
vs. Arousalfact = 3.95, p = .249). We deemed the manipulation of review source
successful by evaluating whether the site allows for direct purchase (U.S.: Mretail
= 6.24 vs. Msocial = 1.52, p < .001; China: Mretail = 6.51 vs. Msocial = 1.39, p <
.001). Responses to all manipulation check questions were significantly different
across experimental conditions in the expected direction, confirming successful
manipulations in both countries.
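A mean comparison of this kind can be illustrated with a two-sample t-test; the ratings below are simulated for illustration and are not the study's data.

```python
# Illustrative manipulation check: compare mean valence-check ratings across
# the two valence conditions. Ratings are simulated, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical 7-point ratings in the positive and negative review conditions.
positive = np.clip(rng.normal(6.5, 0.6, 180), 1, 7)
negative = np.clip(rng.normal(1.6, 0.6, 180), 1, 7)

# Welch's t-test (no equal-variance assumption) on the condition means.
t_stat, p_value = stats.ttest_ind(positive, negative, equal_var=False)
print(f"M_pos = {positive.mean():.2f}, M_neg = {negative.mean():.2f}, p = {p_value:.2g}")
```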
Measurement Equivalence
In addition to the translational equivalence of the measures, we tested the measure-
ment equivalence between the U.S. and Chinese samples using two-group CFAs in
EQS 6.2 (Bagozzi & Yi, 1988). Again, we used elliptical reweighted least squares
to estimate the model. The factor structure was specified as invariant, and all the
factor loadings were constrained to be equal across the two samples, to achieve full
factor structure equivalence (Steenkamp & Baumgartner, 1998). We conducted a
chi-square difference test between the two-group model without constraints and
the model with constraints (i.e., invariant factor structure and factor loadings). We
found that the CFA model with all factor loadings constrained fit the data well (NFI
= .993; CFI = .994; IFI = .994; SRMR = .038; RMSEA = .045). The chi-square
difference between the two models was 13.3 (model without constraints: χ 2 (142)
= 241.790; model with constraints: χ 2 (152) = 255.090), with 10 degrees of
U.S. data
1. Ability 4.50 1.63 .935 .934 .828 .624 .573
2. Benevolence 4.19 1.44 .916 .917 .732 .640 .594 .74b
3. Integrity 4.73 1.39 .955 .954 .808 .689 .626 .74b .80b
4. Trust 4.63 1.77 .972 .971 .945 .689 .635 .79b .77b .83b
Chinese data
1. Ability 5.00 1.11 .865 .865 .683 .578 .504
2. Benevolence 4.46 1.15 .873 .872 .633 .504 .442 .61b
3. Integrity 4.98 1.03 .925 .925 .711 .608 .558 .75b .71b
4. Trust 4.90 1.19 .894 .891 .808 .608 .545 .76b .67b .78b
a
p < .001.
b
p < .01.
Note: CR = composite reliability.
freedom (ns, p > .05), suggesting full metric invariance (Steenkamp &
Baumgartner, 1998).
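The invariance decision reduces to a chi-square difference test on the two nested models; a short sketch using the figures reported in the text:

```python
# Chi-square difference test for metric invariance, using the fit statistics
# reported in the text (unconstrained vs. constrained two-group CFA).
from scipy.stats import chi2

chi2_free, df_free = 241.790, 142                # model without constraints
chi2_constrained, df_constrained = 255.090, 152  # invariant factor loadings

delta_chi2 = chi2_constrained - chi2_free   # 13.3
delta_df = df_constrained - df_free         # 10
p_value = chi2.sf(delta_chi2, delta_df)
print(f"delta chi2 = {delta_chi2:.1f}, delta df = {delta_df}, p = {p_value:.3f}")
# p is about .21 (> .05): constraining the loadings does not significantly
# worsen model fit, consistent with full metric invariance.
```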
ii Although we did not provide theoretical predictions for the effects of valence and source on ability (due
to the absence of prior research to establish the effects), in the interest of completeness, we include them in
the empirical analysis; likewise, despite the absence of hypotheses for the interaction between source and
rationality and the three-way interaction among valence, rationality, and source in our conceptual framework,
we keep them in the ANCOVA model for the sake of completeness of the analysis.
Table 3: Results of the full factorial ANCOVAs (F-values).

                                     U.S.                              China
                            Ability  Benevolence  Integrity   Ability  Benevolence  Integrity
Independent variables
H1 Valence                  30.95***  25.00***    39.10***     6.57**    5.08*      22.80***
H2 Rationality              76.44***  71.30***    61.73***    25.50***   9.22**     11.86***
H3 Source                    1.89      4.25*       2.68M       5.29*     8.13**      9.02**
H4 Valence × Rationality     9.61**   21.22***    15.20***     1.29       .53        9.56**
H5 Valence × Source          3.19M     7.92**      7.60**      2.25      2.17        2.55
Rationality × Source         1.43      1.44        1.92         .006      .20         .07
Valence × Rationality ×
  Source                     1.00       .84        1.73        1.18      1.66         .95
Control variables
Age                          1.15       .48         .01         .27      1.28         .27
Gender                       3.54M     3.63M       1.91         .73       .28         .46
Education                     .05       .00         .28        1.06       .08         .00
Income                       4.76*     3.82*       2.19         .03       .88         .06
Quality expectation          4.60*     1.20        6.27*      10.32***  33.12***    19.90***

* p < .05; ** p < .01; *** p < .001; M p < .1.
Table 4: Mean comparisons of the ANCOVAs.

A. Mean Comparisons of the Main Effects of Review Characteristics on Trust (H1–H3)

              Valence (U.S.)    Valence (China)   Rationality (U.S.)  Rationality (China)  Source (U.S.)   Source (China)
              Pos.    Neg.      Pos.    Neg.      Fact.    Emot.      Fact.    Emot.       Soc.    Ret.    Soc.    Ret.
Ability       5.00a   4.13a     5.15b   4.86b     5.25a    3.87a      5.29a    4.71a       4.67    4.45    5.13c   4.87c
Benevolence   4.58a   3.89a     4.59c   4.33c     4.82a    3.65a      4.64b    4.28b       4.38c   4.10c   4.63b   4.30b
Integrity     5.20a   4.37a     5.23a   4.74a     5.31a    4.26a      5.16a    4.81a       4.90M   4.68M   5.14b   4.83b

B. Mean Comparisons of the Valence × Rationality Interaction (H4)

              Ability (U.S.)    Ability (China)   Benevolence (U.S.)  Benevolence (China)  Integrity (U.S.)  Integrity (China)
              Pos.    Neg.      Pos.    Neg.      Pos.     Neg.       Pos.     Neg.        Pos.    Neg.      Pos.    Neg.
Factual       5.45M   5.06M     5.37    5.21      4.85     4.79       4.73     4.55        5.47    5.16      5.25    5.07
Emotional     4.56a   3.19a     4.92b   4.50b     4.31a    2.98a      4.46c    4.11c       4.94a   3.58a     5.21a   4.40a

C. Mean Comparisons of the Valence × Source Interaction (H5)

              Ability (U.S.)    Ability (China)   Benevolence (U.S.)  Benevolence (China)  Integrity (U.S.)  Integrity (China)
              Pos.    Neg.      Pos.    Neg.      Pos.     Neg.       Pos.     Neg.        Pos.    Neg.      Pos.    Neg.
Social        4.97b   4.38b     5.19    5.07      4.53M    4.23M      4.67     4.58        5.13b   4.66b     5.30c   4.97c
Retail        5.03a   3.88a     5.10b   4.64b     4.63a    3.55a      4.51b    4.08b       5.28a   4.07a     5.16a   4.50a

Note: Mean comparisons are significantly different across the two compared conditions with the same superscript: a p < .001; b p < .01; c p < .05; M p < .1. We report the full factorial ANCOVAs and include the relationships not formally hypothesized (e.g., effect of valence and source on ability).
affects perceived benevolence in both countries (U.S.: Fsource = 4.25, p < .05;
China: Fsource = 8.13, p < .01) and integrity in China (Fsource = 9.02, p < .01) but
has only a marginally significant effect on integrity in the United States (Fsource
= 2.68, p < .1), in full support of H3a but only partial support of H3b. Likewise,
although we did not provide theoretical predictions for the effect of review source
on perceived ability, the empirical results show a significant link in China (Fsource =
5.29, p < .05) but not in the United States (Fsource = 1.89, p = .17). When comparing
the means, we find that respondents perceive significantly greater benevolence and
integrity when the review appears on a social network than on a retail site (e.g.,
source–benevolence: U.S.: Benevolenceretail = 4.10, Benevolencesocial = 4.38, p <
.05; China: Benevolenceretail = 4.30, Benevolencesocial = 4.63, p < .01).
ability, and integrity as independent variables and trust as the dependent variable
into the regression. We find consistent results across countries: all three factors
significantly influence trust, in support of H6 (U.S.: β benevolence = .16, β ability =
.35, β integrity = .45, p < .001; China: β benevolence = .16, β ability = .36, β integrity =
.40, p < .001).
DISCUSSION
Theoretical Contributions
This research extends the literature in several ways. First, research on the con-
struct of trust primarily focuses on trust between people who are known to and
Managerial Implications
Given the prevalence of online reviews, increasing consumers' trust in online
reviews is an important objective for organizations (Hayne et al., 2015), because
increased trust in reviews directly influences customer attitudes and
downstream behaviors (Wolf & Muhanna, 2011). Our findings are relevant for
rides (e.g., Uber), lodging (e.g., Airbnb), and labor (e.g., TaskRabbit) (Benoit,
Baker, Bolton, Gruber, & Kandampully, 2017).
SUPPORTING INFORMATION
REFERENCES
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in prac-
tice: A review and recommended two-step approach. Psychological Bulletin,
103(3), 411–423.
Audi, R., & Murphy, P. E. (2006). The many faces of integrity. Business Ethics
Quarterly, 16(1), 3–21.
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models.
Journal of the Academy of Marketing Science, 16(1), 74–94.
Banerjee, S., Bhattacharyya, S., & Bose, I. (2017). Whose online reviews to trust?
Understanding reviewer trustworthiness and its impact on business. Decision
Support Systems, 96(3), 17–26.
Bargh, J. A., McKenna, K. Y., & Fitzsimons, G. M. (2002). Can you see the real
me? Activation and expression of the “true self” on the Internet. Journal of
Social Issues, 58(1), 33–48.
Bendoly, E. (2014). System dynamics understanding in projects: Information shar-
ing, psychological safety, and performance effects. Production and Opera-
tions Management, 23(8), 1352–1369.
Bendoly, E., Bachrach, D. G., & Perry-Smith, J. (2011). The perception of difficulty
in project-work planning and its impact on resource sharing. Journal of
Operations Management, 28(5), 385–397.
Benoit, S., Baker, T. L., Bolton, R. N., Gruber, T., & Kandampully, J. (2017). A
triadic framework for collaborative consumption (CC): Motives, activities
and resources & capabilities of actors. Journal of Business Research, 79(3),
219–227.
Berger, J. (2014). Word of mouth and interpersonal communication: A review
and directions for future research. Journal of Consumer Psychology, 24(4),
586–607.
Bilgihan, A., Barreda, A., Okumus, F., & Nusair, K. (2016). Consumer percep-
tion of knowledge-sharing in travel-related online social networks. Tourism
Management, 52(2), 287–296.
Bosman, D. J., Boshoff, C., & van Rooyen, G. (2013). The review credibility of
electronic word-of-mouth communication on e-commerce platforms. Man-
agement Dynamics, 22(3), 29–44.
Botsman, R. (2016). We've stopped trusting institutions and started trusting strangers. TED Summit. Retrieved from https://www.ted.com/talks/rachel_botsman_we_ve_stopped_trusting_institutions_and_started_trusting_strangers
Breidbach, C. F., & Brodie, R. J. (2017). Engagement platforms in the sharing
economy: Conceptual foundations and research directions. Journal of Service
Theory and Practice, 27(4), 761–777.
Buller, D. B., & Aune, R. K. (1987). Nonverbal cues to deception among intimates,
friends, and strangers. Journal of Nonverbal Behavior, 11(4), 269–290.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communi-
cation Theory, 6(3), 203–242.
Burgoon, J. K., Buller, D. B., Floyd, K., & Grandpre, J. (1996). Deceptive reali-
ties: Sender, receiver, and observer perspectives in deceptive conversations.
Communication Research, 23(6), 724–748.
Chang, T. V., Rhodes, J., & Lok, P. (2013). The mediating effect of brand trust be-
tween online customer reviews and willingness to buy. Journal of Electronic
Commerce in Organizations, 11(1), 22–42.
Cheung, C. M., Sia, C., & Kuan, K. K. Y. (2012). Is this review believable? A
study of factors affecting the credibility of online consumer reviews from an
ELM perspective. Journal of the Association for Information Systems, 13(8),
618–635.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales:
Online book reviews. Journal of Marketing Research, 43(3), 345–354.
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and
trust propensity: A meta-analytic test of their unique relationships with risk
taking and job performance. Journal of Applied Psychology, 92(4), 909–927.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K.,
& Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1),
74–118.
Devumi (2016). Learn how Lego mastered user-generated content. Medium. Retrieved from https://medium.com/@devumi/learn-how-lego-mastered-user-generated-content-86735044b43c
Donati, J. (2018). Facebook data scandal raises another question: Can there be too much privacy? The Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebook-data-scandal-raises-another-question-can-there-be-too-much-privacy-1522584000
Dong, B., Sivakumar, K., Evans, K. R., & Zou, S. (2015). Effect of customer partic-
ipation on service outcomes: The moderating role of participation readiness.
Journal of Service Research, 18(2), 160–176.
Duffy, A. (2017). Trusting me, trusting you: Evaluating three forms of trust on an
information-rich consumer review website. Journal of Consumer Behaviour,
16(3), 212–220.
Earley, P. C. (1986). Trust, perceived importance of praise and criticism, and work
performance: An examination of feedback in the United States and England.
Journal of Management, 12(4), 457–473.
Filieri, R., Alguezaui, S., & McLeay, F. (2015). Why do travelers trust TripAdvisor?
Antecedents of trust towards consumer-generated media and its influence on
recommendation adoption and word of mouth. Tourism Management, 51,
174–185.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equations models with
unobservable variables and measurement error. Journal of Marketing Re-
search, 18(1), 39–50.
Gao, H., Ballantyne, D., & Knight, J. G. (2010). Paradoxes and guanxi dilemmas in
emerging Chinese–Western intercultural relationships. Industrial Marketing
Management, 39(2), 264–272.
Goodman, J. K., & Paolacci, G. (2017). Crowdsourcing consumer research. Journal
of Consumer Research, 44(1), 196–210.
Granovetter, M. (1983). The strength of weak ties: A network theory revisited.
Sociological Theory, 1(1), 201–233.
Dong, Li, and Sivakumar 563
Grewal, D., Gotlieb, J., & Marmorstein, H. (1994). The moderating effects of fram-
ing and source credibility on the price-perceived risk relationship. Journal
of Consumer Research, 21(3), 145–153.
Griskevicius, V., Tybur, J. M., Sundie, J. M., Cialdini, R. B., Miller, G. F., &
Kenrick, D. T. (2007). Blatant benevolence and conspicuous consumption:
When romantic motives elicit strategic costly signals. Journal of Personality
and Social Psychology, 93(1), 85–102.
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis
(7th ed.). Upper Saddle River, NJ: Prentice Hall.
Hayne, S. C., Wang, H., & Wang, L. (2015). Modeling reputation as a time-series:
Evaluating the risk of purchase decisions on eBay. Decision Sciences, 46(6),
1077–1107.
Hong, H., Xu, D., Wang, G. A., & Fan, W. (2017). Understanding the determinants
of online review helpfulness: A meta-analytic investigation. Decision Support
Systems, 102, 1–11.
Hulland, J., & Miller, J. (2018). Keep on Turkin’? Journal of the Academy of
Marketing Science, 46(5), 789–794.
Jacoby, J., Jaccard, J. J., Currim, I., Kuss, A., Ansari, A., & Troutman, T. (1994).
Tracing the impact of item-by-item information accessing on uncertainty
reduction. Journal of Consumer Research, 21(2), 291–303.
Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is anybody out there? An-
tecedents of trust in global virtual teams. Journal of Management Information
Systems, 14(2), 29–64.
Jensen, M. L., Averbeck, J. M., Zhang, Z., & Wright, K. B. (2013). Credibility
of anonymous online product reviews: A language expectancy perspective.
Journal of Management Information Systems, 30(1), 293.
Johnston, D. A., McCutcheon, D. M., Stuart, F. I., & Kerwood, H. (2004). Effects of
supplier trust on performance of cooperative supplier relationships. Journal
of Operations Management, 22(1), 23–38.
Kim, M., Chung, N., & Lee, C. (2011). The effect of perceived trust on electronic
commerce: Shopping online for tourism products and services in South Ko-
rea. Tourism Management, 32(2), 256–265.
Lee, J., Park, D., & Han, I. (2011). The different effects of online consumer reviews
on consumers’ purchase intentions depending on trust in online shopping
malls: An advertising perspective. Internet Research, 21(2), 187–206.
Lemon, K. N., & Verhoef, P. C. (2016). Understanding customer experience
throughout the customer journey. Journal of Marketing, 80(6), 69–96.
Li, M., Choi, T. Y., Rabinovich, E., & Crawford, A. (2013). Inter-customer in-
teractions in self-service setting: Implications for perceived service quality
and repeat purchasing intentions. Production and Operations Management, 22(4), 888–914.
Liu, Z., & Park, S. (2015). What makes a useful online review? Implication for
travel product websites. Tourism Management, 47, 140–151.
Mackiewicz, J., Yeats, D., & Thornton, T. (2016). The impact of review environ-
ment on review credibility. IEEE Transactions on Professional Communica-
tion, 59(2), 71–88.
Martin, W. C. (2017). Don’t be such a downer: Examining the impact of valence on receivers of word of mouth (A structured abstract). In M. Stieler (Ed.), Creating marketing magic and innovative future marketing trends (pp. 979–983). Cham: Springer.
Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal
system on trust for management: A field quasi-experiment. Journal of Applied
Psychology, 84(1), 123–136.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of
organizational trust. Academy of Management Review, 20(3), 709–734.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24–59.
Moe, W. W., & Trusov, M. (2011). The value of social dynamics in online product
ratings forums. Journal of Marketing Research, 48(3), 444–456.
Morgan, N. A., Zou, S., Vorhies, D. W., & Katsikeas, C. S. (2003). Experiential
and informational knowledge, architectural marketing capabilities, and the
adaptive performance of export ventures: A cross-national study. Decision
Sciences, 34(2), 287–321.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of
satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
O’Neil, T. (2015). Survey confirms the value of reviews, provides new insights.
Retrieved from https://www.powerreviews.com/blog/survey-confirms-the-
value-of-reviews/
Qiu, L., Pang, J., & Lim, K. H. (2012). Effects of conflicting aggregated rating on
eWOM review credibility and diagnosticity: The moderating role of review
valence. Decision Support Systems, 54(1), 631–643.
Racherla, P., Mandviwalla, M., & Connolly, D. J. (2012). Factors affecting con-
sumers’ trust in online product reviews. Journal of Consumer Behaviour,
11(2), 94–104.
Ray, S., Ow, T., & Kim, S. S. (2011). Security assurance: How online service
providers can influence security control perceptions and gain trust. Decision
Sciences, 42(2), 391–412.
Rohrer, T. A. (2010). The reverse thing, Volume XVII. Bloomington, IN: iUniverse.
Schoorman, F. D., Mayer, R. C., & Davis, J. H. (2007). An integrative model
of organizational trust: Past, present, and future. Academy of Management
Review, 32(2), 344–354.
Smith, D., Menon, S., & Sivakumar, K. (2005). Online peer and editorial rec-
ommendations, trust, and choice in virtual markets. Journal of Interactive
Marketing, 19(3), 15–37.
Sparks, B. A., Perkins, H. E., & Buckley, R. (2013). Online travel reviews as persuasive communication: The effects of content type, source, and certification logos on consumer behavior. Tourism Management, 39, 1–9.
Sparks, B. A., So, K. K. F., & Bradley, G. L. (2016). Responding to negative online
reviews: The effects of hotel responses on customer inferences of trust and
concern. Tourism Management, 53, 74–85.
Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invari-
ance in cross-national consumer research. Journal of Consumer Research,
25(1), 78–90.
Sun, M. (2012). How does the variance of product ratings matter? Management
Science, 58(4), 696–707.
Weise, E. (2015). Amazon cracks down on fake reviews. USA Today. Retrieved from https://www.usatoday.com/story/tech/2015/10/19/amazon-cracks-down-fake-reviews/74213892/
Wolf, J. R., & Muhanna, W. A. (2011). Feedback mechanisms, judgment bias, and
trust formation in online auctions. Decision Sciences, 42(1), 43–68.
Wu, K., Noorian, Z., Vassileva, J., & Adaji, I. (2015). How buyers perceive the
credibility of advisors in online marketplace: Review balance, review count
and misattribution. Journal of Trust Management, 2(1), 1–18.
Xu, Q. (2014). Should I trust him? The effects of reviewer profile characteristics
on eWOM credibility. Computers in Human Behavior, 33(2), 136–144.
Yan, T., & Kull, T. J. (2015). Supplier opportunism in buyer–supplier new product development: A China–US study of antecedents, consequences, and cultural/institutional contexts. Decision Sciences, 46(2), 403–445.
Yan, T., & Nair, A. (2016). Structuring supplier involvement in new product
development: A China–US study. Decision Sciences, 47(4), 589–627.
Yin, D., Mitra, S., & Zhang, H. (2016). When do consumers value positive vs.
negative reviews? An empirical investigation of confirmation bias in online
word of mouth. Information Systems Research, 27(1), 131–144.
Zhang, C., Viswanathan, S., & Henke, J. W. (2011). The boundary spanning capa-
bilities of purchasing agents in buyer–supplier trust development. Journal of
Operations Management, 29(4), 318–328.
Zuckerman, M., Driver, R., & Koestner, R. (1982). Discrepancy as a cue to actual
and perceived deception. Journal of Nonverbal Behavior, 7(2), 95–100.
Zuckerman, M., Kernis, M. R., Driver, R., & Koestner, R. (1984). Segmentation
of behavior: Effects of actual deception and expected deception. Journal of
Personality and Social Psychology, 46(5), 1173–1182.
Association Services Marketing SIG in 2014, the “Best Reviewer Award” from
Journal of Service Research in 2015, and Thomas J. Campbell ’80 Professorship
from Lehigh University in 2014. She is currently serving on the editorial review
board of Journal of Service Research.