Online Review Characteristics and Trust: A Cross-Country Examination


Decision Sciences © 2018 Decision Sciences Institute

Volume 50 Number 3
June 2019

Online Review Characteristics and Trust: A Cross-Country Examination
Beibei Dong†
Department of Marketing, College of Business and Economics, Lehigh University, Bethlehem,
PA, 18015, e-mail: bdong@lehigh.edu

Mei Li
Department of Supply Chain Management, Eli Broad College of Business, Michigan State
University, N334 Business Complex, East Lansing, MI, 48824

K. Sivakumar
Department of Marketing, College of Business and Economics, Lehigh University, Bethlehem,
PA, 18015

ABSTRACT
Consumers and companies are increasingly concerned about fake reviews posted online.
To make appropriate managerial decisions about online reviews, it is crucial to under-
stand what drives consumer trust in online reviews. Research examining the linkage
between online reviews and trust is sporadic and lacks a comprehensive framework.
To address this gap, this research investigates the individual and joint impact of three
review attributes—valence, rationality, and source—on the benevolence, ability, and
integrity dimensions of trustworthiness of the reviewer, which further determines trust
in online reviews. Using behavioral experiments, the study finds that positive reviews,
factual reviews, and reviews appearing on social networks lead to perceptions of greater
benevolence, ability, and integrity than negative reviews, emotional reviews, and reviews
appearing on retailer sites, respectively. In addition, review rationality and review source
moderate the link between review valence and the dimensions of reviewer trustworthi-
ness: the effect of valence is stronger for emotional reviews than for factual reviews and
for retailer sites than for social networks. Furthermore, the study analyzes data from
the United States and China to examine generalizability and cultural nuances of the
proposed relationships. This study provides important insights into consumer trust in
online reviews and offers guidelines for decision making in communication strategy,
system design, and operations by manufacturers, retailers, service providers, and online
platforms to more effectively manage online reviews. [Submitted: November 14, 2017.
Revised: August 22, 2018. Accepted: August 25, 2018.]

We appreciate the constructive guidance of the editors-in-chief, senior editor, associate editor, and anonymous reviewers. We are grateful for the generous funding support from the Lehigh University Faculty Research Grant and CIBER at Michigan State University.
† Corresponding author


Subject Areas: Cross-Country, Online Reviews, Review Rationality, Review Source, Review Valence, Trust, and Trustworthiness.

INTRODUCTION
With the burgeoning growth of the Internet, online consumer reviews about goods
and services have become a ubiquitous phenomenon (Hayne, Wang, & Wang, 2015;
Hong, Xu, Wang, & Fan, 2017). According to PowerReviews (O’Neil, 2015), 95%
of the people surveyed said they read online reviews regularly for researching
and purchasing consumer products. Indeed, the consumer decision journey has
dramatically changed with the advent of technology (Lemon & Verhoef, 2016),
and these consumer-to-consumer influences have become an important force in
shaping consumer decisions (Li, Choi, Rabinovich, & Crawford, 2013). Therefore,
a more detailed understanding of online reviews will enable firms to make better
decisions about managing online reviews with implications in communication,
systems design, and operations.
Despite the increasing importance of online reviews, criticisms about their
credibility exist (Weise, 2015). With trust being “the single most important variable
influencing interpersonal and inter-organizational behavior” (Zhang, Viswanathan,
& Henke, 2011, p. 318), understanding what factors drive consumer trust in online
reviews is crucial to arrive at appropriate decisions in leveraging online reviews
(Wolf & Muhanna, 2011). Yet academic research examining the link between
online reviews and trust is sporadic and lacks a comprehensive framework. To
address this research gap, this study examines three research questions: (1) How
do review characteristics (valence, rationality, and source) influence the dimensions
of reviewer trustworthiness (benevolence, ability, and integrity)? (2) Are there any
interaction effects among these review characteristics? And (3) do the proposed
relationships hold in the United States and China, two important markets with very
different cultures?
By addressing these research questions, we make several contributions to the
literature on online reviews and offer decision-making guidelines for managers.
First, research assessing online reviews often focuses on a single aspect of reviews
(e.g., review valence); this study adopts a multidimensional approach by examining
the independent and joint impacts of three review attributes. Second, this research
is the first to link review attributes to the three dimensions of trustworthiness to
gain a comprehensive understanding of the role of online reviews on trust. Third,
given the importance of globalization and technology, multinational studies are
particularly valuable; thus, we conduct the study in the United States and China, to
enhance the generalizability of our theoretical framework and to examine cultural
nuances.

STATE OF KNOWLEDGE AND RESEARCH GAPS


We conducted a systematic search of articles on online reviews published between
2000 and 2017. We began with ProQuest’s ABI/Inform database to conduct a
keyword-based digital search. Then, given the interdisciplinary nature of online reviews, we identified 18 journals from the fields of marketing, operations, information systems, service management, and tourism that publish research on online reviews. We compared this list of journals with those covered by
ABI/Inform, identified journals not covered by that database or journals with
recent issues (as of December 2017) not covered, and then performed a manual
search to supplement the digital search. We gradually narrowed our search to
articles (1) focusing on online reviews, (2) examining characteristics of online
reviews, and (3) examining trust or related terms in online reviews.i Web Appendix
1 provides additional details depicting the procedure followed in our search.
As shown in Web Appendix 1, extensive research has examined online reviews;
however, we focus on a smaller set of articles examining trust in online reviews (Table 1).
Our review of the current state of the literature reveals five gaps that this
study intends to address. First, as Figure AF1 shows, although there is a large body
of literature on online reviews (16,222 articles), when we narrow the research to
studies examining particular online review characteristics of our interest (review
valence, rationality, and source), the number of articles drops dramatically to
116. Furthermore, the majority of the research focuses on evaluating online re-
views’ effect on sales (Chevalier & Mayzlin, 2006) and other performance-related
variables (Moe & Trusov, 2011), whereas research examining consumer trust in
online reviews is rather limited (19 articles; see Table 1).
Second, a close examination of Table 1 reveals that the conceptualization and operationalization of the trust construct have not been consistent. Previous research has used various terminologies to denote trust, including trustworthiness
(Mackiewicz, Yeats, & Thornton, 2016), credibility (Xu, 2014), usefulness (Liu &
Park, 2015), and helpfulness (Yin, Mitra, & Zhang, 2016); however, the concep-
tual domain of trust and its distinction from and connection with other concepts
(e.g., trustworthiness) are not accurately theorized. Further, the operationalization
of trust is also inconsistent in the literature.
Third, even the limited research examining trust in online reviews does
not adopt a systematic framework. Our research expands the knowledge base
by proposing three dimensions of trustworthiness of the reviewer as the the-
oretical base for forming trust in online reviews. Applying the established re-
search on organizational trust (e.g., Mayer, Davis, & Schoorman, 1995) to the
online review setting, we not only differentiate trustworthiness and trust as sep-
arate but related constructs but also propose the theoretical linkages between
these constructs, clarifying the confusion in existing research about trust in online
reviews.
Fourth, studies investigating review characteristics mostly focus on review
valence (e.g., Chang et al., 2013; Yin et al., 2016), whereas few studies have ex-
amined review source and/or review rationality. Further, no studies have examined

i There are research papers that focus on trust toward online retailers; however, the focus of our article is trust in online reviews (rather than in online retailers). These are conceptually different constructs, and the formation of trust toward online reviewers differs from that toward online retailers. Please refer to Web Appendix 1 for a more detailed discussion of this issue.
Table 1: A summary of research on online product reviews and trust.

Column key: Article | Review characteristics examined (V/R/S) | Other antecedents of trust | Trust-related variables | Operationalization of trust | Country | Contributions (INT, TR/TW, 3-FAC, CCC)

Kim, Chung, and Lee (2011) | — | Web site functionality, security, satisfaction | Web site trust, Web site trustworthiness | M: reliability, integrity | South Korea | No, No, No, No
Cheung et al. (2012) | x | Argument quality, source credibility, review consistency, sidedness | Review credibility | M: believable, factual, accurate, credible | U.S. | No, No, No, No
Lee, Park, and Han (2011) | N/A | — | Review credibility, brand trust | M: credible | South Korea | No, No, No, No
Qiu et al. (2012) | x | Conflict review rating | Review credibility | M: trustworthy, reliable, credible | China | No, No, No, No
Racherla et al. (2012) | x | Background similarity | Review trust | M: decision trust | U.S. | No, No, No, No
Bosman et al. (2013) | x | Platform (Amazon vs. other), text length, time, star ratings | Review credibility | % rating review as helpful | U.S. | No, No, No, No
Chang, Rhodes, and Lok (2013) | x | — | Brand trust | M: brand reliability, intentionality | China | No, No, No, No
Jensen et al. (2013) | x | Review sidedness, lexical complexity, affect intensity | Reviewer credibility | M: trustworthy, believable, credible, accurate | U.S. | No, No, No, No
Sparks, Perkins, and Buckley (2013) | — | Content vagueness, reviewer identity, logo | Brand trust | M: retailer integrity, trustworthy, believable, confident | Australia | No, No, No, No
Xu (2014) | x | Reviewer reputation, profile picture | Review credibility | M: credible, believable, and trustworthy | U.S. | No, No, No, No
Filieri, Alguezaui, and McLeay (2015) | — | Reviewer credibility, info quality, site quality, satisfaction, experience | Web site trust | M: honest, trustworthy | Europe | No, No, No, No
Liu and Park (2015) | x | Reviewer identity, expertise, reputation; review ratings, length, enjoyment, readability | Review usefulness | Number of useful votes | U.S. | No, No, No, No
Wu et al. (2015) | — | Review balance (proportion of positive reviews) | Reviewer credibility | M: dependable, honest, reliable, sincere, and trustworthy | China | No, No, No, No
Bilgihan, Barreda, Okumus, and Nusair (2016) | — | Web site ease of use, utilitarian beliefs, subjective norms | Web site integrity | M: honest, sincere, not overcharging | U.S. | No, No, No, No
Mackiewicz et al. (2016) | x | Brand vs. retailer site | Web site trustworthiness | M: credible | U.S. | No, No, No, No
Sparks et al. (2016) | — | Organizational response to negative reviews | Brand trustworthiness | M: brand trustworthiness | Australia | No, No, No, No
Yin et al. (2016) | x | Average product rating | Review helpfulness | % rating review as helpful | U.S. | No, No, No, No
Banerjee et al. (2017) | x | Reviewer positivity, involvement, experience, reputation, competence, sociability | Reviewer trustworthiness | Number of followers | U.S. | No, No, No, No
Duffy (2017) | N/A | — | Web site, reviewer, and self-trust | M: 3-factor trustworthiness measures | U.S. | No, No, Yes, No
Our research | V, R, S | — | Reviewer trustworthiness, review trust | M: 3-factor trustworthiness measures | U.S., China | Yes, Yes, Yes, Yes

Note: V = valence, R = rationality, S = source; N/A = not applicable; — = none listed; Contributions: INT = interaction of review characteristics, TR/TW = distinguishing trust and trustworthiness, 3-FAC = three-factor trustworthiness model, CCC = cross-country comparison; M = the construct was measured using various multipoint scales.

Figure 1: A model of review characteristics, trustworthiness, and trust.

both the individual and joint effects of these attributes on trustworthiness to offer
a more complete and contingent view of trust in online reviews (see Table 1).
Fifth, research on online reviews has primarily been conducted in developed
countries (Yin et al., 2016; Banerjee, Bhattacharyya, & Bose, 2017); limited re-
search has examined the generalizability of the knowledge to other countries such
as China (Wu, Noorian, Vassileva, & Adaji, 2015). This study tests the proposed
framework in the United States and China, offering guidance to U.S. retailers
exploring the Chinese market (e.g., Costco, Amazon.com) and Chinese retailers
exploring the U.S. market (e.g., Alibaba) (Wolf & Muhanna, 2011).

CONCEPTUAL FRAMEWORK
Figure 1 depicts our conceptual framework. Building on the three-factor model by
Mayer et al. (1995), we examine how the three review attributes individually and
jointly influence the three dimensions of reviewer trustworthiness, which in turn
determine trust in reviews.
Extant literature suggests that the trustworthiness of a source depends directly
on how individuals perceive and respond to the message in the review (Grewal,
Gotlieb, & Marmorstein, 1994); the more trustworthy the reviewer, the lower the uncertainty of the review (Racherla, Mandviwalla, & Connolly, 2012). Retailers use multiple incentives to encourage promotional chat and manipulate online
reviews (Chevalier & Mayzlin, 2006), and the lack of nonverbal and social cues in
the online environment compounds the issue of reviewer genuineness (Ray, Ow, &
Kim, 2011). A particular challenge in online settings is that the party who writes
the review (review-writer) and the party who reads the review (review-reader) have
neither a history nor an expectancy of future interactions (Racherla et al., 2012).
This is similar to the communication between strangers, and thus research exam-
ining communication between strangers could provide insights for our research
(Bargh, McKenna, & Fitzsimons, 2002). Next we explain the key constructs in our
model.

Review Characteristics
Online consumers seek credible information to facilitate decision making (Xu,
2014), and they often employ both active and passive strategies to do so (Jacoby
et al., 1994). In the context of online reviews, active strategies involve evaluating
message content to determine source expertise and bias (Filieri et al., 2015). Review
content has two dimensions: review valence, or the positive or negative orientation
of the information (Yin et al., 2016), and review rationality, or the extent to
which the argument is fact-based, objective, and verifiable (Cheung, Sia, & Kuan,
2012). Given the anonymous nature of online communication and the potential
for deception, consumers sometimes employ passive strategies by searching for
peripheral cues to assess the credibility of a review (Filieri et al., 2015). Review
source (i.e., where a review appears) is one such peripheral cue (Bosman, Boshoff,
& van Rooyen, 2013). Online reviews can appear on a retailer’s Web site (e.g.,
Amazon.com) or on social networks (e.g., Facebook, Twitter).

Trustworthiness of the Reviewer


In the context of online reviews, we define reviewer trustworthiness as the extent
to which the review-writer can be trusted (Mayer et al., 1995). This definition is
consistent with the literature (e.g., Mayer et al., 1995; Colquitt, Scott, & LePine,
2007). Mayer et al. (1995) further suggest that trustworthiness is a multifaceted
construct comprising three factors: benevolence, ability, and integrity. Benevolence
is the extent to which a trustor believes that a trustee wants to do good to others,
beyond an egocentric profit motive (Mayer & Davis, 1999). A trustor trusts what
a trustee states because he or she believes that the trustee is kind and agreeable,
exhibits goodwill toward others, and cares for others (Mayer & Davis, 1999).
Ability refers to a group of skills, competencies, and characteristics that confer
influence on a party (Mayer & Davis, 1999). Ability-based trustworthiness derives
from a trustor’s confidence in the competence of a trustee in providing a quality
review. Integrity describes the extent to which a trustee is believed to adhere to
sound moral and ethical principles (Mayer & Davis, 1999). A person with integrity
is typically thought of as being honest and truthful (Audi & Murphy, 2006).
Integrity-based trustworthiness stems from a trustor’s confidence that a trustee
adheres to principles that make him or her dependable and reliable (Jarvenpaa,
Knoll, & Leidner, 1998).
Most of the trustworthiness literature is situated in organizational and other
interpersonal research settings (e.g., Mayer et al., 1995; Mayer & Davis, 1999).
Given the recency of the online review phenomenon, our research is one of the
first to systematically conceptualize the detailed dimensions of trustworthiness in
the online review context. We argue that the trustworthiness framework explored
in the organizational and other interpersonal contexts can be extended to online
review settings because individual customers try to make informed decisions about
reviewers, which expands the interpersonal notion of trust (Duffy, 2017) that
has been examined in other contexts (e.g., trust between the salesperson and the
customer, trust in new product teams, trust in self-organizing teams, trust in mass
media) (Johnston, McCutcheon, Stuart, & Kerwood, 2004).

Trust in the Review


Trust refers to the willingness of a trustor to be vulnerable to the actions of a trustee,
based on the expectation that the trustee will perform a particular action (Mayer
et al., 1995). Trust comprises two factors: (1) the intention to accept vulnerability
and (2) positive expectations (Colquitt et al., 2007). We follow the literature and
define trust in the review as the willingness of online consumers to believe the
written commentaries of reviewers and to rely on them with the expectation that
the reviewers are trustworthy (Mayer et al., 1995). This definition suggests that
trustworthiness of the reviewer is an antecedent of trust in the review, consistent
with the literature (Colquitt et al., 2007).

HYPOTHESES DEVELOPMENT
Effect of Review Valence on Reviewer Trustworthiness
We hypothesize how review valence influences the benevolence and integrity
dimensions of reviewer trustworthiness. First, in an online review context, a positive
review projects a greater sense of benevolence than a negative review (Duffy, 2017).
Research shows that a person’s online expressions, in the form of online reviews,
are reflections of his or her “true self” (Bargh et al., 2002). A review that is filled
with kind words and emphasizes what is good or laudable reflects internal kindness
of the reviewer and his or her willingness to do good to others (Griskevicius et al.,
2007). As both kindness and willingness to do good to others are distinguishing
features of benevolence (Mayer et al., 1995), it naturally follows that positive
statements lead to the perception of benevolence (Xu, 2014). Conversely, a review
that is demeaning and expresses criticism is likely to reflect an unkind reviewer and
works against the image of doing good to others (Banerjee et al., 2017). Therefore,
we propose that positive reviews garner stronger perceptions of benevolence than
negative reviews.
Second, review valence appeals to the integrity dimension of trustworthiness.
Recent research shows that consumers exhibit positive bias in receiving word of
mouth (WOM) (Martin, 2017)—they simply view positive product statements as
more trustworthy. The underlying logic can be explained by research on behavioral
cues to discern truth-tellers from liars (DePaulo et al., 2003). One behavioral cue
people often use to decide whether to trust a stranger is the valence of his or
her statement (DePaulo et al., 2003). Research shows that liars tend to give more
negative statements and complaints, sound less pleasant, and look less friendly than
truth-tellers (Zuckerman, Kernis, Driver, & Koestner, 1984). Given that reviewers
and readers interact more like strangers (Bargh et al., 2002), readers are likely to
rely on this cue to evaluate reviewers’ integrity (Bilgihan et al., 2016). Thus, we
propose that people tend to view reviewers who express positive reviews as more
honest than those who publish negative reviews.
H1: Review valence influences reviewer trustworthiness, such that a positive review
leads to perceptions of (a) greater benevolence and (b) greater integrity than a
negative review.

Effect of Review Rationality on Reviewer Trustworthiness


Review rationality signals the quality of the argument and influences all three di-
mensions of reviewer trustworthiness (Cheung et al., 2012). First, review rational-
ity influences benevolence. Factual reviews provide objective product evaluations
backed by verifiable facts and thus help other consumers make more informed
decisions (Griskevicius et al., 2007). Conversely, emotional reviews focus more on
expressing reviewers’ emotions and less on valuable product information. Given
the heterogeneity in consumer preferences, emotional statements lacking factual
support often make it difficult to assess whether the information shared by the re-
viewer is based on individual preferences or is an accurate reflection of the product
(Sparks et al., 2013). As such, these messages tend to be less helpful in facilitating
others’ decision making (Liu & Park, 2015). Moreover, as factual reviews require
great effort to collect, analyze, and synthesize information and then present infor-
mation in a logical and coherent way, the need for mental resources in crafting
factual reviews implies that the reviewers are acting in good faith to provide valu-
able information to help other consumers make more informed decisions (Racherla
et al., 2012). This perception of helpfulness projects an image of doing good to
others (i.e., benevolence; Mayer et al., 1995). By contrast, consumers perceive
reviewers who focus more on exhibiting their emotions as more self-centered and
thus less helpful (Banerjee et al., 2017).
Second, review rationality positively affects ability. People are generally
biased when forming perceptions of intellectual ability. For example, emotional
reviews, which often use extreme statements and share subjective feelings of the re-
viewers rather than objective facts, often leave the impression that the reviewers are
incapable of processing information logically and arriving at accurate judgments
(Jensen, Averbeck, Zhang, & Wright, 2013). Such reviewers are also perceived to
be less intelligent (Rohrer, 2010), whereas reviewers who post factual descriptions
are perceived to be more intelligent (Banerjee et al., 2017).
Last, review rationality influences reviewer integrity. In essence, integrity
reflects the honesty of the reviewer (Mayer et al., 1995). Factual, objective, and
verifiable claims tend to leave the impression that the reviewers have tried to
process product information accurately, have sound rules and principles when
evaluating the product, and have shared their observations candidly and honestly
when disclosing their reasoning process (Wu et al., 2015). By contrast, claims that
are less rational (e.g., emotional responses) and incorporate less compelling tales
(e.g., using weak logical structure) project a sense of falsification (DePaulo et al.,
2003). Research suggests that compared with truth-tellers, liars provide fewer
concrete and verifiable details in their stories (DePaulo et al., 2003) and obfuscate
their messages with vagueness and nonspecificity (Zuckerman, Driver, & Koestner,
1982), often in an effort to reduce the chances of being challenged (DePaulo et al.,
2003). Therefore, compared with emotional reviews, people tend to have more
confidence in the integrity of reviewers who share more factual reviews.
H2: Review rationality influences reviewer trustworthiness, such that a factual
review leads to perceptions of (a) greater benevolence, (b) greater ability, and
(c) greater integrity than an emotional review.

Effect of Review Source on Reviewer Trustworthiness


Depending on where the review appears (e.g., retailers’ transaction Web sites,
social networks), review source can affect benevolence and integrity (Mackiewicz
et al., 2016). Research suggests that the nature of the relationship between sender
and receiver is an important predictor of the sender’s integrity (Buller & Burgoon,
1996). Interpersonal relationships often operate in different circles and social ties
(Gao, Ballantyne, & Knight, 2010). Strangers providing reviews on a retailer’s
Web site and acquaintances on social networks have different levels of relationship
familiarity (Burgoon, Buller, Floyd, & Grandpre, 1996) and belong to different
circles or social ties (Racherla et al., 2012). Research indicates that relationship
familiarity between the review-writer and review-reader can improve readers’
ability to detect deception (Burgoon et al., 1996) and influences the interpretation
of a reviewer’s motive for posting a review and thus the credibility of the review
(Buller & Burgoon, 1996). An inner circle is often represented by close ties to
family members and friends (Granovetter, 1983), whereas an outer circle often
reflects instrumental ties that include strangers. Research shows that compared
to inner-circle members, people tend to perceive outer-circle members as less
honest (McAllister, 1995). When sharing product reviews, people often view those
who are in their social networks as more sincere, honest, and less self-interested
than strangers who post reviews on a retailer’s Web site (Buller & Aune, 1987).
Moreover, reviews posted on retail sites are subject to the influence and even
manipulation of the retailers and thus are less independent and have a greater
possibility of conflicts of interest (Bosman et al., 2013). The presence or absence of self-interest in crafting a review also influences inferences of reviewer integrity.
Review source also influences the perception of benevolence. Membership
in a social network is often by self-selection. People have family and friends, who
exhibit similar and desirable characteristics, in their inner circle (Buller & Aune,
1987). Given the closer relationships with people on social networks than with
strangers on retailing sites, people tend to believe that the former are more likely
to have good motives to share their candid views (Buller & Burgoon, 1996) and
help them make better decisions (Mackiewicz et al., 2016).
H3: Review source influences reviewer trustworthiness, such that a review posted
on one’s social network leads to perceptions of (a) greater benevolence and (b)
greater integrity than a review posted on a retailer site.

Moderating Role of Review Rationality and Review Source


Review valence is one of the most researched review characteristics in the literature,
given its importance in influencing quality perceptions, purchase decisions, and
WOM (Berger, 2014); therefore, we further explore how the other two review
characteristics (rationality and source) may moderate the effect of valence. H1
postulates that review valence positively affects benevolence and integrity. We
propose this effect is contingent on review rationality and source.
We posit that the perceptual differences between positive reviews and nega-
tive reviews are more salient for emotional reviews than for factual reviews; that
is, review rationality reduces the effect of valence on both benevolence and in-
tegrity. Although people who write negative reviews are less likely to project a
kind, honest, and agreeable image than those who write positive reviews (Banerjee
et al., 2017), factual arguments presented in a rational and coherent manner signal
the quality of review arguments (Cheung et al., 2012) and provide more impor-
tant information to assess reviewer trustworthiness (Buller & Burgoon, 1996). In
other words, reviews packed with factual information, despite being negative, help
alleviate concerns that the reviewer is hostile, not agreeable, and less truthful (De-
Paulo et al., 2003) and thus reduce the disadvantage of negative reviews (Sparks,
So, & Bradley, 2016). Therefore, consumers rely less on review valence to assess trustworthiness, resulting in a weakened effect of review valence.

H4: Review rationality moderates the effect of valence on perceptions of (a) benev-
olence and (b) integrity such that the effect of valence is stronger for emotional
reviews than for factual reviews.

Similarly, we propose that review source moderates the effect of valence on perceived benevolence and integrity, such that the effect of valence is stronger
when the review is posted on retailer sites than social networks. As discussed
previously, the rationale underlying the link between review valence and perceived
benevolence and integrity is that when interacting with strangers, people tend
to believe that truth-tellers make more positive statements than liars (DePaulo
et al., 2003). However, the use of verbal cues (i.e., review valence) to judge
others stems from the lack of ability to assess the actual content and quality of
the arguments provided by strangers (Racherla et al., 2012). This is not the case
when people interact with their social networks, about whom they have better
knowledge and information to evaluate their trustworthiness (Buller & Aune,
1987). Therefore, a product review, despite being negative, has a weaker association
with dishonesty if posted by people on social networks (Mackiewicz et al., 2016).
Likewise, we argue that people largely believe that friends, family members, and
selected acquaintances are less likely to intentionally harm them (Burgoon et al.,
1996). People infer that those who are inside their inner circle are motivated to share
their thoughts and help, whether the review is positive or negative. Therefore, there
is less concern about viciousness and “unkindness” in a negative review posted
by members of one’s social network. Thus, the advantage of positive reviews over
negative reviews diminishes when the reviews are shared on social networks (vs.
retailer sites), showing a weakened effect of valence on trustworthiness.

H5: Review source moderates the effect of valence on perceptions of (a) benevo-
lence and (b) integrity such that the effect of review valence is stronger for retailer
sites than for social networks.

Link between Trustworthiness and Trust


As mentioned previously, the literature treats trustworthiness and trust as concep-
tually different constructs (Colquitt et al., 2007). The separation of these two con-
structs is important to gain a better understanding of the psychological processes
that form trust. That being said, the positive association between trustworthiness
and trust is well established in the literature (e.g., Mayer et al., 1995). Therefore,
to holistically examine the paths between review characteristics and trust in our
framework, we propose the following:

H6: The three dimensions of reviewer trustworthiness—benevolence, ability, and
integrity—are positively associated with trust in the review.
548 Online Review Characteristics and Trust

METHODOLOGY
Research Design and Context
We used a scenario-based experiment to collect data in two countries. Given
that most research on consumer reviews uses secondary data when examining
their impact on sales and firm performance (Sun, 2012), use of experiments im-
proves conceptualization accuracy and establishes causal relationships among key
constructs (Bendoly, Bachrach, & Perry-Smith, 2011). We followed the exper-
imental methods used in behavioral operations research (Bendoly et al., 2011;
Bendoly, 2014) and research on trust in online reviews (e.g., Sparks et al., 2013;
Xu, 2014) to examine the psychological process of how consumers trust online
reviews.
We used online clothes purchases as the research context. Statistics show
that clothing dominates online retail in both the United States and China
(www.statista.com). This context allows us to explore a type of product that is
significant to online retailers, familiar to online consumers, and comparably
popular in both countries. We reviewed more than 200 real online clothing
reviews from Amazon.com and Taobao before preparing our experimental
scenarios. A 2 (valence) × 2 (rationality) × 2 (source) between-subjects exper-
imental design was used. The scenario featured the product review of a jacket.
We chose this product because it is gender neutral and has comparable shopping
patterns across the two countries. Respondents were asked to imagine that they
were planning to purchase a denim outfit and were then randomly assigned to
one of the eight conditions to read a product review. After reviewing myriad real
consumer reviews, we identified four major decision attributes that appear most
frequently in consumer clothes reviews: material, shipping, design, and fit. In-
corporating these decision factors, we manipulated review valence at two levels:
positive reviews (with all four attributes positive) and negative reviews (with all
four attributes negative). We manipulated review rationality at two levels: factual
review (e.g., describing facts) and emotional review (e.g., venting emotion and
feelings). We also manipulated review source at two levels: a review appearing on
an online retail site and a social network site. The manipulation included using two
made-up names to differentiate the sites—eRetail.com and Friendbook—along
with an explanation of the sites. Detailed scenario descriptions appear in Web
Appendix 2.
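The assignment mechanics of the 2 × 2 × 2 between-subjects design can be sketched in a few lines (an illustrative sketch, not the survey platform's actual logic; the condition labels and seed are ours):

```python
import itertools
import random

# Enumerate the eight experimental cells of the 2 (valence) x 2 (rationality)
# x 2 (source) between-subjects design.
CONDITIONS = list(itertools.product(
    ["positive", "negative"],          # review valence
    ["factual", "emotional"],          # review rationality
    ["eRetail.com", "Friendbook"],     # review source (the made-up site names)
))

def assign(participant_ids, seed=42):
    """Randomly assign each participant to one of the eight cells.

    Survey platforms typically balance cell sizes; simple random
    assignment, as here, only approximates balance.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

assignment = assign(range(325))  # e.g., the 325 usable U.S. responses
```
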

Data Collection and Sample


In the United States, we collected data through Amazon Mechanical Turk (MTurk),
one of the most prevalent crowdsourcing platforms for recruiting general
consumers for academic studies (Goodman & Paolacci, 2017). Only U.S.
residents were qualified to participate in the study. A total of 343 people started
the experiment, and 325 of them completed the survey. All completed surveys
passed the quality checks and were deemed usable. Of the participants, 53.8%
were women; ages ranged from 19 to 73 years, with an average age of 36; and
43 of the 50 states were represented. Respondents were dispersed geographically,
with 21% living in rural areas, 55% in suburban areas, and the remainder in
urban areas. In addition, 40% had a college degree, 11.3% had a graduate degree,
and the remainder had a high school degree or lower. Regarding ethnicity, 8.9%
were African American, 73.2% Caucasian, 5.8% Hispanic, 8.3% Asian, and 3.8%
other.
We employed a similar sampling approach in China and used Sojump, one of
the most prominent crowdsourcing companies for online surveys in China. Sojump
provides researchers with greater control of sample representativeness and response
quality. A total of 1,790 participants were solicited to fill out the survey, out of
which 577 accepted the invitation and answered the survey. A total of 346 responses
were considered valid by passing Sojump’s established screening criteria. Among
them, 55.2% of respondents were women, and ages ranged from 20 to 63 years,
with a mean of 32. Respondents came from 24 of 34 provinces, and 96.8% were
from nonrural areas. Finally, 79% had a college degree, 7% had a graduate degree,
and the remaining 14% had a high school degree or lower. Following the strategies
recommended by existing research (e.g., Goodman & Paolacci, 2017; Hulland &
Miller, 2018), we adopted various procedures in our research design prior to data
collection and data quality screening post data collection to ensure reliability and
validity of the data we collected from both MTurk and Sojump. Additional details
about these procedures are elaborated in Web Appendix 3.

Questionnaire and Measurement


We experimentally manipulated the three review characteristics (valence, ratio-
nality, and source). The literature does not have measures for trustworthiness in
the online review context. Therefore, we adapted the classic measurement scale of
Mayer and Davis (1999) to the online review context to assess benevolence (the
perception that the reviewer cares for the interests of review readers), ability (the
reviewer’s skills and competency in providing successful product reviews), and
integrity (the perception that the reviewer adheres to sound moral standards and
ethical principles). We carefully assessed each item of Mayer and Davis (1999)
to ensure their appropriateness to our research context—a consumer-to-consumer
online context that is mostly devoid of previous interactions and long-term rela-
tionships. We kept most items of Mayer and Davis (1999) and dropped only a few
that were inappropriate to our context. Responses to these items require knowledge
from long-term relationships in the workplace. We also added one item for benev-
olence (“The Reviewer cares for my interest”) and two items for integrity (i.e.,
“The Reviewer adheres to ethical principles” and “The Reviewer exhibits sound
moral character”) to align with the definitions of these constructs in our research.
Additional details appear in Web Appendix 4. Finally, we used the two items from
Earley (1986) to measure trust in the review. All items used seven-point Likert
scales.
Before reading the scenario, respondents indicated their general impressions
of purchasing clothes online using two items adapted from Dong, Sivakumar,
Evans, and Zou (2015). According to expectation–disconfirmation theory (Oliver,
1980), when consumers expect good quality of online purchases, positive reviews
confirm their expectations and thereby increase their trust, whereas negative re-
views disconfirm their expectations, and thus consumers are more likely to regard
the reviews as a one-time instance and less trustworthy. To account for the in-
fluence of expectation, we included it as a control variable, in addition to age,
gender, income, and education. We developed the original script and question-
naire in English. Then, two native Chinese speakers translated the questionnaire
into Chinese, which was then translated back into English by two other native
Chinese speakers; this translation/back-translation approach ensured language
equivalence.

Analytical Procedure
We followed a systematic approach to compile the data, to ensure measurement
appropriateness, to test the hypotheses, and to examine nuances between the two
country samples. Specifically, we followed six steps: (1) we conducted manipu-
lation checks to ensure the scenarios were appropriate for the hypotheses (using
comparison of means across groups); (2) we performed confirmatory factor anal-
ysis (CFA) to ensure reliability and convergent and discriminant validity of all
measures (using EQS 6.2 for Windows); (3) we established measurement equiva-
lence between data collected from the two countries (using χ2 comparisons to test
for full factor structure equivalence in EQS 6.2); (4) we used analysis of covari-
ance (ANCOVA) to examine the individual and joint effects of the three review
attributes on the three review trustworthiness dimensions for both countries in
SPSS version 24; (5) we tested the relationship between trustworthiness and trust
using linear multiple regression analysis in SPSS version 24; and (6) we com-
bined the two country data to statistically assess the differences between the two
countries.

RESULTS
Manipulation Checks
All manipulation checks were conducted with seven-point scales. We assessed
the manipulation of review valence using two items: “The review I just read has
a favorable opinion of the product” and “The review reflects negatively on the
product.” The score was higher for positive than negative reviews (U.S.: Mpos =
6.56 vs. Mneg = 1.64, p < .001; China: Mpos = 6.30 vs. Mneg = 1.49, p < .001).
We used two items to evaluate the manipulation of review rationality, which
assessed whether the review focused on describing facts or emotions. Respondents
considered the review more rational in the factual than the emotional review condition
(U.S.: Memo = 2.94 vs. Mfact = 5.24, p < .001; China: Memo = 3.69 vs. Mfact
= 4.68, p < .001). The emotion literature suggests that emotional statements
can trigger different levels of arousal, so we used two items (unpleasant/pleasant
and calm/excited) to assess the arousal levels across the factual and emotional
conditions; we found no significant difference in arousal levels between the two
(U.S.: Arousalemo = 4.24 vs. Arousalfact = 3.99, p = .129; China: Arousalemo = 4.11
vs. Arousalfact = 3.95, p = .249). We deemed the manipulation of review source
successful by evaluating whether the site allows for direct purchase (U.S.: Mretail
= 6.24 vs. Msocial = 1.52, p < .001; China: Mretail = 6.51 vs. Msocial = 1.39, p <
.001). Responses to all manipulation check questions were significantly different
across experimental conditions in the expected direction, confirming successful
manipulations in both countries.
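Each of these manipulation checks reduces to a between-groups comparison of means; a minimal sketch with simulated ratings (the data below are illustrative, not the study's raw responses):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point valence-check scores for the two conditions
# (illustrative values; the paper reports M_pos = 6.56 vs. M_neg = 1.64
# for the U.S. sample).
positive = rng.normal(6.5, 0.6, size=160).clip(1, 7)
negative = rng.normal(1.6, 0.6, size=165).clip(1, 7)

# Independent-samples t-test (Welch's version, which does not assume
# equal variances across experimental cells).
t, p = stats.ttest_ind(positive, negative, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3g}")
```
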

Confirmatory Factor Analysis


Following a two-step modeling approach, we first assessed the measurement model
and then tested the hypotheses (Anderson & Gerbing, 1988). The review charac-
teristics were manipulated, so we did not include them in the CFA. Instead, we
included four constructs in CFA (i.e., benevolence, ability, integrity, and trust)
and ran separate CFAs for the two samples. To assess the fit of the measurement
model, we followed the multistep procedure that Bagozzi and Yi (1988) recom-
mend. First, we used elliptical reweighted least squares in EQS 6.2 to estimate
the model because the data had some relatively high kurtosis values. Second, the
model converged properly without any report of anomalies (e.g., condition codes,
improper solutions). Third, we evaluated the chi-square tests and the model fit
indexes, as Bagozzi and Yi (1988) recommend. Table 2, Panel A, lists the scales
and factor loadings, whereas measurement properties, correlations, and item de-
scriptive statistics appear in Panel B.
The measurement model for both samples suggested good fit (U.S.: χ2(71)
= 126.662, p < .001; normed fit index [NFI] = .992; comparative fit index
[CFI] = .994; incremental fit index [IFI] = .994; standardized root mean square
residual [SRMR] = .027; root mean square error of approximation [RMSEA]
= .049; China: χ2(71) = 115.133, p < .001; NFI = .993; CFI = .995; IFI =
.995; SRMR = .032; RMSEA = .042). For both samples, we then tested the
measurement model for its reliability, convergent validity, and discriminant va-
lidity. Cronbach’s α values and composite reliabilities, as displayed in Table 2,
Panel B, for all four latent factors are greater than .70, demonstrating reliabil-
ity of the measures (Hair, Black, Babin, & Anderson, 2010). Factor loadings
are all larger than .70, and average variances extracted (AVEs) are greater than
.50, suggesting convergent validity of the measurement model (Fornell & Lar-
cker, 1981; Hair et al., 2010). To test discriminant validity, we further com-
pared AVEs with the maximum shared variances (MSVs), the average shared
variances (ASVs), and interfactor correlations. Table 2 shows that all MSVs
and ASVs are smaller than the AVEs and that the square roots of the AVEs are
larger than the interfactor correlations, demonstrating discriminant validity (Hair
et al., 2010).
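These reliability and validity statistics follow standard formulas and can be reproduced from the loadings in Table 2; for example, for the U.S. Ability factor (a sketch of the composite reliability, AVE, and Fornell-Larcker computations):

```python
import math

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

def ave(loadings):
    """Average variance extracted = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Standardized loadings for Ability (U.S. sample) from Table 2, Panel A.
ability_us = [0.898, 0.921, 0.910]

cr = composite_reliability(ability_us)
v = ave(ability_us)
print(round(cr, 3), round(v, 3))  # 0.935 0.828, matching Table 2, Panel B

# Fornell-Larcker criterion: sqrt(AVE) must exceed the factor's correlations
# with the other constructs (.74, .74, .79 for Ability in the U.S. sample).
assert math.sqrt(v) > 0.79
```
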

Measurement Equivalence
In addition to the translational equivalence of the measures, we tested the measure-
ment equivalence between the U.S. and Chinese samples using two-group CFAs in
EQS 6.2 (Bagozzi & Yi, 1988). Again, we used elliptical reweighted least squares
to estimate the model. The factor structure was specified as invariant, and all the
factor loadings were constrained to be equal across the two samples, to achieve full
factor structure equivalence (Steenkamp & Baumgartner, 1998). We conducted a
chi-square difference test between the two-group model without constraints and
the model with constraints (i.e., invariant factor structure and factor loadings). We
found that the CFA model with all factor loadings constrained fit the data well (NFI
= .993; CFI = .994; IFI = .994; SRMR = .038; RMSEA = .045). The chi-square
difference between the two models was 13.3 (model without constraints: χ2(142)
= 241.790; model with constraints: χ2(152) = 255.090), with 10 degrees of

Table 2: Scale and measurement properties.


A. Scale and Factor Loadings

Construct (1 = strongly disagree, 7 = strongly agree) U.S. (β) China (β)


F1: Ability
The Reviewer is able to provide successful product reviews. .898 .761
The Reviewer is knowledgeable about writing consumer reviews. .921 .828
The Reviewer is well qualified. .910 .885
F2: Benevolence
The Reviewer cares for my interest. .890 .815
The Reviewer is concerned with other consumers’ welfare. .856 .791
My needs and desires are very important to the Reviewer. .831 .740
The Reviewer really looks out for what is important to me. .845 .833
F3: Integrity
The Reviewer adheres to ethical principles. .862 .830
The Reviewer exhibits sound moral character. .894 .835
The Reviewer tries hard to be fair in dealing with others. .908 .823
The Reviewer seems to be guided by sound principles. .916 .864
I like the Reviewer’s values. .914 .863
F4: Trust in the Review
The Review can be trusted. .967 .906
The Review can be counted on. .977 .892
χ2(df = 71) 126.662a 115.133a
NFI .992 .993
CFI .994 .995
IFI .994 .995
SRMR .027 .032
RMSEA .049 .042

B. Construct Validity, Correlation, and Descriptive Statistics

M SD CR α AVE MSV ASV 1 2 3

U.S. data
1. Ability 4.50 1.63 .935 .934 .828 .624 .573
2. Benevolence 4.19 1.44 .916 .917 .732 .640 .594 .74b
3. Integrity 4.73 1.39 .955 .954 .808 .689 .626 .74b .80b
4. Trust 4.63 1.77 .972 .971 .945 .689 .635 .79b .77b .83b
Chinese data
1. Ability 5.00 1.11 .865 .865 .683 .578 .504
2. Benevolence 4.46 1.15 .873 .872 .633 .504 .442 .61b
3. Integrity 4.98 1.03 .925 .925 .711 .608 .558 .75b .71b
4. Trust 4.90 1.19 .894 .891 .808 .608 .545 .76b .67b .78b
a p < .001; b p < .01.
Note: CR = composite reliability.

freedom (ns, p > .05), suggesting full metric invariance (Steenkamp &
Baumgartner, 1998).
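The chi-square difference test reported here is easy to verify (using SciPy in place of EQS; the fit statistics are those reported above):

```python
from scipy.stats import chi2

# Chi-square difference test for full metric invariance: the unconstrained
# two-group model vs. the model with factor loadings constrained equal.
chi2_unconstrained, df_unconstrained = 241.790, 142
chi2_constrained, df_constrained = 255.090, 152

delta_chi2 = chi2_constrained - chi2_unconstrained   # 13.3
delta_df = df_constrained - df_unconstrained         # 10
p = chi2.sf(delta_chi2, delta_df)

print(f"delta chi2 = {delta_chi2:.1f}, df = {delta_df}, p = {p:.3f}")
assert p > .05  # nonsignificant: the constrained model fits no worse
```
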

Individual Effect of Review Characteristics on Trustworthiness


We conducted univariate ANCOVAs in SPSS to test the individual and joint effects
of the three review attributes on each of the three dimensions of trustworthiness.ii
More specifically, we included the three manipulated variables (review valence,
rationality, and source) as independent variables and age, gender, income, edu-
cation, and quality expectation as covariates. We performed three ANCOVAs on
the three dependent variables, respectively, for each country. We computed fac-
tor scores as a composite of their respective items for all continuous variables.
Table 3 summarizes the results of the ANCOVAs.
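The logic of these ANCOVAs can be sketched as a nested-model F test on a design matrix holding effect-coded factors, their interactions, and covariates (a simplified illustration with simulated data and a single covariate, not the SPSS analysis itself; the effect sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 320

# Effect-coded (+1/-1) factors and one continuous covariate (the study's
# real covariates also include age, gender, income, and education).
valence = rng.choice([1, -1], n)
rationality = rng.choice([1, -1], n)
source = rng.choice([1, -1], n)
expectation = rng.normal(0, 1, n)

# Simulated benevolence outcome with invented effect sizes.
benevolence = (4.2 + 0.35 * valence + 0.55 * rationality + 0.15 * source
               + 0.25 * valence * rationality + 0.3 * expectation
               + rng.normal(0, 1, n))

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Full factorial model: intercept, main effects, all interactions, covariate.
full = np.column_stack([np.ones(n), valence, rationality, source,
                        valence * rationality, valence * source,
                        rationality * source, valence * rationality * source,
                        expectation])

def f_test(drop_col):
    """F statistic for one term via a full-vs-reduced model comparison."""
    reduced = np.delete(full, drop_col, axis=1)
    rss_f, rss_r = rss(full, benevolence), rss(reduced, benevolence)
    df_f = n - full.shape[1]
    F = (rss_r - rss_f) / (rss_f / df_f)
    return F, stats.f.sf(F, 1, df_f)

F_val, p_val = f_test(1)   # main effect of valence on benevolence
print(f"F = {F_val:.2f}, p = {p_val:.4f}")
```
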
Regarding the individual effect of review valence on trustworthiness, we
found that valence significantly influences perceived benevolence (U.S.: Fvalence =
25.00, p < .001; China: Fvalence = 5.08, p < .05) and integrity (U.S.: Fvalence =
39.10, p < .001; China: Fvalence = 22.80, p < .001), in both countries, in support of
H1a and H1b, respectively. Although we did not hypothesize the effect of review
valence on ability, the results indicate that review valence also significantly affects
perceived ability in both samples (U.S.: Fvalence = 30.95, p < .001; China: Fvalence
= 6.57, p < .01). Mean comparisons, as shown in Table 4, Panel A, are consistent
with what we expected. Compared with negative reviews, positive reviews showed
greater perceptions of benevolence (U.S.: Benevolencepos = 4.58, Benevolenceneg =
3.89, p < .001; China: Benevolencepos = 4.59, Benevolenceneg = 4.33, p < .05),
ability (U.S.: Abilitypos = 5.00, Abilityneg = 4.13, p < .001; China: Abilitypos =
5.15, Abilityneg = 4.86, p < .01), and integrity (U.S.: Integritypos = 5.20, Integrityneg
= 4.37, p < .001; China: Integritypos = 5.23, Integrityneg = 4.74, p < .001).
Likewise, we found similar results of review rationality on all three dimen-
sions of trustworthiness in both countries, in support of H2a–H2c (benevolence,
ability, and integrity, respectively). As Table 3 shows, review rationality signifi-
cantly affects perceived benevolence (U.S.: Frationality = 71.30, p < .001; China:
Frationality = 9.22, p < .01), ability (U.S.: Frationality = 76.44, p < .001; China:
Frationality = 25.50, p < .001), and integrity (U.S.: Frationality = 61.73, p < .001;
China: Frationality = 11.86, p < .001). Mean comparisons (see Table 4, Panel A)
further confirm that factual reviews are associated with significantly greater benev-
olence (U.S.: Benevolencefact = 4.82, Benevolenceemo = 3.65, p < .001; China:
Benevolencefact = 4.64, Benevolenceemo = 4.28, p < .01), ability (U.S.: Abilityfact
= 5.25, Abilityemo = 3.87, p < .001; China: Abilityfact = 5.29, Abilityemo = 4.71,
p < .001), and integrity (U.S.: Integrityfact = 5.31, Integrityemo = 4.26, p < .001;
China: Integrityfact = 5.16, Integrityemo = 4.81, p < .001) than emotional reviews
in both countries.
H3a and H3b examine the individual effect of review source on benevolence
and integrity, respectively. The findings indicate that review source significantly

ii Although we did not provide theoretical predictions for the effects of valence and source on ability (due
to the absence of prior research to establish the effects), in the interest of completeness, we include them in
the empirical analysis; likewise, despite the absence of hypotheses for the interaction between source and
rationality and the three-way interaction among valence, rationality, and source in our conceptual framework,
we keep them in the ANCOVA model for the sake of completeness of the analysis.

Table 3: Summary of ANCOVA results (F statistics).


Dependent Variables

U.S. sample China sample

Ability Benevolence Integrity Ability Benevolence Integrity

Independent variables
H1 Valence 30.95*** 25.00*** 39.10*** 6.57** 5.08* 22.80***
H2 Rationality 76.44*** 71.30*** 61.73*** 25.50*** 9.22** 11.86***
H3 Source 1.89 4.25* 2.68M 5.29* 8.13** 9.02**
H4 Valence × Rationality 9.61** 21.22*** 15.20*** 1.29 .53 9.56**
H5 Valence × Source 3.19M 7.92** 7.60** 2.25 2.17 2.55
Rationality × Source 1.43 1.44 1.92 .006 .20 .07
Valence × Rationality × Source 1.00 .84 1.73 1.18 1.66 .95
Control variables
Age 1.15 .48 .01 .27 1.28 .27
Gender 3.54M 3.63M 1.91 .73 .28 .46
Education .05 .00 .28 1.06 .08 .00
Income 4.76* 3.82* 2.19 .03 .88 .06
Quality expectation 4.60* 1.20 6.27* 10.32*** 33.12*** 19.90***
* p < .05; ** p < .01; *** p < .001; M p < .1.
Table 4: Mean comparisons of the ANCOVAs.
A. Mean Comparisons of the Main Effects of Review Characteristics on Trust (H1–H3)

Valence (H1) Rationality (H2) Source (H3)

U.S. China U.S. China U.S. China

Positive Negative Positive Negative Factual Emotional Factual Emotional Social Retail Social Retail

Ability 5.00a 4.13a 5.15b 4.86b 5.25a 3.87a 5.29a 4.71a 4.67 4.45 5.13c 4.87c
Benevolence 4.58a 3.89a 4.59c 4.33c 4.82a 3.65a 4.64b 4.28b 4.38c 4.10c 4.63b 4.30b
Integrity 5.20a 4.37a 5.23a 4.74a 5.31a 4.26a 5.16a 4.81a 4.90M 4.68M 5.14b 4.83b

B. Mean Comparisons of the Moderating Effect of Review Rationality (H4)

Ability Benevolence (H4a) Integrity (H4b)

U.S. China U.S. China U.S. China

Positive Negative Positive Negative Positive Negative Positive Negative Positive Negative Positive Negative

Factual 5.45M 5.06M 5.37 5.21 4.85 4.79 4.73 4.55 5.47 5.16 5.25 5.07
Emotional 4.56a 3.19a 4.92b 4.50b 4.31a 2.98a 4.46c 4.11c 4.94a 3.58a 5.21a 4.40a

C. Mean Comparisons of the Moderating Effect of Review Source (H5)

Ability Benevolence (H5a) Integrity (H5b)

U.S. China U.S. China U.S. China

Positive Negative Positive Negative Positive Negative Positive Negative Positive Negative Positive Negative

Social 4.97b 4.38b 5.19 5.07 4.53M 4.23M 4.67 4.58 5.13b 4.66b 5.30c 4.97c
Retail 5.03a 3.88a 5.10b 4.64b 4.63a 3.55a 4.51b 4.08b 5.28a 4.07a 5.16a 4.50a

Note: Mean comparisons are significantly different across the two compared conditions with the same subscript: a p < .001; b p < .01; c p < .05; M p < .1. We report the full factorial ANCOVAs and include the relationships not formally hypothesized (e.g., effect of valence and source on ability).

affects perceived benevolence in both countries (U.S.: Fsource = 4.25, p < .05;
China: Fsource = 8.13, p < .01) and integrity in China (Fsource = 9.02, p < .01) but
has only a marginally significant effect on integrity in the United States (Fsource
= 2.68, p < .1), in full support of H3a but only partial support of H3b. Likewise,
although we did not provide theoretical predictions for the effect of review source
on perceived ability, the empirical results show a significant link in China (Fsource =
5.29, p < .05) but not in the United States (Fsource = 1.89, p = .17). When comparing
the means, we find that respondents perceive significantly greater benevolence and
integrity when the review appears on a social network than on a retail site (e.g.,
source–benevolence: U.S.: Benevolenceretail = 4.10, Benevolencesocial = 4.38, p <
.05; China: Benevolenceretail = 4.30, Benevolencesocial = 4.63, p < .01).

Moderating Effect of Review Rationality and Source on Review Valence


We further examine the two-way interactions between the three review charac-
teristics. As Table 3 shows, in partial support of H4a, review rationality signifi-
cantly reduces the effect of valence on perceived benevolence in the United States
(Fvalence×rationality = 21.22, p < .001), but not in China. In support of H4b, ratio-
nality significantly moderates the effect of valence on perceived integrity in both
countries (U.S.: Fvalence×rationality = 15.20, p < .001; China: Fvalence×rationality = 9.56,
p < .01). The mean comparisons in Table 4, Panel B, further indicate that for re-
views that focus on venting emotions, respondents perceive positive reviews with
greater benevolence (U.S.: Benevolencepos-emo = 4.31, Benevolenceneg-emo = 2.98,
p < .001) and integrity (e.g., U.S.: Integritypos-emo = 4.94, Integrityneg-emo = 3.58,
p < .001) than negative reviews; however, such an effect of valence is substantially
reduced and turns nonsignificant when the reviews focus on presenting facts (U.S.:
Benevolencepos-fact = 4.85, Benevolenceneg-fact = 4.79, p = .80; Integritypos-fact =
5.47, Integrityneg-fact = 5.16, p = .12). With the increase in rationality, respondents
perceive both positive and negative reviews as more trustworthy, and the advantage
of positive reviews disappears.
Regarding the moderating effect of review source, consistent with H5a and
H5b for the U.S. sample, review source significantly moderates the effect of va-
lence on perceived benevolence (Fvalence×source = 7.92, p < .01) and integrity
(Fvalence×source = 7.60, p < .01); however, we do not find such a moderating ef-
fect in the Chinese sample. Thus, H5a and H5b are partially supported. For the
U.S. sample, the results suggest that the advantage of positive reviews over neg-
ative reviews on perceived benevolence (H5a) and integrity (H5b) only occurs
when the reviews appear on retail sites (e.g., U.S.: Benevolencepos-retail = 4.63,
Benevolenceneg-retail = 3.55, p < .001; Integritypos-retail = 5.28, Integrityneg-retail =
4.07, p < .001); when they appear on social networks, the effect diminishes for
integrity and disappears for benevolence (e.g., U.S.: Benevolencepos-social = 4.53,
Benevolenceneg-social = 4.23, p = .10; Integritypos-social = 5.13, Integrityneg-social = 4.66, p < .01).

Effect of Trustworthiness on Trust


Research shows that the three dimensions of trustworthiness are antecedents of
trust (Mayer et al., 1995). In the interest of completeness, we also seek to replicate
these effects (H6) using multiple regression in SPSS 24. We entered benevolence,
ability, and integrity as independent variables and trust as the dependent variable
into regression. We find consistent results across countries that all three factors
significantly influence trust, in support of H6 (U.S.: β benevolence = .16, β ability =
.35, β integrity = .45, p < .001; China: β benevolence = .16, β ability = .36, β integrity =
.40, p < .001).
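The standardized coefficients reported for H6 come from an ordinary multiple regression; a minimal sketch (simulated data with an invented correlation structure, not the study's responses):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 325

# Simulated trustworthiness scores; the correlation structure is loosely
# patterned on Table 2 but the coefficients are illustrative only.
ability = rng.normal(0, 1, n)
benevolence = 0.7 * ability + 0.7 * rng.normal(0, 1, n)
integrity = 0.7 * ability + 0.4 * benevolence + 0.6 * rng.normal(0, 1, n)
trust = (0.35 * ability + 0.16 * benevolence + 0.45 * integrity
         + rng.normal(0, 0.6, n))

def zscore(x):
    return (x - x.mean()) / x.std()

# Standardized betas: regress z-scored trust on the z-scored predictors.
X = np.column_stack([zscore(benevolence), zscore(ability), zscore(integrity)])
betas, *_ = np.linalg.lstsq(X, zscore(trust), rcond=None)
print(dict(zip(["benevolence", "ability", "integrity"], betas.round(2))))
```
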

Validation in Two Countries


Comparing the results across countries reveals both similarities and differences.
First, the main effects of the three review characteristics on the three dimensions
of trustworthiness are, by and large, generalizable to both countries, verifying the
external validity of the main effects. Review valence, rationality, and source are
indeed important determinants of reviewer trustworthiness for consumers in both
countries. Firms should use these country-independent dimensions to influence
consumers' perceptions of trustworthiness.
Second, although we find significant main effects of valence and rationality
in both samples, subtle differences do exist. A comparison of mean values reveals
that, overall, Chinese consumers report greater perceived benevolence, ability,
and integrity for negative reviews than U.S. consumers, indicating that Chinese
consumers are less skeptical of negative reviews. Marketers and information system
designers should be cognizant of this country difference and try to control/minimize
the display of negative reviews more so in China.
Third, we observe more salient moderation effects in the United States than
in China, suggesting that there are more complex interactions among the review
characteristics and trustworthiness dimensions for U.S. consumers than for Chinese
consumers. Marketing and operations management research efforts in China can
be simpler given the lack of interaction effects, whereas managing positive and
negative reviews entails more contextual adjustments in the United States because
of the contingency effect of review valence.
Although we have followed previous research (e.g., Morgan, Zou, Vorhies, &
Katsikeas, 2003; Yan & Kull, 2015; Yan & Nair, 2016) to minimize possible sample
differences, we understand there could always be differences between countries;
as such, direct comparisons of cross-country results, especially effect sizes, need
to be made cautiously. To validate the cross-country nuances statistically, we
conducted additional post hoc analysis incorporating a country moderator to
assess country differences in the proposed relationships. The results indicate
that the main effects of valence and rationality on benevolence, ability, and
integrity and the moderating effect of rationality on valence are significantly
different across the two countries at the p < .05 level. This provides statistical
evidence of the country differences. Further details on the country comparisons
and their implications appear in Web Appendix 5.

DISCUSSION
Theoretical Contributions
This research extends the literature in several ways. First, research on the con-
struct of trust primarily focuses on trust between people who are known to and
typically have a long-term-oriented relationship with each other, such as supervisor
and subordinates (Mayer & Davis, 1999) or buyers and suppliers (Johnston
et al., 2004). This long-term orientation enables the partners to form an opin-
ion from past performance. In the online context, reviews are often written by
strangers, and recent popular press articles on the sharing economy and technolog-
ical advances discuss how the concept of trusting strangers is evolving (Botsman,
2016). Our study takes a small but important step toward examining trust among
strangers, thus contributing additional insights to the stream of research on the
new evolving marketplaces and business models. For example, with the increas-
ing role peer-to-peer communications play, electronic WOM (e-WOM) becomes
a critical component of consumers’ decision-making process (Qiu, Pang, & Lim,
2012). Firms need to rethink their business operations to suit the unique features
of e-WOM. Our research addresses one important facet of these unique features,
the absence of prior familiarity with the marketplace participants, which underlies
most of the transactions in the new marketplaces. Furthermore, our findings on
how consumers form trust when there is a lack of traditional cues associated with
personal relationships are especially useful in the sharing economy (Airbnb, Uber)
in which the buyer and seller are individuals rather than businesses.
Second, although extant research has underscored the need for trust as an un-
derlying mechanism for online marketplaces (Smith, Menon, & Sivakumar, 2005),
research examining trust in the online review setting is still in its infancy. Partly
stemming from the confusion in conceptualization and operationalization of the
“trust” concept and partly due to the infancy of the research on peer-to-peer trust,
multidimensional perspectives on trust are lacking in the online review literature.
Thus, our use of the three-factor trustworthiness framework to understand the
impact of multiple review characteristics on trust formation provides an effective
approach to deepen the understanding of the phenomenon. Further, the simultane-
ous examination of the three review attributes offers a comprehensive perspective
that extends the unidimensional investigations in the extant literature (e.g., as
seen in Table 1, most studies focus on review valence with very few studying
review source or review rationality, and no work examines the interaction effects
of the multiple attributes). Our study is the first to examine the moderating effects
of review rationality and source on the impact of review valence, demonstrating
that the impact of review characteristics on trustworthiness should be examined
holistically.
Schoorman, Mayer, and Davis (2007) suggest that culture influences people’s
propensity to trust. Although the overall conceptual framework applies to both
countries, we found nuances in some of the relationships between the two samples.
As such, our research contributes to cross-cultural examination of trust in a new,
prevalent, and important context (i.e., online reviews).

Managerial Implications
Given the prevalence of online reviews, increasing consumers' trust in online reviews is an important objective for organizations (Hayne et al., 2015), because trust in reviews directly influences customer attitudes and downstream behaviors (Wolf & Muhanna, 2011). Our findings are relevant for many industries, including hospitality and retailing, and provide implications for communication, information system design, and operations management.
First, from a marketing perspective, our study offers actionable guidance
to online retailers on how to promote review messages to gain consumer trust
(Wolf & Muhanna, 2011). Clearly, not all reviews generate the same impact. Online marketers should consider displaying review messages with favorable attributes prominently. We find that positive reviews are perceived as more trustworthy than negative reviews; therefore, marketers can benefit more by promoting positive reviews systematically (e.g., highlighting positive reviews in noticeable positions on the transaction Web site) than by taking steps to discourage or reduce negative reviews.
Second, our study provides helpful insights for information system design.
For example, we found that factual reviews yield greater trustworthiness than emotional reviews. This finding suggests structuring the review-submission process to encourage more factual reviews. Firms can guide consumers to use predefined quality categories, questionnaire items, examples, or templates to create reviews, prompting consumers to post comparable factual reviews along similar dimensions of product features critical to the majority of consumers and thereby enhancing review quality (Banerjee et al., 2017).
Third, this study demonstrates that reviews posted on social networks gener-
ate greater trustworthiness. Accordingly, firms could design information systems
to incentivize consumers to share reviews on these sites and also partner with social
network providers to cross-reference the reviews posted. For example, TripAdvisor displays pop-up messages informing customers that their friend(s) stayed at one of the hotels they are viewing on the Web site and posted a review for that property, and directing those customers to the corresponding Facebook post. Likewise, utilizing customers' real-time social media posts (e.g., Instagram, Twitter) has become an increasingly popular way for firms to leverage user-generated content to promote their goods and services (Devumi, 2016). For example, Lego Discovery Center encourages customers to share their pictures using hashtags, which appear in real time on the firm's Web site (Devumi, 2016). Clearly, the reported cases of social media companies using customer data in questionable ways, and the resulting negative publicity, must be considered in developing effective strategies (Donati, 2018). How far social media companies can go in assembling and curating customer reviews without invoking privacy concerns is an important question from both individual rights and public policy perspectives. However, if consumers are clearly informed of the data usage and provide
explicit consent for such usage, the online review data could be mutually beneficial
to both online retailers and fellow customers (Devumi, 2016).
With the substantial growth of the sharing economy, peer-to-peer interactions are expected to flourish, especially in technology-enabled online contexts (Breidbach & Brodie, 2017). Often, these virtual exchanges occur among strangers, and it is critical for designers of online sites to recognize the challenges in fostering trust in such settings. Our study takes an initial step in examining the nuances involved in forming trust among strangers. Its implications go beyond online retailing and extend to the broader context of collaborative consumption, such as sharing rides (e.g., Uber), lodging (e.g., Airbnb), and labor (e.g., TaskRabbit) (Benoit, Baker, Bolton, Gruber, & Kandampully, 2017).

Limitations and Future Research


This study examines a single online review, as our focus was on understanding the
influence of multiple review attributes. However, consumers often read multiple
reviews to form perceptions of products online. Thus, an extension would be to val-
idate the model further in a multiple-review context. Likewise, we operationalized
review valence and rationality into categories (e.g., positive and negative reviews
for review valence, factual and emotional reviews for review rationality). These
constructs could also be operationalized on a continuum.
Future research should consider using discovery and qualitative approaches
to categorize decision-making patterns in online reviews (Hayne et al., 2015).
Consumer-specific factors (e.g., technology readiness), firm-specific factors (e.g.,
brand reputation), and contextual factors (e.g., economic conditions) can all po-
tentially influence the linkages between review characteristics and trustworthiness
(Bilgihan et al., 2016). Exploratory research using qualitative techniques would be
useful in theory development. Text-mining techniques could overcome the sample
size limitation of the traditional qualitative approach to provide more generalizable
findings.
Another avenue for research is a more detailed examination of the link
between review characteristics and trust in a cross-cultural context. Although our research focuses on one developed economy (the United States) and one emerging economy (China), a broader investigation of cross-country and cross-cultural differences is required to determine how technological advances and globalization influence the new marketplace.

SUPPORTING INFORMATION

Additional supporting information may be found online in the Supporting Information section at the end of the article.

REFERENCES
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in prac-
tice: A review and recommended two-step approach. Psychological Bulletin,
103(3), 411–423.
Audi, R., & Murphy, P. E. (2006). The many faces of integrity. Business Ethics
Quarterly, 16(1), 3–21.
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models.
Journal of the Academy of Marketing Science, 16(1), 74–94.
Banerjee, S., Bhattacharyya, S., & Bose, I. (2017). Whose online reviews to trust?
Understanding reviewer trustworthiness and its impact on business. Decision
Support Systems, 96(3), 17–26.
Bargh, J. A., McKenna, K. Y., & Fitzsimons, G. M. (2002). Can you see the real
me? Activation and expression of the “true self” on the Internet. Journal of
Social Issues, 58(1), 33–48.
Bendoly, E. (2014). System dynamics understanding in projects: Information shar-
ing, psychological safety, and performance effects. Production and Opera-
tions Management, 23(8), 1352–1369.
Bendoly, E., Bachrach, D. G., & Perry-Smith, J. (2011). The perception of difficulty
in project-work planning and its impact on resource sharing. Journal of
Operations Management, 28(5), 385–397.
Benoit, S., Baker, T. L., Bolton, R. N., Gruber, T., & Kandampully, J. (2017). A
triadic framework for collaborative consumption (CC): Motives, activities
and resources & capabilities of actors. Journal of Business Research, 79(3),
219–227.
Berger, J. (2014). Word of mouth and interpersonal communication: A review
and directions for future research. Journal of Consumer Psychology, 24(4),
586–607.
Bilgihan, A., Barreda, A., Okumus, F., & Nusair, K. (2016). Consumer percep-
tion of knowledge-sharing in travel-related online social networks. Tourism
Management, 52(2), 287–296.
Bosman, D. J., Boshoff, C., & van Rooyen, G. (2013). The review credibility of
electronic word-of-mouth communication on e-commerce platforms. Man-
agement Dynamics, 22(3), 29–44.
Botsman, R. (2016). We've stopped trusting institutions and started trusting strangers. TED Summit. Retrieved from https://www.ted.com/talks/rachel_botsman_we_ve_stopped_trusting_institutions_and_started_trusting_strangers
Breidbach, C. F., & Brodie, R. J. (2017). Engagement platforms in the sharing
economy: Conceptual foundations and research directions. Journal of Service
Theory and Practice, 27(4), 761–777.
Buller, D. B., & Aune, R. K. (1987). Nonverbal cues to deception among intimates,
friends, and strangers. Journal of Nonverbal Behavior, 11(4), 269–290.
Buller, D. B., & Burgoon, J. K. (1996). Interpersonal deception theory. Communi-
cation Theory, 6(3), 203–242.
Burgoon, J. K., Buller, D. B., Floyd, K., & Grandpre, J. (1996). Deceptive reali-
ties: Sender, receiver, and observer perspectives in deceptive conversations.
Communication Research, 23(6), 724–748.
Chang, T. V., Rhodes, J., & Lok, P. (2013). The mediating effect of brand trust be-
tween online customer reviews and willingness to buy. Journal of Electronic
Commerce in Organizations, 11(1), 22–42.
Cheung, C. M., Sia, C., & Kuan, K. K. Y. (2012). Is this review believable? A
study of factors affecting the credibility of online consumer reviews from an
ELM perspective. Journal of the Association for Information Systems, 13(8),
618–635.
Chevalier, J. A., & Mayzlin, D. (2006). The effect of word of mouth on sales:
Online book reviews. Journal of Marketing Research, 43(3), 345–354.
Colquitt, J. A., Scott, B. A., & LePine, J. A. (2007). Trust, trustworthiness, and
trust propensity: A meta-analytic test of their unique relationships with risk
taking and job performance. Journal of Applied Psychology, 92(4), 909–927.
DePaulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K.,
& Cooper, H. (2003). Cues to deception. Psychological Bulletin, 129(1),
74–118.
Devumi. (2016). Learn how Lego mastered user-generated content. Medium. Retrieved from https://medium.com/@devumi/learn-how-lego-mastered-user-generated-content-86735044b43c
Donati, J. (2018). Facebook data scandal raises another question: Can there be too much privacy? The Wall Street Journal. Retrieved from https://www.wsj.com/articles/facebook-data-scandal-raises-another-question-can-there-be-too-much-privacy-1522584000
Dong, B., Sivakumar, K., Evans, K. R., & Zou, S. (2015). Effect of customer partic-
ipation on service outcomes: The moderating role of participation readiness.
Journal of Service Research, 18(2), 160–176.
Duffy, A. (2017). Trusting me, trusting you: Evaluating three forms of trust on an
information-rich consumer review website. Journal of Consumer Behaviour,
16(3), 212–220.
Earley, P. C. (1986). Trust, perceived importance of praise and criticism, and work
performance: An examination of feedback in the United States and England.
Journal of Management, 12(4), 457–473.
Filieri, R., Alguezaui, S., & McLeay, F. (2015). Why do travelers trust TripAdvisor?
Antecedents of trust towards consumer-generated media and its influence on
recommendation adoption and word of mouth. Tourism Management, 51,
174–185.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equations models with
unobservable variables and measurement error. Journal of Marketing Re-
search, 18(1), 39–50.
Gao, H., Ballantyne, D., & Knight, J. G. (2010). Paradoxes and guanxi dilemmas in
emerging Chinese–Western intercultural relationships. Industrial Marketing
Management, 39(2), 264–272.
Goodman, J. K., & Paolacci, G. (2017). Crowdsourcing consumer research. Journal
of Consumer Research, 44(1), 196–210.
Granovetter, M. (1983). The strength of weak ties: A network theory revisited.
Sociological Theory, 1(1), 201–233.
Grewal, D., Gotlieb, J., & Marmorstein, H. (1994). The moderating effects of fram-
ing and source credibility on the price-perceived risk relationship. Journal
of Consumer Research, 21(3), 145–153.
Griskevicius, V., Tybur, J. M., Sundie, J. M., Cialdini, R. B., Miller, G. F., &
Kenrick, D. T. (2007). Blatant benevolence and conspicuous consumption:
When romantic motives elicit strategic costly signals. Journal of Personality
and Social Psychology, 93(1), 85–102.
Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate data analysis
(7th ed.). Upper Saddle River, NJ: Prentice Hall.
Hayne, S. C., Wang, H., & Wang, L. (2015). Modeling reputation as a time-series:
Evaluating the risk of purchase decisions on eBay. Decision Sciences, 46(6),
1077–1107.
Hong, H., Xu, D., Wang, G. A., & Fan, W. (2017). Understanding the determinants
of online review helpfulness: A meta-analytic investigation. Decision Support
Systems, 102, 1–11.
Hulland, J., & Miller, J. (2018). Keep on Turkin? Journal of the Academy of
Marketing Science, 46(5), 789–794.
Jacoby, J., Jaccard, J. J., Currim, I., Kuss, A., Ansari, A., & Troutman, T. (1994).
Tracing the impact of item-by-item information accessing on uncertainty
reduction. Journal of Consumer Research, 21(2), 291–303.
Jarvenpaa, S. L., Knoll, K., & Leidner, D. E. (1998). Is anybody out there? An-
tecedents of trust in global virtual teams. Journal of Management Information
Systems, 14(2), 29–64.
Jensen, M. L., Averbeck, J. M., Zhang, Z., & Wright, K. B. (2013). Credibility
of anonymous online product reviews: A language expectancy perspective.
Journal of Management Information Systems, 30(1), 293.
Johnston, D. A., McCutcheon, D. M., Stuart, F. I., & Kerwood, H. (2004). Effects of
supplier trust on performance of cooperative supplier relationships. Journal
of Operations Management, 22(1), 23–38.
Kim, M., Chung, N., & Lee, C. (2011). The effect of perceived trust on electronic
commerce: Shopping online for tourism products and services in South Ko-
rea. Tourism Management, 32(2), 256–265.
Lee, J., Park, D., & Han, I. (2011). The different effects of online consumer reviews
on consumers’ purchase intentions depending on trust in online shopping
malls: An advertising perspective. Internet Research, 21(2), 187–206.
Lemon, K. N., & Verhoef, P. C. (2016). Understanding customer experience
throughout the customer journey. Journal of Marketing, 80(6), 69–96.
Li, M., Choi, T. Y., Rabinovich, E., & Crawford, A. (2013). Inter-customer in-
teractions in self-service setting: Implications for perceived service quality
and repeat purchasing intentions. Production and Operations Management, 22(4), 888–914.
Liu, Z., & Park, S. (2015). What makes a useful online review? Implication for
travel product websites. Tourism Management, 47, 140–151.
Mackiewicz, J., Yeats, D., & Thornton, T. (2016). The impact of review environ-
ment on review credibility. IEEE Transactions on Professional Communica-
tion, 59(2), 71–88.
Martin, W. C. (2017). Don't be such a downer: Examining the impact of valence on receivers of word of mouth (A structured abstract). In M. Stieler (Ed.), Creating marketing magic and innovative future marketing trends (pp. 979–983). Cham: Springer.
Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal
system on trust for management: A field quasi-experiment. Journal of Applied
Psychology, 84(1), 123–136.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of
organizational trust. Academy of Management Review, 20(3), 709–734.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for in-
terpersonal cooperation in organizations. Academy of Management Journal,
38(1), 24–59.
Moe, W. W., & Trusov, M. (2011). The value of social dynamics in online product
ratings forums. Journal of Marketing Research, 48(3), 444–456.
Morgan, N. A., Zou, S., Vorhies, D. W., & Katsikeas, C. S. (2003). Experiential
and informational knowledge, architectural marketing capabilities, and the
adaptive performance of export ventures: A cross-national study. Decision Sciences, 34(2), 287–321.
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of
satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
O’Neil, T. (2015). Survey confirms the value of reviews, provides new insights. Retrieved from https://www.powerreviews.com/blog/survey-confirms-the-value-of-reviews/
Qiu, L., Pang, J., & Lim, K. H. (2012). Effects of conflicting aggregated rating on
eWOM review credibility and diagnosticity: The moderating role of review
valence. Decision Support Systems, 54(1), 631–643.
Racherla, P., Mandviwalla, M., & Connolly, D. J. (2012). Factors affecting con-
sumers’ trust in online product reviews. Journal of Consumer Behaviour,
11(2), 94–104.
Ray, S., Ow, T., & Kim, S. S. (2011). Security assurance: How online service
providers can influence security control perceptions and gain trust. Decision
Sciences, 42(2), 391–412.
Rohrer, T. A. (2010). The reverse thing, Volume XVII. Bloomington, IN: IUniverse.
Schoorman, D. F., Mayer, R. C., & Davis, J. H. (2007). An integrative model
of organizational trust: Past, present, and future. Academy of Management
Review, 32(2), 344–354.
Smith, D., Menon, S., & Sivakumar, K. (2005). Online peer and editorial rec-
ommendations, trust, and choice in virtual markets. Journal of Interactive
Marketing, 19(3), 15–37.
Sparks, B. A., Perkins, H. E., & Buckley, R. (2013). Online travel reviews as persuasive communication: The effects of content type, source, and certification logos on consumer behavior. Tourism Management, 39, 1–9.
Sparks, B. A., So, K. K. F., & Bradley, G. L. (2016). Responding to negative online
reviews: The effects of hotel responses on customer inferences of trust and
concern. Tourism Management, 53, 74–85.
Steenkamp, J.-B. E. M., & Baumgartner, H. (1998). Assessing measurement invari-
ance in cross-national consumer research. Journal of Consumer Research,
25(1), 78–90.
Sun, M. (2012). How does the variance of product ratings matter? Management
Science, 58(4), 696–707.
Weise, E. (2015). Amazon cracks down on fake reviews. USA Today. Retrieved from https://www.usatoday.com/story/tech/2015/10/19/amazon-cracks-down-fake-reviews/74213892/
Wolf, J. R., & Muhanna, W. A. (2011). Feedback mechanisms, judgment bias, and
trust formation in online auctions. Decision Sciences, 42(1), 43–68.
Wu, K., Noorian, Z., Vassileva, J., & Adaji, I. (2015). How buyers perceive the
credibility of advisors in online marketplace: Review balance, review count
and misattribution. Journal of Trust Management, 2(1), 1–18.
Xu, Q. (2014). Should I trust him? The effects of reviewer profile characteristics
on eWOM credibility. Computers in Human Behavior, 33(2), 136–144.
Yan, T., & Kull, T. J. (2015). Supplier opportunism in buyer–supplier new product development: A China–US study of antecedents, consequences, and cultural/institutional contexts. Decision Sciences, 46(2), 403–445.
Yan, T., & Nair, A. (2016). Structuring supplier involvement in new product development: A China–US study. Decision Sciences, 47(4), 589–627.
Yin, D., Mitra, S., & Zhang, H. (2016). When do consumers value positive vs.
negative reviews? An empirical investigation of confirmation bias in online
word of mouth. Information Systems Research, 27(1), 131–144.
Zhang, C., Viswanathan, S., & Henke, J. W. (2011). The boundary spanning capa-
bilities of purchasing agents in buyer–supplier trust development. Journal of
Operations Management, 29(4), 318–328.
Zuckerman, M., Driver, R., & Koestner, R. (1982). Discrepancy as a cue to actual
and perceived deception. Journal of Nonverbal Behavior, 7(2), 95–100.
Zuckerman, M., Kernis, M. R., Driver, R., & Koestner, R. (1984). Segmentation
of behavior: Effects of actual deception and expected deception. Journal of
Personality and Social Psychology, 46(5), 1173–1182.

Beibei Dong (PhD, University of Missouri) is an associate professor of marketing at Lehigh University. Her research appears in Journal of Marketing, Journal of the
Academy of Marketing Science, Journal of Service Research, Journal of Service
Management, Journal of International Marketing, Marketing Letters, and others.
She has received the “Best Services Article Award” from the American Marketing Association Services Marketing SIG in 2014, the “Best Reviewer Award” from the Journal of Service Research in 2015, and the Thomas J. Campbell ’80 Professorship from Lehigh University in 2014. She currently serves on the editorial review board of the Journal of Service Research.

Mei Li (PhD, Arizona State University) is an assistant professor of supply chain management at Michigan State University’s Eli Broad College of Business. She is interested in service research, including service outsourcing, self-service technology, and service supply networks. Her research has appeared in Journal of
Operations Management, Journal of Supply Chain Management, Production and
Operations Management, Strategic Management Review, and Journal of Marketing. Her research in Journal of Marketing was recognized with the Best Services Article Award in 2014 by the AMA’s SERVSIG.

K. Sivakumar is the Arthur Tauck Chair and a professor of marketing at Lehigh University. His research appears in Journal of Marketing, Journal of the Academy
of Marketing Science, Journal of Business Research, Journal of Service Research,
Journal of Product Innovation Management, Decision Sciences, and other outlets.
He has received the Donald Lehmann Award from the American Marketing Association (AMA), the Best Services Article Award in 2014 from the AMA’s SERVSIG, the Distinguished PhD Alumni Award from Syracuse University, and Best Conference Paper Awards from the AMA and the Academy of Marketing Science, among other honors.
