
Rotten Tomatoes and Chill?


MRAs and Their Impact on Decision-Making

Sharon Allman and Jenny Lee-De Medeiros

Abstract:
The purpose of this research was to examine whether young adults (aged 18-32) look at user- and/or critic-generated movie review aggregates (MRAs) to decide which film to watch, or whether other factors impact their decision-making. The literature on this topic most notably shows a correlation between highly rated movies and better box office results, a preference for MRAs, and potential market benefits of MRAs. This research, which focused on the North American context, employed both quantitative and qualitative methods in the form of an online survey, focus groups, and key informant interviews. The results indicate that MRAs are not the preferred way to decide which movie to watch; instead, factors such as recommendations from family and friends and a film’s marketing decisions most affect young adults’ choices about which films to watch.

Keywords:
movie review aggregate, MRA, movies, ratings, rating metric, scoring system

DOI
10.33137/ijournal.v6i1.35269

© 2020 Allman, S., & Medeiros, J. Rotten Tomatoes and Chill? MRAs and Their Impact on Decision-Making. This is an Open Access article distributed under CC-BY.


Introductory Statement of Significance


Rotten Tomatoes is a movie review aggregate (MRA) available online that assigns a score to a movie based on critic and/or user reviews. According to Alexa, a web traffic estimator, it is the 143rd most popular website in Canada, while IMDB, a website with user-generated reviews, is the 26th most popular (Alexa, n.d.). Media corporations emphasize the impact of the scores which websites like Rotten Tomatoes and IMDB assign to new film releases. In fact, Fandango, a company that sells movie tickets in the U.S., purchased Rotten Tomatoes and now displays these scores beside each movie as consumers buy tickets online. This suggests an increased market interest in MRAs and the commercial impact they have on box office results. Various researchers have found a correlation between positive critic reviews and higher box office grosses, but very few have looked directly at the relationship between the act of visiting MRAs and the decision to see a movie (Terry et al., 2004; Hennig-Thurau et al., 2012). This research aims to fill this gap in the literature.

Aims and Objectives

Filling this gap in the literature will help explain why highly rated movies appear to enjoy a commercial benefit. This is an area of growing interest to movie companies and executives, with broader implications for films’ box office performance and firm valuation. Specifically, we hope to answer the following questions:

Do people trust and use MRAs?

What is considered a good score for an MRA? Does a “good score” affect behaviour?

What factors drive people to watch a movie, either in theaters or at home?

Literature Review

Currently, the literature assessing the use of MRAs by North Americans falls into three main categories: preferences for movie aggregates, box office performance based on critics’ movie reviews, and the perceived quality of movie reviews.


Preferences for Movie Aggregates

The preference for MRAs has been examined by Dogruel and Xiaoming (2016), who found that U.S. participants demonstrated a preference for aggregate user information, favouring movies with high star ratings, whereas Singaporean participants favoured movies with positive user reviews. This research suggests that MRAs are the preferred way for Americans to look at reviews for movies (Dogruel & Xiaoming, 2016). Yep et al. compared review sites, social media, personal blogs, and instant messaging sites to discover the preferred electronic word of mouth (E-WOM) platform for movie reviews. They found that review sites, specifically Rotten Tomatoes, were the preferred E-WOM platform for movies (Yep et al., 2014).

Box Office Performance Based on Critics’ Movie Reviews

Critics have had a long-standing role in the success of films on release through box office performance. Terry et al. examined different factors that affect the box office performance of movies and found that a critic approval rating of at least 40% and the number of Academy Awards a film received both correlated with box office success. A 10% increase in critic approval can add approximately $7.8 million in additional revenue for a movie, indicating that positive critics’ reviews have a positive impact on viewer decision-making (Terry et al., 2004). Similarly, Hennig-Thurau et al. (2012) found that a high number of positive critics’ reviews has a notable impact on long-term film success, especially for comedies and dramas.

Movie Reviews and Their Perceived Quality

Since movie reviews are often user-generated, the perceived quality of movie reviews is an important factor to consider, and one which has been studied by Koh et al. They examined three markets – China, Singapore, and the U.S. – and the differences between the online average user reviews and the perceived views of participants from the three markets. They found that reviews from the U.S. were subject to higher rates of under-reporting bias, whereby people with stronger opinions were more likely to leave reviews than people with more moderate opinions (Koh et al., 2010). This skews the results toward extreme scores relative to viewers’ overall perceived quality of the film. Ullah et al. (2015) approached the presence of such biases in user reviews using sentiment analysis; they examined the impact of reviews’ emotional sentiments on their helpfulness to readers, using a variety of reviews from IMDB. They found that reviews with positive emotions were more likely to be perceived as helpful, while reviews with negative emotions were less likely to be perceived as helpful (Ullah et al., 2015).

Chintagunta et al. (2010) furthered this discussion by investigating the effects of online user reviews on movies’ box office performance. They focused on the user-generated film review site Yahoo! Movies and argued that these word of mouth reviews have a large impact on viewer decision-making (Chintagunta et al., 2010). The authors found that the valence of a film’s reviews, rather than their quantity, has the greater impact, corroborating the findings of Ullah et al. (2015). Chintagunta et al. (2010) also accounted for pre-release advertisement of a film as a confounding variable and used this as a contrast to the impact of the film reviews. It is interesting to note that in all three of these studies, the reviews which had a strong influence on viewer decision-making were user-generated.

Other Factors Affecting Movie Sales

Moretti chose to focus on social learning and found that movies with better-than-expected appeal in their first week of release tend to do better in their second week as a result of positive word of mouth. This impact is directly proportional to the size of an audience’s social networks and could account for up to 32% of sales for a film with a positive surprise (Moretti, 2010).

The existing literature on movie reviews and MRAs clearly indicates strong correlations between positive movie reviews and box office performance, but little of it directly examines whether North American – specifically Canadian – consumers rely on these movie reviews to make their movie-watching decisions. Some studies have indicated a correlation between other types of online reviews and consumer behaviours, but very little research has been done on MRAs, despite the potential commercial benefits of such research. Our own study attempts to fill the void between studies of the impact of user-generated reviews and of the professional reviews on a site like Rotten Tomatoes.


Research Design

Methods

This study employed a mixed-methods approach combining quantitative and qualitative research. Our quantitative tool was an online survey, completed by 292 participants. This survey also had some qualitative aspects, as participants could submit their own open-ended responses. Our primary qualitative tools were two focus groups conducted concurrently, as well as two key informant interviews conducted concurrently over email.

Tools/Instruments

Online Survey: An online survey was deployed using Q-fi, an online survey tool. Respondents were asked questions about their moviegoing behaviours, their views on MRAs, and how they make movie-watching decisions. Most of the questions were ranking or 5-point Likert-scale questions.

Focus Groups: Two focus groups with five participants each were conducted concurrently using a semi-structured moderator’s guide. Participants were chosen through convenience sampling, via recruiting posts made on the researchers’ personal Facebook accounts. Participants were asked about their movie-going habits and how they decide which movies to see, and they were given examples of movie scores from Rotten Tomatoes and asked to give their impressions of them.

Online Key Informant Interviews: Two email interviews were conducted, one with a participant who loved MRAs and one with a participant who disliked them. These participants were sourced from the researchers’ personal networks. The interviews were conducted in two parts: first, questions specific to each participant were emailed to them in a Word document, which they filled out and emailed back to the researchers; then the researchers returned the document to the participant with additional follow-up questions, which the participant answered and sent back.


Limitations

To triangulate within this study, three different methods of data collection were used. These methods were combined to identify major themes within the data. Our survey was completed by 292 people, and a total of 12 participants took part across our two qualitative data collection techniques.

Methodological Limitations

We chose to exclude individuals younger than 18 years old and did not want to study people older than 32 years old, as we felt they fell outside our target population of “millennials”. We excluded people who were not residing in Ontario at the time of our study to reduce the variation in opinions based on differing geography. We used the same set of inclusion and exclusion criteria for the survey, interview, and focus group participants.

Study Tool Limitations

Each of our analysis tools came with its own set of limitations. The survey was exclusively administered online, deployed using the online survey software Q-fi over Canadian servers. This meant that only individuals who could access and use the internet on an electronic device were able to complete the survey. Some of those who attempted to complete the survey on their smartphones had issues accessing it on Q-fi and required additional instruction to open it in a different web browser or complete it on their computer instead.

Regarding the qualitative methods, the focus groups were both held at Humber College Lakeshore Campus and were composed of students. These focus groups ran concurrently, and the age distribution clustered around the bottom and middle of our intended range. The key informant interviewees were both female and close in age (toward the lower end of our age range). The key informant interviews were conducted over email, where the participants answered all the questions in succession, leaving less opportunity for the interviewer to probe the respondents.1

1 Note: all data collection was performed prior to February 2020 and was therefore collected prior to
COVID-19 related shutdowns. COVID-19 was not considered as a factor in any of the data collection or
analyses.

Quantitative Findings

For our analysis, we decided to look at the factors we believed were most important in relation to people’s use of MRAs. The variables we looked at were: what type of MRA our participants trusted the most; how much they trusted different types of aggregates; how often they go to a theater; and whether they look at an MRA before, during, or after a movie. These variables were chosen to give us an understanding of who engages with MRAs and why. Specifically, we looked at the relationships between:

what type of MRA participants trusted and what factors people use to decide which movie to watch at home and at the theater;

how often people go to a theater and whether they have gone to see a movie because of a certain Rotten Tomatoes score;

when they look at an MRA, whether that affects their impression of a movie, and whether they would see the movie based on its Rotten Tomatoes score; and

whether trusting critic or user MRAs affects their impression of a movie based on its Rotten Tomatoes score.

An alpha level of 0.05 was used for all analyses.
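Each of the findings below uses the same procedure: a chi-squared test of independence on a contingency table of survey responses. For reference, the test statistic is the standard Pearson formula (not stated in the original article; given here as the textbook definition):

\chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c}\frac{(O_{ij}-E_{ij})^2}{E_{ij}}, \qquad E_{ij} = \frac{(\text{row } i \text{ total})(\text{column } j \text{ total})}{N}, \qquad df = (r-1)(c-1)

where O_{ij} and E_{ij} are the observed and expected counts in a table with r rows and c columns, and N is the total number of respondents.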

Trust and Behaviour

Finding 1: A Chi-Squared test was run between what type of MRA participants trusted and what factors people used to decide which movie to watch at the theater. The test found that the only time the type of aggregate mattered was for people who used either critic MRAs or professional critic reviews when deciding which movie to watch in theaters. The relation between these variables was significant: χ²(8, N = 212) = 16.61, p < 0.05. This seems to indicate that people who use critic aggregates and trust critics value professional MRAs when making a decision; however, this may not apply to the population as a whole.
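As a minimal sketch of how a test like the one in Finding 1 can be computed (in Python with scipy, using hypothetical response counts rather than the study’s actual data), one might write:

# Chi-squared test of independence between two categorical survey variables,
# e.g. which type of MRA a respondent trusts (rows) vs. which factor they
# use when deciding what to watch in theaters (columns).
# All counts below are hypothetical and for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    # decides by: critic MRA, user MRA, friends/family
    [34, 12, 25],  # trusts critic MRAs most
    [10, 41, 30],  # trusts user MRAs most
    [ 8, 15, 37],  # trusts neither
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}, N = {observed.sum()}) = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:  # the alpha level used for all analyses in this study
    print("Reject the null hypothesis that the variables are independent.")

The degrees of freedom reported in the findings (8 and 16) reflect the dimensions of the actual survey contingency tables, since df = (rows - 1)(columns - 1).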

Finding 2: A similar Chi-Squared test was run between what type of MRA participants trusted and what factors people used to decide which movie to watch at home. The test found that the same held true in both cases: the only time the type of aggregate mattered was for people who used either critic MRAs or professional critic reviews when deciding which movie to watch at home, χ²(8, N = 212) = 17.72, p < 0.05. This, again, seems to indicate that people who use critic aggregates and trust critics value professional MRAs when making a decision, but this does not apply to the population as a whole.

Trust and Impression

Finding 3: In keeping with the idea of trust in MRAs, a Chi-Squared test was run with the variable “Do you trust professional MRAs?” to find whether there was a statistically significant relationship between Rotten Tomatoes scores and subjects’ impression of a movie. We found that there was a relationship, but only if the movie had either a very low score (0-19 or 20-39) or a very high score (80-100): χ²(16, N = 223) = 9.94, p < 0.05. When the same test was run with the variable “Do you trust user generated MRAs?” and how Rotten Tomatoes scores affect subjects’ impression of the movie, we found that no statistically significant relationship existed at any level: χ²(16, N = 223) = 36.23, p > 0.05. This seems to point to the same conclusion as before: professional MRAs only have an effect if people trust them.

When Used and Impression and Behaviour

Finding 4: A Chi-Squared test was run to see whether the point in time at which subjects looked at an MRA affected their impression of the movie. There was only a significant relationship between these variables if the movie had a very low score of 0-19: χ²(8, N = 223) = 32.18, p < 0.05. This seems to indicate that the only time a score affects people’s perception of the film is when the general consensus is negative.

Finding 5: A Chi-Squared test was run to see whether the point in time at which subjects looked at an MRA affected their decision to see the movie based on its score. There was only a significant relationship between these variables if the movie had a very high score of 80-100: χ²(8, N = 223) = 28.45, p < 0.05. This seems to indicate that unless a movie has a very high score, audiences do not see much value in the score, and it is not part of their decision-making process.

How Often and Behaviour

Finding 6: A Chi-Squared test was run between how often a person saw a movie in theaters and how likely they were to go watch a movie based on a certain score on Rotten Tomatoes. It was found that the only significant relationship was when a movie had a very high score on Rotten Tomatoes: χ²(8, N = 223) = 14.33, p < 0.05. This could indicate that people feel it necessary to ensure the movie they are going to see is worth the visit to the theater.

Qualitative Findings

Bias

Audience: Two instances of audience bias were coded within the focus groups. Audience bias was not coded within the interviews. Participants explained that audiences can be biased in terms of their movie-watching preferences, as many people watch films out of personal preference, not due to any universally shared judgement that a movie is good or bad. Another participant explained that the audience rating is about the moviegoer’s experience as a whole, including, for example, their pre-existing preference for the movie franchise. In summary, participants expressed that audiences generally do not conceptualize films in terms of their objective mechanical and artistic strengths and instead watch movies based on casual preferences.

Critics: Three instances of critic bias were coded within the focus groups. Within the interviews, only participant 2 mentioned critic bias, and only once. Participant 2 was interviewed because she did not trust or like to use MRAs, so her distrust of critics is in line with her other beliefs. Each mention of critic bias within the focus groups outlined participants’ belief that critics do not rate horror movies fairly, as horror movies are not technically nuanced in the ways that other genres may be, leading to professional critic bias.

Disconnect Between Behaviours and Beliefs

Four instances of participants having a disconnect between their behaviours and beliefs were coded within the focus groups. The focus group participants believed that MRAs are important and popular, yet neither they nor most of their family or friends generally used them. One instance of a disconnect between behaviours and beliefs was coded within participant 1’s interview: if a movie had a bad score, people would avoid seeing it, even if it was a good film in other ways. This speaks to the idea that ratings based on personal preferences may limit how well a score reflects a film’s artistic or technical quality.

Distrust of the Unknown

Buying Reviews: The first subcode within the broader code of distrust of the unknown is the idea that production or advertising companies buy positive reviews. Neither interview participant mentioned this concept, but it was coded once within the focus groups. In that instance, the participant expressed that they had heard that positive reviews were sometimes purchased for a film generally seen as less than average in quality. There is no confirmed evidence of this occurring, however, and the practice is against the terms of use of Rotten Tomatoes.

Distrust of Average User: Distrust of the average user was coded once within the focus groups and twice within the interviews. Within the focus groups, one of the participants expressed that the average person is not very intelligent, referring to the potentially low quality of a user review. Both interview participants mentioned their distrust of the average user. Participant 1 expressed that they are distrustful of audience scores because users are allowed to rate a film without having even seen it. Participant 2 expressed a similar sentiment, adding that before a blockbuster film is released, various reviews of it can already be seen, posted for arbitrary reasons unrelated to the actual film.

Distrust of Critics: Distrust of critics was coded three times within the focus groups and eight times within the interviews. Within the focus groups, one participant expressed that they did not trust Rotten Tomatoes scores. Even though these are critic aggregates, the participant expressed that critics are too pretentious, which may mean that the way this participant perceives the scores being awarded does not align with their ideals of a good movie. Another participant mentioned again that critics tend to give horror films bad reviews, and therefore it is hard to trust critic opinions. A third participant was critical of the perceived demographics of critics, who are seen as middle-aged white men.

Within the interviews, seven references to distrust of critics were coded within the interview of participant 2, who did not like Rotten Tomatoes. The concerns mainly centred on the lack of a rubric or criteria for the way that critics score movies. The participant expressed that they did not feel they knew what critics were scoring movies against, and mentioned that there is not a diverse range of critics. Within participant 1’s interview, they expressed that they sometimes take issue with critics, as critics sometimes make unfair judgements about the quality of a film, despite their expert status.

Fear of Spoilers

Critics: Fear of spoilers from critics was coded twice within the interviews (once each) and was not coded within the focus groups. Participant 1 expressed that they do not like how often a movie is ruined by a critic giving too many details about the movie and its plot. Participant 2 said that they found reading critic reviews tedious, especially because they carry a high risk of spoilers.

Friends and Family

Group Activity: Choosing certain movies in order to complete a group activity was coded twice within the focus groups and was not coded within the interviews. Both focus group participants expressed that they go to the movies because their friends are going out. They have tastes similar to their friends’ and therefore trust their opinions on what movies to watch.


Personalized

The idea that friends and family can produce more personalized movie reviews was coded thrice within the focus groups and once within the interviews. Within the focus groups, participants expressed that they found reviews from friends and family more personalized, as they had similar tastes and were able to engage in conversation about movies.

Trust Friends

Trusting friends was coded four times within the focus groups and once within the interviews. Focus group participants expressed that they understand their friends and therefore understand what is driving their opinions and views about movies. Participant 1 of the interviews found that they sometimes trust peers more than movie aggregates because their peers understand their individual movie preferences and therefore would not recommend a movie the participant would not enjoy.

Marketing Decision

Actor: A film’s actors being a draw for participants was coded ten times within the focus groups and was not coded within the interviews. Participants explained that they see films because of certain actors, with one person claiming they avoid movies featuring a certain actor because of their preference against that actor. Participants also mentioned that they would be drawn to see particular movies because they had female casts.

Plot: Seeing a movie because of its plot was coded seven times within the focus groups and once
within the interviews. Within the focus groups, participants claimed that they would see a movie
if they had some information on the plot or story line, such as when a film is based on a previous
movie or is part of a movie franchise. Nostalgia is another factor, as one participant mentioned
that they would go see certain movies within a franchise or remakes of movies because they were
films that they grew up with, such as Beauty and the Beast or Toy Story.

Trailer: A film’s trailer was coded three times within the focus groups and once within the interviews as a reason to see a movie. The focus group participants cited a movie’s trailer as a major factor. One focus group participant who goes to the movies very often uses trailers to quickly judge which movie to see. Participant 2 of the interviews connected trailers to Rotten Tomatoes scores, believing these scores were added to advertise a movie when the company believed the film’s plot was not enough of a draw.

What’s New

Watching movies because they are new releases was coded twice within the focus groups and thrice within the interviews. One focus group participant chose movies based on recent releases. Another participant expressed that they only used Rotten Tomatoes to see the top ten movies within certain categories. Interview participant 1 expressed similar opinions, as they too looked at new releases within generated lists on websites in order to select movies to watch.

Reviews

Scores from Aggregates: When participants were asked about using professional aggregate scores to decide whether to see a movie, six references were coded within the focus groups, while four references were coded within the interviews. Participants made varying use of aggregate scores, from looking in-depth at the critic scores to deciding whether to see a movie or its trailer based on the scores. Interview participant 2 explained that they may use the aggregated scores to make general decisions about watching a movie.


Scores from Audience: There was only one instance of a focus group participant mentioning that they found audience scores more accurate. There were no instances of audience scores being a factor within the interviews.

Final Analysis

Trust of MRAs

After analyzing all the data collected, we discovered that MRAs only matter if the person trusts them – and for the average millennial, this trust is lacking. In our survey, 40.9% of our participants considerably or completely trusted critic MRAs and 36.4% considerably or completely trusted user MRAs, compared to 60.9% who considerably or completely trusted family and friends. This is mirrored in the data we collected from our focus groups, where the majority of participants trusted family and friends instead of aggregates. From our focus groups and interviews, it seems that this is due to the belief that friends can give you personalized opinions that will more closely match your own. This idea was also reflected in our interview with the participant who used MRAs, who had developed their own network of trusted critics with opinions similar to their own.

Extreme Scores Matter

When reviewing the data, it became apparent that only scores that were extremely low or extremely high affected any behaviours of the participants. Almost all of the correlations between MRAs and participant behaviour appeared only when the score was either 0-19% or 80-100% on Rotten Tomatoes. This is consistent with the data collected in the focus groups and interviews, as the only time anyone mentioned a change in behaviour was when a movie had a very low or high score.


For example, in our focus group a participant mentioned they would have decided against seeing a movie had they known its Rotten Tomatoes score was 9%. On the other end of the scale, when presented with the score of a movie they had not heard of which had a 98% on Rotten Tomatoes, various members of the focus group said that, while they would not decide to see the movie solely because of that score, they would be willing to learn more about the movie and watch the trailer. Our interview with the participant who liked MRAs reflected a similar sentiment, as the participant had decided to watch or not watch certain movies because they ended up with a very high or low score on Rotten Tomatoes. Overall, it seems that only extreme scores act as a benefit or a deterrent to participants’ desire to watch a movie.

Marketing Matters

An unanticipated result of our research was the discovery that movie reviews of any kind were less important as factors in decision-making than film marketing and casting. A combined 64.7% of participants said they would always or often decide to watch a movie because they are a fan of the series. Other key factors that encouraged viewership were a movie having certain actors or the participants liking its trailer. Similarly, within the focus groups, when shown a movie’s score on Rotten Tomatoes, participants said they had decided to watch that film because they enjoyed the series, actor, or trailer, rather than because of its score. Specific examples include one participant going to see the film Downsizing because it starred Matt Damon, and multiple participants having gone to see Incredibles 2 because they were fans of the first film.

Recommendations and Insights

This study was inspired by an article about movie executives blaming Rotten Tomatoes for disappointing box office results. We wanted to know whether this criticism was valid, based on how people engage with MRAs and how people ultimately decide what movies to watch. Our research seems to indicate that decision-making about movies is influenced by factors more multifaceted than simply the use of MRAs. In their decision-making processes, people favour personalized opinions from family and friends, familiarity with franchises they already enjoy, and actors and actresses they recognize. If movie executives want people to come see their films, these are the factors they need to keep in mind.

For sites like Rotten Tomatoes to become an important source when people are deciding what films to watch, they need to increase their trustworthiness. When people trust MRAs, they are influenced by them. Conversely, the individuals who do not trust them believe they have good reasons not to: that reviews are bought, critics are biased, and anonymous reviewers are untrustworthy. Communicating the demographics of critics and the reviewing criteria they use more clearly could potentially increase the trustworthiness of MRAs.

We recommend that this study be furthered using additional qualitative methods to get a deeper understanding of how people decide what movies to watch. A content analysis of social media posts on sites like Reddit, Twitter, and Instagram, combined with interviews and focus groups, could lead to rich and informative data that could be of use to movie industry stakeholders.


Appendix

Works Cited
Barnes, B. (2017, September 7). Attacked by Rotten Tomatoes. The New York Times. Retrieved from https://www.nytimes.com/2017/09/07/business/media/rotten-tomatoes-box-office.html
Chen, Y., Liu, Y., & Zhang, J. (2012). When do third-party product reviews affect firm value and what can firms do? The case of media critics and professional movie reviews. Journal of Marketing, 76(2), 116-134. doi:10.1509/jm.09.0034
Chintagunta, P. K., Gopinath, S., & Venkataraman, S. (2010). The effects of online user reviews
on movie box-office performance: accounting for sequential rollout and aggregation across local
markets. SSRN Electronic Journal, 29(5), 944. doi:10.2139/ssrn.1331124
Dogruel, L., & Xiaoming, H. (2016). Movie selection and E-WOM preference: A cross-cultural perspective. International Journal of Communication [Online], 2934+.
Fogel, J., & Zachariah, S. (2017). Intentions to use the Yelp review website and purchase behavior after reading reviews. Journal of Theoretical and Applied Electronic Commerce Research, 12(1), 53-67.
Hennig-Thurau, T., Marchand, A., & Hiller, B. (2012). The relationship between reviewer judgments and motion picture success: Re-analysis and extension. Journal of Cultural Economics, 36(3), 249-283. doi:10.1007
Hudson, S. (2016). Review sites pass muster with film-ticket seller: Fandango acquires Rotten Tomatoes, Flixster in bid to reach moviegoers. Los Angeles Business Journal, 38(8), 5. Retrieved from http://link.galegroup.com/apps/doc/A446521740/ITBC?u=humber&sid=ITB&xid=5f4b272f
Imdb.com Traffic Statistics. (n.d.). Retrieved March 29, 2018, from Alexa website: https://www.alexa.com/siteinfo/imdb.com
Koh, N. S., Hu, N., & Clemons, E. K. (2010). Do online reviews reflect a product’s true perceived
quality? An investigation of online movie reviews across cultures. 2010 43rd Hawaii International
Conference on System Sciences, 9(5), 374-385. doi:10.1109/hicss.2010.154
Moretti, E. (2010). Social learning and peer effects in consumption: evidence from movie sales.
Retrieved from http://www.jstor.org.ezproxy.humber.ca/stable/23015858
Niraj, R., & Singh, J. (2015). Impact of user-generated and professional critics reviews on Bollywood movie success. Australasian Marketing Journal (AMJ), 23(3), 179-187. doi:10.1016/j.ausmj.2015.02.001 Retrieved from http://resolver.scholarsportal.info/resolve/14413582/v23i0003/179_iouacrobms
Palsson, C., Price, J., & Shores, J. (2012). Ratings and revenues: evidence from movie ratings.
Contemporary Economic Policy, 31(1), 13-21. doi:10.1111/j.1465-7287.2012.00315.x
Rottentomatoes.com Traffic Statistics. (n.d.). Retrieved March 29, 2018, from Alexa website:
https://www.alexa.com/siteinfo/rottentomatoes.com
Rui, H., Liu, Y., & Whinston, A. B. (2011). Whose and what chatter matters? The impact of tweets
on movie sales. SSRN Electronic Journal. doi:10.2139/ssrn.1958068
Shepherd, T. (2009). Rotten tomatoes in the field of popular cultural production. Canadian Journal of Film Studies, 18(2), 26-44. Retrieved from http://ezproxy.humber.ca/login?url=https://search-proquest-com.ezproxy.humber.ca/docview/211513030?accountid=11530
Terry, N., Butler, M., & De’Armond, D. A. A. (2004). Critical acclaim and the box office performance of new film releases. Academy of Marketing Studies Journal, 8(1), 61+.
Ullah, R., Zeb, A., & Kim, W. (2015). The impact of emotions on the helpfulness of movie reviews. Journal of Applied Research and Technology, 13(3), 359-363. doi:10.1016/j.jart.2015.02.001 Retrieved from http://resolver.scholarsportal.info/resolve/16656423/v13i0003/359_tioothomr
Wöllmer, M., Weninger, F., & Knaup, T. (2013). YouTube movie reviews: Sentiment analysis in an audio-visual context. IEEE Intelligent Systems, 28(3), 46-53. doi:10.1109/MIS.2013.34
