International Journal of Research in Marketing


journal homepage: www.elsevier.com/locate/ijresmar

Full length article

The risk of programmatic advertising: Effects of website quality on advertising effectiveness

Edlira Shehu a,*, Nadia Abou Nabout b, Michel Clement c

a Department of Marketing, Copenhagen Business School, Solbjerg Plads 3, 2000 Frederiksberg, Denmark
b Department of Marketing, Vienna University of Economics and Business (WU), Welthandelsplatz 1, 1020 Vienna, Austria
c Department of Marketing, University of Hamburg, Moorweidenstrasse 18, 20148 Hamburg, Germany

Article info
Article history: Received 15 January 2020; Available online xxxx
Keywords: Website Quality; Online Advertising Effectiveness; Programmatic Advertising

Abstract
Programmatic advertising is prevalent in online advertising. However, it offers managers limited control over the type of website where the ad appears, resulting in brand safety issues. Aware of the risk that ads may potentially display on websites of poor quality (nonpremium websites), managers have developed strategies to reduce this risk. Due to the lack of empirical insights, these strategies are based on "gut feeling" and depend on campaign type (branding versus performance) and brand type (premium versus nonpremium). Our research addresses this void and analyzes website quality effects for premium and nonpremium brands in branding and performance campaigns. Our results show that effects, indeed, vary depending on campaign and brand type, but not in ways that managers might expect. When a branding ad appears on a nonpremium website, attitudes towards the ad and the brand deteriorate, but only for premium brands. In contrast, website quality does not affect awareness for either type of brand. When a performance ad appears on a nonpremium website, it generates fewer clicks; this effect is stronger for premium brands. Overall, these findings enrich our understanding of the consequences of programmatic advertising and highlight the crucial role of website quality dependent on campaign goal and brand type.
© 2020 Elsevier B.V. All rights reserved.

1. Introduction

In 2019, worldwide digital ad spending reached $333.25 billion, and more than 60% of it was spent on programmatic adver-
tising (eMarketer, 2020). Programmatic advertising is popular due to its high degree of automation, flexibility, and cost ben-
efits over traditional ad buys.
The initial enthusiasm over programmatic advertising has faded because it gives advertisers limited control over the web-
sites on which their ads appear. In 2017, hundreds of brands–including two of the largest advertisers in the U.S., AT&T and
Verizon–even stopped advertising on YouTube and the Google display network (both of which enable programmatic ad
buys), fearing that their ads might appear on websites with inappropriate content (Coffee, 2017). Even three years after
the first YouTube scandal, detrimental effects due to ads appearing on websites of poor quality–the so-called nonpremium
websites–remain one of the major risks of programmatic advertising (Joseph, 2020). Crovitz (2020) reports in the New York
Times that global players have unwittingly been subsidizing Russian government-run propaganda websites (e.g., RT.com)

* Corresponding author.
E-mail addresses: es.marktg@cbs.dk (E. Shehu), nadia.abounabout@wu.ac.at (N. Abou Nabout), michel.clement@uni-hamburg.de (M. Clement).

https://doi.org/10.1016/j.ijresmar.2020.10.004
0167-8116/© 2020 Elsevier B.V. All rights reserved.


through algorithm-based programmatic ad placements. Websites such as RT.com attracted programmatic ads from 477 com-
panies, including companies such as Amazon, PayPal, and Walmart.
In managerial practice, companies usually utilize blacklists to avoid ‘‘no-go areas,” such as adult or extremist websites.
However, the (manual) process requires time and continuous monitoring, making it difficult to translate brand safety into
programmatic algorithms. Even if websites with unethical content could be excluded, the remaining
pool of websites would vary in quality.
Typically, advertisers classify websites into premium and nonpremium, and this classification is essential for advertising
bookings. Premium websites typically come from reputable publishers with high-quality editorial content and superior web-
site design. In Germany, for instance, those publishers are frequently part of the AGOF Top 100 list; its US counterpart is the
ComScore 50 list.1 Examples of premium websites are nytimes.com, espn.com, vogue.com, or nationalgeographic.com. Non-
premium websites comprise websites with user-generated content, content aggregators, gaming platforms, and user forums.
Examples of such websites include allrecipes.com, lifehacker.com, and chegg.com.
Generally, managers classify campaigns as either branding or performance based on their goals–while branding cam-
paigns aim to generate awareness and improve attitudes, performance campaigns focus on generating clicks and eventually
purchases (Hughes, Swaminathan, & Brooks, 2019). They assume that the risk of nonpremium websites is most prevalent
for branding campaigns (independent of brand type) or premium brands (independent of campaign type). However, previous
research in the area of branding and advertising strategies offers limited guidance with respect to website quality effects.
While research on context effects in online advertising (e.g., Shamdasani, Stanaland, & Tan, 2001; Cho, 2003) focuses
mostly on congruency effects between website content and the ad (e.g., an automobile ad on a website with car content),
important insights into how advertising on websites of poor quality (i.e., nonpremium websites) affects advertising effective-
ness are still missing–mirroring the managerial challenge of deciding on programmatic advertising. Yet, understanding web-
site quality effects is becoming even more relevant due to current industry developments. Recently, Google announced plans to limit the use of third-party cookies in its Chrome browser by making "disable third-party cookies" the default setting
(Neumann, 2020).2 Consequently, access to user browsing data, which is a prerequisite for user targeting in programmatic
advertising, is getting increasingly difficult, and industry experts predict that contextual targeting will gain relevance (Schiff,
2019; Tan, 2020). Since website quality is a central but under-researched context dimension, it is vital for advertisers to under-
stand how it affects advertising effectiveness, especially given the current industry developments. Yet, empirical insights are
scarce.
We address this research gap and analyze the effect of website quality (premium and nonpremium) on recall, attitudes,
and clicks for premium and nonpremium brands in two studies. Conceptually, we build on evaluative conditioning, which
explains how evaluations towards a stimulus change when paired with another positive or negative stimulus (De
Houwer, 2007; Landwehr, Golla, & Reber, 2017). In Study 1, we analyze the effects of nonpremium website advertising in
the context of branding campaigns. In collaboration with a globally leading video-sharing platform, we used a design in
which pre-roll video ads were displayed in different premium and nonpremium video channels. In Study 2, we analyze
the effects of nonpremium website advertising in the context of performance campaigns using data from one of the largest
ad exchanges in Europe. In both studies, our universe of premium and nonpremium websites does not contain websites that
distribute unethical content, where negative effects might be obvious (e.g., websites with racist or hateful content). Instead,
we study websites that are commonly part of ad inventories used in programmatic advertising and are sold as premium or
nonpremium. In this setting, it is unclear how effects from nonpremium websites will play out.
This study makes three central academic contributions. First, we add to the stream of research on programmatic advertis-
ing (e.g., Paulson, Luo, & James, 2018; Kim et al., 2014) by analyzing one aspect that is considered a major risk, namely adver-
tising on websites of poor quality. Second, we contribute to the literature on online advertising (Kannan & Li, 2017; Colicev,
Malshe, & Pauwels, 2018; Lobschat, Osinga, & Reinartz, 2017). We provide new insights into research on environmental ad
effects, which has mostly analyzed content effects, e.g., BMW being advertised on an automobile website versus a recipe
website (e.g., Shamdasani, Stanaland, & Tan, 2001). We complement these studies by analyzing how website quality–its
most basic, yet most comprehensive characteristic–affects advertising effectiveness. Third, our findings add to the litera-
ture on advertising effects for premium and nonpremium brands. In a recent study, Guitart, Gonzales & Stremersch
(2018) show that advertising-up (advertising nonpremium products as if they were premium in terms of content) influences
advertising effectiveness of nonpremium brands positively. We study the effects of the advertising-up strategy in terms of
the ad’s environment.
Finally, we provide campaign managers with empirical evidence regarding nonpremium website effects. In their current
practice, managers typically avoid nonpremium websites for branding campaigns (of premium and nonpremium brands) or
premium brands (for branding and performance campaigns). Our results challenge this practice and show that effects
depend on the combination of both dimensions, i.e., brand type and campaign goal.

1 A list of the AGOF top 100 publishers can be found here: https://meedia.de/2020/02/07/die-agof-top-100-der-redaktionellen-onlinemarken-reihenweise-alltime-rekorde-im-januar-aber-der-spiegel-verliert/. A list of the Comscore top 50 publishers can be found here: https://www.comscore.com/Insights/Rankings?country=US
2 This move is disrupting the online advertising industry because the Chrome browser has a market share of more than 60% in many countries and is the dominant browser in some markets, with shares above 80%.


2. Research background

2.1. Programmatic advertising and website quality

Typically, advertisers buy advertising inventory through traditional direct deals or programmatic advertising (Choi, Mela,
Balseiro, & Leary, 2017). In traditional direct deals, advertisers enter one-to-one negotiations to purchase impressions in bulk from single publishers; in programmatic advertising, they buy impressions from ad networks using automated technology.
Programmatic advertising includes both programmatic direct and real-time bidding (RTB) purchases. In programmatic
direct, advertisers rely on automated technology-enabled direct deals where ad buys are mostly guaranteed and prices
are fixed. In RTB, advertisers purchase ads by bidding for single impressions in real time, meaning that ad buys are not guaranteed and, together with price, are the outcome of an auction. Unfortunately, programmatic advertising and, more
specifically, RTB are prone to brand safety issues because ads often run on the whole website inventory of the ad network
(Maunder, 2014). Consequently, ads can appear on premium as well as on nonpremium websites (eMarketer, 2014); both
types of websites are typically bundled together by ad networks. Therefore, campaign managers need to evaluate the risks
of programmatic advertising.
Since YouTube’s ad controversy in 2017 (Coffee, 2017), categorizing the media environment into premium and non-
premium according to quality has gained importance. The industry spends considerable effort evaluating and recognizing
risks related to ad exposure on nonpremium websites and continues to develop strategies to minimize and avoid these risks.
Our interviews with several leading industry experts confirmed that they consider website quality an important determinant
when deciding whether to use programmatic advertising (see Table 1). All interviewed experts stated that potential negative
effects due to ad appearance on nonpremium websites are an important reason why campaign managers prefer direct deals
over programmatic advertising. To minimize potential negative effects, campaign managers have developed workaround
instruments, e.g., campaign-specific blacklists or whitelists. However, developing and maintaining these lists is a complex manual process that does not reliably prevent ads from appearing on nonpremium websites.
Given the lack of empirical insights into the potential negative effects of ad appearances on nonpremium websites, cam-
paign managers execute strategies mainly based on "gut feeling" or experience. Table 1 shows that these strategies differ
depending on campaign goal (branding versus performance) and brand type (premium versus nonpremium). Specifically,
our interviews revealed two different strategies where managers tend to prefer direct deals for premium brands (indepen-
dent of campaign goal; see experts 1, 4, and 5 in Table 1) or branding campaigns (independent of brand type; see experts 2
and 3 in Table 1). Both strategies are based on the expectation that brand safety is more important for premium brands (than
for nonpremium brands) or for branding campaigns (than for performance campaigns). However, current managerial prac-
tice lacks empirical support from marketing science with respect to the outcome on (a) recall, (b) attitudes, and (c) clicks. To
conclude, despite practitioners’ recognition that website quality is a potential risk for programmatic advertising, we know
very little about its effects.

2.2. Literature review

Research shows that the media environment plays an important role in determining advertising effectiveness (Kwon
et al., 2019). Several studies analyze context effects of offline media. For example, Riebe & Dawes (2006) study context effects
measured by ad clutter in a radio advertising setting. Wilson & Casper (2016) analyze the location effects of billboards on
attention, and Wilson and Suh (2018) study ad location effects on recall in shopping malls. The most relevant study for
our setting is the one by De Pelsmacker, Geuens, & Anckaert (2002), which examines how congruency between offline media (TV and print advertising) and the ad influences recall and attitudes. Using lab experiments, they find that congruency between the
medium and the ad improves attitudes towards the ad, and this effect is stronger for high-involvement products. This finding
holds for both TV and print ads. If individuals like the medium’s content, their recall improves for TV ads but not for print
ads; e.g., if ads for a car appear in an automobile-related TV show, recall is higher.
In the online world (see Table 2), researchers have mostly studied the contextual fit between the ad and the website (e.g.,
Cho, 2003; Shamdasani, Stanaland, & Tan, 2001). Findings show that users are more likely to recall, like, or click an ad that
fits the content of the website (Cho, 2003; Yaveroglu & Donthu, 2008). For example, a user is more likely to respond to an ad
featuring smartphones when visiting an electronics retailer website than when browsing a fashion website (Shamdasani,
Stanaland, & Tan, 2001). This positive effect of the congruency between website content and advertised product is stronger
for high-involvement products. In contrast, a positive website reputation becomes more important for ads of low-
involvement products. These findings indicate that website quality may play a role since it can correlate with website rep-
utation. Notably, website reputation is manipulated by a description containing information on the number of years a web-
site is online, its market share, number of employees, revenue, monthly audience, and rankings by Nielsen NetRatings and
Media Metrics. Thus, website reputation is not necessarily the same as website quality. Many nonpremium websites are
online for a very long time and are successful in terms of market share or revenue (e.g., porn, or yellow press websites). Thus,
these findings cannot be directly translated into ad effects of website quality. This, and the fact that the study uses fictitious
websites, limits its validity for website quality effects and real-life campaigns.


Table 1
Interviews with industry experts.

Position | Annual sales in EUR | Q1: Branding goals | Q2: Performance goals | Q3: Website quality | Q4: Branding campaign, premium brand | Q4: Branding campaign, nonpremium brand | Q4: Performance campaign, premium brand | Q4: Performance campaign, nonpremium brand
(1) CEO | >29 m | Awareness, attitudes | Clicks, purchase | P vs NP | P | NP | P | NP
(2) Media director | >26 m | Awareness, attitudes | Clicks, purchase | P vs NP | P | P | NP | NP
(3) Digital venture capital investor | Unknown | Awareness, attitudes | Clicks, purchase | P vs NP | P | P | NP | NP
(4) Head of online advertising | >140 m | Awareness | Clicks | P vs NP | P | NP | P* | NP
(5) Managing partner | >60 m | Awareness, video completion | Clicks, purchase | P vs NP | P | NP | P | NP

Notes:
P = premium, NP = nonpremium.
Question 1: Which goals do you pursue in case of branding campaigns?
Question 2: Which goals do you pursue in case of performance campaigns?
Question 3: What categorization do you use to differentiate between websites of different quality?
Question 4: Where would you run the following campaigns (premium or nonpremium website)?
- Branding campaign of BMW
- Branding campaign of Dacia
- Performance campaign of BMW
- Performance campaign of Dacia
* If the advertiser is willing to carry the additional costs.

Contextual fit is also important in situations where the ad message varies across exposures. In an experimental study,
Moore et al. (2005) show that contextual fit enhances ad liking, but decreases attention. Yaveroglu and Donthu (2008) show
that contextual fit increases intention to click when users are exposed to ads with different creatives.
Notably, studies analyzing the contextual fit between websites and ads do not consider the role of brands. An exception is
the study by Hsieh et al. (2016), who analyze the fit between a commercial (noncommercial) website and a commercial
(noncommercial) brand. They find a positive effect of contextual fit, such that attitudes towards ads of commercial (noncom-
mercial) brands are higher on commercial (noncommercial) websites. However, differences between brands go beyond their
commercial nature. Brands differ substantially with respect to their prestige and brand equity (Keller, 1993), allowing clas-
sification into premium or nonpremium (Guitart, Gonzales, & Stremersch, 2018). Premium brands may be more valuable to a
company, which is reflected in their different advertising strategies (Table 1). This underlines the need for differentiated
findings for premium and nonpremium brands.
Beyond contextual fit, there is only limited research on other media environmental factors. For example, Stevenson, Bruner, and Kumar (2000) find that website complexity influences attitudes towards the ad negatively. Lee and Thorson (2009) find that emotional websites improve attitudes towards ads compared to cognitive ones.
Overall, Table 2 shows that media environmental effects have gained considerable attention in advertising research. How-
ever, it also shows that when it comes to website quality effects, we still know little; the majority of these studies analyze the
contextual fit between the website content and the ad content (i.e., BMW advertising on a website about cars versus BMW
advertising on a website about cooking). Website quality fit has been neglected so far (e.g., BMW advertising on a premium
website, such as The New York Times, versus BMW advertising on a nonpremium website, such as tmz.com). In addition,
existing research does not differentiate between premium and nonpremium brands (e.g., BMW versus Dacia). Lastly, with
few exceptions (Cho, 2003; Andrews et al., 2016), most of the studies use lab experiments and cannot analyze behavioral
advertising effectiveness metrics, such as clicks. Our study addresses all three gaps and expands the body of existing
research.

3. Conceptualization and hypotheses

A brand's value is determined by its awareness and image (Keller, 1993). As we know from the literature on activation (e.g., Meyers-Levy & Tybout, 1989), awareness may increase when a consumer is activated by conflicting signals. Such a dissonant setting could be a premium brand (e.g., Amazon) advertising on a nonpremium website (e.g., RT.com; observed by Crovitz, 2020). If advertising a premium brand on a nonpremium website has no negative image effects, then, along with


Table 2
Overview of selected studies on media environmental effects.

Study | Environmental feature | Data
Andrews et al. (2016) | Location of ad receiver | Field study
Cho (2003) | Website congruency | Lab experiment
Goodrich (2011) | Banner position | Lab experiment
Hsieh et al. (2016) | Website congruency | Lab experiment
Janssens et al. (2012)* | Website congruency | Lab experiment
Lee and Thorson (2009) | Website type (emotional or cognitive) | Lab experiment
Moore et al. (2005) | Website congruency | Lab experiment
Shamdasani et al. (2001) | Website congruency | Lab experiment
Stevenson et al. (2000) | Website complexity (number of items, color, animation) | Lab experiment
Yaveroglu and Donthu (2008)* | Website congruency | Lab experiment
Nelson-Field et al. (2013) | Ad clutter | Natural experiment
This study | Website quality (premium and nonpremium) | Quasi experiment + Observational data

Notes:
* Ad effectiveness measured as intention to click.

lower prices for running programmatic ad campaigns, this logic becomes tempting to follow–especially when empirical sup-
port is missing. In contrast, the evaluative conditioning framework suggests that advertising on nonpremium websites
should impair advertising effectiveness. We build on the evaluative conditioning framework (see Fig. 1) for our
conceptualization.
Human behavior is largely governed by likes and dislikes (Hofmann et al., 2010), which are mostly learned (Rozin &
Millman, 1987). Researchers (De Houwer, 2007; De Houwer et al., 2001) refer to the procedure by which likes and dislikes
are learned as evaluative conditioning. It results from a change in the valence of a stimulus that is paired with another (pos-
itive or negative) stimulus. The first stimulus (in our case the ad) is often referred to as the conditioned stimulus (CS), and the
second one (in our case the website) as the unconditioned stimulus (US; Hofmann et al., 2010). Typically, a CS becomes more
positive (negative) when it is paired with a positive (negative) US. Evaluative conditioning would therefore suggest that con-
sumers should revise their reactions towards ads (CS) as a response to website quality (US). In other words, consumers’ reac-
tions should be negatively influenced when ads are exposed on nonpremium websites.
Interestingly, several studies have reported evaluative conditioning effects on consumer responses even when consumers were not aware of the CS–US contingencies (Hofmann et al., 2010). These studies suggest that recall may remain unaffected. Other
studies, however, have shown that evaluative conditioning occurs only after the participants become aware of the contin-
gency between the CS and the US with which it was paired, providing indications for a recall effect. Building on the latter,
we postulate the following:
H1a: Advertising on nonpremium websites influences ad recall negatively.
H1b: Advertising on nonpremium websites influences ad and brand liking negatively.
H1c: Advertising on nonpremium websites influences clicks negatively.
Finally, Landwehr et al. (2017) suggest that processing fluency (i.e., the subjective ease with which a stimulus is pro-
cessed) moderates the relative magnitude of the pure pairing effect in the overall liking judgment. We expect that processing
fluency is impaired more strongly when premium brands are paired with a negative stimulus (in this case nonpremium web-
sites). We postulate the following:
H2a: The negative effect of nonpremium websites on ad recall is stronger for premium brands.
H2b: The negative effect of nonpremium websites on ad and brand liking is stronger for premium brands.
H2c: The negative effect of nonpremium websites on clicks is stronger for premium brands.


Fig. 1. Conceptual model.

4. Overview of empirical studies

We conducted two studies to investigate the website quality effect on advertising effectiveness. Study 1 analyzes website
quality effects in the context of branding campaigns. We collaborate with a global video sharing platform and analyze how
channel quality affects advertising effectiveness, measured by recall and attitudes, towards premium and nonpremium
brands. This setting is highly relevant since video advertising is the most important form of branding campaigns (IAB, 2019).
In Study 2, we analyze the effect of website quality on performance campaigns using observational data from one of the
largest ad exchanges in Europe, which performs up to 120 million auctions a day and sells impressions from more than 80
websites that together reach approximately 35 million unique users each month.
In both studies, we analyze how advertising on nonpremium websites affects recall and attitudes towards the ad and the
brand (Study 1), as well as clicks (Study 2), and the moderating effect of brand type (premium versus nonpremium). Due to
differences in study design, these effects are not directly comparable. However, our results provide empirical insights into
how and when nonpremium websites can hurt advertising effectiveness and allow us to derive managerial implications
for all four settings described in Table 1.
All premium and nonpremium websites included in these studies are considered brand-safe by marketers; i.e., we do not
use websites with unethical content. Thus, our estimates of the website quality effect are rather conservative. Additionally
and to best reflect current industry practice, we use real-world categorizations of websites in all studies; i.e., the websites we
study are sold as premium and nonpremium to advertisers.

5. Study 1: Effects of website quality on branding campaigns

Study 1 analyzes the effect of premium versus nonpremium channels on recall and attitudes towards the ad and brand.
The study was conducted between February and May 2012 in an online video advertising setting, where the GfK media effi-
ciency panel tracked users’ responses to pre-roll ads shown on a global video-sharing platform. The GfK media efficiency
panel comprises respondents who allow GfK to track their daily surfing behavior. The data includes 5,305 respondents
who were users of the video-sharing platform.

5.1. Design

In our study, we used a between-subject quasi-experimental design. The variable of interest was a video channel’s quality
(i.e., premium versus nonpremium). We relied on the video sharing platform’s existing classification of video channels into
premium and nonpremium, which it uses to sell advertising inventory. Respondents were randomly exposed to ads from 18
brands (13 premium and 5 nonpremium) from different industries (fast-moving consumer goods, online services, travel, and
other). All ads were similar in terms of length and novelty; they were produced by advertising agencies; and none of the
videos contained special elements such as celebrities, babies, or especially humorous plots.
We rely on the rater approach to classify the advertised brands (Lovett, Peres, & Shachar, 2013) and the definition of pre-
mium brands following Guitart, Gonzales, & Stremersch (2018), where a premium brand is defined as "...a brand that delivers superior functional and symbolic value at a higher price compared to other brands in the category." To avoid
contaminating our raters’ brand evaluation, none of the raters saw the video ads. The raters’ consensus served as our brand
categorization (see Web Appendix C for more detail).
When the participants entered the video-sharing platform and searched for content, they were exposed to a pre-roll video
ad. The 18 video ads were randomly integrated into respondents’ real search queries; there was no targeting involved. Each
respondent was later redirected to a short online questionnaire. Based on the channels that a specific respondent searched
for on the platform during the survey, we can identify respondents who only surfed premium channels and those respon-


dents who only surfed nonpremium channels. Our sample contains both respondents who watched the ad and respondents
who skipped the ad or left the channel URL. Accordingly, we avoid selection biases related to respondents’ willingness to
watch online ads in premium or nonpremium channels.
In the survey, participants responded to questions about recall and attitudes towards the ad and brand. Due to the
survey appearing directly after one of the ad exposures, the questions measured immediate reactions (Goldfarb &
Tucker, 2015). We had access to information about the respondents’ user behavior on the video-sharing platform, their
interest in the advertised product category, and sociodemographic variables. Ultimately, 5,305 respondents completed
the questionnaire.
Although we randomized the assignment of the video ads to the respondents,3 respondents were assigned to premium or
nonpremium channels based on their real search queries. An advantage of this setting is that the specific channels were relevant
to the respondents because they had actively searched for them. In addition, it is more realistic than asking respondents to
browse manipulated premium or nonpremium channels in a laboratory environment. Finally, using real search queries allows us to analyze the respondents whom advertising companies would actually reach through premium or nonpremium channels. That said, we account for potential selection bias related to the self-assignment of respondents to premium or nonpremium channels using propensity score matching techniques.

5.2. Data and descriptive statistics

We asked users for their unaided and aided brand recall on a binary scale, where 1 (0) indicates that the respondent
was (not) able to recall the brand advertised in the pre-roll ad. Overall, 31% of users were able to recall the advertised
brand across all brands in our survey (aided recall = 85%), although these capabilities varied widely across users
(unaided recall SD = 0.46, aided recall SD = 0.35). Users’ attitudes towards the ad and brand were measured with ad
and brand liking statements, both measured on 5-point Likert scales (1 = ‘‘not at all,” 5 = ‘‘very much”) with
no-response options. The average ad liking across all brands equaled 3.27 (SD = 0.95), while average brand liking
amounted to 3.36 (SD = 0.80).
The data from the GfK panel contained user demographics. The average respondent in our sample was 36.56 years old and
part of a household with 2.56 members. Half were women, 58% of the sample had graduated from high school, and 31% had a
university degree. 57% of the users were responsible for purchase decisions within their household. On average, users
encountered the featured video ad 5.02 times. We also measured respondents’ interest in the advertised product category
(1 = ‘‘not at all,” 5 = ‘‘very much”; M = 3.51, SD = 1.40).
Given this setup, we controlled for self-selection bias related to channel quality using propensity score matching (PSM)
with all user characteristics available in the data.

5.3. Propensity score matching

We apply PSM to control for self-selection bias of respondents to the premium or nonpremium groups. By applying PSM,
we obtain data that mimics randomized field experiments (Andrews, Luo, Fang, & Ghose, 2016). Although users searching for
premium channels may have different attitudes towards advertising than users searching for nonpremium channels, PSM
ensures that differences in advertising effectiveness outcomes related to available characteristics are minimized
(Heckman, Ichimura, & Todd, 1998; Rubin & Thomas, 1996).
PSM consists of two stages. First, we calculate the propensity score to match users across groups using a logistic
regression. A propensity score is the nonnormalized probability that users search for nonpremium channels (pseudo
treatment group) rather than premium channels (pseudo control group), given their observable characteristics. We
estimated the logistic regressions for (unaided and aided) brand recall, ad liking, and brand liking separately
because the number of observations varied due to a no-response option in the survey. Because we aim to show
differential effects for premium and nonpremium brands, we apply group specific matching (Caliendo, Clement,
Papies & Scheel-Kopeinig, 2012).
The covariates in Table 3 serve as independent variables in all model specifications. We use sociodemographic character-
istics and usage characteristics on the video platform during the month prior to our study. Demographics, such as age, edu-
cation, and household status, may affect surfing behavior, and consequently the propensity towards visiting premium or
nonpremium channels. In addition, participants who spend more time online or use the video platform more intensively
may be more knowledgeable about specific niche and nonpremium channels, and have a higher propensity to visit these
channels. We also account for the interest in the advertised product category since it varies significantly between both
groups. Table 3 shows that category interest is significantly higher for respondents who watch only premium channels
(M = 4.26, SD = 1.16 for premium versus M = 3.47, SD = 1.40 for nonpremium). We control for the number of total video
ad impressions before completing the survey, since it differs between groups (Table 3). Finally, premium/nonpremium users
may have different experiences regarding pre-roll ads because these are primarily displayed on premium channels, which
may affect users’ attitudes towards ads in general.

3
In the event that the user saw multiple ads over time before taking the survey, the same ad was always displayed to them.


Table 3
Descriptive statistics (Study 1).

Variable | Total: N, Mean, SD | Premium channel: N, Mean, SD | Nonpremium channel: N, Mean, SD | Group comparison: t-value
Unaided brand recall (binary) 5,305 0.31 0.46 211 0.31 0.46 5,094 0.31 0.46 0.16
Aided brand recall (binary) 5,305 0.85 0.35 211 0.81 0.40 5,094 0.86 0.35 11.59 ***
Ad liking (1 to 5) 5,147 3.27 0.95 205 3.45 0.90 4,942 3.26 0.95 2.72 ***
Brand liking (1 to 5) 5,167 3.36 0.80 206 3.50 0.77 4,961 3.36 0.80 2.06 **
School degree (binary) 5,305 0.58 0.49 211 0.56 0.50 5,094 0.58 0.49 0.41
Academic degree (binary) 5,305 0.31 0.46 211 0.29 0.46 5,094 0.31 0.46 0.50
Household size 5,305 2.56 1.25 211 2.49 1.20 5,094 2.57 1.25 0.92
Responsible for purchase decisions (y) 5,305 0.57 0.49 211 0.58 0.49 5,094 0.57 0.49 0.29
Gender (female, binary) 5,305 0.51 0.50 211 0.45 0.50 5,094 0.51 0.50 1.48
Age 5,305 36.56 11.76 211 36.8 12.07 5,094 36.55 11.75 0.30
Number of video platform visits+ 5,305 1.79 0.41 211 1.87 0.32 5,094 1.78 0.40 3.08 ***
Number of video platform queries+ 5,305 76.83 141.04 211 78.68 128.36 5,094 76.76 141.54 0.19
Time spent online+ 5,305 3,070.03 3,347.02 211 3,337.65 3,374.08 5,094 3,058.94 3,345.76 1.18
Time spent on video platform+ 5,305 118.77 314.78 211 132.94 339.24 5,094 118.19 313.75 0.68
Category interest (1 to 5) 5,305 3.51 1.4 211 4.26 1.16 5,094 3.47 1.4 8.05 ***
Number of contacts before participation 5,305 5.02 4.64 211 2.82 3.28 5,094 5.11 4.67 7.03 ***

Notes:
*** p < 0.01, ** p < 0.05, * p < 0.1, n.s. p ≥ 0.1.
+ Index built by the video-sharing platform to disguise real surfing behavior.
The number of observations varied slightly across brand recall, ad liking, and brand liking due to a no-response option; missing values are excluded casewise.

Next, we matched respondents’ propensity scores and compared similar users in the pseudo treatment and pseudo con-
trol groups. We applied the nearest neighbor algorithm (Smith & Todd, 2005) and tested the robustness of the results
towards other PSM algorithms (see also section on robustness). We use the common support option and include only respon-
dents whose propensity scores for surfing premium or nonpremium channels overlap to a specific degree. Thus, we can
reduce group differences (Mithas & Krishnan, 2009). This is confirmed by the distributions of the propensity scores before
and after matching for brand recall, ad liking, and brand liking (see Fig. A1 in Web Appendix A).
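To make the matching step concrete, the sketch below illustrates the general two-stage PSM logic described above (propensity score estimation via logistic regression, then nearest-neighbor matching with common support). It is a minimal sketch, not the authors' code: the DataFrame `df`, the group indicator `nonpremium_channel`, and the covariate names are hypothetical placeholders standing in for the variables listed in Table 3.

```python
# Minimal sketch of the two-stage PSM logic described above (not the authors' code).
# Assumes a pandas DataFrame `df` with a binary group indicator `nonpremium_channel`
# (pseudo treatment = 1) and covariates analogous to those in Table 3 (placeholder names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.neighbors import NearestNeighbors

covariates = ["school_degree", "academic_degree", "household_size", "purchase_resp",
              "female", "age", "platform_visits", "platform_queries", "time_online",
              "time_on_platform", "category_interest", "n_contacts"]  # placeholder names

# Stage 1: propensity score from a logistic regression of group membership on covariates.
formula = "nonpremium_channel ~ " + " + ".join(covariates)
ps_model = smf.logit(formula, data=df).fit(disp=False)
df["pscore"] = ps_model.predict(df)

# Common support: keep only units whose scores lie in the overlapping range of both groups.
treated = df[df["nonpremium_channel"] == 1]
control = df[df["nonpremium_channel"] == 0]
lo = max(treated["pscore"].min(), control["pscore"].min())
hi = min(treated["pscore"].max(), control["pscore"].max())
treated = treated[treated["pscore"].between(lo, hi)]
control = control[control["pscore"].between(lo, hi)]

# Stage 2: nearest-neighbor matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Outcome comparison in the matched sample (e.g., ad liking), analogous to Table 4.
att = treated["ad_liking"].mean() - matched_control["ad_liking"].mean()
print(f"ATT estimate for ad liking: {att:.3f}")
```

The actual estimations use group-specific matching and several alternative algorithms (see the robustness section); the sketch only conveys the core mechanics.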

5.4. Quality of propensity-score matching

We present the results of the logistic regressions in Tables A1 and A2 of Web Appendix A. After matching, systematic
differences in the distribution of the independent variables are lower, which is reflected in substantially lower pseudo-R2
values post matching (Sianesi, 2004). We compared the mean standardized bias across all independent variables in the
matched and unmatched samples (Rosenbaum & Rubin, 1985). After matching, the mean standardized bias across all
independent variables is substantially lower (between 48% and 88% lower than pre-matching). These results indicate that
we are able to reduce systematic differences between the groups with our PSM models.
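The balance diagnostic used here, the mean standardized bias of Rosenbaum & Rubin (1985), can be computed as in the following hedged sketch, which reuses the placeholder names from the PSM sketch above and is not the authors' implementation.

```python
import numpy as np
import pandas as pd

def standardized_bias(treated: pd.Series, control: pd.Series) -> float:
    """Standardized bias in percent (Rosenbaum & Rubin, 1985):
    100 * (mean_T - mean_C) / sqrt((var_T + var_C) / 2)."""
    pooled_sd = np.sqrt((treated.var() + control.var()) / 2.0)
    return 100.0 * (treated.mean() - control.mean()) / pooled_sd

def mean_standardized_bias(df_t: pd.DataFrame, df_c: pd.DataFrame, covariates) -> float:
    # Average absolute bias across all covariates; compare before vs. after matching.
    return float(np.mean([abs(standardized_bias(df_t[c], df_c[c])) for c in covariates]))

# Example usage with the objects from the PSM sketch above:
# print(mean_standardized_bias(treated, control, covariates))          # unmatched sample
# print(mean_standardized_bias(treated, matched_control, covariates))  # matched sample
```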

5.5. Main results

We did not find any significant differences in unaided brand recall for premium versus nonpremium channels in our analysis (Table 4, Panel A). Before PSM, aided brand recall differed significantly between premium and nonpremium channels for premium brands, but the post-PSM results indicate that this difference was a statistical artifact stemming from systematic group differences.
Regarding attitudes towards the ad and brand, results show an increase in liking from advertising on premium channels only for premium brands; nonpremium brands did not significantly benefit from advertising on premium channels (see Table 4, Panel B). For premium brands, the average ad liking (matched) is significantly higher for premium channels (MPremium = 3.664) than for nonpremium channels (MNonpremium = 3.285; p < 0.01). Similarly, brand liking is significantly higher for premium channels (MPremium = 3.729) than for nonpremium channels (MNonpremium = 3.383; p < 0.01). We also calculate the average treatment effect on the treated (ATT; see Table A2 in Web Appendix A).
These results provide support for H1b and H2b. We find evidence that premium brands are hurt by nonpremium websites regarding attitudes, but the same does not hold for nonpremium brands. We do not see detrimental effects of website quality on recall for either premium or nonpremium brands, which opposes our predictions in H1a and H2a. Thus, for premium brands, advertising on nonpremium channels may be a viable strategy if they aim to increase awareness (due to wider reach with the same budget), but it can be risky if the objective is to increase ad or brand liking. The same is not true for nonpremium brands, for which we do not see empirical evidence of detrimental effects of nonpremium website advertising.


Table 4
Effects of nonpremium channel quality on brand recall and attitudes (Study 1).

Panel A | Unaided brand recall: Nonpremium channel, Premium channel, se, t | Aided brand recall: Nonpremium channel, Premium channel, se, t
Premium brand Unmatched 0.319 0.307 0.042 0.290 0.851 0.764 0.032 2.690 ***
Matched 0.323 0.358 0.064 0.540 0.849 0.774 0.058 1.300
N 3,565 127 3,565 127
off support 289 289
Nonpremium brand Unmatched 0.271 0.321 0.050 1.000 0.870 0.869 0.038 0.030
Matched 0.282 0.323 0.076 0.530 0.877 0.854 0.055 0.410
N 1,144 84 1,144 84
off support 96 96
Panel B | Ad liking: Nonpremium channel, Premium channel, se, t | Brand liking: Nonpremium channel, Premium channel, se, t
Premium brand Unmatched 3.265 3.480 0.083 2.590 *** 3.367 3.484 0.074 1.570
Matched 3.285 3.664 0.129 2.930 *** 3.383 3.729 0.112 3.100 ***
N 3,475 123 3,743 124
off support 269 279
Nonpremium brand Unmatched 3.264 3.402 0.122 1.130 3.329 3.512 0.088 2.080 ***
Matched 3.255 3.315 0.170 0.350 3.373 3.414 0.134 0.310
N 1,125 82 193 92
off support 73 1,016

*** p < 0.01, ** p < 0.05, * p < 0.1, n.s. p ≥ 0.1.

5.6. Robustness of results

We conduct three types of robustness checks. First, we test whether our results are robust towards other types of match-
ing algorithms. We replicate all PSM estimations by using the following algorithms: radius, Mahalanobis distance, nearest
neighbor with caliper, and kernel matching. The results show that, for each algorithm, the results remain consistent with
those displayed in Table 4 (see Tables A3 and A4 in Web Appendix A).
Next, we test the robustness of our results across industries. The brands in our study pertain to four industries: FMCG,
online services, travel, and other. We include industry fixed effects with ‘‘other” being the baseline. Groups are unbalanced
across industries since two industries do not contain nonpremium brands. Even after controlling for the industry effects, the
results remain robust and consistent with the main results from Table 4 (Tables A5 and A6 in Web Appendix A show the
results).
Finally, we test the robustness of our results towards potential unobservable variables that are not part of our PSM mod-
els, and apply the Rosenbaum sensitivity test (Rosenbaum, 2002). In our PSM models, we match the groups using character-
istics available in our data. While this set of characteristics corresponds to what companies are usually able to observe–and
even goes the extra step of capturing category interest as well as user demographics–it is possible that missing attitudinal
variables influence group assignment. The Rosenbaum test relies on the sensitivity parameter Γ, which measures how unobserved factors may change the odds of users being assigned to the premium/nonpremium group, and tests the robustness of the PSM results towards omitted unobserved variable bias. We run the test for both ad and brand liking in relation to premium brands (see Web Appendix A, Table A7 for details). The test assesses the influence of unobserved variables by varying the sensitivity parameter Γ and examining the robustness of the PSM results. For Γ = 1, the effect would be significant to a similar degree as in our results from Table 4. Even high levels of Γ do not change the significance of our results. Specifically, for ad liking, the results remain significant for Γ = 1.87 (p < 0.05) and for Γ = 1.89 (p < 0.10). For brand liking, the results are significant for Γ = 1.74 (p < 0.05) and for Γ = 1.76 (p < 0.10). This implies that to attribute the higher ad and brand liking to unobserved variables rather than channel quality, these unobserved variables would need to increase the odds of a user being assigned to the nonpremium group by 87% and 74%, respectively.
The Rosenbaum bounds are higher for both ad liking (1.95, p < 0.05 and 2.00, p < 0.10) and brand liking (2.00, p < 0.05 and 2.05, p < 0.10) when we include quadratic terms for the number of total ad impressions, time spent online, time spent on the video platform, and interest in the advertised product category within the logit model specifications. However, for the model specification with these quadratic effects, the mean standardized bias post-matching also increases. Considering this trade-off, and because all results are consistent in both model specifications with and without quadratic effects, we present the model specification without the squared variables as our main model.
To summarize, our robustness tests show that our results are robust towards the applied PSM algorithm, and they show
good robustness towards unobservable omitted variables. Therefore, the robustness tests suggest that the effect of channel
quality goes beyond user differences in premium and nonpremium channels, and that nonpremium channels affect con-
sumers’ perception of the ad.


6. Study 2: Effects of website quality on performance campaigns

6.1. Data and descriptive statistics

Study 2 investigates the effect of premium versus nonpremium websites on click behavior for various brands. We use
observational data from one of the largest ad exchanges in Europe, which performs up to 120 million auctions a day and sells
impressions from more than 80 websites that together reach approximately 35 million unique users each month. Our sample
covers more than 350 million impressions of display advertising across 14 brands (eight premium versus six nonpremium)
operating in seven industries (banking, insurance, dating, fashion, lottery and betting, shoes, and travel). The ads were dis-
played on 84 websites (41 premium versus 43 nonpremium) of varying size (in terms of traffic). The data set, aggregated to
the daily level, contains the total number of ad impressions and total clicks per brand, website, and banner format. Finally,
we have information about the size of a website, calculated as the total number of ad impressions sold per website during
May 7–13, 2015.
All websites were classified into premium and nonpremium by three industry experts (a head of media planning, a media
planner, and a trader in RTB auctions) who were familiar with the websites and had purchased media from the ad exchange
in the past. In addition, we classify brands as premium and nonpremium using a rater approach (Lovett, Peres, & Shachar,
2013). Four independent raters coded the brands as either premium or nonpremium (Guitart, Gonzales, & Stremersch,
2018). The raters’ consensus serves as our brand categorization for our analyses (see Web Appendix C for more detail).
We use the number of clicks as the dependent variable, in line with previous research (Dinner, van Heerde, & Neslin,
2014). The average number of daily impressions bought by a brand on a specific website was 11,000. Each brand generated
1.87 clicks per website per day, on average (maximum of 476 clicks), resulting in an average unweighted CTR per brand and
website4 of approximately 0.10%. Finally, the sample includes five banner formats (skyscraper 120 × 600 pixels; wide skyscraper 160 × 600 pixels; wide skyscraper alternative 200 × 600 pixels; medium rectangle 300 × 250 pixels; and leaderboard 728 × 90 pixels).
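The unweighted CTR mentioned above (see footnote 4) can be retraced along the following lines. This is a hedged sketch: the DataFrame `ads` and its column names (brand, website, date, impressions, clicks) are hypothetical placeholders, not the ad exchange's actual schema.

```python
# Hedged sketch of the unweighted CTR calculation (see footnote 4), not the authors' code.
# Assumes a DataFrame `ads` with columns: brand, website, date, impressions, clicks.
import pandas as pd

daily = ads.groupby(["brand", "website", "date"], as_index=False)[["impressions", "clicks"]].sum()
daily["ctr"] = daily["clicks"] / daily["impressions"]

# Unweighted average: every brand-website-day combination counts equally,
# regardless of how many impressions it contains.
unweighted_ctr = daily["ctr"].mean()
print(f"Average unweighted CTR: {unweighted_ctr:.4%}")
```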

6.2. Model

To analyze how website quality affects click behavior, we specified a linear regression. Our variable of interest is the qual-
ity of the website w (i.e., premium = 0; nonpremium = 1). We controlled for the number of impressions bought by brand b on
website w, for banner format f at time t, as well as the website size. We also included day of the week and week of the year
fixed effects, indicator variables for banner format f, and industry effects. Furthermore, we control for time-invariant, unob-
served brand characteristics by brand-level random intercepts capturing potential differences in media buying behavior
across brands. Formally,
\[
\text{Clicks}_{wbft} = \alpha + \beta_1\,\text{WebsiteQuality}_w + \beta_2\,\text{PremiumBrand}_b + \beta_3\,\text{WebsiteQuality}_w \times \text{PremiumBrand}_b + \beta_4\,\text{WebsiteSize}_w + \beta_5\,\text{ImpsBought}_{wbft} + \sum_{f=1}^{F}\gamma_f\,\text{BannerFormat}_f + \sum_{i=1}^{I}\varphi_i\,\text{Week}_i + \sum_{j=1}^{J}\varphi_j\,\text{DayOfWeek}_j + \sum_{k=1}^{K}\theta_k\,\text{Industry}_k + u_b + \varepsilon_{wbft} \tag{1}
\]
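Before turning to the endogeneity correction discussed next, a model of this form (a linear regression with brand-level random intercepts) could, for instance, be estimated with statsmodels' MixedLM. The snippet below is a sketch under assumed column names (e.g., `nonpremium_website`, `premium_brand`), not the authors' estimation code.

```python
# Sketch of Eq. (1): linear model with brand-level random intercepts (assumed column names).
import statsmodels.formula.api as smf

formula = (
    "clicks ~ nonpremium_website * premium_brand + website_size + imps_bought"
    " + C(banner_format) + C(week) + C(day_of_week) + C(industry)"
)
# groups=... adds the random intercept u_b for each brand.
eq1 = smf.mixedlm(formula, data=df, groups=df["brand"]).fit()
print(eq1.summary())
```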
There may be a reverse causal effect of clicks on website quality because advertisers may decide to place their most
promising ads, which are expected to generate a higher number of clicks, on premium websites. A Hausman-Wu test indi-
cates that there may be endogeneity issues with the nonpremium dummy (χ²(1) = 3.73; p = 0.05). We correct for this reverse
causality by applying a control function approach (Petrin & Train, 2010). The idea behind the control function correction is to
derive a proxy variable that conditions on that part of an endogenous variable that depends on the idiosyncratic error term.
By this, the remaining variation in the endogenous variable will be independent of the error term. To this end, we first esti-
mate an auxiliary probit model with website quality as a dependent variable, the control variables from Eq. (1) and an instru-
ment (which is not part of Eq. (1)) as independent variables. As always with endogeneity, the selection of instruments is an
issue. Two common approaches for defining instrumental variables are lagged values of the endogenous variable (Ailawadi,
Pauwels & Steenkamp, 2008), or using marketing variables from similar but different markets or categories (e.g., Van Heerde
et al., 2013; Dinner et al., 2014). We decided against using lagged values of the nonpremium dummy because of the sus-
pected presence of autocorrelation. Instead, we use, for each brand, the number of ads of all other brands displayed on pre-
mium websites on a specific day t, excluding ads by the brand itself. Our instrumental variable satisfies the exclusion restriction: with one exception (both brands in the dating industry are premium), the other premium or nonpremium brands are from product categories different from that of the focal brand. Therefore, their advertising strategies are unlikely to influence clicks of the focal brand and will thus not be correlated with the error term of the main model (Eq. (1)). We estimate:

4 We use an unweighted average, such that we first calculate CTR for each brand-website-day combination and then take the mean across all combinations. Thus, websites with smaller impression counts have the same weight as those with larger impression counts.


\[
\text{WebsiteQuality}_w = \alpha + \beta_1\,\text{NumberAds}_t + \beta_2\,\text{WebsiteSize}_w + \xi\,\text{ImpsBought}_{wbft} + \sum_{f=1}^{F}\gamma_f\,\text{BannerFormat}_f + \sum_{i=1}^{I}\varphi_i\,\text{Week}_i + \sum_{j=1}^{J}\varphi_j\,\text{DayOfWeek}_j + \sum_{k=1}^{K}\theta_k\,\text{Industry}_k + \varepsilon_w \tag{2}
\]

We test the strength of our instrumental variable (Greene, 2000, p. 360; Dinner et al., 2014). To this end, we run a probit regression and regress the endogenous regressor (i.e., the nonpremium dummy) against the exogenous variables in the model. Next, we add the instrumental variable and conduct a likelihood ratio test to compare the goodness-of-fit between both models. The null hypothesis is that the instrumental variable does not add explanatory power. The results show that adding the instrumental variable to the equation significantly improves model fit (LR χ²(1) = 21.93, p < 0.01). Based on these results, we are confident that our instrumental variable is suitable and fulfills the exclusion criterion.
Finally, we predict the residuals from Eq. (2) and include this control function term as an additional regressor in Eq. (1).
This way we control for the unobserved variation, which may make the website quality variable endogenous (Petrin & Train,
2010).
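The control function steps (the probit first stage of Eq. (2), the instrument-strength test, and the residual term added to Eq. (1)) could be retraced as in the hedged sketch below. Column names are assumptions (e.g., `number_ads_others` for the instrument), and the sketch uses simple response residuals from the first stage because the paper does not spell out the exact residual definition; it is not the authors' code.

```python
# Sketch of the control function approach (Petrin & Train, 2010) with assumed column names.
import statsmodels.formula.api as smf
from scipy import stats

exog = ("website_size + imps_bought + C(banner_format) + C(week)"
        " + C(day_of_week) + C(industry)")

# Eq. (2): probit of website quality on the instrument (other brands' premium ads) + controls.
first_stage = smf.probit(f"nonpremium_website ~ number_ads_others + {exog}",
                         data=df).fit(disp=False)

# Instrument strength: likelihood ratio test against the probit without the instrument.
restricted = smf.probit(f"nonpremium_website ~ {exog}", data=df).fit(disp=False)
lr_stat = 2 * (first_stage.llf - restricted.llf)
print("LR chi2(1) =", round(lr_stat, 2), "p =", stats.chi2.sf(lr_stat, df=1))

# Control function term: first-stage residuals (here: response residuals) added to Eq. (1).
df["cf_residual"] = df["nonpremium_website"] - first_stage.predict(df)
eq1_cf = smf.mixedlm(
    f"clicks ~ nonpremium_website * premium_brand + cf_residual + {exog}",
    data=df, groups=df["brand"]).fit()
print(eq1_cf.summary())
```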

6.3. Main results

Table 5 contains the results of the linear regression model. We see that nonpremium websites show a negative direct effect (b = −0.53, se = 0.18, p < 0.01). This effect is stronger for premium brands: the interaction between website quality and premium brand is negative and significant (b = −1.33, se = 0.21, p < 0.01). These effects are in line with H1c and H2c. Thus, poor website quality harms advertising effectiveness more for premium brands. We calculate the marginal effects of this interaction, keeping all other variables from Eq. (1) at sample means (see Fig. B1 in Web Appendix B for a visualization). The results show that premium brands generate 64% (= (1.060 − 2.926)/2.926) fewer clicks when advertised on nonpremium versus premium websites. In contrast, nonpremium brands generate only 31% (= (1.193 − 1.727)/1.727) fewer clicks when advertised on nonpremium versus premium websites. These results suggest that, contrary to current managerial practice, managers should also consider negative effects of nonpremium websites for performance campaigns. These negative effects emerge for both premium and nonpremium brands, although premium brands suffer more.
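The relative click losses reported above follow directly from the predicted click counts (values as stated in the text and visualized in Fig. B1); the arithmetic is simply:

```python
# Relative click loss computed from the predicted click counts reported in the text.
premium_brand = (1.060 - 2.926) / 2.926      # ≈ -0.64, i.e., 64% fewer clicks
nonpremium_brand = (1.193 - 1.727) / 1.727   # ≈ -0.31, i.e., 31% fewer clicks
print(f"Premium brand: {premium_brand:.0%}, nonpremium brand: {nonpremium_brand:.0%}")
```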
Regarding the control variables, we see that a higher number of impressions bought by a brand and a larger website size
increase the number of generated clicks. In addition, the control function component for nonpremium websites is significant,
showing the need for endogeneity correction. Lastly, we see some significant industry effects that reveal the need to include
industry fixed effects in our model.

6.4. Robustness of results

First, we estimate a model without random intercepts at the brand level to test the robustness of the effects when brand-
specific heterogeneity is not accounted for (see Table B1 in Web Appendix B). All focal results remain consistent in terms of
direction, size, and significance. In addition, because our dependent variable (number of clicks) is a count variable, we
applied Poisson regression analyses to test the robustness of our results against a potential overdispersion of the dependent
variable. We estimate two Poisson regression models, with and without random intercepts. All focal effects remain consis-
tent in terms of size and significance (see Web Appendix B, Table B2). Overall, the study offers support for a negative website
quality effect on clicks and shows that these effects are stronger for premium brands.
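The Poisson robustness check could be sketched as follows, again under assumed column names; only the fixed-effects variant is shown here, since the random-intercept Poisson specification requires a mixed count-data estimator that we do not reproduce. This is a sketch of the general approach, not the authors' code.

```python
# Sketch of the Poisson robustness check without random intercepts (assumed column names).
import statsmodels.api as sm
import statsmodels.formula.api as smf

poisson_fit = smf.glm(
    "clicks ~ nonpremium_website * premium_brand + cf_residual + website_size"
    " + imps_bought + C(banner_format) + C(week) + C(day_of_week) + C(industry)",
    data=df, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
```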

7. Discussion

7.1. General discussion

With programmatic advertising and advertising automation becoming increasingly relevant in online advertising, managers need empirical insights to evaluate its risks and make informed decisions on whether to use programmatic advertising or rely on traditional direct deals. One of the major limitations of programmatic advertising is that managers have limited control over where their ads appear, leading to appearances on nonpremium websites and brand safety problems. Empirical insights into website quality effects are lacking, so marketers are guided by gut feeling when deciding whether to utilize programmatic advertising.
This paper analyzes how advertising on nonpremium websites influences recall, attitudes, and clicks for premium and
nonpremium brands in two empirical studies. Our research thereby extends the literature on programmatic advertising
(e.g., Paulson, Luo, & James, 2018; Kim et al., 2014) by analyzing one aspect that is considered a major risk, namely adver-
tising on websites of poor quality. Extant research regarding environmental effects on advertising effectiveness has mostly
analyzed content effects, e.g., BMW being advertised on an automobile website versus a recipe website (e.g., Shamdasani,
Stanaland, & Tan, 2001). We complement these studies by analyzing how website quality–its most basic yet most compre-
hensive characteristic–affects advertising effectiveness. Relying on the evaluative conditioning framework, we demonstrate


Table 5
Effect of nonpremium website quality on clicks (Study 2).

DV: Number of clicks | b (se) | p-value
Website quality (nonpremium effect) | −0.53 (0.18) | 0.00 ***
Premium brand | 1.20 (0.18) | 0.00 ***
Website quality × Premium brand | −1.33 (0.21) | 0.00 ***
Impressions bought (in thousands) | 8.54 (0.05) | 0.00 ***
Website size (in thousands) | 0.80 (0.06) | 0.00 ***
Banner format
  120 × 600 | 0.92 (1.14) | 0.42
  160 × 600 | 0.97 (0.12) | 0.00 ***
  200 × 600 | 8.42 (0.55) | 0.00 ***
  300 × 250 | 0.10 (0.13) | 0.44
Week of year
  15 | 0.11 (0.16) | 0.51
  16 | 0.20 (0.17) | 0.22
  17 | 0.31 (0.16) | 0.06 *
  18 | 0.23 (0.19) | 0.22
Industry
  Insurance | 1.49 (0.20) | 0.00 ***
  Dating | 1.54 (0.21) | 0.00 ***
  Apparel | 0.23 (0.20) | 0.24
  Lottery | 0.43 (0.26) | 0.10
  Shoes | 0.04 (0.18) | 0.80
  Travel | 0.34 (0.23) | 0.14
Day of week
  1 | 0.22 (0.19) | 0.24
  2 | 0.29 (0.19) | 0.12
  3 | 0.33 (0.19) | 0.08 *
  4 | 0.64 (0.19) | 0.01 **
  5 | 0.54 (0.18) | 0.00 ***
  6 | 0.18 (0.17) | 0.29
Control function residual | 6.32 (0.76) | 0.00 ***
Intercept | 0.28 (0.28) | 0.32
N | 32,744
Overall R2 | 0.48

Notes: *** p < 0.01, ** p < 0.05, * p < 0.1, n.s. p ≥ 0.1.
The baseline is a premium website and a leaderboard banner format (728 × 90).

that nonpremium websites can show detrimental effects, and that these effects depend on campaign goal and brand type.
Specifically, website quality influences affective (i.e., attitudes towards ad and brand) and behavioral (i.e., clicks) outcomes,
but not cognitive consumer reactions (i.e., brand recall). These findings enhance our understanding of online advertising
12
E. Shehu, N. Abou Nabout and M. Clement International Journal of Research in Marketing xxx (xxxx) xxx

Table 6
Implications.

Premium brand Nonpremium brand


Branding Current industry Website quality relevant Depending on strategy, some managers consider
campaigns practice ? No programmatic website quality relevant, others not
? Mixed use of programmatic*
Our advice based on Website quality effects depend on campaign goal: Website quality not relevant
empirical evidence Attitudes ? No programmatic ? Programmatic
Recall ? Programmatic
Performance Current industry Depending on strategy, some managers consider Website quality not relevant
campaigns practice website quality relevant, others not ? Programmatic
? Mixed use of programmatic*
Our advice based on Website quality relevant Website quality relevant
empirical evidence ? Programmatic, only if cost advantage outweighs ? Programmatic, only if cost advantage outweighs
negative click effect negative click effect

Note: * Mixed use of programmatic means that one of the following two strategies is applied:
Strategy 1: Website quality relevant for branding campaigns (independent of brand type).
Strategy 2: Website quality relevant for premium brands (independent of campaign type).

effectiveness for premium and nonpremium brands. Furthermore, these findings complement findings on evaluative condi-
tioning (Landwehr, Golla, & Reber 2017): Our findings suggest that, when it comes to website quality, recall is not necessary
for an attitude and behavioral effect to emerge.

7.2. Implications

Our findings have substantial implications for marketing practice, especially in light of Google's recent announcement to limit the use of third-party cookies in its Chrome browser by making "disable third-party cookies" the default setting (Neumann, 2020). Under this policy, user targeting will become more difficult, and contextual targeting will gain relevance (Schiff, 2019; Tan, 2020). Since website quality is a central but under-researched context dimension, it is vital for advertisers to understand how it affects advertising effectiveness.
Our results quantify the risk of advertising on nonpremium websites and allow us to derive implications for the use of programmatic advertising. Until now, these insights have been missing, and managerial practice has mostly been guided by rule-based strategies. Table 6 summarizes these current strategies and shows how our findings challenge them.
Combining campaign goal and brand type yields four conditions: branding and performance campaigns for premium and nonpremium brands. Our findings challenge current practice in each of these four combinations.
Branding campaigns for premium brands. The dominant managerial strategy is to advertise premium brands on premium websites in branding campaigns (Battelle, 2014; Maunder, 2014). Our findings from Study 1 paint a more nuanced picture: While we confirm the rationale behind this strategy for ad and brand liking, we find no recall effect. Hence, managers aiming to foster attitudes towards their brands should be cautious when exploring programmatic advertising and RTB, as it may backfire due to uncertain website quality. However, for campaigns that solely aim to generate awareness, e.g., when entering a new market, advertising on nonpremium websites could be an option. Here, programmatic advertising may be cheaper than, and just as effective as, advertising exclusively on premium websites.
Branding campaigns for nonpremium brands. For this setting, we see two alternative managerial strategies in current practice. Some managers consider website quality less relevant for nonpremium brands, implying the use of programmatic advertising for branding campaigns of nonpremium brands (Strategy 2). Others seem to avoid nonpremium websites for all branding campaigns (Strategy 1), meaning that they are wary of using programmatic advertising in a branding setting, even for nonpremium brands. Results from Study 1 do not show any negative effects on attitudes or recall. These results suggest that programmatic advertising can be used for branding campaigns of nonpremium brands, contradicting the managerial belief that programmatic advertising is unsuitable for any type of branding campaign.
Performance campaigns for premium brands. Here, we again see two dominant managerial strategies. Some managers consider website quality less relevant for performance campaigns (Strategy 1), such that programmatic advertising is used in performance campaigns for premium brands. Others avoid nonpremium websites and programmatic advertising for premium brands (Strategy 2), independent of campaign type. Our results show that advertising on nonpremium websites leads to fewer clicks. This finding challenges the belief that website quality does not matter for performance campaigns and suggests that managers need to weigh the upside of advertising on nonpremium websites (lower costs) against its downside (fewer clicks) before making advertising decisions.
Performance campaigns for nonpremium brands. Similarly, we find negative effects of nonpremium website advertising on clicks for nonpremium brands, challenging the current managerial practice that treats website quality as irrelevant. Our findings suggest that nonpremium brands could benefit from an advertising-up strategy in which they are advertised on premium websites (Guitart, Gonzales, & Stremersch, 2018). Here, too, managers can use programmatic advertising if the cost advantage outweighs the negative click effect.

In conclusion, at least two of our empirical findings counter what campaign managers currently do. First, not all branding campaigns need to avoid programmatic advertising. Campaigns aimed solely at increasing awareness and campaigns by nonpremium brands may utilize programmatic advertising, since it is more cost-effective than traditional direct deals. Second, performance campaigns of both premium and nonpremium brands do suffer adverse effects on click behavior. Consequently, the old narrative that the "where" does not matter for performance campaigns does not seem to hold.

7.3. Future research and limitations

Our study has limitations that open avenues for further research. First, we rely on industry classifications (e.g., by Google) to categorize websites as premium or nonpremium in our studies. Although our objective was not to identify what makes premium websites "premium," it may be worthwhile to explore objective assessments of website quality. That said, we believe that adopting the industry categorizations used for pricing decisions yields greater managerial relevance than creating our own definitions. Second, we do not have information on the content of the websites. It may be interesting to analyze whether the fit between website content and the advertised product moderates the website quality effect. Third, our universe of premium and nonpremium websites did not contain websites that distribute unethical content. For such websites, the negative effects on attitudes towards the ad and the brand might be even stronger; consequently, we consider our effect sizes to be conservative. Fourth, our studies ruled out targeting effects and did not apply any type of behavioral or content targeting. Future studies could analyze website quality effects in the presence of different targeting strategies. Finally, we quantify website quality effects on recall and attitudes for branding campaigns, whereas for performance campaigns we estimate the effect on the number of clicks. Due to differences in study design, these effects are not directly comparable. Although these metrics correspond to the typical indicators used in the industry for measuring branding and performance campaigns, respectively, future studies could analyze differences along the funnel for both types of campaigns.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have
appeared to influence the work reported in this paper.

Acknowledgements

The authors thank Jan Landwehr, Franziska Völckner, Christian Schulze, Selin Atalay, and Gülen Sarial Abi for their com-
ments on previous versions of this manuscript. The authors are grateful to the review team for their valuable comments that
substantially improved the article.

Appendix A. Supplementary material

Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijresmar.2020.10.004.

References

Ailawadi, K. L., Pauwels, K., & Steenkamp, J.-B. (2008). Private-label use and store loyalty. Journal of Marketing, 72(6), 19–30.
Andrews, M., Luo, X. M., Fang, Z., & Ghose, A. (2016). Mobile Ad effectiveness: Hyper-contextual targeting with crowdedness. Marketing Science, 35(2),
218–233.
Battelle, J. (2014). The programmatic problem: What’s an audience without a show? Available at http://digiday.com/publishers/programmatic-advertising-
context/.
Caliendo, M., Clement, M., Papies, D., & Scheel-Kopeinig, S. (2012). The cost impact of spam filters: Measuring the effect of information system technologies
in organizations. Information Systems Research, 23(3), 1068–1080.
Cho, C.-H. (2003). Factors influencing clicking of banner ads on the WWW. CyberPsychology & Behavior, 6(2), 201–215.
Choi, H., Mela, C. F., Balseiro, S., & Leary, A. (2017). Online display advertising markets: A literature review and future directions. Information Systems
Research. https://doi.org/10.1287/isre.2019.0902.
Coffee, P. (2017). Verizon, AT&T and J&J are the latest to pull ads from YouTube over extremist videos. Available at http://www.adweek.com/brand-marketing/att-and-verizon-pull-all-ads-from-youtube-after-extremist-videos-appear-on-the-platform/.
Colicev, A., Malshe, A., & Pauwels, K. (2018). Improving consumer mindset metrics and shareholder value through social media: The different roles of owned
and earned media. Journal of Marketing, 82(1), 37–56.
Crovitz, G. (2020). How Amazon, Geico and Walmart fund propaganda. Available at https://www.nytimes.com/2020/01/21/opinion/fake-news-russia-ads.html.
De Houwer, J., Thomas, S., & Baeyens, F. (2001). Associative learning of likes and dislikes: A review of 25 years of research on human evaluative conditioning.
Psychological Bulletin, 127, 853–869.
De Houwer, J. (2007). A conceptual and theoretical analysis of evaluative conditioning. The Spanish Journal of Psychology, 10, 230–241.
De Pelsmacker, P., Geuens, M., & Anckaert, P. (2002). Media context and advertising effectiveness: The role of context appreciation and context/ad similarity.
Journal of Advertising, 31(2), 49–61.
Dinner, I., van Heerde, H. J., & Neslin, S. A. (2014). Driving online and offline sales: The cross-channel effects of traditional, online display, and paid search
advertising. Journal of Marketing Research, 51(5), 527–545.
eMarketer (2020). Digital ad spending 2019. Available at https://www.emarketer.com/content/global-digital-ad-spending-2019.
eMarketer (2014). Performance focus widens gap between premium and nonpremium inventory. Available at http://www.emarketer.com/Article/
Premium-CPM-Prices-Only-Way-Up/1010960.
Goldfarb, A., & Tucker, C. (2015). Standardization and the effectiveness of online advertising. Management Science, 61(11), 2707–2719.
Goodrich, K. (2011). Anarchy of effects? Exploring attention to online advertising and multiple outcomes. Psychology and Marketing, 28(4), 417–440.
Greene, W. H. (2000). Econometric analysis. Upper Saddle River, NJ: Prentice Hall.
Guitart, I. A., Gonzales, J., & Stremersch, S. (2018). Advertising nonpremium products as if they were premium: The impact of advertising up on advertising
elasticity and brand equity. International Journal of Research in Marketing, 35(3), 471–489.
Heckman, J. J., Ichimura, H., & Todd, P. (1998). Matching as an econometric evaluation estimator. Review of Economic Studies, 65(2), 261–294.
Hofmann, W., De Houwer, J., Perugini, M., Baeyens, F., & Crombez, G. (2010). Evaluative conditioning in humans: A meta-analysis. Psychological Bulletin, 136
(3), 390–421.
Hsieh, A.-Y., Lo, S.-H., & Chiu, Y.-P. (2016). Where to place online advertisements? The commercialization congruence between online advertising and
website context. Journal of Electronic Commerce Research, 17(1), 36–46.
Hughes, C., Swaminathan, V., & Brooks, G. (2019). Driving brand engagement through online social influencers: An empirical investigation of sponsored
blogging campaigns. Journal of Marketing, 83(5), 78–96.
IAB (2019). IAB video advertising spend report. Available at https://www.iab.com/wp-content/uploads/2019/04/IAB-Video-Advertising-Spend-Report-
Final-2019.pdf.
Janssens, W., De Pelsmacker, P., & Geuens, M. (2012). Online advertising and congruency effects. International Journal of Advertising, 31(3), 579–604.
Joseph, S. (2020). The latest YouTube brand safety ‘crisis’ shows advertisers are taking a more nuanced approach. Available at https://
digiday.com/marketing/latest-youtube-brand-safety-crisis-shows-advertisers-taking-nuanced-approach/amp/?__twitter_impression=true.
Kannan, P. K., & Li, H. S. (2017). Digital marketing: A framework, review and research agenda. International Journal of Research in Marketing, 34(1), 22–45.
Keller, K. L. (1993). Conceptualizing, measuring, and managing customer-based brand equity. Journal of Marketing, 57(1), 1–22.
Kim, J. Y., Brünner, T., Skiera, B., & Natter, M. (2014). A comparison of different pay-per-bid auction formats. International Journal of Research in Marketing, 31
(4), 368–379.
Kwon, E. S., King, K. W., Nyilasy, G., & Reid, L. N. (2019). Impact of media context on advertising memory: A meta-analysis of advertising effectiveness. Journal of Advertising Research, 59(1), 99–128.
Landwehr, J. R., Golla, B., & Reber, R. (2017). Processing fluency: An inevitable side effect of evaluative conditioning. Journal of Experimental Social Psychology, 70, 124–128.
Lee, J.-G., & Thorson, E. (2009). Cognitive and emotional processes in individuals and commercial web sites. Journal of Business & Psychology, 24(1), 105–115.
Lobschat, L., Osinga, E. C., & Reinartz, W. J. (2017). What happens online stays online? Segment-specific online and offline effects of banner advertisements.
Journal of Marketing Research, 54(6), 901–913.
Lovett, M. J., Peres, R., & Shachar, R. (2013). On brands and word of mouth. Journal of Marketing Research, 50(4), 427–444.
Maunder, J. (2014). RTB - The long tail of display advertising. Available at http://www.periscopix.co.uk/blog/rtb-the-long-tail-of-display-advertising/.
Meyers-Levy, J., & Tybout, A. M. (1989). Schema congruity as a basis for product evaluation. Journal of Consumer Research, 16(1), 39–54.
Mithas, S., & Krishnan, M. S. (2009). From association to causation via a potential outcomes approach. Information Systems Research, 20(2), 295–313.
Moore, R., Stammerjohan, C., & Coulter, R. (2005). Banner advertiser-web site context congruity and color effects on attention and attitudes. Journal of
Advertising, 34(2), 71–84.
Nelson-Field, K., Riebe, E., & Sharp, B. (2013). More mutter about clutter: Extending empirical generalizations to Facebook. Journal of Advertising Research, 53(2), 186–191.
Neumann, N. (2020). After Google finally really did it, a necessary system cleanup. Available at https://www.adexchanger.com/data-driven-thinking/after-google-finally-really-did-it-a-necessary-system-cleanup/.
Paulson, C., Luo, L., & James, G. M. (2018). Efficient large-scale internet media selection optimization for online display advertising. Journal of Marketing
Research, 55(4), 489–506.
Petrin, A., & Train, K. (2010). A control function approach to endogeneity in consumer choice models. Journal of Marketing Research, 47(1), 3–13.
Riebe, E., & Dawes, J. (2006). Recall of radio advertising in low and high advertising clutter formats. International Journal of Advertising, 25(1), 71–86.
Rosenbaum, P. R., & Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score.
The American Statistician, 39(1), 33–38.
Rosenbaum, P. R. (2002). Overt bias in observational studies. In Observational studies (pp. 71–104). New York, NY: Springer.
Rozin, P., & Millman, L. (1987). Family environment, not heredity, accounts for family resemblances in food preferences and attitudes: A twin study.
Appetite, 8, 125–134.
Rubin, D. B., & Thomas, N. (1996). Matching using estimated propensity scores: Relating theory to practice. Biometrics, 52(1), 249–264.
Schiff, A. (2019). Can contextual targeting replace third-party cookies? Available at https://www.adexchanger.com/online-advertising/can-contextual-
targeting-replace-third-party-cookies/.
Shamdasani, P. N., Stanaland, A. J. S., & Tan, J. (2001). Location, location, location: Insights for advertising placement on the web. Journal of Advertising
Research, 41(4), 7–21.
Sianesi, B. (2004). An evaluation of the active labour market programmes in Sweden. The Review of Economics and Statistics, 86(1), 133–155.
Smith, J. A., & Todd, P. E. (2005). Does matching overcome Lalonde’s critique of nonexperimental estimators? Journal of Econometrics, 125(1–2), 305–353.
Stevenson, J. S., Bruner, G. C., II, & Kumar, A. (2000). Webpage background and viewer attitudes. Journal of Advertising Research, 40(1/2), 29–34.
Tan, J. (2020). Google to scrap third-party cookies: Will it crumble parts of the digital ad world? Available at https://www.marketing-
interactive.com/google-to-scrap-third-party-cookies-will-it-crumble-parts-of-the-digital-ad-world.
Van Heerde, H., Gijsenberg, M., Dekimpe, M., & Steenkamp, J.-B. (2013). Price and advertising effectiveness over the business cycle. Journal of Marketing
Research, 50(April), 177–193.
Wilson, R. T., & Casper, J. (2016). The role of location and visual saliency in capturing attention to outdoor advertising. Journal of Advertising Research, 56(3),
259–273.
Wilson, R. T., & Suh, T. (2018). Advertising to the masses: The effects of crowding on the attention to place-based advertising. International Journal of
Advertising, 37(3), 402–420.
Yaveroglu, I., & Donthu, N. (2008). Advertising repetition and placement issues in online environments. Journal of Advertising, 37(2), 31–44.
