
Introduction to Measurement Methodologies

In this course, you will compare experimental and observational measurement methods, recognize when to implement A/B tests versus RCTs, and learn to describe how A/B, Brand Lift and Conversion Lift tests are conducted on Facebook.

Experimental and Observational Measurement Methods


In this lesson
This lesson prepares you to apply different measurement methodologies through observational and
experimental techniques.
Measurement methods
The goal of advertising is to reach people in the right place and encourage them to take a particular
action. First, you need to understand where your audience spends their time and the impact different
advertising strategies might have on them.

The measurement of marketing efforts across devices and platforms is a key element of business
development. This practice includes more than just traditional digital metrics, such as clicks or post
engagement.

If you measure only independent ads or campaigns, you may limit your understanding of your full
advertising effort. When you identify which variables work and focus on your business goals and
outcomes though, you can use the insights you gain to adjust your overall strategy and guide future
campaigns.
When you want to measure and analyze your campaign, it's important to distinguish between the
different ways you can do this.
Data
Provides an aggregated record of when people interact with content
Reporting
Provides data organized into informational summaries
Measurement
Provides a standardized approach to your data to better understand campaign performance
Insights
Provides guidance to help you improve your future marketing strategies

To use Facebook measurement and analysis tools effectively, you need to know what to measure.
This is where we must consider your business goals.

A key performance indicator, or KPI, is a metric that helps you assess whether you've reached your
goal. It’s important to filter your data and determine which KPI is relevant to your primary goal, so
you can prioritize your resources.
Tools to collect relevant information
Before you begin measurement, confirm you have the appropriate tools set up to collect the relevant information and assess your results.
These tools are connections or pieces of code that collect the information that Facebook uses to
measure the results of your ads. Here are the different tools you can integrate:

The Facebook pixel


Transfers information about actions people take on your website to Facebook.
The Conversions API
Enables you to share key web and offline customer actions directly from your servers to Facebook.
The Conversions API works with the pixel to help improve the performance and measurement of
your campaigns.
The Facebook SDK
Transfers information about actions people take on your app to Facebook. Please note, if you use
the Facebook SDK for iOS, you should update to version 8.0 or above for iOS 14 support.
Offline conversions
Enables you to match interactions in physical store locations (from point of sale systems and CRM
tools) to people who saw your ad. You can also use Business Manager to integrate offline
conversions.
Test methodologies
Marketers want to understand if their ads were, to some extent, responsible for sales, and they use
different measurement methods to answer that question. These methods fall into two categories:
observational and experimental.
Observational
Researchers observe and collect data to infer insights based on that data. The researcher has no
control over the subjects or the variables.
Experimental
Researchers intentionally apply treatments to the subjects, then they observe the effects of those
treatments in a controlled environment.
Exposed and unexposed groups
Both methods compare exposed and unexposed groups, but in the observational method, there's no guarantee that the groups are equal on all attributes. In the experimental method, the groups are typically equal on all attributes except ad exposure. These attributes may include certain traits, such as age or gender.

Observational methods offer marketers less control, which makes it difficult to use the data to derive
insights on causal effects. After an ad campaign is complete, researchers can attempt to measure a
causal effect on the desired results with data they've collected throughout the campaign.

For various reasons, some people might have seen an ad while others haven't. Maybe they didn't scroll far enough on a page to see it, they weren't in the target audience, or they hadn't browsed the internet recently.

The experimental method enables marketers to expose a group of people to an ad while another
group functions as a control group. The researchers can then measure the effect after the campaign
concludes. A well-designed experiment randomly assigns who sees an ad and who doesn't. That
way, marketers can gain insight about the effect an ad has on sales based on the difference between
the exposed and unexposed groups.
Difference between experimental and observational
methods
There can be a significant discrepancy between results from commonly used observational approaches and true experiments.
There are sometimes instances when the results from observational approaches on a single ad
campaign come close to results from true experiments. However, those cases are rare, and it's
difficult to predict when it might happen. You can't rely on observational methods to provide the
right answer in the future.
When we consider a collection of randomized control trials (RCTs), as we do in Facebook data-
driven attribution (DDA), we find measurements are less biased and have lower variance than
common observational methods for single campaigns.
An RCT is still preferable to get a more accurate assessment of the effect a campaign has. However, when it's not practical to run an experiment, this type of model-based attribution is a more reliable method, even when results show discrepancies compared to experimental methods.
Knowledge check

Experimental versus observational methods


Marketers want to understand if their ads were, to some extent, responsible for sales.
Sort the following into the correct category.

A researcher analyzes data from Ads Manager to determine whether people were more likely to respond to video or image ads in the past three months.
The marketer tries to isolate the exposed and unexposed groups after the fact, based on how people naturally fell into either category.
A researcher shows different types of ads to separate groups of people in a controlled manner.
Marketers plan who sees an ad and who doesn't.

Observational methods

Experimental methods

The fictitious company Storx is a large, membership-based ecommerce retailer that
focuses on products for parents of young children. The Storx marketing team ran a new
campaign that offers a complimentary service called TeleNanny. TeleNanny gives new moms
and dads the expert opinion of live child care experts at the tap of a button on the app or website.
The team saw a substantial increase in sales this month, and they want to know if that increase is
due to the new advertising campaign.

Which method would you use to answer their question?


Set up an experiment in which one group of people sees the ads while the other doesn't.
Then, evaluate sales in both groups to determine if the campaign had an effect.

Use an observational method to assess the effect campaign exposure had on sales. Evaluate
the results for people who were and weren't exposed to the ads.
Use an observational method that shows different types of ads to separate groups of people
in a controlled manner.

Set up an experiment to isolate the exposed and unexposed groups afterward based on how
people naturally fell into either category.

Key takeaways
 Before you begin measurement, confirm you have the appropriate data tools set up to collect the
relevant information and measure your results.
 Observational methods offer marketers less control, which makes it difficult to use the data to
derive insights into causal effects.
 With experimental methods, marketers expose a group of people to an ad while another group
functions as a control.

A/B Tests Versus RCTs


This lesson prepares you to recognize when to implement A/B tests versus RCTs.
A/B tests
A/B tests enable you to test different versions of your ads against each other, so you understand
which elements achieve the best performance based on your current attribution settings. This test is
ideal for businesses that just want to learn which tactics work best to optimize future campaigns. In
general, A/B tests are highly effective for routine tactical optimizations.

A/B test variables


With an A/B test, you have a few options for the independent variables.
Audience
Test your ad against two different audiences. For example, an audience of people ages 18–25 versus
an audience of people ages 26–35.
Creative
Test different images or creative elements against the same audience.
Placement
Test different ad placements or test your typical placement selections against automatic placements.
Delivery optimization
Test different delivery optimizations against each other.
Budget
Test the effectiveness of different structures for your ad budgets.
Campaign objective
Test different campaign objectives to find which one best helps you achieve your desired results.
Scenario
The fictitious company LaLueur is a beauty and personal care brand focused on creating inclusive
skincare and makeup products. Its marketing team is working on a campaign to introduce a new line
of lipsticks. The campaign focuses on traffic and uses single image ads to promote the new lipsticks
and encourage people to learn about them.

The team has two different images they believe could work for their ads, so they conduct an A/B
test to help them decide which is most effective. They create an A and a B version of the ads where
both versions are the same, except for the image.

The results from the A/B test show version B as the winner with more results and a lower cost per
result, so the LaLueur team uses that ad for the rest of the campaign.
Tip: A/B tests enable you to compare multiple campaign strategies. However, some A/B tests don't include control groups, only randomized test groups. In this scenario, they don't measure causality or the incremental value of a strategy, so the A/B test only identifies correlation, not causation.
Randomized control trials (RCTs): Brand Lift and Conversion Lift

Incrementality
To really understand how our randomized control trials work, you must first understand the concept
of incrementality. The basis of good measurement is incrementality, which means the degree to
which a measurement method can estimate the causal effect of an isolated marketing activity, such
as an ad.

This considers how much value comes specifically from ads (for example, conversions that occurred
as a result of them), which helps marketers understand the returns their ads generate.

To calculate lift in ad performance, incrementality measurement compares differences in outcomes between a group of people who've seen certain ads and a control group of comparable people who haven't.
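As a concrete sketch, that comparison can be expressed in a few lines of Python. The audience sizes and conversion counts below are invented numbers for illustration; only the formula matters.

```python
# Made-up counts for illustration; only the formula matters.
test_conversions, test_size = 1_200, 100_000        # people who saw the ads
control_conversions, control_size = 1_000, 100_000  # comparable holdout group

test_rate = test_conversions / test_size            # 0.012
control_rate = control_conversions / control_size   # 0.010

absolute_lift = test_rate - control_rate            # incremental conversion rate
relative_lift = absolute_lift / control_rate        # lift relative to the baseline

print(f"absolute lift: {absolute_lift:.3f}")        # absolute lift: 0.002
print(f"relative lift: {relative_lift:.0%}")        # relative lift: 20%
```

The absolute lift estimates the conversions the ads caused; the relative lift expresses that as a percentage of what would have happened anyway.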
Incrementality versus rule-based attribution
For example, when we perform a lift test, we may find that a prospecting strategy actually generates greater incremental value, or more purchases, from people who wouldn't have acted if they hadn't seen the ad. If we expect baselines to be systematically different, rule-based attribution can mislead us.

When you compare strategies with different baselines, there are limits to A/B testing because there are limits to last-click attribution. For that reason, A/B testing works better for rapid optimization and routine decisions related to objectives, such as app installs or lead generation. For bigger business decisions based on sales or conversion-based outcomes, a lift test can help you confidently decide your next step.

Incrementality measurement is central to your ability to understand the true value of your marketing
decision. Advertisers need to measure the proportion of sales each targeting strategy generates to
learn which has the potential to generate the largest proportion of additional sales in the future.

Success online doesn't result from a single action. Instead, businesses need to build a foundation of
tools and systems that work together. They also need to periodically examine those tools and
systems to confirm they're performing at optimal levels. When you continuously troubleshoot
targeting issues, you can improve your campaign at all levels and maintain your ability to reach the right audience.
Axion Electronics is a fictitious manufacturer of high-end televisions that sells its products through its website. The Axion marketing team needs to assess what proportion of their advertising budget they should spend on ads targeted to previous buyers versus ads targeted to people who visited the website but never made a purchase.

The advertiser pauses its usual campaigns and sets up an A/B test with two ad sets, both
optimized for purchases. The only difference is the targeting strategy. They then run the ad sets
for seven days. The A/B test results are based on attributed conversions with a last-touch
attribution model to identify the winning strategy. This measurement approach doesn’t
accurately measure causality though.

What’s causing this measurement issue?


Choose the correct answer.
They should set up the test to prevent contamination to the test group.

The test should run a minimum of three weeks to collect sufficient data.

The test needs to run randomized test and control groups.

The test should be set up to use an even credit attribution.

Submit

Key takeaways
 A/B tests enable you to compare the performance of different versions of your ads against each
other. This helps you understand which elements achieve the best performance based on your
current attribution settings.
 With an A/B test, you have a few options for the independent variables to include in your test.
 Incrementality is the degree to which a measurement method can estimate the causal effect of an
isolated marketing activity.

Sources
 A/B Tests Types Available on Facebook
 Viewing and Understanding Facebook A/B Holdout Test Results

RCTs and Control Groups


This lesson prepares you to use randomized control trials to measure the incremental impact of
your campaign.
The standard methodology for experiments
In randomized control trials, you work with a test group and a control group to measure the
incremental impact of your campaign.

In simple terms, the test group sees your ads while the control group doesn't. The groups are completely randomized and identical, so any differences between the groups are purely the result of the Facebook campaign. The randomized yet identical groups help ensure factors such as TV commercials or other outside factors don't contaminate the results of the test. The difference in results between the two groups is the incremental impact of the campaign.
Randomized control trials:
The entire Facebook population is randomized.
Everything about the two groups is identical.
Any difference achieved is purely the result of the Facebook campaign.
RCT methodology
To measure the impact of your ads, Facebook lift solutions randomize your audience into test
and control groups with similar characteristics. The system then delivers ads to the test groups.
After the ads run, the system administers a poll or tracks sales outcomes for both the test and
control group. Finally, the system compares poll results or sales outcomes from the test group of
people who saw your ads against the control group of people who didn’t. This enables you to
measure the incremental impact of your campaign.
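The randomization step can be sketched in Python. The 90/10 split and the helper function below are illustrative assumptions, not Facebook's actual implementation, which happens inside the platform.

```python
import random

def split_audience(audience_ids, test_fraction=0.9, seed=7):
    """Randomly divide an audience into non-overlapping test and control
    groups. Illustrative only; Facebook performs this assignment internally,
    and the 90/10 split is an assumed example."""
    rng = random.Random(seed)
    shuffled = list(audience_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]  # (test group, control group)

test_group, control_group = split_audience(range(10_000))
# Each person lands in exactly one group, so a later difference in poll
# results or sales outcomes between the groups reflects ad exposure.
```

Because the split is random, the two groups share the same mix of ages, interests and other traits on average, which is what makes the comparison fair.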

Brand Lift
Let’s explore the two RCT tests Facebook offers. First, let's focus on brand lift tests.

Brand Lift testing enables you to evaluate the awareness lift your campaign is responsible for.
Brand Lift tests help you prove a causal relationship between your advertising and the results
you observe. If we look back to our RCT methodology, the system delivers polls to both the test
and the control groups.

This is how polling in the test appears on the platform. It also helps you prove a causal relationship between your advertising and the results you observe. These might involve brand favorability or product consideration. Each person in your test receives only one randomly selected question.

If someone is eligible to receive the poll question, the system shows it to them 4–48 hours after
ad delivery.
Note: There's a seven-day polling eligibility window, which means that if the system serves a
person a poll and that person doesn't respond, they're eligible to see the poll again seven days
after exposure. That second one could be a different question than the original poll. This
situation is only possible if the campaign runs longer than seven days.

Axion Electronics
Axion Electronics is a fictional electronics manufacturer that makes the most technologically
sophisticated televisions in the world. Despite the fact that they sell a superior product,
marketers with superior resources have eclipsed their efforts. The general target audience for
Axion is people ages 18–45.

Let’s explore how the Axion team conducted their Brand Lift test.
First, they created the target audience for the campaign, and then Facebook divided the audience
into random, non-overlapping groups. This randomization helped ensure other factors wouldn't
skew the results. The system delivered the ads to the test group but not the control group.

Conversion Lift
Conversion Lift tests are another form of randomized control trial test Facebook offers.
Conversion Lift tests help you evaluate increases in certain behaviors, or conversions, that your
campaign is responsible for. Conversion Lift tests also enable you to prove a causal relationship
between your advertising and the increases in conversions that you observe.
It’s important to note that you need to integrate conversion data with Ads Manager. You should
therefore have the pixel, Facebook SDK or Conversions API installed to provide a data signal.
Let’s think back to an overview of how RCTs work. Conversion Lift tests track sales outcomes.
The difference in sales between the test group and the control group is your conversion lift.

Conversion Lift test for Axion


The team at Axion uses display ads to promote their electronics. They build two audiences. One
is a demo target of adults ages 18–45. The other is a behavioral target of people interested in HD
home theaters, which is a group of people who have actively researched home theater
installations in the past 30 days. When you look at each target audience, which do you think is
the better choice for increased marketing investment?

At a glance, audience 2 may look like the better option because of the higher conversion rate. However, the team actually ran a separate Conversion Lift test on each of these two audiences.
AUDIENCE 1: DEMO AUDIENCE 2: BEHAVIORAL
 Ages 18–45  People interested in HD home theaters
 Reach: 10 million  Reach: 1 million
 Spend: $20,000  Spend: $10,000
 eCPM: $2.00  eCPM: $10.00
 Conversion rate: 0.012  Conversion rate: 0.205

eCPM stands for "effective cost per mille": the amount an advertiser pays per one thousand ad impressions.
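That formula is simply spend divided by impressions, scaled to a thousand. Treating each audience's reach as its impression count (an assumption, but one that matches the table's figures):

```python
def ecpm(spend, impressions):
    """Effective cost per mille: spend per 1,000 ad impressions."""
    return spend / impressions * 1000

print(ecpm(20_000, 10_000_000))  # 2.0  (audience 1)
print(ecpm(10_000, 1_000_000))   # 10.0 (audience 2)
```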

AUDIENCE 1: DEMO AUDIENCE 2: BEHAVIORAL


 Ages 18–45  People interested in HD home theaters
 Reach: 10 million  Reach: 1 million
 Spend: $20,000  Spend: $10,000
 eCPM: $2.00  eCPM: $10.00
 Conversion rate: 0.012  Conversion rate: 0.205

 Lift = 0.002  Lift = 0.005


 Incremental conversions = 20,000  Incremental conversions = 5,000
 Cost per incremental conversion = $1  Cost per incremental conversion = $2

The example shows that audience 1 is more cost-efficient, given the test and control outputs: $1 per incremental conversion for the demo audience versus $2 per incremental conversion for the behavioral audience.
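The cost-efficiency figures follow directly from the lift, reach and spend numbers in the table. A minimal check in Python:

```python
def incremental_conversions(lift, reach):
    """Extra conversions the ads caused, per the lift test."""
    return lift * reach

def cost_per_incremental_conversion(spend, lift, reach):
    """Spend divided by the conversions the ads actually caused."""
    return spend / incremental_conversions(lift, reach)

# Audience 1 (demo):       lift 0.002, reach 10M, spend $20,000
print(cost_per_incremental_conversion(20_000, 0.002, 10_000_000))  # 1.0
# Audience 2 (behavioral): lift 0.005, reach 1M,  spend $10,000
print(cost_per_incremental_conversion(10_000, 0.005, 1_000_000))   # 2.0
```

Note that the raw conversion rate never enters the calculation; only the lift, the causal difference versus control, determines cost efficiency.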

Examples such as this one emphasize why it's important to conduct lift tests and how they can help you make more efficient choices.
Knowledge check
An advertiser runs a Conversion Lift test on a new media platform. During discussions about
measurement, the team from the media platform says that they can compare the conversion rates
for the test group exposed to ads versus a pre-selected control group. The team from the
platform feels its methodology is valid. What concern should the analyst have with the test
design?
Randomization is necessary for valid experimental design.

Measurement before and after should provide a more accurate assessment.

There may be contamination between the test and control groups.

An A/B test would provide a more accurate assessment.


Storx, a fictitious ecommerce retailer that sells baby products, plans to launch a national TV
campaign to promote its new products. To support this effort, the team plans to create an online
campaign across a few digital platforms, including Facebook. The TV campaign runs through
the month of April, and the Facebook campaign runs from the second week of April until the
first week of May.

The brand team wants to understand the impact of Facebook in addition to all other media. It
also wants to know if the national TV exposure might contaminate a Brand Lift test on
Facebook because both campaigns target people ages 18–34. How will the national TV exposure
interact with the Brand Lift test on Facebook?

Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which would lead to an underestimation of the incremental impact of the Facebook campaign.

Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which would lead to an overestimation of the incremental impact of the Facebook campaign.

TV exposure can't contaminate the test, because the audience the TV and digital platforms reach
are different.

TV exposure can't contaminate the test, because Brand Lift uses randomized control trial
methodology.

Key takeaways
 Randomization is essential to a well-designed experiment.
 A/B tests can help you compare different versions of ads against each other.
 Brand Lift tests can help you determine whether your ad campaigns increase brand
awareness.
 Conversion Lift tests can help you understand the incremental impact your ad campaigns
have on conversions.

Facebook A/B Testing


In this lesson
This lesson prepares you to optimize ad campaigns by comparing tactical approaches to
variables with A/B testing.
A/B testing
When you set up an ad campaign across Facebook products, the success of your campaign
depends on the treatment of several variables. An A/B test enables you to compare tactical
approaches to variables, so you can optimize your campaigns. It's a way to help you determine
which ad tactics produce the best outcome for your campaign based on your KPIs.

A/B testing with Facebook enables you to create multiple ad sets and test them against each
other to see which approach produces the best results.

For example, you can test the same ad on two different audiences to see which performs better.
Likewise, you can test two different creative formats to determine which is more effective. This
can guide the creative direction you take in future ads.

We sometimes refer to this practice as racing tactics within a variable. Once we have the results
of the test and we've determined the winning tactic, the race is over.

How A/B tests work


In an A/B test, your audience is divided into random, non-overlapping groups. This
randomization helps ensure other factors don't skew the results.

Each group sees ad sets that are identical except for the variable you're testing, which may
include the audience, delivery optimization, placements, creative or product sets.

Note: When you create an A/B test, use only one variable. If you test more, you won't know what caused the change.

Axion Electronics
The marketing strategist at Axion wants to test the impact of the "Learn More" call-to-action
button compared to a "Shop Now" call-to-action button. Both buttons direct the audience to the
promotions page on the website, but the text differs. With an A/B test, the Axion team can
determine which button is more effective and use that knowledge to refine future campaign
strategies.

Case study: Lionsgate


Lionsgate is a global content leader whose films, television series, location-based and live
entertainment and subscription-based premium streaming service reach audiences around the
world. In previous Facebook campaigns, the Lionsgate marketing team manually targeted ads to
Core Audiences and adjusted budgets. With the release of their new film Angel Has Fallen, the
Lionsgate team wanted to see if a different strategy would improve conversion. They therefore
conducted a “human versus machine” split test that pitted their existing manual campaign setup
against a setup that used machine-learning.
Cell 1: The existing manual strategy. It consisted of more than 20 ad sets, a budget manually
split across ad sets and pre-selected audiences the marketers thought would be interested in the
film.

Cell 2: The new strategy used machine learning. It consolidated ad sets down to three, used
campaign budget optimization (CBO) to automatically shift budgets to the best-performing ad
set in real time, and targeted ads to broad audiences.

For Cell 2, the Lionsgate team created three audiences of different scales, all of which were
broad enough to enable the algorithm to find as many people who were likely to convert as
possible. The system then delivered ads to all three audiences simultaneously in News Feed and
their Instagram feeds.

The first audience consisted of people ages 17 and older who have an interest in movies. The
second audience used Custom Audiences to reach people who engaged with the ads or Page for
the film. The third audience was a lookalike audience based on the second. Lionsgate served ads
to this group again close to the release of the film.
Results
The Lionsgate team used a split test in Ads Manager, which revealed that the simplified account
structure, CBO and scaled audiences they used in the August 13–15, 2019 campaign
outperformed the usual setup. The simplified account setup achieved:
 20% more showtime search conversions.
 18% lower cost per showtime search.

Interpret results
After a test, you receive an email report with results, but you can also review your test results directly in Ads Manager.
Expand each header below to learn more.

Winning ad

We compare the cost per result of each ad set or ad based on your campaign objective to
determine the winner. We also use the data from the test to simulate the performance of each
variable tens of thousands of times to determine how often the winning outcome would have
occurred.

Confidence level

For example, you run an A/B test with a creative variable, such as a video versus an image.
Your results show that the ad set with the video is the winning strategy with a 90% confidence
level. This percentage indicates that the video ads had the lowest cost per result, and there's a
90% chance that if you ran the same test multiple times, you'd achieve the same result.
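The "simulate tens of thousands of times" idea can be sketched as a simple Monte Carlo estimate. This is an illustrative normal-approximation sketch, not Facebook's actual methodology, and the conversion counts are invented.

```python
import math
import random

def win_confidence(conv_a, n_a, conv_b, n_b, sims=10_000, seed=0):
    """Estimate how often ad set B would beat ad set A if the test were
    rerun, by drawing each set's conversion rate from its observed rate
    plus sampling error (normal approximation). Illustrative sketch only,
    not Facebook's actual methodology."""
    rng = random.Random(seed)
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se_a = math.sqrt(p_a * (1 - p_a) / n_a)   # standard error of rate A
    se_b = math.sqrt(p_b * (1 - p_b) / n_b)   # standard error of rate B
    wins = sum(
        rng.gauss(p_b, se_b) > rng.gauss(p_a, se_a) for _ in range(sims)
    )
    return wins / sims

# Hypothetical counts: 100 vs. 130 conversions on 10,000 impressions each.
confidence = win_confidence(100, 10_000, 130, 10_000)
```

A value near 0.9 or above from a simulation like this corresponds to the kind of 90% confidence level described above: across many simulated reruns, the same ad set keeps winning.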

Take action

With these results, it's most effective to adopt the winning strategy. You may want to move the
budget of the lower-performing ad set to the one that won, or you may instead edit the lower-
performing ad set to give it the same characteristics as the winner.

For any A/B test, a confidence level above 75% is an indication that further use of the winning
ad set is likely a good strategy.

Improve future campaigns


Once you review your results and understand which strategy performed best, you can:

 Create a new ad from the winning ad set right from the results email.
 Reactivate the winning ad set in Ads Manager.
 Create a new campaign based on what you learned.

You could also run more tests to refine your strategy. For example, if an audience of people ages 24–30 outperforms an audience of people ages 18–30, you could then test an audience of people ages 24–30 against an audience of people ages 24–36. When you refine your audience, you might generate greater results from your campaigns.
If your results have low confidence, you can test the campaign again with a longer schedule or
higher budget. More time and budget typically generate more data, which helps us increase the
confidence in your results.

Knowledge check
LaLueur is a fictitious personal care brand that offers an inclusive range of skincare and makeup
products. The marketing team wants to assess what portion of their advertising budget they
should allocate to retargeting previous buyers versus targeting ads to consumers who visited
their website but didn't make a purchase. To predict which is most likely to generate the largest
proportion of future sales, the team needs to measure the proportion of sales each targeting
strategy generated.

The team pauses its usual campaigns and sets up an A/B test that uses two ad sets optimized for
purchases. The only difference is the targeting strategy each ad set uses. The advertiser runs the
ad sets for seven days and the results are based on attributed conversions through a last-touch attribution model. This measurement approach doesn't accurately measure causality, though.
What's causing this measurement issue?
(Choose 3)
They should set up the test to use an even credit attribution model.

They should set up the test to prevent contamination to the test group.

The test should run for a minimum of three weeks to collect sufficient data.

The test needs to run randomized test and control groups.

Submit

Key takeaways
 Compare tactical approaches by using one campaign variable (for example, creative, audience,
placement, product sets or delivery optimization).
 Make data-driven recommendations for future campaigns based on results.
 Use the Facebook A/B test results confidence level, which represents our confidence in the declaration of the winner. A confidence level above 75% is reliable.
 Improve future campaigns by using the winning ad or create a new ad from the winning ad set.
You can also create a new campaign based on what you learned.

Sources
 A/B Tests Types Available on Facebook
 Viewing and Understanding Facebook A/B Holdout Test Results
 Success Story: Lionsgate

Brand Lift Testing


In this lesson
This lesson prepares you to use Brand Lift tests to measure the impact of Facebook ads on brand
equity.

Brand Lift

Facebook has several solutions to help advertisers understand the impact of their advertising. A
Brand Lift test is one such solution.

Incrementality measurement is central to your ability to understand the true value of your
marketing decisions. Advertisers need to measure the proportion of sales each targeting strategy
generates to determine which is most effective.
A Brand Lift test is a randomized control trial (RCT) that measures how a Facebook campaign impacts brand awareness, including perception and recall of your brand or product among a particular audience. A Brand Lift test includes a control group, so it can quantify the impact of your Facebook advertising independent of all your other marketing efforts. To run a test independently, an advertiser can use experiments or a managed test with a Facebook account representative. This course focuses only on managed tests.

Measure the effectiveness of your advertising


A Brand Lift test can help advertisers measure brand equity quantifiably and with statistical reliability. It's especially useful for those who want to increase brand or product awareness, message association, consideration or favorability.

There are limits to metrics that measure brand campaigns, and proxy measurements such as
clicks, likes and shares don’t necessarily align with business outcomes or correlate with brand
equity. For example, clicks may not be an accurate metric, because someone who clicks on your ad may not remember your brand or product later.

Case study
The marketing team at RXBAR wanted to better understand how their Facebook campaigns
could generate stronger awareness and increase purchases. The snack company collaborated
with its teams from Publicis to test a measurement strategy for multiple marketing channels,
which resulted in a two-phase campaign.

In phase one, the team used the brand awareness objective and optimized toward ad recall. In
phase two they used the conversion objective and optimized toward purchases. For both phases,
the team built effective mobile-first ads for delivery in News Feed, Instagram feed and Stories.
The creative included branding elements up-front and messaging that people could easily
understand with sound off.
Phase one: Brand awareness

This phase featured a “Simple Good” theme. It included a video ad that highlighted the
simplicity of ingredients inside RXBAR protein bars and messaging that focused on flavor
combinations and grams of protein.

Another video showed the individual ingredients in a protein bar and two people playing a game of rock-paper-scissors over it.

These ads included a “Learn More” call-to-action button that linked to a third-party landing page
where people could select which retailer they’d like to purchase from. The team then showed
these ads to a broad audience of adults in the US ages 25–54.
Phase two: Direct response
For this phase, the team built a photo ad in the carousel format that showed a variety of RXBAR
nut butters with messaging about a seasonal flavor available for a limited time. Another photo ad
in the carousel format showcased the RXBAR portfolio of products and flavors, along with
messaging about how to receive a 20% discount. These ads included a “Shop Now” call-to-
action button that linked directly to the product page on the RXBAR website.

The team showed these ads to adults in the US ages 25–54 with an interest in fitness, travel,
health and food. The team also used Custom Audiences to show the ads to people who visited
the RXBAR website during phase one, added an item to their online shopping carts or purchased
within the past 90 days.

Lastly, they showed ads to a lookalike audience of people with characteristics similar to their
highest paying customers.
The RXBAR team measured the results of their January 1–March 31, 2020 campaign with a
Facebook Brand Lift, a Facebook Conversion Lift and an NCSolutions in-store sales lift study,
which revealed a:

 3.3% lift in in-store sales (NCSolutions in-store sales lift study).
 22% lift in online sales (Facebook Conversion Lift).
 1.2X incremental return on ad spend for the total campaign.
 13-point lift in ad recall (Facebook Brand Lift).

Brand Lift methodology


Facebook Brand Lift tests rely on experimental design, so the system randomly divides your chosen audience into test and control groups, then polls each group about topics such as ad recall, brand awareness and message association.

The test calculates the difference in performance between the test and control groups, which
represents the lift of your Facebook advertising relative to the conditions of your test. It
compares poll results from people in the test group who saw your ads with people in the control
group who didn’t see your ads, so you can determine the incremental effect of your campaign.
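As a minimal sketch with made-up numbers (not real poll data), the comparison between groups works like this:

```python
# Hypothetical poll results (illustrative only): the share of respondents in
# each group who answered "yes" to an ad recall question.
test_yes, test_total = 420, 3000        # people who saw the ads
control_yes, control_total = 300, 3000  # holdout group that didn't

test_rate = test_yes / test_total            # 0.14
control_rate = control_yes / control_total   # 0.10

# Absolute lift: the difference in response rates, in percentage points.
absolute_lift_pts = (test_rate - control_rate) * 100

# Relative lift: the same difference as a percentage of the control baseline.
relative_lift_pct = (test_rate - control_rate) / control_rate * 100

print(f"{absolute_lift_pts:.1f}-point lift ({relative_lift_pct:.0f}% relative)")
```

Here the test group recalls the ad at 14% versus a 10% baseline, a 4-point (40% relative) lift attributable to the campaign.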

Ad recall questions
An ad recall poll question is necessary for every advertiser that runs a Brand Lift. It often yields
the most statistically significant metric at the sample sizes Facebook polls. Advertisers can
create a standard ad recall or unaided ad recall question. Standard ad recall questions include the name of the advertiser in the question itself, with yes-or-no answer choices. Unaided ad recall questions don't include the advertiser's name in the question; instead, it appears among the answer choices. Advertisers can choose the option that suits their preferences and strategy.
Here's an example of a standard ad recall question:

Types of poll questions


 Upper-funnel poll questions
Upper-funnel poll questions measure standard brand awareness, unaided brand awareness,
campaign awareness and top-of-mind awareness.

 Mid-funnel poll questions


Mid-funnel poll questions measure message association, message recall, message agreement,
standard favorability, familiarity and attribute rating.

 Lower-funnel poll questions


Lower-funnel poll questions measure intent, action intent, recommendation, consideration and
preference.

 Equity metrics poll questions


Equity metrics poll questions measure brand equity, specifically as it relates to affinity, meeting needs, uniqueness and dynamics.

Placement of poll questions


A person can't see a poll more than once in seven days, regardless of platform. Likewise, a
person can only see a specific poll once, whether they answered it or not.

Facebook delivers poll questions in the following places:


 For all Facebook ad placements, poll questions appear in News Feed.
 For all Instagram ad placements, poll questions appear only in Instagram feed.
 For Audience Network ad placements, poll questions appear in News Feed on mobile.
 For Messenger ad placements, poll questions appear only in News Feed on mobile.

A poll question may not necessarily appear in the same placement as the ad. For example, if the impression occurred on Audience Network, the individual could see the poll in News Feed. When you combine placements, polls appear on the platforms that map to the rules above. For example, if placements are on News Feed and Instagram feed, polls appear on both; a campaign with placements on Instagram feed and Audience Network polls in Instagram feed and mobile News Feed only.

Best practices for test optimization


 Creative

Use creative that works for your objective. Creative should include the logo up front within 2–3
seconds and feature the logo throughout. Also, design for sound off and frame the creative in
either a square or vertical format instead of 16:9.
 Frequency

Frequency is key if you want to increase top-funnel metrics, such as ad recall. You should show your ads at least 1–2 times per week per person. It's important to monitor your frequency to ensure people don't see your ad too often during a campaign.
 Polls

Polls should align with the creative. To ensure the creative and polls align, don't submit poll
questions until you've seen the creative.

Knowledge check
How might Brand Lift tests help an advertiser?
Choose the correct answer.

They combine proxy metrics with historical data to measure brand awareness.

They report metrics like cost per conversion lift and sales lift.

They use A/B testing to report metrics, such as clickthrough rate, clicks and likes.

They report metrics that generate real business results, such as brand awareness and ad
recall.

Submit

Key takeaways
 Advertisers can use Brand Lift tests to quantify the effect of Facebook advertising on their
brand.
 A Brand Lift calculates the incremental effect your Facebook ads have on brand awareness,
perception or recall.
 A Brand Lift test compares poll results from people who had the opportunity to see your ads (the
test group) with people who didn’t (the control group).

Sources
 Success Story: RXBAR

Conversion Lift Testing


In this lesson
This lesson prepares you to measure the incremental conversions Facebook ads generate. You'll
also examine the methodology of Conversion Lift, identify test outputs and interpret results.
Conversion Lift
The basis of good measurement is incrementality, which means the degree to which a
measurement method can estimate the causal effect of a particular effort, such as an ad.

To calculate lift in ad performance, incrementality measurement compares differences in outcomes between a group of people who've seen certain ads and a control group of comparable people who haven't.

It's important for advertisers to systematically and reliably measure the true value of their
advertising. This kind of measurement enables them to optimize future marketing strategies.
Facebook has several solutions to help advertisers understand the impact of their advertising.
Conversion Lift is one such solution.

Conversion Lift is a randomized controlled trial (RCT) that uses test and control groups to measure
the number of incremental conversions Facebook ads generate. An advertiser can run a
Conversion Lift test on a specific Facebook ad campaign or across all of its Facebook
advertising.
Conversion Lift can help you assess the effectiveness of your Facebook ads, independent of all
your other marketing efforts. It can also help you answer the question: What additional
conversions occurred as a result of my Facebook ads? This increase is commonly known as
incremental impact.
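As a rough sketch with hypothetical numbers, incremental conversions can be estimated by scaling the control group's conversion rate up to the size of the test group:

```python
# Hypothetical group sizes and outcomes (illustrative only).
test_size, test_conversions = 900_000, 27_000
control_size, control_conversions = 100_000, 2_400

# Baseline conversion rate observed without ad exposure.
control_rate = control_conversions / control_size  # 0.024

# Counterfactual: conversions the test group would likely have generated anyway.
expected_without_ads = control_rate * test_size    # about 21,600

# Conversions attributable to the ads themselves.
incremental_conversions = test_conversions - expected_without_ads

print(f"Incremental conversions: {incremental_conversions:,.0f}")
```

In this sketch, only 5,400 of the 27,000 observed conversions are incremental; the rest would likely have happened without the ads.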

Data sources
Conversion Lift uses data from three sources to measure conversions: the Facebook pixel for
online actions, the Facebook SDK for in-app actions and offline conversions for offline actions.
It's important to ensure you implement these data sources correctly before you run a Conversion
Lift test.

Validate a test hypothesis


First, identify your business goal, prioritize which question you'd like to answer, develop a test
hypothesis and identify the variable to test. Then you can decide whether Conversion Lift is
right for you. Use it when you have a test hypothesis that you can’t prove or disprove with
existing insights and research.

All Conversion Lift tests can help determine the incremental effects of your advertising and also
answer questions such as:

 Which audience has a lower cost per incremental conversion?


 How can I choose the best attribution model for my business objectives with reliable
experimental data?
 How should I balance my prospecting and retargeting budgets?
For example, if you notice a campaign performs better than expected, a Conversion Lift test can help you understand why and identify to what extent your Facebook ads or other factors are responsible.

Guide large scale decisions


A Conversion Lift is a great option when you need to consider the next steps for important
business decisions based on sales or conversion-based outcomes, as opposed to small-scale
decisions that use split testing. All Conversion Lift results show the incremental value of
Facebook ads and our statistical confidence that the ads caused incremental conversions.

As a result, a lift test can give you better insights into how to adapt an advertising strategy.
However, it might not be the most effective option when you want to make small changes to
your campaign and optimize your results.

ROAS for your campaign strategy


Sometimes a strategy focuses more on incremental growth than on the nominal total. An advertiser might recognize that their chosen attribution model doesn't reveal exactly how ads affect their business. Conversion Lift enables them to gain an expansive overview of the effect their advertising has, which can help them select an effective business strategy and better understand the full ROAS of potential campaign strategies.
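To illustrate the distinction with hypothetical figures, incremental ROAS divides only the revenue the lift test shows the ads caused by ad spend, whereas attributed ROAS counts all revenue an attribution model credits to the ads:

```python
# Hypothetical spend and revenue figures (illustrative only).
ad_spend = 50_000
attributed_revenue = 180_000   # revenue a last-click attribution model credits
incremental_revenue = 60_000   # revenue the Conversion Lift test attributes

attributed_roas = attributed_revenue / ad_spend     # 3.6x, likely overstated
incremental_roas = incremental_revenue / ad_spend   # 1.2x, the causal return

print(f"Attributed ROAS: {attributed_roas}x, incremental ROAS: {incremental_roas}x")
```

The gap between the two figures is revenue the ads got credit for but didn't actually cause, which is exactly what incrementality measurement is designed to expose.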

Scenario
Akash is an analyst at Dart, the new, rapidly growing auto rental startup in South Asia.

The marketing team is running a new campaign, and there is more interaction with the ads than
Akash expected. He speculates this is the result of the creative, but he wants to confirm this. He
runs a Conversion Lift test, which compares a Facebook campaign against a control group. The
results show that his Facebook ads did in fact cause the incremental conversions.

Chart: Number of purchases — Strategy A (remarketing) wins the split test over Strategy B (prospecting).
For example, Akash compares two audience strategies and wants to see how a split test and
Conversion Lift test reveal different perspectives. His goal is to understand if a prospecting
audience generates better outcomes than a retargeting audience. He uses a split test and gets the
following results, which indicate strategy A is the winner. As a result, he decides to pursue a
strategy that invests more in remarketing.
Next, Akash runs a Conversion Lift test with cells for each of his audiences. While his results show more total conversions for the remarketing audience, as in the split test, they also reveal that the prospecting audience yielded more incremental conversions. This means the test, which controlled for outside factors, determined that the Facebook ads that reached the prospecting audience generated a higher number of incremental conversions.

Chart: Number of purchases — Strategy B (prospecting) drives more incremental purchases than Strategy A (remarketing).
With this information, Akash is able to identify the specific incremental value of a prospecting
audience over a remarketing audience for his current business strategy. This scenario helps show
how a lift test can help guide larger scale, strategic decisions for campaigns. Conversely, split
tests are better suited to smaller decisions that don't need to control for outside factors.

Methodology of Conversion Lift


A Conversion Lift test uses your chosen audience to create a test and control group. It then
tracks conversions that the pixel, Facebook SDK and offline conversions (for actions that
happen in stores or over the phone) observe throughout the duration of the test.
Facebook uses a methodology called intent-to-treat (ITT) to manage the effect of potential error in the test results and ensure comparable audiences. It creates greater confidence that our observations are correct. In this methodology:
✓ Test and control are comparable, balanced populations.

X Test-exposed and test-unexposed aren't comparable.

X Test-exposed and control aren't comparable.

During opportunity logging, Facebook excludes people from the experimental group who didn't
have the opportunity to see the ad. This could be because they weren't on Facebook, didn't use
Facebook during that time or because the advertiser didn't win auctions. Opportunity logging
filters out conversion data that occurred before a person had the opportunity to see an ad,
regardless of their test or control designation.

Opportunity logging is a reliable way for us to create test and control groups from an audience
that's likely to see your ad. This ensures that the opportunity isn't imbalanced between the
groups.

People who were eligible for the treatment but didn't see the ad during the test period remain in the group they were originally assigned to, in this case the test group. However, people who never had the chance to see the ad are excluded at the opportunity logging level.

During the Conversion Lift test Akash ran, some people in the test and control groups weren't active on Facebook. Facebook controlled for this through opportunity logging, so they weren't included in the population.
For example, a Conversion Lift report might show that:

 There were 10 million people logged into the experiment.
 The test reached 75% of those people.
 There were 2.5 million conversions from people in the test group.

When we exclude people from the experimental group who weren't eligible to see the ads, we
reduce dilution, improve the quality of the control group and increase the accuracy of the test
results.
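A minimal sketch of this filtering logic (hypothetical records, not Facebook's actual implementation):

```python
# Each record notes a person's assigned group, whether they ever had the
# opportunity to see the ad, and whether they converted (hypothetical data).
people = [
    {"group": "test",    "had_opportunity": True,  "converted": True},
    {"group": "test",    "had_opportunity": False, "converted": True},   # excluded
    {"group": "control", "had_opportunity": True,  "converted": False},
    {"group": "control", "had_opportunity": False, "converted": True},   # excluded
]

# Opportunity logging drops people with no opportunity from BOTH groups,
# so the remaining test and control populations stay comparable.
eligible = [p for p in people if p["had_opportunity"]]

test_conversions = sum(p["converted"] for p in eligible if p["group"] == "test")
control_conversions = sum(p["converted"] for p in eligible if p["group"] == "control")

print(test_conversions, control_conversions)
```

Note that the filter is applied symmetrically: a control-group member with no opportunity is excluded just like a test-group member, which is what keeps the comparison unbiased.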
To promote their new product, the team at Axion Electronics plans to launch a national TV
campaign with an online component across a few digital platforms, including Facebook. The TV
campaign runs through April, and the Facebook campaign runs from the second week of April
to the first week of May.

The brand team wants to understand the effect Facebook has in addition to all other media. They
also want to know if the national TV exposure could contaminate a Brand Lift on Facebook,
because both campaigns target people ages 18–34.
Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which could lead to an underestimation of the incremental impact of the Facebook
campaign.

Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which could lead to an overestimation of the incremental impact of the Facebook campaign.

TV exposure can't contaminate the test, because the audiences that the TV and digital
platforms reach are different

TV exposure can't contaminate the test, because Brand Lift uses randomized control trial
methodology

An analyst at an ecommerce business wants to test the hypothesis that automatic placements can
increase incremental sales for their business compared to only Facebook and Instagram
placements.

If results show with 99.9% confidence that the use of automatic placements resulted in
additional conversions, what should they do next?
Run another Conversion Lift test to ensure confidence in results.

Expand your placements for future direct response campaigns.

Use Facebook and Instagram placements without automatic placements.

Run a split test to confirm that the results are valid.

Key takeaways
 Conversion Lift uses data sources to measure the number of conversions Facebook ads
generate.
 Conversion Lift helps you understand the incremental impact your Facebook ads have.
 You can use the information from a Conversion Lift test to validate a hypothesis, guide large-
scale decisions and understand ROAS.
 Conversion Lift divides people into randomized test and control groups using an ITT methodology and compares the performance of these groups over time.

Sources
 About Conversion Lift Tests
 About Lift Holdouts in Facebook Advertising Tests
 Cost per Conversion Lift

You might also like