Measurement Methodologies - Facebook
In this course, you will compare experimental and observational measurement methods and recognize when to implement A/B tests versus RCTs. You will also learn how A/B, Brand Lift and Conversion Lift tests are conducted on Facebook.
The measurement of marketing efforts across devices and platforms is a key element of business
development. This practice includes more than just traditional digital metrics, such as clicks or post
engagement.
If you measure only independent ads or campaigns, you may limit your understanding of your full
advertising effort. When you identify which variables work and focus on your business goals and
outcomes though, you can use the insights you gain to adjust your overall strategy and guide future
campaigns.
When you want to measure and analyze your campaign, it's important to distinguish between the
different ways you can do this.
Data
Provides an aggregated record of when people interact with content
Reporting
Organizes data into informational summaries
Measurement
Provides a standardized approach to your data to better understand campaign performance
Insights
Provides guidance to help you improve your future marketing strategies
To use Facebook measurement and analysis tools effectively, you need to know what to measure.
This is where we must consider your business goals.
A key performance indicator, or KPI, is a metric that helps you assess whether you've reached your
goal. It’s important to filter your data and determine which KPI is relevant to your primary goal, so
you can prioritize your resources.
Tools to collect relevant information
Before you begin measurement, confirm you have the appropriate tools set up to collect the relevant information and assess your results.
These tools are connections or pieces of code that collect the information that Facebook uses to
measure the results of your ads. Here are the different tools you can integrate:
Observational methods offer marketers less control, which makes it difficult to use the data to derive
insights on causal effects. After an ad campaign is complete, researchers can attempt to measure a
causal effect on the desired results with data they've collected throughout the campaign.
For various reasons, some people might have seen an ad while others haven't. Maybe they didn't scroll far enough on a page to see it, they weren't in the target audience or they hadn't browsed the internet lately.
The experimental method enables marketers to expose a group of people to an ad while another
group functions as a control group. The researchers can then measure the effect after the campaign
concludes. A well-designed experiment randomly assigns who sees an ad and who doesn't. That
way, marketers can gain insight about the effect an ad has on sales based on the difference between
the exposed and unexposed groups.
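To make the contrast concrete, here's a minimal toy simulation (all numbers are hypothetical, not Facebook data): heavy internet users are both more likely to see an ad and more likely to buy anyway, so a naive exposed-versus-unexposed comparison overstates the ad's true effect, while random assignment recovers it.

```python
import random

def buy_prob(heavy, saw_ad, ad_effect):
    base = 0.10 if heavy else 0.02   # heavy internet users buy more anyway
    return base + (ad_effect if saw_ad else 0.0)

def simulate(n=100_000, ad_effect=0.02, seed=1):
    rng = random.Random(seed)
    obs = {True: [0, 0], False: [0, 0]}   # saw_ad -> [buyers, people]
    rct = {True: [0, 0], False: [0, 0]}   # in_test -> [buyers, people]
    for _ in range(n):
        heavy = rng.random() < 0.5
        # Observational world: heavy users are far more likely to see the ad.
        saw = rng.random() < (0.8 if heavy else 0.2)
        obs[saw][0] += rng.random() < buy_prob(heavy, saw, ad_effect)
        obs[saw][1] += 1
        # Experimental world: exposure is randomized, independent of usage.
        test = rng.random() < 0.5
        rct[test][0] += rng.random() < buy_prob(heavy, test, ad_effect)
        rct[test][1] += 1
    rate = lambda d, k: d[k][0] / d[k][1]
    return (rate(obs, True) - rate(obs, False),
            rate(rct, True) - rate(rct, False))
```

Running `simulate()` shows the observational difference (roughly 0.07) far exceeds the randomized difference, which lands near the true `ad_effect` of 0.02.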
Difference between experimental and observational methods
There can be a significant discrepancy between results from commonly used observational approaches and true experiments.
There are sometimes instances when the results from observational approaches on a single ad
campaign come close to results from true experiments. However, those cases are rare, and it's
difficult to predict when it might happen. You can't rely on observational methods to provide the
right answer in the future.
When we consider a collection of randomized control trials (RCTs), as we do in Facebook data-
driven attribution (DDA), we find measurements are less biased and have lower variance than
common observational methods for single campaigns.
An RCT is still preferable for a more accurate assessment of a campaign's effect. However, when it's not practical to run an experiment, this type of model-based attribution is a more reliable method, even when results show discrepancies compared to experimental methods.
Knowledge check
A researcher analyzes data from Ads Manager to determine whether people were more likely to respond to video or image ads in the past three months.
The marketer tries to isolate the exposed and unexposed groups after the fact, based on how people naturally fell into either category.
A researcher shows different types of ads to separate groups of people in a controlled manner.
Marketers plan who sees an ad and who doesn't.
Observational methods
Experimental methods
Storx is a large, membership-based, fictional ecommerce retailer that
focuses on products for parents of young children. The Storx marketing team ran a new
campaign that offers a complimentary service called TeleNanny. TeleNanny gives new moms
and dads the expert opinion of live child care experts at the tap of a button on the app or website.
The team saw a substantial increase in sales this month, and they want to know if that increase is
due to the new advertising campaign.
Use an observational method to assess the effect campaign exposure had on sales. Evaluate
the results for people who were and weren't exposed to the ads.
Use an observational method that shows different types of ads to separate groups of people
in a controlled manner.
Set up an experiment to isolate the exposed and unexposed groups afterward based on how
people naturally fell into either category.
Key takeaways
Before you begin measurement, confirm you have the appropriate data tools set up to collect the
relevant information and measure your results.
Observational methods offer marketers less control, which makes it difficult to use the data to
derive insights into causal effects.
With experimental methods, marketers expose a group of people to an ad while another group
functions as a control.
The team has two different images they believe could work for their ads, so they conduct an A/B
test to help them decide which is most effective. They create an A and a B version of the ads where
both versions are the same, except for the image.
The results from the A/B test show version B as the winner with more results and a lower cost per
result, so the LaLueur team uses that ad for the rest of the campaign.
Tip: A/B tests enable you to compare multiple campaign strategies. However, some A/B tests don't
include control groups, only randomized test groups. In this scenario, they don't measure causality
or the incremental value of a strategy, so the A/B test only identifies correlation, not causation.
Randomized control trials (RCTs): Brand Lift and Conversion Lift
Incrementality
To really understand how our randomized control trials work, you must first understand the concept
of incrementality. The basis of good measurement is incrementality, which means the degree to
which a measurement method can estimate the causal effect of an isolated marketing activity, such
as an ad.
This considers how much value comes specifically from ads (for example, conversions that occurred
as a result of them), which helps marketers understand the returns their ads generate.
A/B tests alone don't tell you how many incremental results your ads generated. When you compare strategies with different baselines, there are limits to A/B testing because there are limits to last-click attribution. For that reason, A/B testing works better for rapid optimization and routine decisions related to objectives, such as app installs or lead generation. For bigger business decisions based on sales or conversion-based outcomes, a lift test can help you confidently decide your next step.
Incrementality measurement is central to your ability to understand the true value of your marketing decisions. Advertisers need to measure the proportion of sales each targeting strategy generates to learn which has the potential to generate the largest proportion of additional sales in the future.
Success online doesn't result from a single action. Instead, businesses need to build a foundation of
tools and systems that work together. They also need to periodically examine those tools and
systems to confirm they're performing at optimal levels. When you continuously troubleshoot
targeting issues, you can improve your campaign at all levels and maintain your ability to reach the
right audience.
Axion Electronics is a fictitious manufacturer of high-end televisions that sells their products
through their website. The Axion marketing team needs to assess which proportion of their
advertising budget they should spend on ads targeted to previous buyers versus ads targeted to
people who visited the website but never made a purchase.
The advertiser pauses its usual campaigns and sets up an A/B test with two ad sets, both
optimized for purchases. The only difference is the targeting strategy. They then run the ad sets
for seven days. The A/B test results are based on attributed conversions with a last-touch
attribution model to identify the winning strategy. This measurement approach doesn’t
accurately measure causality though.
The test should run a minimum of three weeks to collect sufficient data.
Key takeaways
A/B tests enable you to compare the performance of different versions of your ads against each
other. This helps you understand which elements achieve the best performance based on your
current attribution settings.
With an A/B test, you have a few options for the independent variables to include in your test.
Incrementality is the degree to which a measurement method can estimate the causal effect of an
isolated marketing activity.
Sources
A/B Tests Types Available on Facebook
Viewing and Understanding Facebook A/B Holdout Test Results
In simple terms, the test group sees your ads while the control group doesn't. The groups are completely randomized and identical, so any differences between the groups are purely the result of the Facebook campaign. The randomized yet identical groups help ensure factors such as TV commercials or other outside factors don't contaminate the results of the test. The difference in results between the two groups is the incremental impact of the campaign.
Randomized control trials: the entire Facebook population is randomized into two groups that are identical in every respect, so any difference achieved is purely the result of the Facebook campaign.
RCT methodology
To measure the impact of your ads, Facebook lift solutions randomize your audience into test
and control groups with similar characteristics. The system then delivers ads to the test groups.
After the ads run, the system administers a poll or tracks sales outcomes for both the test and
control group. Finally, the system compares poll results or sales outcomes from the test group of
people who saw your ads against the control group of people who didn’t. This enables you to
measure the incremental impact of your campaign.
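The final comparison step can be sketched as a small calculation; the conversion counts below are hypothetical, not from a real test:

```python
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    """Compare outcome rates between the test group (saw ads) and the
    control group (didn't), and express the difference as lift."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return {
        "incremental_rate": test_rate - control_rate,  # extra conversions per person
        "relative_lift": (test_rate - control_rate) / control_rate,
    }

# Hypothetical outcome: 2.2% of the test group converted vs. 2.0% of control,
# so the relative lift is about 0.10, i.e. a 10% conversion lift.
result = incremental_lift(198_000, 9_000_000, 20_000, 1_000_000)
```

The same shape applies to Brand Lift, with poll responses in place of conversions.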
Brand Lift
Let’s explore the two RCT tests Facebook offers. First, let's focus on Brand Lift tests.
Brand Lift testing enables you to evaluate the awareness lift your campaign is responsible for.
Brand Lift tests help you prove a causal relationship between your advertising and the results
you observe. If we look back to our RCT methodology, the system delivers polls to both the test
and the control groups.
This is how polling in the test appears on the platform. It also helps you prove a causal relationship between your advertising and the results you observe. Poll questions might involve brand favorability or product consideration. Each person in your test receives only one question, selected at random.
If someone is eligible to receive the poll question, the system shows it to them 4–48 hours after
ad delivery.
Note: There's a seven-day polling eligibility window, which means that if the system serves a
person a poll and that person doesn't respond, they're eligible to see the poll again seven days
after exposure. That second one could be a different question than the original poll. This
situation is only possible if the campaign runs longer than seven days.
Axion Electronics
Axion Electronics is a fictional electronics manufacturer that makes the most technologically
sophisticated televisions in the world. Despite the fact that they sell a superior product,
marketers with superior resources have eclipsed their efforts. The general target audience for
Axion is people ages 18–45.
Let’s explore how the Axion team conducted their Brand Lift test.
First, they created the target audience for the campaign, and then Facebook divided the audience
into random, non-overlapping groups. This randomization helped ensure other factors wouldn't
skew the results. The system delivered the ads to the test group but not the control group.
Conversion Lift
Conversion Lift tests are another form of randomized control trial test Facebook offers.
Conversion Lift tests help you evaluate increases in certain behaviors, or conversions, that your
campaign is responsible for. Conversion Lift tests also enable you to prove a causal relationship
between your advertising and the increases in conversions that you observe.
It’s important to note that you need to integrate conversion data with Ads Manager. You should
therefore have the pixel, Facebook SDK or Conversions API installed to provide a data signal.
Let’s think back to an overview of how RCTs work. Conversion Lift tests track sales outcomes.
The difference in sales between the test group and the control group is your conversion lift.
At a glance, audience 2 may look like the better option because of the higher conversion rate.
However, these two audiences actually each had a separate Conversion Lift test.
Audience 1 (Demo): ages 18–45; reach: 10 million; spend: $20,000; eCPM: $2.00; conversion rate: 0.012
Audience 2 (Behavioral): people interested in HD home theaters; reach: 1 million; spend: $10,000; eCPM: $10.00; conversion rate: 0.205
eCPM stands for “effective cost per mille,” the amount an advertiser pays per one thousand impressions of its ads, calculated as total spend divided by impressions, multiplied by 1,000.
The example shows that audience 1 is actually more cost-efficient, given the test and control outputs: the campaign ends up costing $1 per incremental conversion for the demo audience and $2 per incremental conversion for the behavioral audience. Examples such as this one emphasize why it’s important to conduct lift tests and how they can help you make more efficient choices.
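The arithmetic behind this comparison can be checked directly. The spend and eCPM figures come from the example above; the incremental-conversion counts are assumed here so that the costs match the $1 and $2 stated in the text:

```python
def implied_impressions(spend, ecpm):
    # eCPM is cost per 1,000 impressions, so impressions = spend / eCPM * 1,000.
    return spend / ecpm * 1_000

def cost_per_incremental_conversion(spend, incremental_conversions):
    return spend / incremental_conversions

# Audience 1 (demo): $20,000 spend at a $2.00 eCPM buys 10 million impressions,
# consistent with the stated 10 million reach.
assert implied_impressions(20_000, 2.00) == 10_000_000

# Assumed lift-test outputs: 20,000 incremental conversions for the demo
# audience and 5,000 for the behavioral audience (hypothetical numbers).
demo = cost_per_incremental_conversion(20_000, 20_000)        # $1.00
behavioral = cost_per_incremental_conversion(10_000, 5_000)   # $2.00
```

Despite the behavioral audience's far higher conversion rate, its higher eCPM and smaller incremental effect make it twice as expensive per incremental conversion.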
Knowledge check
An advertiser runs a Conversion Lift test on a new media platform. During discussions about
measurement, the team from the media platform says that they can compare the conversion rates
for the test group exposed to ads versus a pre-selected control group. The team from the
platform feels its methodology is valid. What concern should the analyst have with the test
design?
Randomization is necessary for valid experimental design
The brand team wants to understand the impact of Facebook in addition to all other media. It
also wants to know if the national TV exposure might contaminate a Brand Lift test on
Facebook because both campaigns target people ages 18–34. How will the national TV exposure
interact with the Brand Lift test on Facebook?
Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which would lead to an underestimation of the incremental impact of the Facebook campaign.
Some of the people in the Brand Lift control group might see the ads for the new product on
TV, which would lead to an overestimation of the incremental impact of the Facebook campaign.
TV exposure can't contaminate the test, because the audiences that the TV and digital
platforms reach are different.
TV exposure can't contaminate the test, because Brand Lift uses randomized control trial
methodology.
Key takeaways
Randomization is essential to a well-designed experiment.
A/B tests can help you compare different versions of ads against each other.
Brand Lift tests can help you determine whether your ad campaigns increase brand
awareness.
Conversion Lift tests can help you understand the incremental impact your ad campaigns
have on conversions.
A/B testing with Facebook enables you to create multiple ad sets and test them against each
other to see which approach produces the best results.
For example, you can test the same ad on two different audiences to see which performs better.
Likewise, you can test two different creative formats to determine which is more effective. This
can guide the creative direction you take in future ads.
We sometimes refer to this practice as racing tactics within a variable. Once we have the results
of the test and we've determined the winning tactic, the race is over.
Each group sees ad sets that are identical except for the variable you're testing, which may
include the audience, delivery optimization, placements, creative or product sets.
Note: When you create an A/B test, use only one variable. If you test more, you won't know
what’s caused the change.
Axion Electronics
The marketing strategist at Axion wants to test the impact of the "Learn More" call-to-action
button compared to a "Shop Now" call-to-action button. Both buttons direct the audience to the
promotions page on the website, but the text differs. With an A/B test, the Axion team can
determine which button is more effective and use that knowledge to refine future campaign
strategies.
Cell 2: The new strategy used machine learning. It consolidated ad sets down to three, used
campaign budget optimization (CBO) to automatically shift budgets to the best-performing ad
set in real time, and targeted ads to broad audiences.
For Cell 2, the Lionsgate team created three audiences of different scales, all of which were
broad enough to enable the algorithm to find as many people who were likely to convert as
possible. The system then delivered ads to all three audiences simultaneously in News Feed and
their Instagram feeds.
The first audience consisted of people ages 17 and older who have an interest in movies. The
second audience used Custom Audiences to reach people who engaged with the ads or Page for
the film. The third audience was a lookalike audience based on the second. Lionsgate served ads
to this group again close to the release of the film.
Results
The Lionsgate team used a split test in Ads Manager, which revealed that the simplified account
structure, CBO and scaled audiences they used in the August 13–15, 2019 campaign
outperformed the usual setup. The simplified account setup achieved:
20% more showtime search conversions.
18% lower cost per showtime search.
Interpret results
After a test, you receive an email report with results, but you can also review your test results
directly in Ads Manager
Expand each header below to learn more.
Winning ad
We compare the cost per result of each ad set or ad based on your campaign objective to
determine the winner. We also use the data from the test to simulate the performance of each
variable tens of thousands of times to determine how often the winning outcome would have
occurred.
Confidence level
For example, you run an A/B test with a creative variable, such as a video versus an image.
Your results show that the ad set with the video is the winning strategy with a 90% confidence
level. This percentage indicates that the video ads had the lowest cost per result, and there's a
90% chance that if you ran the same test multiple times, you'd achieve the same result.
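The "simulate tens of thousands of times" step can be illustrated with a small Monte Carlo sketch. This is not Facebook's actual procedure, just an assumed resampling approach using a normal approximation to the binomial, with hypothetical inputs:

```python
import math
import random

def win_probability(spend_a, imps_a, conv_a, spend_b, imps_b, conv_b,
                    sims=10_000, seed=0):
    """Fraction of simulated replays in which ad set B has the lower
    cost per result (a rough stand-in for a confidence level)."""
    rng = random.Random(seed)

    def redraw(imps, conv):
        # Resample the conversion count around the observed rate.
        p = conv / imps
        return max(1.0, rng.gauss(conv, math.sqrt(imps * p * (1 - p))))

    wins = 0
    for _ in range(sims):
        cost_a = spend_a / redraw(imps_a, conv_a)
        cost_b = spend_b / redraw(imps_b, conv_b)
        wins += cost_b < cost_a
    return wins / sims

# Two ad sets with equal spend; B converted 500 of 100,000 impressions
# versus A's 400, so B wins the vast majority of simulated replays.
p = win_probability(1_000, 100_000, 400, 1_000, 100_000, 500)
```

The closer the two ad sets perform, the closer this fraction falls to 0.5, which is why close tests report low confidence.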
Take action
With these results, it's most effective to adopt the winning strategy. You may want to move the
budget of the lower-performing ad set to the one that won, or you may instead edit the lower-
performing ad set to give it the same characteristics as the winner.
For any A/B test, a confidence level above 75% is an indication that further use of the winning
ad set is likely a good strategy.
Create a new ad from the winning ad set right from the results email.
Reactivate the winning ad set in Ads Manager.
Create a new campaign based on what you learned.
You could also run more tests to refine your strategy. For example, if an audience of people ages 24–30 outperforms an audience of people ages 18–30, you could then test an audience of people ages 24–30 against an audience of people ages 24–36. When you refine your audience, you might generate greater results from your campaigns.
If your results have low confidence, you can test the campaign again with a longer schedule or
higher budget. More time and budget typically generate more data, which helps us increase the
confidence in your results.
Knowledge check
LaLueur is a fictitious personal care brand that offers an inclusive range of skincare and makeup
products. The marketing team wants to assess what portion of their advertising budget they
should allocate to retargeting previous buyers versus targeting ads to consumers who visited
their website but didn't make a purchase. To predict which is most likely to generate the largest
proportion of future sales, the team needs to measure the proportion of sales each targeting
strategy generated.
The team pauses its usual campaigns and sets up an A/B test that uses two ad sets optimized for purchases. The only difference is the targeting strategy each ad set uses. The advertiser runs the ad sets for seven days, and the results are based on attributed conversions through a last-touch attribution model. This measurement approach doesn't accurately measure causality.
What's causing this measurement issue?
(Choose 3)
They should set up the test to use an even credit attribution model.
They should set up the test to prevent contamination to the test group.
The test should run for a minimum of three weeks to collect sufficient data.
Key takeaways
Compare tactical approaches by using one campaign variable (for example, creative, audience,
placement, product sets or delivery optimization).
Make data-driven recommendations for future campaigns based on results.
Use the Facebook A/B test results confidence level, which represents our confidence in the declaration of the winner. A confidence level above 75% is reliable.
Improve future campaigns by using the winning ad or create a new ad from the winning ad set.
You can also create a new campaign based on what you learned.
Sources
A/B tests types available on Facebook
Viewing and Understanding Facebook A/B holdout test results
Success Story: Lionsgate
Brand Lift
Facebook has several solutions to help advertisers understand the impact of their advertising. A
Brand Lift test is one such solution.
Incrementality measurement is central to your ability to understand the true value of your
marketing decisions. Advertisers need to measure the proportion of sales each targeting strategy
generates to determine which is most effective.
A Brand Lift is a randomized control trial (RCT) that measures how a Facebook campaign
impacts brand awareness, including perception and recall of your brand or product among a
particular audience. A Brand Lift test includes a control group, so it can quantify the impact of
your Facebook advertising independent of all your other marketing efforts. To run a test
independently, an advertiser can use experiments or a managed test with a Facebook account
representative. This course only focuses on managed tests.
There are limits to metrics that measure brand campaigns, and proxy measurements such as
clicks, likes and shares don’t necessarily align with business outcomes or correlate with brand
equity. For example, clicks may not be an accurate metric, because someone who clicks on your
ad may not remember your brand or product later.
Case study
The marketing team at RXBAR wanted to better understand how their Facebook campaigns
could generate stronger awareness and increase purchases. The snack company collaborated
with its teams from Publicis to test a measurement strategy for multiple marketing channels,
which resulted in a two-phase campaign.
In phase one, the team used the brand awareness objective and optimized toward ad recall. In
phase two they used the conversion objective and optimized toward purchases. For both phases,
the team built effective mobile-first ads for delivery in News Feed, Instagram feed and Stories.
The creative included branding elements up-front and messaging that people could easily
understand with sound off.
Phase one: Brand awareness
This phase featured a “Simple Good” theme. It included a video ad that highlighted the
simplicity of ingredients inside RXBAR protein bars and messaging that focused on flavor
combinations and grams of protein.
Another video showed the individual ingredients in a protein bar and two people playing a game
of rock-paper-scissors to it.
These ads included a “Learn More” call-to-action button that linked to a third-party landing page
where people could select which retailer they’d like to purchase from. The team then showed
these ads to a broad audience of adults in the US ages 25–54.
Phase two: Direct response
For this phase, the team built a photo ad in the carousel format that showed a variety of RXBAR
nut butters with messaging about a seasonal flavor available for a limited time. Another photo ad
in the carousel format showcased the RXBAR portfolio of products and flavors, along with
messaging about how to receive a 20% discount. These ads included a “Shop Now” call-to-
action button that linked directly to the product page on the RXBAR website.
The team showed these ads to adults in the US ages 25–54 with an interest in fitness, travel,
health and food. The team also used Custom Audiences to show the ads to people who visited
the RXBAR website during phase one, added an item to their online shopping carts or purchased
within the past 90 days.
Lastly, they showed ads to a lookalike audience of people with characteristics similar to their
highest paying customers.
The RXBAR team measured the results of their January 1–March 31, 2020 campaign with a
Facebook Brand Lift, a Facebook Conversion Lift and an NCSolutions in-store sales lift study,
which revealed a:
The test calculates the difference in performance between the test and control groups, which
represents the lift of your Facebook advertising relative to the conditions of your test. It
compares poll results from people in the test group who saw your ads with people in the control
group who didn’t see your ads, so you can determine the incremental effect of your campaign.
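As a sketch of that poll comparison (the poll counts here are hypothetical):

```python
def poll_lift(test_yes, test_n, control_yes, control_n):
    """Absolute lift, in percentage points, between the share of test-group
    and control-group respondents who answered the poll positively."""
    test_rate = test_yes / test_n
    control_rate = control_yes / control_n
    return {
        "test_rate": test_rate,
        "control_rate": control_rate,
        "lift_points": (test_rate - control_rate) * 100,
    }

# Hypothetical ad recall poll: 2,400 of 8,000 test respondents recalled the
# ad versus 1,800 of 8,000 control respondents -> a 7.5-point recall lift.
result = poll_lift(2_400, 8_000, 1_800, 8_000)
```

Because both groups were randomized from the same audience, the control group's 22.5% recall rate estimates what the test group would have answered without the ads.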
Ad recall questions
An ad recall poll question is necessary for every advertiser that runs a Brand Lift. It often yields
the most statistically significant metric at the sample sizes Facebook polls. Advertisers can
create a standard ad recall or unaided ad recall question. Standard ad recall questions include the
name of the advertiser in the question itself with yes-or-no answer choices. Unaided ad recall
questions don't include the name of the advertiser in the question, but rather in the answers.
Advertisers can choose the option that suits their unique preferences and strategy.
Here's an example of a standard ad recall question:
A poll question may not necessarily appear in the same placement as the ad. For example, if the impression occurred on Audience Network, the individual could see the poll in News Feed. If a campaign combines placements, polls appear on the platforms that each placement maps to. For example, if placements are on News Feed and Instagram feed, polls appear on both. A campaign with placements on Instagram feed and Audience Network shows polls in Instagram feed and mobile News Feed only.
Use creative that works for your objective. Creative should include the logo up front within 2–3
seconds and feature the logo throughout. Also, design for sound off and frame the creative in
either a square or vertical format instead of 16:9.
Frequency
Frequency is key if you want to increase top-funnel metrics, such as ad recall. You should show your ads at least 1–2 times per week per person. It’s also important to monitor your frequency to ensure people don't see your ad too often during a campaign.
Polls
Polls should align with the creative. To ensure the creative and polls align, don't submit poll
questions until you've seen the creative.
Knowledge check
How might Brand Lift tests help an advertiser?
Choose the correct answer.
They combine proxy metrics with historical data to measure brand awareness.
They report metrics like cost per conversion lift and sales lift.
They use A/B testing to report metrics, such as clickthrough rate, clicks and likes.
They report metrics that generate real business results, such as brand awareness and ad
recall.
Key takeaways
Advertisers can use Brand Lift tests to quantify the effect of Facebook advertising on their
brand.
A Brand Lift calculates the incremental effect your Facebook ads have on brand awareness,
perception or recall.
A Brand Lift test compares poll results from people who had the opportunity to see your ads (the
test group) with people who didn’t (the control group).
Sources
Success Story: RXBAR
It's important for advertisers to systematically and reliably measure the true value of their
advertising. This kind of measurement enables them to optimize future marketing strategies.
Facebook has several solutions to help advertisers understand the impact of their advertising.
Conversion Lift is one such solution.
Conversion Lift is a randomized control trial (RCT) that uses test and control groups to measure
the number of incremental conversions Facebook ads generate. An advertiser can run a
Conversion Lift test on a specific Facebook ad campaign or across all of its Facebook
advertising.
Conversion Lift can help you assess the effectiveness of your Facebook ads, independent of all
your other marketing efforts. It can also help you answer the question: What additional
conversions occurred as a result of my Facebook ads? This increase is commonly known as
incremental impact.
Data sources
Conversion Lift uses data from three sources to measure conversions: the Facebook pixel for
online actions, the Facebook SDK for in-app actions and offline conversions for offline actions.
It's important to ensure you implement these data sources correctly before you run a Conversion
Lift test.
All Conversion Lift tests can help determine the incremental effects of your advertising and also
answer questions such as:
As a result, a lift test can give you better insights into how to adapt an advertising strategy.
However, it might not be the most effective option when you want to make small changes to
your campaign and optimize your results.
Scenario
Akash is an analyst at Dart, a new, rapidly growing auto rental startup in South Asia.
The marketing team is running a new campaign, and there is more interaction with the ads than
Akash expected. He speculates this is the result of the creative, but he wants to confirm this. He
runs a Conversion Lift test, which compares a Facebook campaign against a control group. The
results show that his Facebook ads did in fact cause the incremental conversions.
(Chart: number of purchases for Strategy A, remarketing, versus Strategy B, prospecting. Strategy A wins the split test.)
For example, Akash compares two audience strategies and wants to see how a split test and
Conversion Lift test reveal different perspectives. His goal is to understand if a prospecting
audience generates better outcomes than a retargeting audience. He uses a split test and gets the
following results, which indicate strategy A is the winner. As a result, he decides to pursue a
strategy that invests more in remarketing.
Next, Akash runs a Conversion Lift test with cells for each of his audiences. While his results again show more total conversions for the remarketing audience, as in the split test, they also reveal that the prospecting audience yielded more incremental conversions. This means the test, which controlled for outside factors, determined that the Facebook ads that reached the prospecting audience generated a higher number of incremental conversions.
(Chart: number of purchases for Strategy A, remarketing, versus Strategy B, prospecting. Strategy B drives more incremental purchases.)
With this information, Akash is able to identify the specific incremental value of a prospecting
audience over a remarketing audience for his current business strategy. This scenario helps show
how a lift test can help guide larger scale, strategic decisions for campaigns. Conversely, split
tests are better suited to smaller decisions that don't need to control for outside factors.
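Akash's two views of the same campaigns can be sketched with hypothetical purchase counts: remarketing wins on total attributed purchases, while prospecting wins once each cell's control group is subtracted out.

```python
# Hypothetical lift-test cells (one per audience strategy).
cells = {
    "A: remarketing": {"test": 1_000, "control": 850},
    "B: prospecting": {"test": 700, "control": 300},
}

def incremental(cell):
    # Incremental purchases = test-group purchases minus control-group purchases.
    return cell["test"] - cell["control"]

# A split test only sees total attributed purchases in the test groups...
split_winner = max(cells, key=lambda name: cells[name]["test"])

# ...while a Conversion Lift test subtracts each control group first.
lift_winner = max(cells, key=lambda name: incremental(cells[name]))
```

Here the split test crowns remarketing (1,000 vs. 700 total purchases), but the lift test shows prospecting drove 400 incremental purchases against remarketing's 150, because most remarketed customers would have purchased anyway.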
During opportunity logging, Facebook excludes people from the experimental group who didn't
have the opportunity to see the ad. This could be because they weren't on Facebook, didn't use
Facebook during that time or because the advertiser didn't win auctions. Opportunity logging
filters out conversion data that occurred before a person had the opportunity to see an ad,
regardless of their test or control designation.
Opportunity logging is a reliable way for us to create test and control groups from an audience
that's likely to see your ad. This ensures that the opportunity isn't imbalanced between the
groups.
People who were eligible for the treatment but didn't see the ad during the test period remained
in the original group that they were assigned to, which was the test group. However, people who
never had the chance to see the ad are excluded at the opportunity logging level.
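A sketch of that filtering logic, with an assumed record shape (this isn't Facebook's internal data model):

```python
def opportunity_filter(people):
    """Keep only people who had an opportunity to see the ad, and drop any
    conversions logged before that opportunity, in both groups alike."""
    kept = []
    for person in people:
        if person["opportunity_time"] is None:   # never had the chance to see the ad
            continue
        kept.append({
            "id": person["id"],
            "group": person["group"],            # test/control assignment is unchanged
            "conversions": [t for t in person["conversions"]
                            if t >= person["opportunity_time"]],
        })
    return kept

# Person 3 is excluded entirely; pre-opportunity conversions are dropped.
people = [
    {"id": 1, "group": "test", "opportunity_time": 5, "conversions": [3, 7]},
    {"id": 2, "group": "control", "opportunity_time": 2, "conversions": [1, 4]},
    {"id": 3, "group": "test", "opportunity_time": None, "conversions": [6]},
]
kept = opportunity_filter(people)
```

Note the filter applies identically to test and control records, which is what keeps the two groups comparable.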
For the duration of the Conversion Lift test Akash ran, there were some people in the test and
control group who weren't active on Facebook. Facebook controlled for this through opportunity
logging. They weren't included in the population.
For example, you can see this in the image that depicts a Conversion Lift report.
An analyst at an ecommerce business wants to test the hypothesis that automatic placements can
increase incremental sales for their business compared to only Facebook and Instagram
placements.
If results show with 99.9% confidence that the use of automatic placements resulted in
additional conversions, what should they do next?
Run another Conversion Lift test to ensure confidence in results.
Key takeaways
Conversion Lift uses data sources to measure the number of conversions Facebook ads
generate.
Conversion Lift helps you understand the incremental impact your Facebook ads have.
You can use the information from a Conversion Lift test to validate a hypothesis, guide large-
scale decisions and understand ROAS.
Conversion Lift divides people into randomized test and control groups in an intent-to-treat (ITT) methodology and compares the performance of these groups over time.
Sources
About Conversion Lift Tests
About Lift Holdouts in Facebook Advertising Tests
Cost per Conversion Lift