
How to plan for A/B testing

As you’ve already learned, an A/B test compares two variants of the same content to determine
which yields better results. An A/B testing plan helps you structure this experiment by outlining key
information about the test and its success metrics. In this reading, you’ll learn how to prepare an
effective plan for an A/B test.

Define the problem


You can run A/B tests on almost any digital content or design element. But no matter what you
choose to test—from an ad headline to the color of a button—you should first identify a specific
problem or goal for your experiment. It might be that you want to improve low conversion rates or
find a way to fill a new customer need. Even if the problem or goal seems large, it’s best to start with
small changes. Understanding how minor adjustments affect performance will give you a baseline
for testing more ambitious changes.

Elements of an A/B testing plan


The details of an A/B testing plan may differ by company or testing tool, but the fundamentals of an
effective plan are often the same. Below are some examples of common elements you might find in
an A/B testing plan.

Hypothesis
Once you have a clear idea of what you want to achieve, it’s time to create a hypothesis. In an A/B
test, the hypothesis describes the “why,” “what,” and “how” of the experiment. It also makes a
prediction about the outcome. A hypothesis should be backed by research or data and focus on a
single issue.

At minimum, the hypothesis should describe:

- The problem or insight the test will address
- What you plan to change to address the problem
- The expected impact or result of the change

For example, imagine that a company wants to increase the percentage of marketing email
recipients who click through to their website. After examining the data, they determine that
subscribers are more likely to click elements that appear near the top of an email. Their A/B test
hypothesis could be:

“Because we discovered that customers are more likely to click elements near the top of an email,
we expect that changing the position of a link will boost our click-to-open rate by 15%.”

A strong hypothesis makes it easier to report on the test’s results and to share your insights with
stakeholders. But the process of creating a hypothesis is just as important. That’s because it forces
you to state the reason for the test in concrete terms. If you find it difficult to create a compelling
hypothesis, it might mean you should gather more data before running the test.

Variants
With a hypothesis in place, your team can begin to plan the variants. Variants are the different
versions of the content served to users during an A/B test. Variant A represents the original content,
while variant B usually differs in some meaningful way. It’s generally a good idea, however, to limit
each variant to a small number of modifications, ideally just one. Changing too many things at once
can make it harder to interpret the test results.

For example, in the email marketing scenario, the link in variant B might be moved to the top of the
message. But what if this variant also included new call to action (CTA) text and turned the link into a
button? How would you measure the impact of each change individually? By limiting the changes to
each variant, you’re more likely to get clear and actionable results.

Note: Even a “failed” test can provide valuable data. If your B variant doesn’t produce the expected
improvement, that doesn’t necessarily mean your hypothesis is wrong. You may need to test
different variants to get the outcome you want.

Metrics
Before you begin testing, your team should decide how to measure results. While you’ll likely track
several metrics for each test, some will be more important than others. In the email marketing
example, the primary metric is the click-to-open rate (the percentage of people who opened the email
and then clicked one or more links). But the team might also track the conversion rate to find out what
percentage of people who clicked that link eventually made a purchase.
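
To make these two metrics concrete, here is a minimal sketch in Python, using made-up counts, of how a team might compute them for one variant:

# Hypothetical counts for one email variant; the numbers are illustrative only.
emails_opened = 4000         # recipients who opened the email
unique_link_clicks = 480     # openers who clicked at least one link
purchases = 60               # clickers who went on to make a purchase

click_to_open_rate = unique_link_clicks / emails_opened   # primary metric
conversion_rate = purchases / unique_link_clicks          # secondary metric

print(f"Click-to-open rate: {click_to_open_rate:.1%}")    # 12.0%
print(f"Conversion rate:    {conversion_rate:.1%}")       # 12.5%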

Current performance and expected improvement

You’ll also need to agree on a definition of success. How big of an improvement do you want or
expect from the test? Is a 5% increase enough? How about 10%? The goal you set can be
ambitious, but it should also be realistic according to the available data.
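
Using the 15% figure from the earlier hypothesis, here is a short sketch, assuming that the boost is meant as a relative lift over the baseline, of how the target could be set and the observed result compared against it:

# Illustrative numbers only; "15%" is interpreted here as a relative lift.
baseline_cto_rate = 0.12      # current click-to-open rate (12%)
expected_lift = 0.15          # agreed definition of success: +15% relative

target_cto_rate = baseline_cto_rate * (1 + expected_lift)    # 0.138, i.e. 13.8%

observed_cto_rate = 0.131     # measured on variant B after the test
observed_lift = observed_cto_rate / baseline_cto_rate - 1    # about +9.2%

print(f"Target rate:   {target_cto_rate:.1%}")
print(f"Observed lift: {observed_lift:+.1%} (goal was {expected_lift:+.0%})")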

Other testing details


An A/B testing plan can also contain other vital details about the test. Remember that different
companies may put different information in their A/B testing plans, but a basic plan might include:

- A brief overview of the test and its purpose
- The channel being tested (e.g., Google Ads)
- The type of asset being tested (e.g., a display ad or button copy)
- The duration of the test (start and end dates)
- The number of users per variant
- The confidence level (the estimated probability that the test results would remain consistent if the test
ran longer); one way to estimate a confidence figure is sketched after this list
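
The exact calculation depends on the testing tool, but for click-based metrics a common approach is a two-proportion z-test. Below is a minimal Python sketch, using SciPy and made-up counts, of how a confidence figure for the difference between two variants might be estimated:

from math import sqrt
from scipy.stats import norm

# Hypothetical results; replace with the counts from your own test.
opens_a, clicks_a = 4000, 480    # variant A: 12.0% click-to-open
opens_b, clicks_b = 4000, 552    # variant B: 13.8% click-to-open

p_a, p_b = clicks_a / opens_a, clicks_b / opens_b
p_pool = (clicks_a + clicks_b) / (opens_a + opens_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / opens_a + 1 / opens_b))

z = (p_b - p_a) / se
p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided p-value
confidence = 1 - p_value

print(f"z = {z:.2f}, confidence = {confidence:.1%}")   # roughly 98%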

Key takeaways
A/B testing is a valuable tool for improving digital marketing and e-commerce performance. An A/B
testing plan helps you organize your efforts and results, which can lead to more efficient
improvements. With a data-driven hypothesis, carefully selected variants, and a plan to measure
success, you’re more likely to reach your testing goals.

A/B test ad variations and bidding strategies in Google Ads
A video in this course demonstrated how to perform an ad variation test in Google Ads. This is one
type of A/B test. Google Ads allows you to test variations for responsive search ads. This reading
reviews the steps to set up an ad variation test and introduces another type of A/B test in Google
Ads, one that tests automated bidding strategies so you can decide which one is best.

Note: All tests are performed using the Drafts and Experiments feature in Google Ads.

Steps to set up and test ad variations


Here is an overview of the steps to set up and test a responsive search ad variation in the Drafts and
Experiments section of Google Ads:

1. Decide if the variation is for all campaigns, or for certain campaigns only.
2. Select that the variation is for responsive search ads.
3. Filter ads by the text ad component that will be tested, such as headlines or descriptions.
4. Modify the text for the text ad component to be tested.
5. Name the variation.
6. Choose the start and end dates for the A/B test.
7. Configure the split; almost always 50%.
8. Save the settings. The test will automatically run on the start date.

Responsive search ad variations


Responsive search ads display best-fitting headlines and descriptions based on user search queries.
The more headlines and descriptions that are configured, the more opportunities Google Ads has to
serve ads that more closely match your potential customers’ search queries, which can improve
search ad performance. Testing variations may enable you to select the headlines and descriptions
that attract more customers.

Steps to set up and test alternate bidding strategies


You can use campaign drafts and experiments to compare your current bidding strategy to an
alternate strategy. For example, you can run an experiment to compare Target CPA (cost-per-
action) automated bidding to manual bid changes. This will help you determine if automated bidding
will improve the performance of a campaign.

Here is an overview of the steps to set up and test an alternate bidding strategy in the Drafts and
Experiments section of Google Ads. For example, you can test a Target CPA bidding strategy (the
experiment) against a maximum CPC (cost-per-click) bidding strategy (the current campaign).

1. Name the experiment.
2. Choose the start and end dates for the A/B test. You should allow a bidding strategy test to run for
four to six weeks.
3. Configure the split; almost always 50%.
4. Decide if the test will be a search-based or cookie-based split (a conceptual sketch of the difference
follows these steps).
- In a search-based split, users are evenly split between the existing and experimental campaigns.
- In a cookie-based split, users who perform repeated searches are directed back to the same
campaign. A cookie-based split may give more accurate test results, especially for remarketing
campaigns that are directed towards returning users.
5. Save the settings. The test will automatically run on the start date.
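
Google Ads handles the split for you, but the difference between the two options can be illustrated with a small conceptual sketch in Python (this is not how Google actually implements it): a search-based split assigns each search independently at random, while a cookie-based split derives the assignment from a stable user identifier, so repeat searches land in the same arm.

import hashlib
import random

def search_based_split():
    # Each search is assigned independently; the same user may see both arms.
    return "experiment" if random.random() < 0.5 else "control"

def cookie_based_split(user_id):
    # A stable hash of the user's identifier keeps repeat searches in one arm.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "experiment" if int(digest, 16) % 2 == 0 else "control"

# A returning user always gets the same arm under the cookie-based split.
print(cookie_based_split("user-123"))   # same result every time it is called
print(search_based_split())             # may differ from call to call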

Best practices to test alternate bidding strategies


A/B testing on bidding strategies must use an existing, live campaign for comparison. Choose a
campaign that has an adequate budget to split between the variations and a large enough audience
for a statistical comparison of the results. For more information about statistically significant data for
A/B tests, refer to the reading about monitoring A/B test results in this course. Only choose a
campaign that you’re willing to experiment with because the experiment may impact the
performance of the existing campaign. And finally, focus the test on one change or variable only.
Testing multiple changes at the same time prevents you from knowing for sure which variable
caused a change in performance.
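
How large is "large enough" depends on the baseline rate and the smallest lift you care about detecting. As a rough guide, here is a sketch of the standard sample-size formula for comparing two proportions, with illustrative inputs and the common defaults of 95% confidence and 80% power:

from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate number of users needed in each arm of a two-proportion test."""
    p1 = p_baseline
    p2 = p_baseline * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the confidence level
    z_beta = norm.ppf(power)            # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Example: a 2% baseline conversion rate and a hoped-for 15% relative lift
print(sample_size_per_variant(0.02, 0.15))   # about 36,700 users per variant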
