
Email Marketing Certification

Transcript: Testing Your Marketing Emails


Video Title: Why test your marketing emails?

Hi there, it’s Courtney with HubSpot Academy.

When you’re testing and optimizing your marketing emails, there are two key areas you want to focus on. The first is the test itself, and the second is how you measure and improve based on the results.

Measuring the results of your email tests is an important step but first let’s focus on the test.

So why is it so important to test your marketing emails? Before sending an email, you’re planning, preparing, and carefully crafting it. So why, after all of that effort, would you also test and change it?

Testing is important because your prospects and customers are continuing to change. They are changing the
way they buy, what they buy, and why they buy it. So if the people you are sending to are changing, then your
emails need to change too.

And instead of changing everything all at once and hoping it works, testing different elements of your emails and reviewing the results will benefit not only the people reading your emails but also you and your business.

It is important to have a specific purpose for why you are testing something. You might identify a problem or
area that’s not performing well, so you can test different options to find a solution that works for both your
contacts and for you and your business.

When you’re sending marketing emails, you’ll have goals, and by setting those goals, you’re aiming to provide value to the people reading your emails.

Creating valuable conversations from the start and continuing them over time may require you to test different
types of methods and language. Even different designs. Why? Because people have conversations differently.
And they want different conversations at certain points in their lifecycle with you.

Your contacts will continue to develop relationships with you and your business. And with that in mind, you’ll continue to test to ensure they’re getting the most value out of those relationships.

Testing the ways that you can start or continue conversations with your emails is a great way to see how your
contacts are either engaging or not engaging with those emails.

Karl Pearson, an English mathematician, once said:

“That which is measured improves.”

When you measure and test your email sends, you’re working towards improvement.

To continue to develop great relationships with people through the emails you send them, you need to always
look ahead, which means testing consistently to see how you can continue to improve to provide the most
human, helpful, and valuable experience to your contacts.

Video Title: How to develop tests for your marketing emails?

We’re going to focus on how to run tests on your marketing emails to get results that you can measure.

Every email test you run should have a strong purpose behind it. Each time you decide to test an email, ask
yourself the following questions: “Why am I running this test, and what am I hoping to get out of it?”

By testing your emails, you’re focusing your email marketing efforts around data-driven analysis, which gives you
the next steps for improving the next send.

Let’s explore how to test your emails to identify the right next steps to continue to send emails that will provide
value to your contacts.

Before diving into the steps you’re going to be using, let’s first talk about a/b testing. What is it, and how can you
use it to test your marketing emails?

A/B testing is the inbound answer to a controlled experiment. It’s defined as a method of comparing two versions of something, such as a web page, an app, or an email, to determine which one performs better.

In this case, we’re comparing two versions of an email. You could use an A/B test to pinpoint specific variations of your email and focus on how to improve that asset, which allows you to rely on data-driven analysis instead of a guessing game.

Most email marketing tools will have a specific feature that allows you to A/B test assets, but you can also run an A/B test on your own, without these tools.

An A/B test allows you to test variations of your email, alongside one another. Then you can review the results to
see which one performed better to get the data to back up future decisions on your email sends.

Now that you have an idea of what A/B testing is, let’s move on to the steps you’ll take to run tests on your
marketing emails.

The first step is to define the goal and purpose of your test. Second, evaluate the segment of recipients you're sending to. Third, design your test. And lastly, review and start your test.

These are the steps to get you started on developing tests for your marketing emails. You will analyze and report on the results as well, but first, we need to focus on creating the tests.

When you’re running a test on an email, you want to focus on just one element at a time: the subject line, the body content, or the CTA. Think of the tests you run as experiments where you have a control and a variable.

With this in mind, you can take your first step in developing the test for your marketing emails.

The first step in any inbound strategy is defining the purpose for doing something.

If you’re testing just to test, you won’t discover results that give you actionable steps to help you improve. While testing your marketing emails consistently will help you improve over time, keep in mind that testing just for the sake of it will not produce valuable results, nor provide value to those receiving your emails.

Take a look at how your emails are performing and decide what you want to improve. Maybe a specific type of
email you’re sending is not yielding the results you want. Or maybe you’re going through a rebranding and want
to test different colors or logos. Whatever it is, make sure you have a purpose before setting out to run a test.

When setting this goal for your email test, you’re also preparing to design your email test later on.

Take, for example, the email elements you can test.

Which elements can affect open rates? It could be a few things, such as the number of emails you send to a list, the subject line, and the preview text. And which elements can affect clickthrough rates? The email body copy, the body design/layout, the body images, the CTA, and the email signature.

These elements can give you a starting point for focusing your goal and purpose. From here, you can see what’s
working well and what’s not to draft a hypothesis of what you want to test and thus improve.

Now that you have a goal and purpose for your test, you’ll need to evaluate the segment of recipients you’re
sending to. You can’t run an A/B test on your email unless it goes to someone — and when you’re testing an
email, you need a minimum amount of recipients to make the test conclusive. This is where statistical significance
comes in.

Determining statistical significance involves doing some math to figure out the number of people you need on your email list in order to run a conclusive test.

Say you send an email to five people to test a new subject line. You might send three of those five people the updated subject line, and while they might all love it, you won’t be able to confidently say that the rest of your contacts will. You need more people for the results to be statistically significant.

So how do you know how many people to run a test with?

HubSpot’s A/B testing tool, for example, requires you to have at least 1,000 contacts on your list to run a test. This is the total number of contacts you wish to send a specific email to.

To run your test, you will need to determine a percentage, or sample size, of those 1,000 contacts to send your variations of the email to.

You will have Version A, which can be your control (the typical email you would send), and Version B, the one with a variation made to it, whether that’s a change to your subject line, body text, or another element.

If you are testing with fewer than 1,000 contacts, you can run a 50/50 test for your email send, where 50% get Version A and the other 50% get Version B.

Let’s say you do have 1,000 or more contacts that you want to send to. You will now need to determine the sample size that will yield conclusive results.

If you are using a tool like HubSpot, the tool can make this calculation for you. You will select the percentage you wish to send to each variation, and the number will be set.

But you can also determine that sample size using a sample size calculator. This will give you a number for each group that will help yield conclusive results.

This calculator will help you determine the number of people that will receive each version of the email: the
control and the variation.

Let’s walk through an example together.

You can see on this sample size calculator that there are a few different fields you will need to fill out: the confidence level, the confidence interval, and the population. The calculator then produces the final sample size.

Let’s begin with the population. The population is the total number of contacts that you want to send your email to. For example, 1,000 contacts. You can get an estimate of this number by looking at the last four to five emails you have sent and how many people you sent them to.

Once you have your population, you will set a confidence interval. You might have heard this called the "margin of error"; lots of surveys use it. This is the range of results you can expect once the test has run with the full population.

And lastly, you need to look at the confidence level. This tells you how sure you can be that your sample results
lie within the confidence interval. The lower the percentage, the less sure you can be about the results. The
higher the percentage, the more people you'll need in your sample size to test.

For example, in HubSpot, the A/B test tool uses an 85% confidence level to determine a winner. In a calculator like this, you can choose 95% as a base.

Now let’s apply these values to see what we get. We have our list of 1,000 contacts, and we want to be 95% confident that our winning email version falls within a 5-point interval of our population metrics.

Here's what we'd put in the tool:


• Population: 1,000
• Confidence Level: 95%
• Confidence Interval: 5

This would produce a sample size of 278.
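The math behind a calculator like this can be sketched in a few lines. The snippet below is a minimal illustration, not HubSpot's actual implementation: it uses the standard sample size formula for a proportion at the worst-case p = 0.5, plus a finite-population correction, and it reproduces the 278 figure from this example.

```python
import math

def sample_size(population, confidence_level=0.95, interval=5):
    """Per-variation sample size, assuming worst-case proportion p = 0.5."""
    # z-scores for common confidence levels
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence_level]
    e = interval / 100                          # interval as a proportion
    p = 0.5
    n0 = (z ** 2) * p * (1 - p) / e ** 2        # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(sample_size(1000, 0.95, 5))  # 278
```

Notice how the numbers trade off: demanding 99% confidence from the same 1,000 contacts pushes the required sample size up considerably.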

This means that 278 people would get Version A and another 278 would get Version B, with each segment receiving one of the two versions. Then you would be able to see which version performed better, for example, Version B with your variation, and send that winning version to the rest of the contacts from your original list who did not receive a test email.

An A/B testing tool can do this automatically for you, but you can also implement your A/B test yourself by creating the different segments once you know each of the sample sizes you will need.

Now that you know the purpose and the goal of your email test, and you know the number of recipients you
need to make your test produce results, you can move on to designing the actual test.

The design will relate heavily to your purpose or goal. Like other aspects of your inbound strategy, your goal is
tied directly to the content, purpose, or outcome you’re producing.

When you set your goal, you identified areas in your email that need improving. Now it’s time to take that a step
further and figure out ways to improve them.

An important aspect of testing is to make sure what you’re proposing is feasible. If you don’t want anyone to unsubscribe from your emails, don’t send ANY emails! Great experiment, right?

Not so much.

When you’re hypothesizing, be creative but also keep your ideas within the boundaries of reality.

You want to explore tests that will provide long-term results for your business.

Let’s look at an example of a hypothesis and what type of test you might design.

In this example, when setting the purpose of your test, let’s say you identified that your newsletter emails are not getting the open rates you’d like, and you want to find a solution by running a test to see how you can improve them.

Your goal is to improve email newsletter open rates from 11% to 15% during a business quarter.

Your hypothesis is that the subject lines contain characters and words that are triggering the recipients’ spam
filters.

To test this hypothesis, you can design a test to adjust the subject lines to avoid exclamation marks and
percentage signs and remove sales-y words like “free” and “discount.” You want to aim to closely align the
subject line with what the email contains. And you’ll test if applying those best practices improves your open
rate.

Another hypothesis for your low open rates is that you send too many emails, so your contacts are less compelled to open them. In that case, you can design a test that reduces your email frequency for at least one month and observes whether open rates improve.

This is how you can tie your goal to the design of your test to start to measure and improve your email sends.

Lastly, you’ll review and start the test.

This is an important step because it means deciding how long you want to run your test for.

There is no magic number, no perfect time of the week or even day of the month to run your tests, but you want
to run your test long enough to make sure enough of your contacts have time to interact with the content.

Some email A/B testing tools will have you set a timeframe for the test, and at the end of that time period, the
tool will choose a winning email to send to the rest of the contacts. This is why timing can be so important.

Your A/B test might not be significant after an hour or even after 24 hours. To decide on this timeframe, you can
take a look at past performance (remember, you want to focus on data-driven analysis, not guesswork).

One of the most common mistakes people make is ending a test too soon. And this doesn’t just mean the one
A/B test you’re running. Make sure you’re testing many emails to start to see how things are trending before
making an overall change to the way you send email.

Maybe you choose to test a few different elements over multiple email sends and multiple months. Analyzing
these metrics will help you decide on what you want to adjust for the time being.

But for a single email send, timing is still important.

Take a look at past email opens and clicks and see where things start to drop off.

For example, what percentage of total clicks did your email get during its first day? If you found that it got 70% of
clicks in the first 24 hours, and then 5% each day after that, it'd make sense to cap your email A/B testing timing
window to 24 hours, because it wouldn't be worth delaying your results just to gather a little bit of extra data.

In this scenario, you would probably want to keep your timing window to 24 hours. If you use an email platform that has an A/B testing tool, it will determine a statistically significant winner. If not, you can determine the winner yourself by calculating the conversion rates of the two versions of the email.

But what happens if your test fails? What if neither version performs better than the other or if it’s too close to
actually determine significance?

If neither variation produced statistically significant results, your test was inconclusive. That is okay! This is why
testing is important. Not every test will produce results for you to take action on immediately.

This might mean adjusting your goal or looking at the numbers you want to move. But most importantly, don’t be
afraid to test and test again. After all, repeated efforts can only help you improve.

This is where you can start to see how these tests are performing. You might decide to run the test multiple times to determine what you want to change.

These are the steps for outlining the test you want to run on your marketing emails: define the goal and purpose of your test, evaluate the segment of recipients you're sending to, design your test, and review and start your test.

Testing is a great way to see how your contacts are engaging or not engaging with your marketing emails, and by following these steps, you’ll continue to prove your ability to do data-driven analysis for your business.

Video Title: What does an email marketing test look like?

There are a lot of different ways to test marketing emails. But what does an effective email test look like?

Let’s walk through an example together. This example uses the HubSpot A/B testing tool, but you can apply these methods to any system you’re looking to use.

This example comes from a SmartBug Media client, AdairHomes. They wanted to optimize their subject lines
using A/B testing.

In subject lines it’s considered best practice to stay away from words that can come off as salesy or pushy. Words
like “free,” or “percent off.” However, other words, when used sparingly, can be effective in driving leads when
you have something of value to promote. Don't abuse these words, though, or people will stop paying attention.

These words include “announcing,” “announcement,” “[new offer],” or “[new guide].”

In this example, AdairHomes used words like these in their subject lines to test how the emails performed.

Take a look at this first email.

By testing the two subject lines “Announcing two new floor plans perfect for entertaining” and “Look through our
latest floor plans!” they were able to improve their open rate by 4%.

Now they know that on future sends, it might provide more value to their contacts to be more descriptive in the
subject line and then follow that up with a strong connection to the content inside of the email.

In the second send, the subject line and preview text are working together to provide enough value to get
someone to open the email. This test is interesting because it switches where the information is. In the first email,
which did not win, the information is in the subject line and the ebook is promoted in the preview text. In the
winning send, this is switched.

This is a great example of a test to see how your contacts interact with the subject and preview text and where
they want to see the information.

Running a few tests like this can help you understand how your contacts prefer information to be presented.

These are just two examples of A/B tests that have been run by real companies. There are many different things
to test depending on the goals you’re trying to reach.

Remember, you can test multiple emails over a period of time to understand how your contacts want to receive
information and how different segments of recipients might like different content.

Testing your emails will help you continue to provide value to your contacts and, over time, maintain the trust
you’ve built with them.
