7 Steps to Running an Effective A/B Test on Your Email Marketing

By Sarah Goliger

Email marketing has grown to become one of the most important components of an effective marketing strategy. Often cited as one of the most powerful marketing channels for a given company, email marketing can be used to achieve a variety of goals, such as growing your reach, educating your audience, generating sales leads, and converting those leads into customers. With so much potential to help you achieve these goals, email marketing should be one of your primary focuses as a marketer.

But it’s not enough to just be doing email marketing. You should also be constantly striving to optimize your email marketing to yield better and better results.

Your most powerful tool for improving your emails is A/B testing. An A/B test is an experiment in which you send two different versions of one piece of content (such as an email, landing page, or call-to-action) to two similarly sized audiences to see which performs better. In other words, A/B testing gives you a way to identify changes you can make to your emails to increase your click-through rates or conversion rates.

So, how do you get started? Simply follow these 7 steps to create and run an effective A/B test on your marketing emails.

1. Identify your goal & your metrics.

Before you dive in and start creating your emails, the first thing you need to do is determine the specific goal of your email send. Why is this so critical? Because the whole point of A/B testing is to produce numerical data that will help you drive better results. But how can you drive better results if you haven’t defined what “results” means to you?

So, take a step back and ask yourself, “What am I looking to achieve with this email send?” Your answer might be that you’re looking to drive more visitors to your website or your blog. Or perhaps your goal is to generate more leads for your sales team. Maybe you’re focused on leveraging your email channel to convert more leads into customers for your business.

Once you’ve identified your goal, you should have a very clear sense of the metric(s) that you’re solving for. Whether you’re looking at the number of views you send to your website, the number of conversions you drive, the number of customers you create, or something else entirely, you need to map your goal to your metrics. For example, if I’m solving purely for increased traffic to my site, I’m going to focus most closely on improving my click-through rates, because the links in my emails are my means for driving page views. But if, instead, my goal is to generate more leads, then I’ll want to look at both click-through rates (getting visitors to my site) and conversion rates (visitors converting into leads on my site).

Whatever your goals are, go into your A/B test with a basis for measuring success later.
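
To make those metrics concrete, here's a minimal sketch of how click-through rate and conversion rate are typically calculated. The numbers are made up purely for illustration, and conversion rate is defined here per click rather than per send:

```python
# Hypothetical numbers purely for illustration.
delivered = 5000      # emails successfully delivered
unique_clicks = 400   # recipients who clicked a link in the email
conversions = 60      # clickers who then converted (e.g., filled out a form)

click_through_rate = unique_clicks / delivered   # 400 / 5000 = 8.0%
conversion_rate = conversions / unique_clicks    # 60 / 400 = 15.0%

print(f"Click-through rate: {click_through_rate:.1%}")
print(f"Conversion rate: {conversion_rate:.1%}")
```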

2. Choose an element to test.

Now that you have your goals and metrics lined up, it’s time to choose which aspect of your email you’re going to test. I’d recommend starting with the elements that you believe could have the biggest impact on your metrics, such as your calls-to-action, content, format, and tone. However, there are no “wrong” or “bad” elements to test! Almost every component of your email has the potential to make a big difference, and no matter which one you’re testing, as long as you’re achieving statistically significant results, you’re taking another incremental step toward creating your ideal, optimized email.

An important best practice in A/B testing is to test only one variable in your emails at a time. The reason is that you'll want to be able to easily identify the exact variable that caused the change in your results. You can run a test with many variables and drastically different emails, but you'll likely struggle to determine what led one to perform better than the others, which means the success will be harder to replicate.

3. Create an “A” variation and a “B” variation.

Once you’ve determined which element of your emails you’ll be testing, you can go ahead and create the actual email variations (often called the “control” and the “treatment”). Since you’re likely only testing one element, I’d suggest creating Variation A, cloning it, and adjusting your test variable in the clone to create Variation B. This saves you the hassle of recreating the same email from scratch.

Make sure you have a way of differentiating between the two variations, whether by naming convention, tracking tokens within the links, or a software tool, so it’s easy to compare the results later.
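
If your email tool doesn't tag links for you, one lightweight way to tell the variations apart is to append different tracking parameters (such as UTM tags) to the links in each email. A rough sketch, assuming a hypothetical landing page URL and campaign name:

```python
from urllib.parse import urlencode

def tag_link(base_url, campaign, variation):
    """Append UTM-style tracking parameters so clicks from each variation
    can be told apart later in your analytics tool."""
    params = {
        "utm_source": "email",
        "utm_medium": "email",
        "utm_campaign": campaign,
        "utm_content": variation,  # "variation-a" vs. "variation-b"
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical URL and campaign name, purely for illustration.
print(tag_link("https://www.example.com/landing-page", "spring-webinar", "variation-a"))
print(tag_link("https://www.example.com/landing-page", "spring-webinar", "variation-b"))
```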

4. Decide what percentage of your list to send each variation to.

Conventionally, A/B tests are run with each variation sent to half the list. This is a perfectly fine approach, and it's the most appropriate one if your list is relatively small.

However, if you have a larger list to work with, a handy trick is to send each variation to a smaller percentage of your list (for example, Variation A to 10% and Variation B to 10%), identify the winner, and then send the winning variation to the rest of your list (the remaining 80%). This approach is highly useful for running a test and implementing the results immediately, without having to “use up” your entire list for your test. Whatever breakdown you choose, make sure that you’re sending your variations to equal percentages of your list.
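
If you'd rather handle the split yourself than leave it to your email tool, here's a rough sketch of the 10% / 10% / 80% breakdown described above (the addresses are placeholders):

```python
import random

def split_list(recipients, test_fraction=0.10, seed=42):
    """Randomly split a list into Variation A, Variation B, and a holdout
    group that will later receive the winning email."""
    shuffled = recipients[:]               # don't mutate the original list
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    n_test = int(len(shuffled) * test_fraction)
    group_a = shuffled[:n_test]
    group_b = shuffled[n_test:2 * n_test]
    holdout = shuffled[2 * n_test:]
    return group_a, group_b, holdout

# Placeholder addresses purely for illustration.
recipients = [f"subscriber{i}@example.com" for i in range(1, 10001)]
group_a, group_b, holdout = split_list(recipients)
print(len(group_a), len(group_b), len(holdout))  # 1000 1000 8000
```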

5. Send the emails and measure your results.

Alright, time to push that big “SEND” button! Once your emails are out the door, you can begin collecting results. Remember to look at the metrics you identified in Step 1 so you can stay focused on your goal. Once you have collected your results for each variation, you need to calculate their statistical significance. You can do this with some simple math, or just plug your numbers into an A/B testing calculator tool.
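
If you'd like to do that math yourself rather than use a calculator tool, one common approach is a two-proportion z-test on the click (or conversion) counts for each variation. A minimal sketch, with made-up numbers:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test comparing the click (or conversion) rates of two
    variations. Returns the z statistic and the p-value."""
    p_a = clicks_a / sent_a
    p_b = clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: Variation A got 400 clicks out of 5,000 sends,
# Variation B got 460 clicks out of 5,000 sends.
z, p = two_proportion_z_test(400, 5000, 460, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 is typically treated as significant
```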

If your results are statistically significant, you can be confident that your test did, in fact, have a real impact on your metrics. If you do not achieve statistically significant results, it’s still worth noting the trends in your data, but you can’t confidently attribute the difference in results to the variable you tested.

6. Determine the implications of your findings.

If your A/B test results are statistically significant, congratulations on a successful experiment! You should consider this a huge win for your email marketing. But before you head out to celebrate, you need to decide what this means for your emails going forward. How will you implement these findings in your next email send?

7. Log your results and findings.

Lastly, be sure to keep a record of the A/B tests you run so you have a log of the elements you’ve tested and the results you’ve achieved. Include the tests that were not statistically significant as well! These will help you identify trends in what works and what doesn’t, and it’ll give you a full picture of all of the tests you’ve run. This way, when you’re looking for your next A/B test idea, you can see which ones you’ve already done. Feel free to re-run a test of a certain element with new variations, too! Just because you didn’t get significant results the first time doesn’t mean you won’t the second time.
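
Your log doesn't need to be fancy; appending one row per test to a spreadsheet or CSV file is enough. Here's a minimal sketch, with suggested (not prescribed) field names and made-up results:

```python
import csv
from datetime import date

# One record per A/B test you run; the field names are just suggestions.
log_entry = {
    "date": date.today().isoformat(),
    "email_name": "spring-webinar-invite",  # hypothetical campaign name
    "element_tested": "subject line",
    "variation_a": "Join our webinar on Thursday",
    "variation_b": "Last chance: webinar seats are filling up",
    "metric": "click-through rate",
    "result_a": 0.080,
    "result_b": 0.092,
    "p_value": 0.032,
    "winner": "B",
    "notes": "Urgency-framed subject line outperformed the neutral one.",
}

with open("ab_test_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=log_entry.keys())
    if f.tell() == 0:  # write the header only if the file is new or empty
        writer.writeheader()
    writer.writerow(log_entry)
```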

Most important of all, always be testing. This is your primary means of continually improving your email marketing and, ultimately, driving better results.

Learn About Digital Marketing at GA Here

Sarah Goliger is a marketing manager at HubSpot, and an email marketer to the core. Sarah currently manages HubSpot’s paid marketing channel, and also has extensive experience in email marketing, lead nurturing, lead generation, SEO, blogging, and landing page optimization. Sarah is passionate about teaching marketers how to optimize their marketing efforts and grow their businesses. You can find Sarah on Twitter at @sarahbethgo and meet her in person for a class at GA Boston.