While many marketers understand what A/B testing (split testing) is, few use this vital tool to its full potential. The beauty of these tests is that they can be applied to almost anything, from something as simple as the ad copy in a paid search campaign to something as complex as the entire content of a web page.
These tests can have an enormous impact on your overall marketing efforts. They let you analyze what works and what doesn't at a granular level. If you know what to look for, the results from just one test could significantly lift your conversion rates, sign-ups, or whatever other goal you're measuring.
For even more on A/B testing, join us and Blue Acorn for a live webinar on July 14th, hosted by Bronto Software.
What Is an A/B Test?
For those who may be fuzzy on the details, an A/B test is one in which you create two or more versions of a piece of content, such as an ad, landing page, or email. In each version you modify the text, call to action, colors, images, videos, buttons, or virtually anything else that could affect the user experience.
The important part of the test is variety, but that variety must have a reason behind it. Designing a test without considering your market will most likely do very little to help your marketing efforts. You need to tailor these tests to fit your audience, and a good way to do that is to build buyer personas for your content so you know exactly who you're targeting and what they're looking for.
The next step is to determine which of these personas you plan to target. If it's a more general topic that needs to work for your entire market, then your design and layout need to reflect that.
The idea is to create two or more unique pieces of content that are different enough to produce differing results, but similar enough that the core message remains the same.
How to Implement an A/B Test
First, make sure you know what you want the end result of the test to be. An A/B test needs a goal; otherwise, the data you gather will do very little to help you. Establish a baseline for that goal, grounded in past results and expected outcomes.
The version of your content that you're currently using is called the control. The control is the standard; it establishes the baseline for the test. Since it's the version you've used before and you've already seen how it performs, you should have a solid idea of what you'll get out of it.
The second part, where you test a new idea or a different approach to the content, becomes the variation. You can run as many variations as you want to test multiple approaches at once. Just make sure you have a well-defined goal in mind and a solid baseline to compare against.
All variations run simultaneously, with an equal share of your traffic directed to each version in the test. Running them at the same time rules out external variables that could alter the results. For instance, if you ran version A in the morning and version B at night, you would see completely different traffic patterns and a different set of users, which would skew the results and leave you with inaccurate data.
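To make that concrete, here is a minimal Python sketch of how even traffic splitting is often implemented, assuming each visitor carries a stable identifier such as a cookie ID (the variation names and visitor ID below are hypothetical):

```python
import hashlib

# Hypothetical variation names; the control runs alongside the variations.
VARIATIONS = ["control", "variation_a", "variation_b"]

def assign_variation(visitor_id: str) -> str:
    # Hash the visitor's stable ID so each person lands in a consistent
    # bucket, splitting traffic roughly evenly across all versions.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIATIONS)
    return VARIATIONS[bucket]

# Every visitor is assigned on arrival, so all versions run at the same
# time rather than one after another.
print(assign_variation("visitor-12345"))  # same visitor, same version, every visit
```

Because the assignment is derived from the visitor's own identifier rather than the time of day, returning visitors always see the same version, and no variation is favored by when it happens to run.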
Why You Should Use A/B Tests
Let's dive into some examples of split test success, shall we? In 2008, Barack Obama was elected President of the United States. But how did he get there? Broken down to its simplest form, the answer can be summed up in a single word: money.
Campaigning is expensive work, and a huge part of the money comes from fundraising and donations. In December 2007, with Obama trailing badly in the polls, his campaign team decided to change things up.
They implemented split testing on the Obama website, trying four different buttons and six different pieces of media on the site's splash page. With that many combinations (4 buttons × 6 media = 24 variations), they gathered incredibly specific, detailed information on which were the most effective.
The results were striking. The combination the staff thought was best actually turned out to be one of the worst performers, while the best-performing combination had a 40% higher sign-up rate. With over 10 million people signing up during the campaign, they would have lost nearly 3 million email addresses had they gone with the option they thought was best. At an average donation of $21, that works out to roughly $60 million in donations!
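For the curious, here's the back-of-envelope arithmetic behind those figures, sketched in Python. It assumes the 40% lift is measured relative to the original page, which is how the published numbers line up:

```python
# Rough math behind the campaign's headline figures (an illustration,
# not data from the campaign itself).
total_signups = 10_000_000  # sign-ups with the winning combination
lift = 0.40                 # winning combination converted 40% better
avg_donation = 21           # average donation in dollars

baseline = total_signups / (1 + lift)      # ~7.14 million with the old page
signups_gained = total_signups - baseline  # ~2.86 million ("nearly 3 million")
extra_donations = signups_gained * avg_donation  # ~$60 million

print(f"Sign-ups gained: {signups_gained:,.0f}")
print(f"Extra donations: ${extra_donations:,.0f}")
```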
Visit the Optimizely Blog to read more about this amazingly successful split test.
It’s not just presidential campaigns that can benefit from A/B testing. Google does it too!
Back in the early days of Gmail, Marissa Mayer, then Google's Vice President of Search Products and User Experience, wanted a way to determine which shade of blue would be the most successful for Google's links, and even its logo.
So she ran an A/B test on 40 different shades of blue to find the best one. Each visitor to Google was randomly shown one of the 40 shades (2.5% of site traffic per shade), and the team recorded data such as sign-up conversion rate, time on site, and bounce rate for each variation.
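Tallying the results of a test like this is conceptually straightforward. Here's a toy Python sketch of per-variation reporting, with an entirely made-up event log standing in for Google's actual data:

```python
from collections import defaultdict

# Each hypothetical record is (shade_shown, visitor_signed_up).
events = [
    ("blue_01", True), ("blue_01", False), ("blue_01", False),
    ("blue_02", True), ("blue_02", True), ("blue_02", False),
]

shown = defaultdict(int)
converted = defaultdict(int)
for shade, signed_up in events:
    shown[shade] += 1
    converted[shade] += signed_up  # True counts as 1, False as 0

for shade in sorted(shown):
    rate = converted[shade] / shown[shade]
    print(f"{shade}: {rate:.0%} sign-up rate across {shown[shade]} visitors")
```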
The blue you see on Google, in Gmail, or in any link on one of their sites? That's the end result of the test. If it's good enough for Google and Obama, it's good enough for your business.
Now, let's be serious: no one expects you to run a test as granular as Google's, or probably even as ambitious as President Obama's. The point is that these tests can have an incredibly significant effect on your business model.
If you're a small business, you could test copy for your landing pages, different calls to action, different images, colors, and more. Virtually any business can benefit from these tests; they help you fine-tune your content and get the absolute most out of your marketing budget.
Are you interested in learning more about A/B testing? Join ROI’s Director of Sales, Denis Coombes, and Blue Acorn’s Director of Optimization, Jay Atkinson, for a webinar at 1 pm on July 14th. This webinar is hosted by one of ROI’s partners, Bronto Software.