A/B testing, also known as split testing, is a method of comparing two versions of a web page to see which one performs better. By testing different versions of your page, you can determine which design elements, copy, and calls to action convert visitors into leads or customers.
If you’re hesitant about starting A/B testing, or about refocusing on customer experience before advertising, consider this one statistic. According to Forbes, Jeff Bezos invested 100x more into customer experience than advertising during Amazon's first year. Considering Amazon is referenced in practically every article about superior customer experience, it's hard to argue with his strategy.
Regardless of your industry, your business exists to create value for your customers, whether in the form of a product, service, or content. Every customer interaction with your brand creates a measurable amount of data. But what are you doing with that data? Are you just sitting back and passively collecting it? The best companies aren't storing their data for later analysis; they actively generate valuable data and customer insights through experimentation. If you want to grow in your industry, implementing efficient and consistent A/B testing is essential.
The obvious benefit of A/B testing is to improve your company’s value and increase revenue, but there are other benefits to creating an experimentation culture. When you have the ability to quickly run an effective A/B test, your company has more flexibility to test out new ideas and no longer needs to rely on anyone’s “gut instinct”. Remove all of the guesswork from your strategic decisions, and get actionable insights into what does, and doesn’t, work.
In this blog post, we'll share thirteen keys to successful A/B testing. By following these best practices, you can maximize your chances of achieving significant results from your tests.
Define Your Objective
Before you begin designing your test, it's important to have a clear understanding of what you're trying to achieve. What is the primary goal of the page you're testing? Do you want to increase conversion rates, click-through rates, or time on site? Once you've defined your objective, you can design your test around that goal.
Choose the Right KPI
A key performance indicator (KPI) is a metric that helps you measure progress toward your goal. When choosing a KPI for your test, be sure to select a metric that's directly related to your objective. For example, if you're trying to increase conversion rates, then your KPI should be conversion rate rather than time on site.
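To make this concrete, here's a minimal sketch in Python of computing the KPI for each variant from raw counts (all the numbers are hypothetical). If conversions are your objective, this rate, not time on site, is what the test should be judged on.

```python
# Minimal sketch: compute the KPI (conversion rate) per variant.
# All visitor and conversion counts below are hypothetical.
visitors = {"A": 5120, "B": 5083}       # visitors who saw each variant
conversions = {"A": 148, "B": 196}      # visitors who completed the goal

for variant in visitors:
    rate = conversions[variant] / visitors[variant]
    print(f"Variant {variant}: conversion rate = {rate:.2%}")
```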
Select One Element to Test
It's important to only test one element at a time; otherwise, you won't be able to isolate the factor that caused any changes in your KPI. For example, if you're testing two different headlines, then keep everything else on the page the same. That way, if there's a change in your KPI, you'll know it was caused by the headline and not by some other element on the page.
Avoid “Micro-Tweaking”
Don't make trivial changes just for the sake of testing. Focus on the smallest changes with the biggest potential impact; if no small change fits the bill, then sometimes you have to go big and bold.
Take Seasonality Into Account
This one is pretty self-explanatory. Seasonality can play a big part in a test's results: holiday traffic, buying cycles, and even the day of the week can shift visitor behavior. Save your old tests and re-run them at different times of the year. The results will often surprise you.
Develop a Highly Targeted Approach to Customer Experience Pain Points
When testing a change, identify the specific reason you believe it affects your KPI, and craft a hypothesis for why the change should improve it. Make sure you're addressing your customers' objections, the reasons your goals aren't being reached, and provide the counter-objections in your test. The last thing you want is to run a successful test and have no clue why it worked.
Create a Hypothesis
Before running your test, take some time to create a hypothesis about what you think will happen. This will help you interpret your results after the fact and determine whether or not your test was successful.
Set Up Your Test
Once you've designed your test and created a hypothesis, it's time to set up the actual test using an A/B testing tool like Crazy Egg or VWO (Google Optimize is being sunset, but probably for the best in my professional opinion). Be sure to select a tool that integrates with your website platform so that setting up the test is as easy as possible.
Run Your Test for Enough Time
For your results to be statistically significant, you need enough visitors and conversions, and for most sites that means running the test for at least two weeks. If you have a high volume of traffic, you may reach significance more quickly, but running in full-week increments still helps smooth out day-of-week effects.
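If you'd rather estimate the runtime up front than guess, a standard power calculation will tell you roughly how many visitors each variation needs. Here's a rough sketch using Python's statsmodels library; the baseline conversion rate, hoped-for lift, and daily traffic figures are assumptions, so plug in your own numbers.

```python
# Rough sketch: estimate how many visitors (and days) an A/B test needs.
# Baseline rate, target rate, and daily traffic are made-up assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.030   # current conversion rate (3.0%)
target = 0.036     # rate you hope the variation achieves (a 20% lift)

effect = proportion_effectsize(target, baseline)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

daily_visitors_per_variant = 400  # your traffic, split between A and B
days = n_per_variant / daily_visitors_per_variant
print(f"~{n_per_variant:,.0f} visitors per variant, roughly {days:.0f} days")
```

With these particular assumptions, the answer works out to roughly 7,000 visitors per variant, or a little over two weeks, which is exactly where the common two-week rule of thumb comes from.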
Review Your Results
Once your test has been running for at least two weeks, it's time to analyze the results. Compare the performance of the two versions of your web page using the KPIs you selected earlier. If there's a significant difference between the two versions, then congrats: you've found a winning combination! If not, then try tweaking your design and running another test. Remember, it's all about experimentation. The more tests you run, the more likely you are to find a winning combination.
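Most testing tools report significance for you, but if you want to sanity-check the comparison yourself, a two-proportion z-test is the standard approach. Here's a minimal sketch in Python; the visitor and conversion counts are hypothetical.

```python
# Sketch: compare two variants with a two-proportion z-test.
# Visitor and conversion counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [148, 196]   # variant A, variant B
visitors = [5120, 5083]

z_stat, p_value = proportions_ztest(conversions, visitors)
if p_value < 0.05:
    print(f"Significant difference (p = {p_value:.3f}): ship the winner")
else:
    print(f"No significant difference (p = {p_value:.3f}): keep testing")
```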
Implement Your Results
Once you've found a winning combination, it's time to implement those results on your live site. Doing so will help ensure that more visitors take the desired action when they land on your page. And that's ultimately what A/B testing is all about!
Segment Your Data (i.e., Personalization)
The results from every user who encountered the test may not tell you the whole story. Make sure you're segmenting your results by customer demographics and behavior to see who responded positively and who didn't. Certain age groups or geographic areas may have responded differently, which opens up the opportunity for more personalization with your audience.
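As a sketch of what that looks like in practice, here's a simple breakdown by age group using pandas; the variant, segment, and count data are all hypothetical.

```python
# Sketch: break A/B test results down by customer segment.
# All variant, segment, and count data below are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "variant":     ["A", "B", "A", "B", "A", "B"],
    "age_group":   ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "visitors":    [900, 880, 1100, 1150, 600, 590],
    "conversions": [27, 44, 35, 33, 16, 10],
})

events["conv_rate"] = events["conversions"] / events["visitors"]
print(events.pivot(index="age_group", columns="variant", values="conv_rate"))
```

One caveat: each segment has a smaller sample than the test as a whole, so treat a surprising segment-level result as a hypothesis for a follow-up test rather than a conclusion.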
Learn From the Losers
Not every idea and test is going to be a guaranteed winner. Your intended outcome may not have been achieved, but other metrics may have seen a positive impact. Make sure you understand what your losing tests are telling you about what your customers need.
Conclusion
A/B testing is an essential part of any digital marketing strategy. By following these best practices, businesses can maximize their chances of achieving significant results from their tests.
A/B Testing FAQs
What is the best A/B testing software?
In my humble opinion, VWO is the best all-around enterprise-level A/B testing and CRO tool. It has everything you could possibly need and lets you run advanced tests that I would not recommend for beginners. If you're looking to get started with A/B testing, then Crazy Egg definitely offers the most bang for the buck. You don't have the ability to set up advanced tests, but its full suite of tools (heatmaps, session recordings, etc.) more than makes up for that.
What is statistical significance?
Statistical significance is the point at which a test has gathered enough data to conclude, at a given confidence level (usually 95%), that the difference between variations is real rather than random chance; in other words, if you repeated the test under the exact same conditions, you'd expect the same result about 95 times out of 100. The traffic requirement is technically a matter of statistical power: the test's ability to detect a real difference at all. For example, VWO recommends that for a test to reach statistical significance in two weeks, each variation should get at least 1,500 visitors and 25 conversions over that period. With less traffic than that, the test won't have enough power to confidently declare a winner. Hopefully, I haven't confused anyone further. If I have, dig up your old statistics book from college or call me and I'll apologize.
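To see why that minimum volume matters, here's a quick sketch using Python's statsmodels: with 25 conversions out of 1,500 visitors, the 95% confidence interval around your measured conversion rate is still wide relative to the rate itself, which is why smaller samples can't confidently separate two variations.

```python
# Why low volume can't declare a winner: the confidence interval around
# a rate measured from 25 conversions / 1,500 visitors is still wide.
from statsmodels.stats.proportion import proportion_confint

low, high = proportion_confint(count=25, nobs=1500, alpha=0.05, method="wilson")
print(f"measured rate = {25 / 1500:.2%}, 95% CI = ({low:.2%}, {high:.2%})")
```

Here the measured rate is about 1.67%, but the interval runs from roughly 1.1% to 2.4%, so a variation would need a substantially different rate before you could call it a winner.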
What if my site doesn’t have enough traffic to reach statistical significance?
I’d say you can still run the test, but it may take much longer, and the results may not be as reliable. Instead, I would recommend starting with heatmaps, session recordings, and user testing to make improvements and test variations of your site. Apply the best feedback from those results, and start testing once your traffic has increased.
When should I stop testing?
Never; to stop is to die. Not really, but I do think companies should cultivate a culture of experimentation. You shouldn't be testing just to test, but testing can certainly help end disputes over creative and strategic direction. For example, say marketing wants to try an edgy, provocative headline, but operations worries it will alienate or offend key customers. No one wants to give an inch, so test it. Let the people decide.