How to run QA (Quality Assurance) on your A/B tests


Whether you’re looking to optimise your website, mobile app or marketing strategy, A/B testing has rapidly emerged as one of the most popular and successful methods available.

The basic premise of A/B testing is to build and roll out two different variations of your website or app. These variations are randomly displayed to split proportions of your audience, and the results – assuming they reach statistical significance – should give you solid, actionable insights to help grow your business.

This can help you make educated decisions on all manner of different things – from copy and calls-to-action, to layout and design, and more. Ultimately, it’s all about establishing what works best for your unique user-base to ensure your website is fully optimised for success. This virtuous cycle allows you to identify what’s most successful among your users, roll with it – and then try to beat it again with further testing.

But there are potential pitfalls.

The problem with A/B Testing

When you set out to run an A/B test, you’ll come up with a hypothesis, create the two variations, and then initiate the test within your testing tool. The JavaScript code snippet provided by your optimisation tool determines what your users see, whether it’s version ‘A’ or version ‘B’.
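
To make this concrete, here’s a minimal sketch of the kind of client-side bucketing logic a testing tool’s snippet performs. It’s illustrative only – real snippets are far more sophisticated, and the cookie name and class name here are made up:

```javascript
// Simplified, illustrative sketch of client-side variation assignment.
// Real testing tools add targeting rules, sticky bucketing and reporting.
function getVariation() {
  // Re-use an existing assignment so returning visitors see the same version
  var match = document.cookie.match(/(?:^|;\s*)ab_variation=([^;]+)/);
  if (match) return match[1];

  // Otherwise flip a coin and persist the result for 30 days
  var variation = Math.random() < 0.5 ? 'A' : 'B';
  document.cookie = 'ab_variation=' + variation +
    '; path=/; max-age=' + 60 * 60 * 24 * 30;
  return variation;
}

if (getVariation() === 'B') {
  // Apply the challenger's changes, e.g. a CSS class that restyles the page
  document.documentElement.classList.add('variation-b');
}
```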

The problem is, JavaScript can be notoriously inconsistent across browsers and devices. Users viewing the same website on different browsers will often experience errors and variations in display that you, the A/B test instigator, would have no knowledge of.

Of course, this creates a massive problem. With no knowledge of these discrepancies, there’s every likelihood that you could draw inaccurate conclusions from the results of your test.

So, let’s say, for example, your A/B test hypothesis is that a bigger ‘Buy Now’ button on your website might lead to more conversions. You deploy the test and, although you’re completely unaware of it, there’s a JavaScript error with the larger-button variation, meaning it doesn’t display properly on Firefox and Chrome. The results will most likely show that the larger button has bombed, and you’d (understandably) take this to mean your users had roundly rejected it – when, in fact, they were simply reacting to a browser error. Your decision to persist with the smaller button could then be costing you money.

How common are these problems? Well, according to Craig Sullivan (@OptimiseOrDie), QA issues affect as many as 40% of A/B tests. In other words, the results of two in five A/B tests could, in theory, be tainted. These numbers aren’t all that surprising, since many people have a decidedly gung-ho approach to A/B testing. They’re so excited to try out different variations of their website that they forget to go through the correct process and ensure the test will produce valid results.

And, as in the example above, if we’re making key business decisions that are based on inconsistent data, that means that – far from optimising our website for conversion – we could be making changes that actually damage our bottom line, albeit with the very best intentions.

How can we trust the results of our A/B tests when we don’t ensure that the tests work correctly beforehand? The short answer is: ‘we can’t.’

What is QA Testing?

Quality Assurance (QA) Testing gives you the power to guarantee the validity of your A/B test results by letting you catch and correct these errors before your tests are deployed.

It allows you to, essentially, run the test on yourself before deploying to your user base, so you can make sure that both variations are rendered correctly across browsers and devices. This means you can actually trust what your test results are telling you!

How to do it

QA is very much a step-by-step process that lets you define and check each element of your test. It starts with establishing what you’ll measure and who you’ll target, including:

  • Primary KPIs
  • Secondary KPIs
  • Ensuring your targeting is accurate

Want a cool checklist that you can download as a PDF, print and use every time you create an A/B test?
Click here to download the checklist

Cross-browser checks

In terms of ensuring cross-browser functionality, there are a number of different tools out there to help (and save you from buying a plethora of different devices to test on!). CrossBrowserTesting and Browserstack.com, for example, are solutions which allow you to test your site across a range of different browsers, operating systems and devices with instant results. Optimizely, too, gives you the power to QA your experiment before it goes live. These solutions allow you to ‘force’ a browser to show a particular variation to make sure it renders correctly. This means there’s no second-guessing once the test goes out to your users!
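
Incidentally, if you’re rolling your own lightweight testing harness rather than using a commercial tool, the same ‘forcing’ idea is easy to sketch: read an override from the query string before falling back to random assignment. The parameter name below is purely illustrative:

```javascript
// Illustrative only: let QA testers force a variation via the URL,
// e.g. https://example.com/?force_variation=B
// Run this before the normal assignment logic.
var params = new URLSearchParams(window.location.search);
var forced = params.get('force_variation');
if (forced === 'A' || forced === 'B') {
  document.cookie = 'ab_variation=' + forced + '; path=/';
}
```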

Setting Test Cookies

The first step in the QA process (unless you have a staging environment) is to set a test cookie on your live production environment. This lets you start the experiment as if it were live, while ensuring it can only be seen by yourself and select others.
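
Setting a cookie like this takes one line in your browser’s developer console on the live site (the cookie name and value here are just examples – use whatever your tool’s audience condition will look for):

```javascript
// Run in the browser console on your production site.
// Cookie name/value are examples; it expires after one day.
document.cookie = 'qa_test=true; path=/; max-age=' + 60 * 60 * 24;
```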


You can target your test by using the test cookie. Under the ‘Audience Conditions’ field, you can specify that only visitors with that particular cookie will be sampled by the test.
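
Under the hood, an audience condition like this is essentially just a cookie check. A minimal sketch, assuming the example ‘qa_test’ cookie from above:

```javascript
// Minimal sketch of a cookie-based audience condition
function hasQaCookie() {
  return /(?:^|;\s*)qa_test=true(?:;|$)/.test(document.cookie);
}

if (hasQaCookie()) {
  // Only visitors carrying the QA cookie are enrolled in the experiment
  runExperiment(); // hypothetical placeholder for whatever activates your test
}
```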


Make sure that you specify that 100% of your audience with this cookie will be included in the experiment, and will be shown the ‘test’ variation rather than the original. Finally, go through the experience yourself and QA it, being sure to answer questions such as:

  • Is it loading without any flashing, i.e. is the user seeing the old version of the page for a split second? (See the anti-flicker sketch below this list.)
  • Are the images rendering correctly?
  • Are there any issues with the CSS?
  • Are the goals firing and being logged correctly?
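
On the flashing point: if you do see the original page flicker before the variation is applied, the usual remedy is an anti-flicker snippet that briefly hides the page until the testing script has run. A generic, illustrative version (many tools ship their own):

```javascript
// Illustrative anti-flicker snippet: hide the page until the testing
// script has applied the variation, with a fail-safe timeout so a slow
// or blocked script can never leave the site blank.
document.documentElement.style.opacity = '0';
function revealPage() {
  document.documentElement.style.opacity = '';
}
setTimeout(revealPage, 1000); // never hide for more than one second
// Your testing script should call revealPage() as soon as it finishes
```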

 

Once you’re happy that the test is ready to roll, remember to reset the test criteria. You can then deploy the test with total confidence that you can trust the resulting data!

Click here to download the checklist

Here at Digital Tonic we’re all about helping you to convert more of your visitors into customers, and we know the value of thorough, quality assured A/B testing. So many people go wrong at the pre-launch development stage, which is why we’ve created a handy checklist to guide you through the QA process before you deploy your tests.


 

Why not give us a call if you feel we may be able to help you with your conversion optimisation, web analytics or user experience?

 
