Two's a Queue

Retail, eCommerce, usability, customer experience, service, technology...

Monday 12 December 2011

A/B Testing, some golden rules - aka What an idiot just learned


I have to admit I've always been somewhat dismissive of A/B and multivariate testing. As an ex-retailer, and someone who has worked on designing parts of some of the UK's biggest ecommerce businesses, I thought this kind of stuff was basically a buzz-concept trotted out by "marketing people" who had no knowledge or instinct and spent their days tinkering about with whether the 'Add to Basket' button should be blue or cyan, rather than dealing with much more interesting and important things like 'is my stock file up to date?'. I'm also generally of the opinion that most online retailers should probably get themselves up to at least some kind of best-practice level of actual functionality before they start tinkering about with tests and suchlike. (Kurt Geiger.com – I kind of mean you.)

Probably no one will ever convince me that the difference between blue and cyan matters, but I have learned a few things recently about A/B testing that I guarantee someone who knows the answer to that question won't tell you.


I should probably subtitle this post "A/B Testing: the myths aren't true (well, they probably are, but not for you)".

-       Know what you’re testing

If you don't know what you're trying to prove with a test, what you end up with is some data. Just that: some data. No business question = no real results. You need to be proving something. This is why I often think basic functionality comes first. You don't need a ton of insight to know that customers want to be able to track their order or see their previous orders somewhere – none of this affects conversion. It affects customer loyalty, your operations budget and probably your brand, among other things, but it isn't going to convert a whole load more. A more interesting thing to test would be whether your customers convert better when the price is shown on content pages or not (for example). The moral is: think before you test.
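To make that concrete, here's a minimal sketch (Python, and every name in it is hypothetical) of the discipline I mean – if you can't fill in all three fields, you're not ready to test:

    from dataclasses import dataclass

    @dataclass
    class ExperimentSpec:
        hypothesis: str       # the business question, in plain English
        decision_metric: str  # the ONE number that decides the test
        fallback_plan: str    # what you ship if the result is flat or negative

    spec = ExperimentSpec(
        hypothesis="Showing the price on content pages lifts conversion",
        decision_metric="order conversion rate for content-page sessions",
        fallback_plan="keep prices off content pages",
    )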

-       In most cases you need big traffic to see big differences


You hear all these ecommerce 'experts' talking about how they improved conversion by about a million percentage points by testing whether a female or a male model works better on their homepage. That's great for them, but I can almost guarantee it won't happen for you. I've had tests running for literally weeks on non-core pages – the fact is, you need visitors to that functionality. If you didn't have many when it was just A, it won't be much different when you have A + B. Learn which things have enough traffic to test and which don't. Think about the sample size you need and work out how long you'll be testing for based on your current traffic through that part of the site.
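If you want to put numbers on that, the standard two-proportion sample-size formula will do. Here's a rough sketch in Python – the 3% baseline, the hoped-for lift and the 400 visitors a day are all made-up figures for illustration:

    from math import ceil, sqrt
    from statistics import NormalDist

    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
        # Visitors needed in EACH variant to reliably detect a move from p1 to p2.
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided 95% confidence
        z_power = z.inv_cdf(power)           # 80% chance of catching a real lift
        p_bar = (p1 + p2) / 2
        top = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
               + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(top / (p2 - p1) ** 2)

    n = sample_size_per_variant(0.03, 0.035)      # 3% baseline, hoping for 3.5%
    print(n, "visitors per variant")              # ~19,700
    print(round(2 * n / 400), "days at 400/day")  # ~99 days

Three months to detect a half-point lift on a quiet page – which is exactly why those week-long tests on non-core pages told me nothing.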

-       Statistical significance is all

The reason you need volume is to ensure you reach statistical significance. Think of it as a level of confidence in the result. For a normal, non-maths geek this means the difference between the results for A and B has to be big enough not to have just happened by accident. There are some great articles about this, and about calculating your sample size, around on the web – most of them I don't understand. What I do understand is: no statistical significance, no result. Smaller volumes also mean less consistency – when a couple of visitors can tip your result you'll see far more ups and downs than when you get thousands.
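For the curious, the sums behind "did this happen by accident?" are a standard two-proportion z-test. A sketch in Python – the visitor and order counts are invented, and a proper stats tool will do this better:

    from math import sqrt
    from statistics import NormalDist

    def two_sided_p_value(orders_a, visitors_a, orders_b, visitors_b):
        # Chance of seeing a gap at least this big purely by accident.
        p_a = orders_a / visitors_a
        p_b = orders_b / visitors_b
        pooled = (orders_a + orders_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # 30 vs 38 orders from 1,000 visitors each looks like a 27% lift...
    print(two_sided_p_value(30, 1000, 38, 1000))  # ~0.32 – could easily be luck

The usual convention is to call a result significant when that number drops below 0.05 – and notice that an eight-order gap in a thousand visitors doesn't come close.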

-       Don't test more than one thing at a time

It sounds simple but it's actually incredibly hard to implement. If you change more than one thing at once you'll never know exactly what caused the positive (or negative) test result. And yes, this means that in a lot of cases, when you add new functionality, you can't go re-designing your page around it. The best way to approach it is usually to go for the concept first and then look at the design. Or think about designing without the functionality first, setting that live to establish a baseline and then adding in the new thing.
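One way to keep a test clean is to make variant assignment deterministic per experiment, so each experiment isolates exactly one change. A hypothetical sketch:

    import hashlib

    def variant_for(visitor_id, experiment_name):
        # Stable 50/50 split: a visitor always sees the same side of a given
        # experiment, and each experiment buckets independently of the others.
        digest = hashlib.md5(f"{experiment_name}:{visitor_id}".encode()).hexdigest()
        return "B" if int(digest, 16) % 2 else "A"

    # One experiment = one change; everything else on the page stays identical.
    show_price = variant_for("visitor-123", "price-on-content-pages") == "B"

Because the bucket is keyed on the experiment name, adding a second test later won't quietly reshuffle who sees what in the first one.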
 
-       Don’t always hope for a positive difference


Sometimes the best test you can do is to hope that when you add something there is no difference – that way you know that developing that functionality isn't going to have an adverse effect on conversion. There are other factors besides conversion (like reducing the calls to your call centre, for example).

-       Don't listen to agencies

 Just don’t. I'd explain but it will make me angry and stab at my keys. It's my birthday and I have a sprained wrist so neither of these things seems like fun.

There'll probably be a part two to this...
