What is Threatening Your Conversion Optimisation Test Results?

Neil McKay
  • Can you trust your test results?
  • You can manage and minimise validity threats by being proactive.
  • Check for instrumentation, history, selection, and broken code effects.

Your conversion optimisation tests have run, you’ve gathered your results, and now you’re analysing your data. It all looks good and you’re feeling pleased with how everything has gone. But what tells you that everything is fine and that you can trust your results?

Are your results valid?

You need to be aware of what threatens the validity of your tests. These problems are known as effects and they include:

  1. Instrumentation effect
  2. History effect
  3. Selection effect
  4. Broken code effect

By knowing what they are and taking command of them, you’ll lessen their potential to do any damage.

1. Instrumentation Effect

When you implement incorrect tracking on your website, you’ll suffer from an instrumentation effect. It’s a common error that harms results, but you can avoid it by taking your time in setting up the test and by scrutinising the implementation with great care.

Importantly, ensure that you keep a very close eye on every goal and metric you want to track and confirm that they are being recorded by your analytics package.

If you find a problem (e.g., add to cart click data is missing), stop the test and fix it. Then reset the data and start again.
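
One way to catch this early is a simple daily sanity check on your event counts. Here is a minimal sketch in TypeScript; the goal names and figures are hypothetical, and in practice the counts would come from your analytics package’s reporting API.

```typescript
// Hypothetical daily event counts keyed by goal name, as pulled from
// your analytics package's reporting API.
type DailyCounts = Record<string, number>;

const trackedGoals = ["page_view", "add_to_cart", "checkout", "purchase"];

// Return every tracked goal that recorded fewer events than expected today.
function findBrokenGoals(counts: DailyCounts, minExpected = 1): string[] {
  return trackedGoals.filter((goal) => (counts[goal] ?? 0) < minExpected);
}

// Example: add-to-cart tracking has silently stopped recording.
const today: DailyCounts = { page_view: 5400, add_to_cart: 0, checkout: 310, purchase: 120 };
const broken = findBrokenGoals(today);
if (broken.length > 0) {
  console.warn(`Stop the test and investigate: no data for ${broken.join(", ")}`);
}
```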

Here are some tips to avoid instrumentation effect:

  • Do not rely on your testing platform’s analytics alone. Send all test data to a second analytics platform, for example Google Analytics, and make sure the integration is activated. This will allow you to compare data and trends as well as spot errors more quickly (see the sketch after this list).
  • Match results with back-end transactional data, so that when you report a 20 per cent lift, it is also visible in banking/payment merchant statements.
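
As an illustration of the first tip, here is a minimal sketch of mirroring test data into Google Analytics 4 via gtag.js. The experiment and event names are hypothetical, and the global holding the assigned variation is made up for illustration; many testing platforms provide this wiring for you through their built-in integration.

```typescript
// Assumes the standard GA4 gtag.js snippet is already loaded on the page.
declare function gtag(...args: unknown[]): void;

// Hypothetical: your testing platform exposes the visitor's assigned
// variation; here it is declared as a made-up global for illustration.
declare const assignedVariant: "control" | "treatment";

const EXPERIMENT_ID = "product_page_test"; // hypothetical test name

// Record which variation the visitor saw...
gtag("event", "experiment_impression", {
  experiment_id: EXPERIMENT_ID,
  variant_id: assignedVariant,
});

// ...and mirror each goal into GA4 so its numbers can be compared
// with the testing platform's own reports.
function trackGoal(goalName: string): void {
  gtag("event", goalName, {
    experiment_id: EXPERIMENT_ID,
    variant_id: assignedVariant,
  });
}

// e.g. call this when a visitor adds an item to the cart:
trackGoal("add_to_cart");
```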

2. History Effect

History effects occur when external factors interfere with your tests and challenge their validity. Think marketing campaigns, holiday periods (Christmas, Easter, and so on), the seasons themselves, good or bad PR involving your company or industry, product recalls, and more.

For example, you may remember in 2013 the media reported that breast implants made from unauthorised silicone fillers were being used by cosmetic surgeons. That made PIP a well-known company for all the wrong reasons.

And, any cosmetic surgery business in the UK doing conversion optimisation at that time would have seen the history effect in their results.

A positive example would be if a famous person (maybe the Duchess of Cambridge) was photographed wearing a dress made by your company. Your sales for that dress may increase greatly for a brief time, thereby unintentionally affecting your test results dramatically. You just need to be aware that’s what is causing the anomalies.

(Image source: The Telegraph, 12 April 2016.)

There is always something going on in the outside world; just be aware of how it could affect your business.

Consider the following to avoid history effect:

  • Be conscious of major holiday dates that affect your market.
  • Monitor the media (or have your marketing/PR department keep you informed about negative and positive press).
  • Ensure everyone who needs to know about your testing is apprised of your activities.
  • Track your tests daily; monitor them closely; be vigilant when looking at the data.
  • Do not run any sequential tests (i.e., running one treatment followed by another for an equal period of time); always run the variations concurrently as A/B tests, as in the sketch below.
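
To make that last point concrete, here is a minimal sketch of concurrent random assignment, which is exactly what sequential testing lacks. It uses a simple deterministic hash of a visitor ID so each visitor always sees the same variation; the hash and the 50/50 split are illustrative, not any particular platform’s method.

```typescript
// Deterministically bucket each visitor so both variations run at the same
// time and a returning visitor always sees the same one. Because control
// and treatment share the same time window, a history effect (a PR story,
// a holiday spike) hits both groups equally instead of biasing one.
function hashVisitorId(visitorId: string): number {
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple 32-bit hash
  }
  return hash;
}

function assignVariant(visitorId: string): "control" | "treatment" {
  return hashVisitorId(visitorId) % 2 === 0 ? "control" : "treatment";
}

console.log(assignVariant("visitor-42")); // stable across repeat visits
```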

3. Selection Effect

Wrongly assuming that a segment of your traffic accurately represents the whole is known as a selection effect. It can occur when traffic acquired from a promotion hits your website and skews the results.

For example, you might run a product page test on traffic from your email database and see conversions, sales, and revenue growth well above your site’s average. But you are testing loyal customers, so that should be expected.

Nevertheless, you are happy with the results and the conversion lift is even better than before. You declare the new variation the winner.

You then open up this winning variation landing page to all site traffic and expect it to convert at the same percentage. But, it doesn’t! Your assumption was incorrect.

Here’s what you should do to avoid selection effect:

  • Make sure that everyone involved in marketing is aware of your tests and that you know what they’re doing.
  • If possible, don’t send campaign traffic to your test (or exclude it from your results) unless you are specifically testing that segment of your audience.
  • Always look at results across all traffic sources. Don’t take the average. Segment the sources to be sure you have proper representation, as in the sketch below.
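
Here is a minimal sketch of that last tip: computing the conversion rate per traffic source rather than one blended average. The sources and figures are hypothetical.

```typescript
// Hypothetical per-source results for one variation of the test.
interface SourceStats {
  source: string;
  visitors: number;
  conversions: number;
}

const results: SourceStats[] = [
  { source: "email", visitors: 2_000, conversions: 160 }, // loyal customers
  { source: "organic", visitors: 12_000, conversions: 240 },
  { source: "paid", visitors: 6_000, conversions: 90 },
];

// One blended average hides the skew...
const totalRate =
  results.reduce((sum, r) => sum + r.conversions, 0) /
  results.reduce((sum, r) => sum + r.visitors, 0);
console.log(`blended: ${(totalRate * 100).toFixed(1)}%`); // about 2.5%

// ...while segmenting shows email converting at 8%, four times organic.
for (const r of results) {
  console.log(`${r.source}: ${((r.conversions / r.visitors) * 100).toFixed(1)}%`);
}
```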

4. Broken Code Effect

Unbeknown to you, your test may contain bugs that produce flawed data. For example, these bugs might mean your variation isn’t displaying correctly on some browsers and devices. This is called the broken code effect.

To avoid the broken code effect, you should:

  • Perform cross-browser and cross-device compatibility testing on the new page. You have to make sure it looks good and works on every browser and device.
  • Check each of your test results across different browsers and devices. If you spot large discrepancies in performance across browsers, the chances are you have code issues (see the sketch after this list).
  • Find the coding problems, fix them, and restart the test.
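
As one way to operationalise the second tip, here is a minimal sketch that flags browsers whose conversion rate falls far below the rest. The threshold and figures are hypothetical, and a real check would also account for sample size before raising the alarm.

```typescript
// Hypothetical conversion results for one variation, segmented by browser.
interface BrowserStats {
  browser: string;
  visitors: number;
  conversions: number;
}

const byBrowser: BrowserStats[] = [
  { browser: "Chrome", visitors: 8_000, conversions: 200 },
  { browser: "Firefox", visitors: 2_000, conversions: 52 },
  { browser: "Safari", visitors: 3_000, conversions: 9 }, // suspiciously low
];

const overallRate =
  byBrowser.reduce((s, b) => s + b.conversions, 0) /
  byBrowser.reduce((s, b) => s + b.visitors, 0);

// Flag any browser converting at less than half the overall rate;
// a gap this large often means the page is broken there, not unpopular.
for (const b of byBrowser) {
  const rate = b.conversions / b.visitors;
  if (rate < overallRate * 0.5) {
    console.warn(
      `${b.browser}: ${(rate * 100).toFixed(2)}% vs overall ${(overallRate * 100).toFixed(2)}%`
    );
  }
}
```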

Thoroughness is the watchword

I hope that I have helped you to understand how and why your tests are open to attack from various error-causing effects. If you take control, you’ll minimise them and get as close as possible to eliminating their influence on your results.

It’s all down to being thorough and following a process. The more you prepare for all eventualities, the easier it is for you to maintain testing progress and get valid results.