In the other videos in this series, I’ve suggested techniques, based on psychological theories, that you can use to boost your conversions. But in this video, I want to change tack slightly and ask how you can tell when a change that you’ve made to your site is having the desired effect.
In my book Psy-Q, I talk about a classic thought experiment that is often used as an example of the phenomenon I’m talking about. Think about pilots who are learning to land planes, which is the most difficult part of the job. What’s the best way to train pilots: the carrot or the stick? Should you praise and reward them for good landings, or scold and punish them for bad landings? (The same question arises in lots of situations, such as trying to discipline kids at school, or even at home.)
So, suppose that you decide to do both. You praise and reward them for every good landing, and fine or punish them for every bad landing. You do this for a while and then have a look at your results.
You notice that after praising them for an excellent landing, the next one is always worse. On the other hand, after you’ve punished them for a bad landing, they always do better on the next one. So, you conclude that the stick is better than the carrot, and that you’ll concentrate on this from now on.
It all seems pretty reasonable, doesn’t it? But the whole thing is just an illusion. This pattern of landings would happen regardless of your praise or punishment, because of something called regression to the mean.
Plane landings, like just about everything else involving human behaviour, follow what’s called a normal distribution. Most landings are OK: just about average. A few landings are excellent, so smooth that you can barely feel the wheels hit the ground. A few are terrible, so bumpy that you almost fear for your life.
But the excellent and the terrible landings are both very rare, and the average landings are very common. So, after a terrible landing, odds are that the next one will be better, just because most landings are better than terrible. On the other hand, after an excellent landing, odds are that the next one will be worse, just because most landings are worse than excellent.
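This effect is easy to see in a simple simulation. The sketch below, in Python with made-up landing-quality numbers, draws consecutive landing scores from a normal distribution with no praise or punishment involved at all, and then checks what happens immediately after the very best and the very worst landings.

```python
import random

random.seed(42)

# Hypothetical landing-quality scores: higher is better, drawn from a
# normal distribution (mean 70, standard deviation 10). The numbers are
# made up purely for illustration.
landings = [random.gauss(70, 10) for _ in range(100_000)]

# Define "terrible" as the bottom 5% of landings and "excellent" as the
# top 5%, using the empirical percentiles of the sample itself.
scores = sorted(landings)
bad_cutoff = scores[int(0.05 * len(scores))]
good_cutoff = scores[int(0.95 * len(scores))]

# Look at consecutive pairs of landings: after a terrible landing, how
# often is the next one better? After an excellent landing, how often is
# the next one worse? (The draws are independent, so any pattern here is
# pure regression to the mean.)
pairs = list(zip(landings, landings[1:]))
after_bad = [nxt > cur for cur, nxt in pairs if cur < bad_cutoff]
after_good = [nxt < cur for cur, nxt in pairs if cur > good_cutoff]

print(f"Improved after a terrible landing: {sum(after_bad) / len(after_bad):.0%}")
print(f"Worsened after an excellent landing: {sum(after_good) / len(after_good):.0%}")
```

Both percentages come out around 95%, even though nothing at all connects one landing to the next: the apparent “effect” of praise or punishment is baked into the distribution itself.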
So, if you go around making changes to your site with no real plan, you risk falling into the same trap. If your site is doing terribly one day, and then you change it and things pick up, you have no way of knowing whether it was because of your change or simply because, on most days, your site performs around its average; after a terrible day, a better day is almost inevitable.
On the other hand, if your site drops off after a particularly excellent day, it’s probably not because of some change that you did or didn’t make; it’s just regression to the mean.
Psychologists, and in fact all scientists, are all too aware of these kinds of statistical fallacies (seeing patterns that aren’t there), which is why we’ve learned never to take our data at face value, but to analyse them using statistical tests. One option is the traditional type of test, which checks whether a particular change in performance passes some threshold for statistical significance.
Or, increasingly these days, the Bayesian approach, which is what we use here at Endless Gain: comparing the relative probability that a particular upturn, or downturn, in conversions or revenue was due to genuine differences between versions of the site, rather than just down to chance.
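To make the two approaches concrete, here is a rough sketch in Python. The visitor and conversion counts, the uniform prior, and the thresholds are all illustrative assumptions, not Endless Gain’s actual method. It first runs a traditional two-proportion z-test on some made-up A/B test data, then a simple Bayesian comparison using Beta posteriors.

```python
import math
import random

random.seed(0)

# Hypothetical A/B test results (made-up numbers for illustration):
# version A of the site vs. version B, visitors and conversions.
visitors_a, conversions_a = 10_000, 480
visitors_b, conversions_b = 10_000, 560

# --- Traditional approach: two-proportion z-test ---
p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")

# --- Bayesian approach: Beta posteriors and Monte Carlo sampling ---
# With a uniform Beta(1, 1) prior, the posterior over each version's
# conversion rate is Beta(conversions + 1, non-conversions + 1). We draw
# from both posteriors and count how often B beats A.
samples = 100_000
wins = sum(
    random.betavariate(conversions_b + 1, visitors_b - conversions_b + 1)
    > random.betavariate(conversions_a + 1, visitors_a - conversions_a + 1)
    for _ in range(samples)
)
print(f"P(B converts better than A) = {wins / samples:.1%}")
```

The z-test answers “is this difference too big to be plausibly down to chance?”, while the Bayesian comparison directly estimates the probability that one version really is better, which is often the question a site owner actually wants answered.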
It’s only by using these types of statistical techniques that you can avoid getting caught out by statistical illusions like regression to the mean.