A/B testing is the bread and butter of today’s marketer. It’s a straightforward method for finding the top-performing version of a website, campaign, or promotion from a set of available options. However, it’s predicated on marketers having the time and the tools to run the tests. With so many tools out there that can get the job done, it becomes mostly a question of time management.
The Good
Let’s start with the good part about A/B testing: if you’re already testing, then you’re ahead of the curve, because you have already realized that in the fast-moving world of technology, you rarely get it right the first time.
You might have good ideas and years of experience to rely on, but that’s not enough to stay ahead of the competition and remain actively engaged with the needs and desires of your consumers. To make sure your assumptions about your consumers are still valid, you need to run tests, even if just to prove you got it right.
The Bad
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” – Mark Twain
When A/B tests are not performed properly, they can cause more harm than good. For example, if you act on early results, it’s easy to assume that you picked the right ‘winner’ and that you’re done. Only when you analyze the results a few weeks or months later do you make the unwelcome discovery that they are not as good as they first appeared.
For a test to be conclusive, you need a lot of data points, especially if you target your audience across multiple channels. If you haven’t waited long enough to collect statistically significant results, you can easily crown the wrong version as your winner. This mistake is common, but it can be mitigated by re-testing regularly and by not rushing to draw conclusions from insufficient data.
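To make “wait for significance” concrete, here is a minimal sketch in Python of a two-proportion z-test on conversion counts. The function name and the traffic numbers are purely illustrative, and most testing tools will run an equivalent check for you.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    A large p-value (say, above 0.05) means the observed difference could
    easily be noise, so the test is not yet conclusive.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # normal approximation
    return z, p_value

# Illustrative numbers: 120/2400 conversions for A vs. 138/2350 for B.
z, p = two_proportion_z_test(120, 2400, 138, 2350)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is roughly 0.19 here, so no winner yet
```

B looks better on the surface, but the p-value says the difference could still be noise; the right move is to keep the test running, not to ship B.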
The Ugly
Brian: You’re all different!
The Crowd: Yes, we ARE all different!
Man in crowd: I’m not… – Life of Brian
There is an ‘ugly’ side to A/B testing. Even if you have the tools, the time, and patience to do it properly, there is a common misconception that there is only one winner in these tests. The fact is that A/B tests will only tell you which version works best for the majority of your consumers, completely ignoring any minority consumer groups.
Remember, a ‘minority’ can be as large as 49% of your consumers, so you could be neglecting a huge portion of them if you rely too heavily on A/B testing. Don’t forget that you are targeting every individual in a crowd, not the crowd as one amorphous entity, even though split testing can fool you into thinking otherwise.
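Here is a small, purely hypothetical illustration of how an aggregate winner can hide a segment that prefers the other variant; the segment names, traffic shares, and conversion rates are made up for the sake of the example.

```python
# Hypothetical results: variant A wins on the blended average,
# yet the 40% "mobile" segment clearly prefers variant B.
segments = {
    "desktop": {"share": 0.60, "conv_a": 0.070, "conv_b": 0.050},
    "mobile":  {"share": 0.40, "conv_a": 0.030, "conv_b": 0.045},
}

overall_a = sum(s["share"] * s["conv_a"] for s in segments.values())
overall_b = sum(s["share"] * s["conv_b"] for s in segments.values())
print(f"overall: A = {overall_a:.3f}, B = {overall_b:.3f}")  # A "wins" (0.054 vs 0.048)

for name, s in segments.items():
    winner = "A" if s["conv_a"] > s["conv_b"] else "B"
    print(f"{name}: better variant is {winner}")  # desktop -> A, mobile -> B
```

Roll out A everywhere and the desktop majority is happy, while four in ten visitors get the worse experience and the topline number never tells you.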
You can also create nested A/B tests for different minority groups and reduce the number of left-out recipients. But the effort required and the diminishing returns make this a messy business: you end up choosing how many customers you are willing to sacrifice in the name of efficiency.
The Future
What should you do, then, if even strict and sophisticated A/B testing ignores almost half your audience?
You have to create personalized marketing strategies. You also have to forget averages and start thinking about individuals.
Every element of the marketing effort that you can A/B test today will be truly personalized in the not-so-distant future. This goes beyond “first name, last name” personalization: it means testing every engagement and customizing it at the individual level. Obvious as it may sound, personalizing anything, from send times to color schemes, drives engagement.
However, A/B testing and personalization can’t be done manually for every individual recipient, which means you have to lean on data science and heavy computing. Predictive algorithms and artificial intelligence are what let you stop guessing.
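As one sketch of what “stop guessing” can look like in practice, here is a minimal epsilon-greedy contextual bandit in Python that learns a per-context winner instead of a single global one. The class name, variants, contexts, and conversion rates are all invented for illustration; real systems use richer user features and more sophisticated models.

```python
import random
from collections import defaultdict

class EpsilonGreedyPersonalizer:
    """Learns which variant to show for each user context.

    Minimal epsilon-greedy bandit: per context (here a single made-up
    feature, device type) it tracks a running conversion-rate estimate
    for every variant, exploiting the best-known one most of the time
    and exploring the rest occasionally.
    """

    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.shows = defaultdict(lambda: defaultdict(int))
        self.wins = defaultdict(lambda: defaultdict(int))

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.variants)  # explore
        return max(self.variants, key=lambda v: self._rate(context, v))  # exploit

    def record(self, context, variant, converted):
        self.shows[context][variant] += 1
        if converted:
            self.wins[context][variant] += 1

    def _rate(self, context, variant):
        shows = self.shows[context][variant]
        return self.wins[context][variant] / shows if shows else 0.0

# Made-up "ground truth": mobile users respond better to B, desktop to A.
true_rates = {("mobile", "A"): 0.03, ("mobile", "B"): 0.05,
              ("desktop", "A"): 0.07, ("desktop", "B"): 0.05}

personalizer = EpsilonGreedyPersonalizer(["A", "B"])
for _ in range(20000):
    context = random.choice(["mobile", "desktop"])
    variant = personalizer.choose(context)
    converted = random.random() < true_rates[(context, variant)]
    personalizer.record(context, variant, converted)

for context in ("mobile", "desktop"):
    best = max(personalizer.variants, key=lambda v: personalizer._rate(context, v))
    print(context, "->", best)  # typically mobile -> B, desktop -> A
```

Unlike a one-off split test, this kind of system keeps learning as new traffic arrives, so different groups of users end up with the version that actually works for them.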
The technology is ready. Are you?