A/B Testing

Weird Science, Part I: Introducing the Art of Numbers into Advertising

Posted: November 17, 2017 by Angela Corbett

As someone who gets the spins whenever numbers are involved, and thus dove hand grenade-style into the humanities, I find myself throwing around a lot of percentages these days. Copywriting isn’t all clever wordplay and rhythmic voiceover like I’d thought/hoped/prayed it would be. There’s a lot of research involved.

This little phrase, “backed by data,” is in our RFPs, on our website and in our social media profiles. Everywhere you go: creative backed by data, blah blah blah. What does that even mean?

It means we do our research before, during and after making work for clients. Now, put on your glasses and pocket protector. We’re gonna get sciencey.

Any advice column will tell you, success is all about the little things. One small change makes a big difference—like skipping your daily Starbucks Venti Lowfat Rainbow Unicorn Blood Latte™ to save a little money each day, or doing power squats before a meeting (power being the operative word here) to enhance your performance (trust me). And even though big ideas are all the rage—do something no one’s ever done, give people an experience they’ve never had, make them feel, really-really feel—most progress is achieved by making small changes, especially when it comes to online content.

Decisions like button size and color, word choice, the location of information, the font and the images used all have an effect on conversion rates, or the percent of people who take a desired action. For instance, Amazon found that moving credit card offers to their shopping cart instead of their home page increased profits by tens of millions of dollars annually.
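To put a number on it (with made-up figures): if 10,000 people visit a page and 250 of them take the action you’re after, that’s a conversion rate of 250 ÷ 10,000, or 2.5 percent. Nudge that to 3 percent and, at Amazon-scale traffic, the difference adds up fast.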

In a recent survey of 800 marketers by RedEye and eConsultancy, a majority said A/B testing is the best way to improve online conversion rates. Large companies like Facebook and Google conduct tens of thousands of tests, A/B and otherwise, every year, engaging millions of users. Testing gives them the power to minimize wasted time and money by quickly determining what does and doesn’t work. Then they’re able to get rid of the losers early enough to allocate more time and money to the winners.

A/B testing is a way to compare two versions of something to find out which performs better, and it’s nearly 100 years old. In A/B testing, one version, “A,” is the control and a modified version, “B,” is the treatment. Users are randomly assigned to one version or the other, and the results are compared. What A/B testing (and its more rigorous mates, A/B/C and A/B/C/D testing) does is back creative with an evidence-based process.
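If you like seeing things in code, here’s a bare-bones sketch of the idea in Python. The variant names, traffic volume and conversion rates are all invented for illustration; a real test runs against real visitors, not a random number generator.

```python
import random

# A minimal A/B split: each visitor is randomly assigned to the control ("A")
# or the treatment ("B"), and we tally how many convert in each group.
def assign_variant():
    return random.choice(["A", "B"])

def simulate_visit(variant):
    # Hypothetical underlying conversion rates, purely for the simulation.
    true_rates = {"A": 0.10, "B": 0.12}
    return random.random() < true_rates[variant]

counts = {"A": {"visitors": 0, "conversions": 0},
          "B": {"visitors": 0, "conversions": 0}}

for _ in range(10_000):
    variant = assign_variant()
    counts[variant]["visitors"] += 1
    if simulate_visit(variant):
        counts[variant]["conversions"] += 1

for variant, c in counts.items():
    rate = c["conversions"] / c["visitors"]
    print(f"{variant}: {c['conversions']}/{c['visitors']} converted ({rate:.1%})")
```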

Companies already have access to large customer samples, and analytics already collect enormous amounts of data on users’ habits and the way they interact with content. The pieces are there. A/B testing just provides a cost-effective framework companies can use to make sense of all that information and turn it into fast, effective changes that improve their results.

But brands have to be careful. A/B testing is a science. Collecting and analyzing data has many intricacies. Large companies employ fancy “data scientists” for this very reason. It’s easy to misinterpret results. To ensure your test’s integrity, there are ground rules, and here are a few of them:

  1. Define an evaluation metric that aligns with your goals—in other words, what outcome is the best predictor of reaching your long-term goals?

  2. Set up a safeguard to ensure your tests are reliable. A good way to do this is to run A/A tests first (there’s a rough sketch of one after this list).

  3. Be a skeptic. Twyman’s law states that any figure that looks interesting or different is usually wrong. So if you get a surprising result, replicate your study to make sure it’s correct.

  4. Exclude outliers. In other words, don’t let the exceptions shape your perception of the data.

  5. Switch up users from one experiment to the next. Not doing so can lead to the “carryover effect,” which is when people’s experience in one experiment affects their future behavior.

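For the statistically curious, here’s a rough sketch of the kind of check an A/A test gives you, using a simple two-proportion z-test in Python. The visitor and conversion counts are invented; the point is the shape of the check, not the numbers.

```python
import math

# Two-proportion z-test: the same check works for a real A/B comparison and
# for an A/A sanity test (where both groups see the identical page, so a
# "significant" difference signals a problem with the test setup).
def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# A/A check: both buckets were served the same version; we expect p >> 0.05.
z, p = two_proportion_z_test(conv_a=310, n_a=3000, conv_b=295, n_b=3000)
print(f"A/A check: z = {z:.2f}, p = {p:.3f}")

# A/B comparison: only trust the winner if the difference clears your
# significance threshold and the A/A check came back clean.
z, p = two_proportion_z_test(conv_a=310, n_a=3000, conv_b=362, n_b=3000)
print(f"A/B test:  z = {z:.2f}, p = {p:.3f}")
```

If the A/A comparison comes back “significant,” something in your assignment or tracking is off, and any A/B result from the same setup deserves a hard look.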
The best way to gauge the significance of an experiment is to assess the difference between its expected outcome and the actual result. If the result was surprising, you’ve learned something you didn’t know before, and learning something new about your target is always a good thing.

So go on and get tested—er, we mean start testing. Run all the tests. Better safe than sorry, that’s what Jeffrey always says.

 

 
