A/B Testing for Customer Experience

A/B testing is one of the most powerful tools for determining which of two (or more) ways of designing a customer experience is better. It can be applied to almost any kind of customer experience, and it provides definitive data on which design wins against whatever criteria you choose.

Stripping away the jargon, an A/B test is really just a controlled experiment, the kind we all learned about in 8th grade science class. “A” and “B” are two different versions of whatever you’re trying to evaluate: two different website designs, two different speech applications, or two different training programs for contact center agents. The test happens when you randomly assign customers to either “A” or “B” for some period of time and collect data on how each version performs.
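
To make “randomly assign” concrete, here is a minimal sketch in Python (not from the original post; the function name and customer IDs are hypothetical) of one common way to split traffic: hash each customer’s ID so the assignment is random across customers but stable for any one individual, meaning a repeat visitor always sees the same version.

```python
import hashlib

def assign_variant(customer_id: str, variants=("A", "B")) -> str:
    """Deterministically assign a customer to one test variant.

    Hashing the customer ID (instead of flipping a coin on every visit)
    spreads customers randomly across variants while keeping any one
    customer's assignment stable for the life of the test.
    """
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Quick check: the split across many customers should be close to 50/50.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"customer-{i}")] += 1
print(counts)
```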

Conducting a proper A/B test isn’t difficult, but it does require some attention to detail. A good test must have:

  • Proper Controls: You want the “A” and “B” test cases to be as similar as possible except for the thing you are actually testing, and you want to make sure customers are being assigned as randomly as possible to one case or the other.
  • Good Measurements: You should have a good way to measure whatever you’re using for the decision criteria. For example, if the goal of the test is to see which option yields the highest customer satisfaction, make sure you’re actually measuring customer satisfaction properly (through a survey, as opposed to trying to infer satisfaction levels from some other metric).
  • Enough Data: As with any statistical sampling technique, accuracy improves as you collect data from more customers. I recommend at least 400 customers in each test case (400 customers experience version A and 400 experience version B, for 800 total if you are testing two options). Smaller samples can be used, but the test will be less accurate, and that needs to be taken into account when analyzing the results; a rough sketch of the statistics appears after this list.

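For a rough sense of the statistics behind the 400-per-cell recommendation: with 400 responses, a measured proportion (say, the percentage of satisfied customers) carries a 95% confidence margin of error of about ±5%. The sketch below (Python; the satisfaction counts are made-up numbers, not from the post) shows that calculation alongside a standard two-proportion z-test for comparing versions A and B.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Compare rates (e.g. percent satisfied) between versions A and B.

    Returns the z statistic and two-sided p-value; a p-value below 0.05
    is the usual threshold for calling the difference real rather than
    sampling noise.
    """
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Margin of error for one proportion at n = 400, 95% confidence: about ±4.9%
n = 400
print(f"Margin of error: +/-{1.96 * sqrt(0.25 / n):.1%}")

# Hypothetical survey results: 240 of 400 satisfied on A, 270 of 400 on B.
z, p = two_proportion_z_test(240, 400, 270, 400)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = -2.21, p = 0.027
```

Note that detecting a small difference between two versions generally takes more data than pinning down a single rate, so if the lift you expect is only a few points, a proper power calculation is worth doing before the test starts.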
In the real world it’s not always possible to do everything exactly right. Technical limitations, project timetables, and limited budgets can all force compromises in the study design. Sometimes these compromises are OK and don’t significantly affect the outcome, but sometimes they can cause problems.

For example, if you’re testing two different website designs and your content management system doesn’t make it easy to randomly assign visitors to one version or the other, you may be forced to do something like switch to one design for a week, then switch to the other for a week. This is probably going to work, but if one of the test weeks also happens to be during a major promotion, then the two weeks aren’t really comparable and the test data isn’t very helpful.

But as long as you pay attention to the details, A/B testing will give you the best possible data for deciding which customer experience ideas are worth adopting and which should be discarded. This is a tool that belongs in every CX professional’s kit.

Republished with author's permission from original post.

Peter Leppik
Peter U. Leppik is president and CEO of Vocalabs. He founded Vocal Laboratories Inc. in 2001 to apply scientific principles of data collection and analysis to the problem of improving customer service. Leppik has led efforts to measure, compare and publish customer service quality through third party, independent research. At Vocalabs, Leppik has assembled a team of professionals with deep expertise in survey methodology, data communications and data visualization to provide clients with best-in-class tools for improving customer service through real-time customer feedback.
