11 Common Measurement Mistakes in Customer Experience Programs

We all need a methodology. But simply having a methodology does not guarantee success. A methodology is often just a system of measurements accompanied by an acronym. Nowhere is it written that a methodology must be accurate, precise, or reliable. There are “methodologies” on the market today with margins of error in the double digits, yet I have no doubt the people using them still have confidence in those methodologies.

Here are 11 common measurement mistakes that I’ve found undermine success in customer experience programs.

  1. Drawing Conclusions from Incomplete Information
    Let’s say your analytics show visitors spend a relatively high amount of time on a particular page. Is that page great – or is it problematic? Maybe visitors simply love the content. Or maybe they are getting stuck due to a problem with the page. Or suppose your call center statistics show that average call time has decreased. Is that good news or bad news? When calls end more quickly, costs go down, but have you actually satisfied callers or left them disgruntled, dissatisfied, and on their way to your competition? Without additional information to help evaluate the data, you simply cannot know. Never draw conclusions from any statistical analysis that does not tell the whole story.
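    To make this concrete, here is a minimal sketch with hypothetical page names and numbers: the same time-on-page figure supports opposite conclusions until it is paired with a second signal, such as a satisfaction score.

    ```python
    # Hypothetical data: the same time-on-page can mean opposite things,
    # so pair it with a second signal before concluding anything.
    pages = {
        # page: (avg_seconds_on_page, avg_satisfaction_1_to_10)
        "/buying-guide": (240, 8.9),  # long visits, happy visitors
        "/checkout":     (240, 4.2),  # equally long visits, unhappy visitors
    }

    for page, (seconds, satisfaction) in pages.items():
        if seconds > 180 and satisfaction >= 7:
            verdict = "engaging: visitors linger and report a good experience"
        elif seconds > 180:
            verdict = "suspect: visitors linger but report a poor experience"
        else:
            verdict = "unremarkable on this metric pair"
        print(f"{page}: {verdict}")
    ```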

  2. Failing to Look Forward
    Every company seeks to look forward, and measuring customer satisfaction after an activity or transaction is certainly helpful, but what if you also want to better predict the future? Measuring customer satisfaction by itself will not provide the best view forward. Using a complete satisfaction measurement system – including future behaviors and predictive metrics such as likelihood to return to the site or likelihood to purchase again – generates leading indicators that complement and illuminate lagging indicators.
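    A minimal sketch of the idea, with hypothetical survey rows: report the lagging indicator (satisfaction with the last transaction) alongside a leading one (stated likelihood to purchase again), and check the leading metric against later observed behavior.

    ```python
    # Hypothetical survey rows: (satisfaction 1-10, stated likelihood to
    # repurchase 1-10, actually returned within 90 days?)
    surveys = [
        (9, 9, True), (8, 9, True), (7, 8, False), (6, 3, False),
        (9, 8, True), (5, 2, False), (8, 7, True), (7, 6, False),
    ]

    lagging = sum(s for s, _, _ in surveys) / len(surveys)
    leading = sum(i for _, i, _ in surveys) / len(surveys)
    print(f"lagging indicator (avg satisfaction):      {lagging:.1f}")
    print(f"leading indicator (avg repurchase intent): {leading:.1f}")

    # How often did high stated intent (>= 7) match actual return behavior?
    hits = sum((intent >= 7) == returned for _, intent, returned in surveys)
    print(f"intent anticipated behavior for {hits} of {len(surveys)} respondents")
    ```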

  3. Assuming a Lab is a Reasonable Substitute
    Usability groups and observation panels are certainly useful and have their place; the problem is that the sample sizes are small and the testing takes place in a controlled environment. Say you bring people into a lab and tell them what you want them to do. Does that small group of eight participants represent your broader audience? Does measuring and observing people doing what you tell them to do produce the same results as real users doing what they want to do? Observation is helpful, but applying science to the voice of the customer and measuring the customer experience through the lens of customer satisfaction is critical to success.
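    The sample-size problem alone is stark. A minimal sketch using the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n), shows how little precision eight participants can give:

    ```python
    # Margin of error for an observed proportion at 95% confidence,
    # using the worst case p = 0.5. (The normal approximation is itself
    # shaky at n = 8, which only underlines the point.)
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        return z * sqrt(p * (1 - p) / n)

    for n in (8, 100, 400):
        print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
    # n =    8: +/- 34.6%
    # n =  100: +/- 9.8%
    # n =  400: +/- 4.9%
    ```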

  4. Confusing Causation and Correlation
    Suppose your online store sales and conversion rates increase after you implement an online experience update. If you made no other changes to your site, ran no promotions, and did not market differently, you can safely conclude your change was a direct – and positive – causal factor. However, if you made that change but also ran a special on certain products at the same time and marketed the promotion through traditional media, online, and social media, you can’t say for certain what worked. With precise, scientific measurement you can dig deeper and determine what did and, more importantly, didn’t work, allowing you to apply your resources to the areas that are boosting your bottom line.
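    The cleanest way to untangle this is a randomized holdout test. A minimal sketch with hypothetical conversion rates: show the update to a random half of visitors and leave everything else identical, so promotions and marketing hit both groups equally.

    ```python
    import random

    random.seed(42)

    def visit(sees_update):
        # Hypothetical underlying rates: the update lifts conversion from 4% to 5%.
        return random.random() < (0.05 if sees_update else 0.04)

    for name, treated in (("control (old experience)", False),
                          ("treatment (new experience)", True)):
        n = 20_000
        conversions = sum(visit(treated) for _ in range(n))
        print(f"{name}: {conversions / n:.2%} conversion")
    # Assignment is random and simultaneous, so promotions, seasonality, and
    # marketing hit both groups equally; any gap is attributable to the update.
    ```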

  5. Mistaking Feedback for Measurement
    Feedback is necessary and important: it gives you things you can, and should, act on (broken links, missing items, missing pages, and so on). However, feedback is purely reactive; it is easier to respond to complaints than to proactively identify and measure potential problems. Feedback isn’t inherently bad, but it does tend to be anecdotal and biased. People are more likely to speak up when they have a bad experience, less likely when they have a great experience, and hardly ever when they’re somewhere in the middle. That silent majority usually makes up the biggest part of your business and, in large part, drives its success, yet it often goes unheard. Good customer experience analytics randomly intercept users to create a representative sample of your entire customer base – not just the disgruntled and the super happy but those in between – at the right time with the right survey.
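    A minimal sketch of a random intercept (the rate and names are hypothetical): every session gets the same chance of being invited, so the sample mirrors the whole customer base rather than just the extremes who volunteer feedback.

    ```python
    import random

    random.seed(7)

    SURVEY_RATE = 0.05  # invite roughly 5% of sessions, chosen at random

    def invited_to_survey():
        return random.random() < SURVEY_RATE

    sessions = 10_000
    invited = sum(invited_to_survey() for _ in range(sessions))
    print(f"invited {invited} of {sessions} sessions ({invited / sessions:.1%})")
    # Unlike a feedback link, selection is independent of how the visit went,
    # so the quiet middle is represented in proportion to its true size.
    ```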

  6. Forgetting the Real Experts are Your Customers
    Experts, like usability groups, have their place. But who knows customer intentions, customer needs, and customer attitudes better than actual customers? When you really want to know, go to the source. It takes more time and work, but the results are far more valuable. I cannot count how many times I have been in meetings with analysts and experts who swear that a new site navigation system will solve every problem on a particular website, when what customers actually want is more product variety to choose from. Experts and consultants certainly have their place, but their advice and recommendations must be driven by customer needs at least as much as by organizational needs.

  7. Gaming the System
    Measuring correctly means creating as little measurement bias as possible while generating as little measurement noise as possible. Avoid incentivizing people to complete surveys, especially when there is no need. Never ask for personal data; some customers will decline to participate over privacy or confidentiality concerns alone. And never measure with the intent to prove a point. Unfortunately, research run by internal staff often contains some amount of built-in bias. As employees we may, however unintentionally, design customer measurements to prove our opinions correct or support our theories – but to what end? Customer measurements must measure from the customers’ perspective and through the customers’ eyes, not through a lens of preconceived views.

  8. Sampling Problems
    Sampling works well when it is done correctly. Sample selection and sample size are critical to creating a credible, reliable, accurate, precise, and predictive methodology. Sampling is a science in and of itself: you need randomly selected samples that are representative of the larger population.
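    Sample size, at least, can be planned rather than guessed. A minimal sketch that inverts the margin-of-error formula from mistake #3 to find the sample size a target precision requires:

    ```python
    from math import ceil

    def required_sample_size(moe, p=0.5, z=1.96):
        # n = z^2 * p(1-p) / moe^2: worst case p = 0.5, 95% confidence
        return ceil(z ** 2 * p * (1 - p) / moe ** 2)

    for moe in (0.05, 0.03, 0.01):
        print(f"target +/- {moe:.0%}: need n >= {required_sample_size(moe)}")
    # target +/- 5%: need n >= 385
    # target +/- 3%: need n >= 1068
    # target +/- 1%: need n >= 9604
    ```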

  9. Faulty Math
    Intelligence is not binary. People are not just smart or stupid. People are not just tall or short. Customers are not just satisfied or dissatisfied. “Yes” and “no” do not accurately explain or define levels or nuances of customer satisfaction. The degree of satisfaction with the experience is what determines the customer’s level of loyalty and positive word of mouth. Claiming 97% of your customers are satisfied certainly makes for a catchy marketing slogan but is far from a metric you can use to manage your business forward. If you cannot trust and use the results, why do the research?
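    A minimal sketch with hypothetical ratings: two customer bases can post the identical “percent satisfied” headline while differing sharply in the degree of satisfaction that actually drives loyalty.

    ```python
    # Ten customers each, rated 1-10; "satisfied" collapsed to score >= 6.
    site_a = [6, 6, 6, 6, 6, 6, 6, 6, 6, 3]    # barely satisfied
    site_b = [10, 10, 10, 9, 9, 9, 9, 8, 8, 3] # mostly delighted

    for name, scores in (("site A", site_a), ("site B", site_b)):
        pct = sum(s >= 6 for s in scores) / len(scores)
        mean = sum(scores) / len(scores)
        print(f"{name}: {pct:.0%} 'satisfied', mean score {mean:.1f}")
    # site A: 90% 'satisfied', mean score 5.7
    # site B: 90% 'satisfied', mean score 8.5
    ```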

  10. Keeping it Simple – Too Simple
    The “keep it simple” approach does not work for measuring customer satisfaction (or, really, for measuring anything regarding customer attitudes and behaviors). Customers are complex individuals who make decisions based on a number of criteria, most rational, some less so. Asking three or four questions does not create a usable metric or help to develop actionable intelligence. Still, many companies take this approach and make major strategic decisions – and often compensate their executives – based on a limited and therefore flawed approach to measurement. Great managers do not make decisions based on hunches or limited data; “directionally accurate” is simply not good enough when our companies and our customers are at stake.

  11. Measurement by Proxy
    Task completion only measures – no surprise – whether a task was completed. It does not measure satisfaction, and it does not measure future intentions. Encouraging customers to recommend your business, and measuring their likelihood to do so, is smart and often generates substantial revenue. But never use recommendation as a proxy for satisfaction or loyalty, and never use satisfaction as a proxy for recommendation or loyalty. Research from universities and corporations around the world consistently shows that satisfaction is causal: it is a key driver of both recommendations and customer loyalty.

Remember, a truly useful measurement methodology is accurate, precise, and reliable. Otherwise it is just garbage in, garbage out. Inaccurate and imprecise methodologies lead to poor decisions – and to a false sense of confidence in those decisions.

So remember to measure right, manage forward, and make a difference in your company.


This article is an edited version of a series of ForeSee blog posts excerpted from Larry Freed’s second book, Innovating Analytics, which will be published this fall through Wiley Publishing.

Larry Freed
As President and CEO of ForeSee since it was founded in 2001, Larry Freed is a widely recognized expert in customer satisfaction analytics, strategy, and technology, and speaks extensively on the topic at private and public-sector industry events. He is regularly quoted in numerous publications and other media, including CNN, Marketplace, Dow Jones, The Wall Street Journal, The Washington Post, The New York Times, Investor's Business Daily, Internet Retailer, and Computerworld, among many others. He is also the author of more than 100 articles, white papers, and other research reports.

3 COMMENTS

  1. …and, among the biggest for analysts may be the causation-correlation issue. It sometimes feels like researchers are too often ready to take the easy road to insights, thinking that, if one behavioral driver looks like it’s connected to another, it probably is: http://www.customerthink.com/blog/correlation_is_not_causation_big_data_challenges_and_related_truths_that_will_impact_business_s This is why your call for precise measurement is so important; and it’s also why I’ve come to rely on the causal accuracy of customer advocacy and customer-brand bonding research frameworks.

  2. Larry,

    This is a great article which points to many of the failures of current CEM measurement practices. I hope you don’t mind, but I would add one more: the inability of CEM managers to connect their efforts and expenses to quantifiable results.

    While this might not sound like a measurement problem, this problem stems from the terrible job done by most managers in linking CEM metrics to customer behaviors. For example, the average percentage of variance explained across industries from satisfaction and/or Net Promoter classifications to customers’ share of category spending (aka share of wallet) is around 1%. When the relationship at the customer level is this weak, there is no way we can reliably predict financial outcomes from improving these metrics.

    Most managers don’t want to accept the above reality. But it is easy for managers to prove for themselves in Excel.

    To do so, simply record customers' satisfaction ratings (or Net Promoter classifications) in one column and their share of wallet in another. Next, compute the correlation between satisfaction/Net Promoter and share of wallet. Finally, to determine how much of the variance in share of wallet is explained by satisfaction/Net Promoter, simply square the correlation. The percentage of variance explained will almost always be less than 5% (typically around 1%), meaning that 95% or more of the variation in share of wallet is completely unexplained by satisfaction or Net Promoter. (The sketch after this comment shows the same computation in code.)

    Therefore, before CEM managers simply accept that their measurement processes are pointing them in the right direction, they need to first be certain that their metrics strongly correlate to the share of spending their customers give to their businesses. Only then can they confidently link their efforts to improved business performance.
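A minimal Python version of the spreadsheet exercise described in the comment above, using made-up numbers purely to show the mechanics (the data and the near-zero correlation here are illustrative, not real research):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical per-customer data: survey rating and share of wallet.
satisfaction    = [9, 7, 8, 6, 9, 5, 8, 7, 6, 9]
share_of_wallet = [0.30, 0.55, 0.20, 0.40, 0.35, 0.25, 0.50, 0.30, 0.45, 0.40]

r = correlation(satisfaction, share_of_wallet)
print(f"correlation r = {r:.3f}")
print(f"variance explained r^2 = {r ** 2:.1%}")
# With this toy data r is near zero, echoing the commenter's point that a
# weak customer-level relationship cannot reliably predict financial outcomes.
```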
