Customer Effort Score: remember the NPS wars

Many commentators have recently debated the relative merits of Customer Effort Score (CES) versus Net Promoter Score (NPS). As a leader who remembers the controversy that surrounded NPS when it first came to dominance, I find the parallels concerning. I still recall the effort I wasted trying to win the battle by pointing out the flaws in NPS and its lack of academic evidence, whilst in fact I was looking a gift horse in the mouth (I’ll explain that later). I would caution anyone currently worrying about whether or not CES is the “best metric” to remember the lessons that should have been learnt from the “NPS wars”.

For those not so close to the topic of customer experience metrics: although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are Customer Satisfaction (CSat), NPS and now CES. These are not equivalent metrics, as they measure slightly different things, but all report a rating given by customers in response to a single question. CSat captures the customer’s emotional response to an interaction with the organisation (usually on a 5-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale, with the percentage of detractors (scoring 0-6) subtracted from the percentage of promoters (scoring 9-10) to give a net score. CES also captures an attitude about the interaction, but rather than asking about satisfaction it seeks to capture how much effort the customer had to put in to achieve what they wanted or needed (again on a 5-point scale).
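
To make that net-score arithmetic concrete, here is a minimal sketch in Python (the function name and sample ratings are my own illustration, not taken from any particular survey platform):

    def net_promoter_score(ratings):
        """Compute NPS from individual 0-10 ratings.

        Promoters score 9-10 and detractors 0-6; passives (7-8)
        count in the base but not in the numerator.
        """
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        # NPS is conventionally quoted as a whole number, -100 to +100
        return 100 * (promoters - detractors) / len(ratings)

    # Example: 5 promoters, 3 passives, 2 detractors -> NPS of +30
    print(net_promoter_score([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))  # 30.0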

The reality, from my experience (excuse the pun), is that none of these metrics is perfect and each carries a risk of misrepresentation or oversimplification. I agree with Prof. Moira Clark of the Henley Centre for Customer Management. When we discussed this, we agreed that ideally all three would be captured by an organisation, because satisfaction, likelihood-to-recommend and effort-required are different ‘lenses’ through which to study what you are getting right or wrong for your customers. However, that utopia may not be possible for every organisation; it depends on your volume of transactions and your capability to randomly vary which metrics are captured and the order in which they are asked.

But my main learning from a couple of years of the ‘NPS wars’ is that the metric is not the most important thing here. As the old saying goes, “it’s what you do with it that counts”. After NPS won the war and became a required balanced-scorecard metric for most CEOs, I learnt that this was not a defeat but rather the ‘gift horse’ I referred to earlier. Because NPS had succeeded in capturing the imagination of CEOs, funding was available to capture learning from this metric more robustly than had previously been done for CSat. So, over a year or so, I came to really value the NPS programme we implemented. This was mainly because of its granularity (by product and touchpoint) and the “driver questions” we captured immediately afterwards. Together these provided a richer understanding of what was good or bad in the interaction, enabling prompt responses to individual customers and targeted action to implement systemic improvements.

Now we appear to be at a similar point with CES, and I want to caution against being drawn into another ‘metric war’. There are certainly things that can be improved about the way the proposed question is framed (I have found it more useful to reword it and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I encourage organisations to focus instead on how you implement any CES programme (or enhance your existing NPS programme) to maximise actionability. That is where the real value lies.

Another tip: use learning from your existing research, including qualitative work, to frame additional driver questions to capture immediately after CES. You can then use analytics to identify which drivers correlate with movements in the score. Having such robust, regular quantitative data capture is much more valuable than being ‘right’ about your lead metric.
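
As a sketch of that correlation step (assuming pandas is available; the column names and ratings below are hypothetical stand-ins for your own survey extract), the analysis can start as simply as:

    import pandas as pd

    # Hypothetical extract: one row per response, the CES score plus
    # the driver questions asked immediately afterwards (all 1-5).
    responses = pd.DataFrame({
        "ces":              [1, 2, 2, 4, 5, 3, 1, 5, 4, 2],
        "wait_time":        [2, 2, 3, 4, 5, 3, 1, 5, 4, 1],
        "staff_knowledge":  [4, 3, 5, 4, 4, 2, 3, 5, 5, 3],
        "channel_switches": [1, 2, 1, 4, 5, 4, 2, 4, 5, 2],
    })

    # Rank each driver by its correlation with the effort score; the
    # strongest correlates are the candidates for root-cause analysis.
    drivers = responses.drop(columns="ces")
    print(drivers.corrwith(responses["ces"]).sort_values(ascending=False))

In practice you would run this over thousands of responses and validate any apparent driver with qualitative follow-up before acting on it.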

What’s your experience with CSat, NPS or CES? Do you share my concerns?

Paul Laughlin
Paul helps companies make money from customer insight. That means helping them maximise the value they can drive from using data, analysis & research to intelligently interact with customers. Former Head of Customer Insights for Lloyds Banking Group Insurance, he has over 12 years’ experience of creating and improving such teams. His teams have consistently added over £10m incremental profit per annum through improvements to customer retention and acquisition.

8 COMMENTS

  1. Though respecting your point of view, I’m very much of the professional researcher/consultant camp, believing that, if an organization is truly serious about optimizing customer-centricity, then satisfaction, CES, and NPS as core metrics have serious conceptual flaws and/or granular action limitations:

    http://customerthink.com/as-the-old-expression-goes-you-can-put-lipstick-on-a-pig-but/

    http://customerthink.com/is-there-a-single-most-actionable-contemporary-and-real-world-metric-for-managing-optimizing-and-leveraging-customer-experience-and-behavior/

    http://customerthink.com/when-b2b-and-b2c-key-performance-metrics-flatline/

    At the very minimum, a performance metrics system should be augmented with customer advocacy and brand bonding metrics, and accompanying actionable analysis. These will help any B2B or B2C company generate stronger results for their array of customer experience initiatives.

  2. Thanks for your input, Michael, and I also respect your point of view. In fact I’d agree that both NPS and CES have conceptual flaws.

    However, after over a decade of improving customer experiences and seeing incremental profit as a result, the point of my article was to advise against being drawn into that kind of “war”. Despite the conceptual weaknesses, which one could seek to address with more complex or additional metrics, I have seen more value in focussing on granular capture of NPS or CES and proper root-cause analysis. For instance, asking customers supplementary questions (aligned to learning from your existing research) and testing which of them correlate with movements in the score can help you identify the aspect of the experience that resulted in a low rating.

    But, as with much customer insight work, there is more value to be had from taking action than from having the perfect answer. Despite my own concerns with NPS, I’ve seen the benefits of robust capture and the use of analytics and research to diagnose the action needed. More importantly, I’ve seen top-table support for such simple metrics drive a culture which supported remedial action to fix systemic faults and prompt communication with individuals who gave low scores.

    In principle, as I mention, I’m with Prof Moira Clark in supporting the capture of NPS, CES and CSat to enable convergence of evidence (ideally still supplemented by driver questions). But my heartfelt plea, after many years, is: don’t waste your effort arguing over metrics; use whichever is supported and can win you the backing and investment for consistent remedial action and communication. After all, it’s actually improving the experience that counts.

  3. My response was not about having a ‘perfect’ metric. We can easily agree that perfection doesn’t exist. The metric doesn’t drive performance. What I’m suggesting is that the more granular, holistic and real-world the actionability of a KPI, the better-served will be the customer, the employee, and the enterprise.

    Holistic includes not only evaluating the elements of functional and tangible value delivery but also, and at least equally, evaluating the elements that build relationship and trust: http://www.greenbook.org/marketing-research/corporate-image-trust-reputation-customer-advocacy-behavior-linkage-00524

    CES, NPS, satisfaction, and loyalty indices don’t meet this set of standards.

  4. I think companies should develop the best metric they can. However, based on my recent survey, the choice of metrics has little to do with driving business performance.

    My Sept. 2014 survey found companies using a mix of CSAT, NPS, CES and custom metrics. No one metric was correlated with better performance.

    Instead, my research suggests that top performance is more a factor of a company’s “bias towards action” as one business leader put it to me.

    Practices that did correlate with improved business performance included using multiple sources (including non-survey sources like text and social media) and closing the loop with customers who provided feedback.

    To net it out, I think a company with an “ok” metric and a bias for action will outperform a company that is stuck in analysis paralysis, searching for metric perfection but failing to mobilize the organization to do something about the feedback.

  5. I totally agree, Bob, that’s my experience too.

    Michael, thanks for raising the topic of alignment with longer-term journey or brand metrics. It prompts me to clarify that I’m talking about “transactional NPS” in my article, not “brand NPS”. I have found capturing NPS (or CES) close to the time of a recent sales or service interaction much more useful as a metric than the vague brand impression captured in wider surveys. Although I know a lot of organisations are keen to benchmark their performance against peers in their industry, only brand NPS is captured in those benchmarking surveys, and it does not directly relate to transactional NPS scores. The former can be heavily influenced by PR and media spend (i.e. it tends to track awareness and consideration), whereas a good-quality transactional metric can normally be correlated with internal value metrics (like customer retention).

    I hope that helps. I’ll avoid being drawn into whether or not brand advocacy really exists or has any real influence on customer journeys. I can accept that varies by sector; however, after 25 years working for financial services firms, I tend to agree with Prof Bob Shaw’s view that brand advocacy is a myth in most circumstances.

  6. In all honesty, tactical transactional response, and measuring its potential effect on future customer action, is a lot more chancy and open to problems than relationship loyalty measures. Bias toward action is critical in optimizing transactional experiences; however, here the measure or measures applied are important to help make certain that whatever correctives or improvements a company makes, especially in service situations, are not red herrings (what Deming used to call ‘chasing hot rabbits’): http://customerthink.com/the-challenges-of-addressing-the-right-customer-needs-and-solving-the-right-customer-problems-three-examples-and-three-important-questions/

    Again, this is not an argument over the metrics applied, which is often non-productive and feeds the ‘analysis paralysis’ seen with many risk-averse researchers. It’s about helping to make the best, most granular decisions where building strong customer relationships is concerned. And, if there is an analytical approach which is more consistent, contemporary, and real-world, and also has proven actionability, the bias for action should be to at least try it in parallel with everything else the organization may be doing. Pilot programs can often lead the way to better results.

  7. In fact, brand advocacy and brand bonding behavior not only exist; we have understood and analyzed their influence on customer marketplace actions (informal online/offline word-of-mouth, purchasing, and both direct and indirect referral) for more than a decade: http://www.slideshare.net/lowen42/wragg-lowenstein-customer-advocacy. I can provide extensive study results, in multiple industries, if desired.

    For more complete information about the power of advocacy, in B2B and B2C verticals around the world, I’d invite you to review my 2011 book on the subject, The Customer Advocate and The Customer Saboteur. http://asq.org/quality-press/display-item/?item=H1410

  8. I too remember the NPS wars, but any CES wars should be a short battle. However, people seem to swallow any findings without thinking about their legitimacy — a sad commentary.

    Much of the concern over NPS as a loyalty metric related to the inability of any credible academic research to replicate the findings. With CES it would be impossible to replicate the research effort, because the research as described in The Effortless Experience is, well, effortless. It would be a wonderful book to use in a research design class; there is so much to question.

    Simply put, the research is not credible.

    1) The research model is weak. Measures of actual customer loyalty behaviors are not in the model. And as they say, CES is a measure of disloyalty, not loyalty.

    2) The research execution is problematic. As noted, the core effort question in the research is a contorted mess of confusion. (I’ve been training people on survey practices for years, and this is one of the most poorly phrased questions I’ve ever seen.) The authors themselves recognized this shortcoming, but still believe their findings are meaningful, which is quite a statement about their qualifications as researchers. The administration of the research survey is also questionable. Who got the invitation? How were they instructed to choose a service experience on which to report? What bias did this introduce to the findings? How do the researchers know the respondents recalled the events clearly?

    3) The researchers’ understanding and interpretation of statistics is astonishingly bad. For example, it appears that they lend interpretation to the sum of beta coefficients in a regression model. That is utter nonsense. They also misinterpret p-values, I think. I say “I think” because they never really tell us what analysis they did. They just try to “delight” us with astonishing findings to support a consultancy.
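
    (A quick toy illustration, with made-up numbers rather than anything from the book’s study, of why summing raw betas is meaningless: an arbitrary change of units rescales an individual beta and so changes the sum, while the fitted model is unchanged.)

        import numpy as np

        # Toy data (my own, not from the book): effort depends on wait
        # time and on the number of transfers between agents.
        rng = np.random.default_rng(0)
        wait_minutes = rng.uniform(1, 60, 200)
        transfers = rng.integers(0, 4, 200).astype(float)
        effort = 1 + 0.05 * wait_minutes + 0.8 * transfers + rng.normal(0, 0.3, 200)

        def betas(x1, x2, y):
            """Ordinary least squares; returns the two slope coefficients."""
            X = np.column_stack([np.ones_like(x1), x1, x2])
            return np.linalg.lstsq(X, y, rcond=None)[0][1:]

        # Same model, but wait time re-expressed in hours instead of
        # minutes: that beta grows 60-fold, so the sum changes arbitrarily.
        print(betas(wait_minutes, transfers, effort).sum())       # ~0.85
        print(betas(wait_minutes / 60, transfers, effort).sum())  # ~3.8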

    I have an extensive review of the book here: http://www.greatbrook.com/customer_effort_score_flaws.htm
    I may be wrong in some of my assertions, but that’s because of the poor explanation of the research provided by the authors.

    Measuring customer effort in an interaction is certainly useful, but any company that uses it to drive action based on the belief that CES is a measure of loyalty needs to apply not “customer think” but “critical think”: those critical thinking skills that those of us in academia try to impart to our students.
