Customer Experience Benchmarking: beware how you use it!



If you have read my articles in the past, you will know that I am an ardent fan of investigating the definitions of certain words and phrases. Yesterday, I had the pleasure of participating as a judge at the International Business Excellence Awards in Dubai. One of the awards entrants used the word ‘benchmarking’ multiple times in his presentation. In fact, ‘benchmarking’ is a word that I frequently hear as I go about my business around the world. So what exactly is the definition of ‘benchmarking’?

A measurement of the quality of an organisation’s policies, products, programmes, strategies, etc., and their comparison with standard measurements, or similar measurements of its peers

According to the business dictionary, the objectives of benchmarking are to determine what and where improvements are called for, to analyse how other organizations achieve their high performance levels, and to use this information to improve performance. This all makes sense – well to me anyway!

The principle of benchmarking is undeniably sound. Understanding how your organisation is performing in relation to others – both in your own sector and in other sectors – is a very effective way of determining how well your business is evolving. When it comes to Customer Experience, benchmarking is regularly seen by many leaders as an important ‘yardstick’ by which to determine the success of their business in achieving its customer focused objectives – if indeed it has customer focused objectives in the first place.

Only last week, I was asked by a company if I had access to NPS (Net Promoter Score) benchmarking scores. My response was as follows:

Urrgghhhh!! No – but you might find some stuff on here – – Can I ask why you want them? I am NOT a fan of NPS benchmarking!!

So before I go any further, I want to reiterate what I said in my response to the question I was asked – I am NOT a fan of NPS benchmarking. In fact, I am NOT a fan of benchmarking Customer Experience measurement in general. Funnily enough, the business leader who asked the question (and a very competent business leader at that) knew what my likely response was going to be! The reason the question was asked is that their boss wanted the information – we are unsure as to why (although we can guess) – a very common scenario.

Over the years, I have been very conscious of the desire of business leaders to know if their organisations are ‘better than the competition’. More often than not, Customer Experience measurement (predominantly NPS) has been used as the justification for these leaders concluding that their companies are performing ‘well’ with regard to Customer Experience. At times it has almost felt as though there has been a secret, members-only, directors’ club. Without wanting to sound disrespectful (and possibly failing), directors, or members of the ‘C-Suite’, have frequented the ‘club’, schmoozing and mingling with each other, whilst proudly (or smugly) saying things like, ‘My Net Promoter Score is 45, what’s yours?’

To coin a phrase, ‘size isn’t everything’! As I have already stated, comparing your performance with others is not a bad idea in principle. However (there is always one of those), unless you have absolute clarity and certainty about exactly what you are comparing against, it is impossible to draw a robust conclusion from a benchmarking exercise. Herein lies the problem – whilst many organisations and their leaders can state a number as a ‘fact based’ reflection of their perceived customer focus, few of them can be certain what the number they are using actually represents.

I have been quoted many times as saying that whilst many businesses measure the Customer Experience in some way, most do so rather badly. As a result, whether it be a Customer Satisfaction, Customer Effort or Net Promoter Score, there is no guarantee that the number produced by one organisation is calculated in the same way as another’s – or indeed is actually a reflection of the same thing. Some companies capture and measure customer perception at specific ‘touch points’ in their customer journey – telephone interactions, for example – whilst others capture and measure perception across the entire end-to-end customer journey – and some do both. When a business reports its measure of customer perception, there is no way of determining what the number is representative of.

As a result, if you compare one Net Promoter Score (published by a business) with another, you are very likely NOT to be comparing ‘apples with apples’. This is why I urge anyone who intends to use benchmarking as a way of evaluating performance and progress to do so with caution. Just because your published score is 45 and your competitor’s is 35, it does not guarantee that customers perceive you to be better than them. It is only when/if the way the score has been calculated, and what it is representative of, is IDENTICAL between the two organisations that you can benchmark with confidence.

I know of businesses that are using different scales from the one defined in the original Net Promoter Score methodology. I know of companies that have manipulated the calculation so their published scores can never be negative! You can NOT assume that the scores you see are genuinely representative of the truth.
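To make the comparability problem concrete, here is a minimal sketch in Python of the standard NPS calculation, alongside a hypothetical ‘promoters only’ variant of the kind described above that can never go negative. The function names and sample responses are illustrative assumptions, not any specific company’s method; only the standard formula (promoters score 9–10, detractors 0–6, NPS = % promoters minus % detractors) comes from the published Net Promoter methodology.

```python
def nps(scores):
    """Standard Net Promoter Score on the 0-10 scale:
    % promoters (9-10) minus % detractors (0-6), range -100 to +100."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)


def promoters_only_score(scores):
    """A hypothetical manipulated variant: detractors are simply
    ignored, so the published number can never be negative."""
    if not scores:
        raise ValueError("no survey responses")
    return 100 * sum(1 for s in scores if s >= 9) / len(scores)


responses = [10, 9, 8, 7, 6, 3]  # hypothetical survey responses

print(nps(responses))                   # 2 promoters - 2 detractors of 6 -> 0.0
print(promoters_only_score(responses))  # 2 promoters of 6 -> ~33.3
```

The same six responses yield a score of 0 under the standard method and roughly 33 under the manipulated one – which is exactly why a published number tells you nothing unless you know how it was calculated.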

Benchmarking does serve a purpose. I am not a fan of it because, most of the time, it is not an accurate, like-for-like comparison. If your senior leaders demand that benchmarking be used, then you must be aware of exactly what it is you are using. Failure to do so will likely result in you drawing either the wrong, or an inaccurate, conclusion from your benchmarking exercise.

Republished with author's permission from original post.

Ian Golding, CCXP
A highly influential freelance CX consultant, Ian advises leading companies on CX strategy, measurement, improvement and employee advocacy techniques and solutions. Ian has worked globally across multiple industries including retail, financial services, logistics, manufacturing, telecoms and pharmaceuticals deploying CX tools and methodologies. An internationally renowned speaker and blogger on the subject of CX, Ian was also the first to become a CCXP (Certified Customer Experience Professional) Authorised Resource & Training Provider.


  1. The author is right that benchmarking needs to be performed on data that is defined and collected in consistent ways. If everybody defines and gathers their own data, this problem will arise.

    However, if a single organization collects the data, e.g., a SaaS company that wants to benchmark its own customers’ behaviors and outcomes, then this problem just doesn’t arise.

  2. While the author’s perspective on the risks of comparing Net Promoter Scores across companies seems reasonable, a much more nuanced approach to this issue is available.

    A simple summary:
Because of methodology and sampling differences, one company’s customer-derived NPS (from customer relationship surveys, or touchpoint/journey/customer experience feedback) can almost never be reliably compared to another’s. The only reliable way to compare Net Promoter Scores is on the basis of double-blind market research — what Bain calls “Competitive Benchmark Net Promoter Scores.” These are controlled samples, with a methodology designed to yield true “apples-to-apples” comparisons. And they are the basis on which analyses of relative revenue growth rates versus NPS have been done, for example. Even these are hard to compare in the absolute across industries or geographic markets.

    For a more detailed and nuanced view on the topic, you might want to read up:

