Sorry NPS, I’m not buying (it)…



[Image: question marks, by mikecogh on Flickr]

I know a lot has been said about Net Promoter Score (NPS), and I’m not in this world to judge anyone who’s working with it, or developing it into a Net Promoter System. I do like to share my experiences with it though, hoping to attract other people who’d like to share theirs, so we can all get a better understanding of what drives Customer loyalty and how to manage for it. Unfortunately the Net Promoter Score is not working for me right now. And here’s why:

In my role at Delta Lloyd Groep I have the pleasure of working with Zanna van der Aa, who serves in my team as Program Manager of the Customer Experience Program we launched this year. Zanna recently received her PhD for her research on the role of the customer contact center in relationship marketing. In short: she pretty much knows her stuff, and she’s as curious as I am to really understand what drives Customer Loyalty 😉 (she blogged about this in Dutch here).

How we measure
As part of the program we are measuring Customer Satisfaction, Net Promoter Score and Customer Effort Score. The first two we measure both at the level of interactions (e.g. after a service call or a damage claim) and in our annual Customer Satisfaction survey, which covers a large proportion of our Customer base (including Customers who had no interaction or transaction with us over the past year). Customer Effort Score we measure only at the level of interactions. Apart from these standard questions, we ask further questions in different forms, including an open answer box to capture qualitative feedback as well. Response rates are quite high on the transactional surveys, and very satisfactory on the annual one.

What we see
Customer Satisfaction ratings are quite stable and have been increasing steadily over the past years. The scores are also very similar across both methods. We seem to have a good understanding of which needle we need to move to improve Customer Satisfaction. How different this is with Net Promoter Score: the score itself is all over the place, going up and down from quarter to quarter without any apparent reason (and we have been looking for one).
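This volatility is, at least in part, baked into the arithmetic. NPS buckets a 0–10 scale into promoters (9–10), passives (7–8) and detractors (0–6) and reports the difference in percentages, so a one-point shift in many individual answers can move the score dramatically while a mean-based satisfaction score barely budges. A minimal sketch (with made-up response distributions, not our actual data) illustrates this:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def mean(scores):
    """Mean rating, as a satisfaction-style metric for comparison."""
    return sum(scores) / len(scores)

# Two hypothetical quarters of 100 responses each.
# Half the respondents shift by a single scale point (7->6, 9->10),
# crossing the bucket boundaries in both directions.
q1 = [7] * 50 + [9] * 50    # mean 8.0, NPS = 50 - 0  = 50
q2 = [6] * 50 + [10] * 50   # mean 8.0, NPS = 50 - 50 = 0

print(mean(q1), nps(q1))  # 8.0 50.0
print(mean(q2), nps(q2))  # 8.0 0.0
```

Same average satisfaction, a 50-point swing in NPS — the bucket thresholds amplify small shifts in how respondents round their answers.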

I’m ‘bothered’
A recent event really makes me doubt the Net Promoter Score question/methodology: a survey held by the same research firm, on exactly the same sample, as part of an industry benchmark as little as two months after our own measurement, produced a score 20 points different from ours — while the Customer Satisfaction scores in both surveys were exactly the same. On top of this, there are even bigger differences between the score in our own measurements and other so-called ‘industry benchmarks’. Since for the latter we don’t know exactly how the questions are asked and in what order, we could not really be bothered. But with the current ‘evidence’, that’s exactly what we are…

Oh, and the jury is still out on Customer Effort Score (CES), but so far we don’t see the stronger relationship with Customer loyalty, as promised…

So, what do you think? Back to Customer Satisfaction as the primary metric?

[Disclaimer: This blog in general and this post in particular reflect my personal opinion only]

Republished with author's permission from original post.


  1. …customer advocacy, such as the following will offer:

    Over the past several years, we have effectively incorporated advocacy research components in our customer transaction and relationship studies in many industries, and in multiple geographic areas around the world. Our framework is extremely flexible and highly actionable, and can be applied to virtually any area of client marketing and communication planning or decision-making.

  2. Wim – we too have observed odd swings in the NPS.

    Using the Overall Satisfaction (OSAT) measure (on a scale from zero to 10), we observed that 80% of those who gave a rating of 8, 9 or 10 renewed their contract — while 60% of those who assigned a rating of zero also renewed their contract.

    So, while there is clearly a strong relationship between OSAT and customer retention, it is not a perfect solution.

    The Likelihood of Renewal question – same scale – produced very similar results.

    Consequently, we developed our own metric, The Dunvegan Affinity Rating (DAR) to measure the strength of the bond between a company and its customers.

    The theory and foundation for this new metric will be presented to the Consumer Satisfaction/Dissatisfaction and Complaining Behavior Conference on the morning of June 21, 2012 at the University of La Verne (La Verne, California).

