High NPS. Low Revenue. Is it sampling bias or could it be something else?

In a recent interview, Richard Owen, CEO of Satmetrix, said that companies that don’t see benefits from improving their Net Promoter Score® are probably not measuring Net Promoter Score correctly. He says poor sampling is to blame:

“They may have very poor response rates, or have selective data. So they could be looking at the data and thinking they’re improving when actually what they’re really suffering from is poor quality of data.”

Richard Owen is right: lots of surveys suffer from a multitude of sampling biases. But that doesn’t explain why companies can have high Net Promoter Scores and sinking sales. In fact, as a prominent study by Timothy Keiningham (of Ipsos Loyalty) explains, companies may not see benefits from tracking Net Promoter Score because NPS often fails to correlate with revenue.

INTERACTION THINKING APPLIED

The team at Interaction Metrics hypothesizes that there are many reasons why Net Promoter Score does not necessarily correlate with profitability. But above all, it’s a tired, generic question that may not even be appropriate for some companies to ask. “Me-too” questions rarely produce information that drives the insights needed to actually improve a company.

Because so many organizations—from Trader Joe’s, to Sony, to B2B firms—use the NPS question, customers are bombarded by it. When you drone on with questions that don’t engage customers, they may feel like you don’t really care about their survey answers—and may respond accordingly.

It’s like when a stranger asks you “how are you doing?” No one takes the time to give a thoughtful answer—they just say, “fine.” It’s the same for your customers. When you ask, “How likely is it that you would recommend our company/product/service to a friend or colleague?” they may not take the question seriously. Instead, they often just say, “Sure, I’d refer you,” regardless of how they actually feel. I know I’ve been guilty of this when hurrying to check out at Enterprise Rent-A-Car.

Companies may think things are great, but if their customers are giving off-hand answers, then those customers’ voices aren’t really being heard.

There is a way to avoid this problem and preserve the simplicity of NPS: ask questions that are relevant to your customers and their specific experiences. When your customers feel listened to, they’re inclined to tell you what they really care about—and you’ll be able to pinpoint opportunities to improve your customer experience, your business, and your bottom line.


6 COMMENTS

  1. Another problem could be gaming behavior. This came out in a recent interview with a company that invested quite a lot in VoC programs, including implementing a measurement/reward system for front-line people.

    The scores went up, but revenue was flat. They attributed the problem to front-line staff “encouraging” customers to give higher scores.

    NPS has some technical flaws, but most of the problems mentioned in the Owen interview and your post could also apply to CSAT or any other customer loyalty metric.

  2. There is something even more basic wrong here. Net Promoter has NEVER been peer reviewed in any meaningful academic study. How can a metric that is little more than a marketing gimmick be taken seriously?

    Why should someone who gives a “6” cancel out someone who gives a “9” on a “Likelihood to Recommend” question? Anyone who has taken a Stats 101 class can see that the methodology is more random than it is statistically sound.

    ACSI is the “Anti-Net Promoter,” and is much more widely accepted by those who understand statistics. It is backed by the University of Michigan and has been directly tied to stock performance: in 11 out of 13 years, the ACSI hedge fund outperformed the S&P 500, while less than 15% of actively managed funds outperform the S&P 500 in an average year.

    The ACSI works because it uses the WHOLE SCALE (gaining precision) and asks three questions about SATISFACTION, not Recommend (which isn’t even CSAT).

    Consider:
    1. I could be happy with my paper towel buying decision, but I am not going to recommend it to others at lunch tomorrow. It does not matter how good my experience is with the paper towel. My happiness will likely NOT be shared with others.
    2. I could give a 6 on “likelihood to recommend” on my lunch experience. However, I am not going to actively tell others about it. I am going to remain silent. I am NOT a true detractor, as Net Promoter presumes.
    3. A respondent in Germany considers a 5 on a 10 point scale to be average. In the US it is a 7.5. Net Promoter does not account for this at all. Net Promoter can be wildly skewed depending on country. On the other hand, taking an average ACSI score can be normalized easily.
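    The bucketing problem this commenter describes is easy to demonstrate. Here is a small Python sketch (not from the original post; the sample data is illustrative) showing how NPS collapses the 0–10 scale into three buckets, so two samples with the identical mean can yield very different scores:

    ```python
    def nps(scores):
        """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return round(100 * (promoters - detractors) / len(scores))

    def mean(scores):
        """Whole-scale average, closer in spirit to ACSI-style scoring."""
        return sum(scores) / len(scores)

    sample_a = [9, 6]           # mean 7.5 -> NPS 0  (one promoter, one detractor)
    sample_b = [10, 10, 10, 0]  # mean 7.5 -> NPS 50 (three promoters, one detractor)
    ```

    Both samples average 7.5, yet their Net Promoter Scores differ by 50 points, which is the precision loss the commenter attributes to discarding the whole scale.
    
    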

  3. Hi Bob,
    We agree. Gaming can be a huge issue, and most C-SAT tests come with technical challenges. But “would you recommend” does seem unusually tired. And when customers get so many surveys, it’s critical to use specific, engaging questions that show customers you care and are listening. An example of a specific question for a B2B company could be “What would make our company an ideal vendor?” A teen retailer could ask whether trying on clothes was fun or more of a drag. Of course, with specific questions, executives may wonder whether they can benchmark against other companies. Our answer is yes, but there is a process involved and we welcome the conversation!

  4. Bobblehead, Great points! There are many challenges with the NPS question, including the reliance on “recommending,” as you pointed out. Here, we merely meant to look at what seems its most obvious current problem—its over-use.
    But there are many variants on Net Promoter Score that are useful and that work. Let’s also remember that Satmetrix and Fred Reichheld have done a great job of getting executives to think about customer experience. They’ve also challenged our community to present clear, simple customer experience metrics that everyone understands — and certainly at our company, we’ve taken that to heart. So let’s not throw the baby out with the bathwater!

  5. When I researched customer loyalty metrics a few years ago (when NPS was being debated intensely) I found that a composite of three questions was recommended as a more generally valid metric:
    * likelihood to recommend
    * likelihood to buy again
    * overall satisfaction

    While I agree that more creative questions like you proposed (“What would make our company an ideal vendor?”) are more likely to get engagement, they won’t help in benchmarking.

    It seems to me a combination of “boring” but standard questions plus others that engage the customer would give the best of both worlds.

    More info: Find the “Ultimate” Loyalty Metric to Grow Your Business

  6. Poor survey sampling and respondent gaming, or conceptual, analytical, and actionability challenges associated with the metric? Contrary to the title of your blog, I’ve been working with multiple clients achieving impressive year-over-year sales gains, at the same time receiving negative NPS scores.

    After over ten years of NPS, these vitally important, and core, issues are still being actively debated:
    http://customerthink.com/is-there-a-single-most-actionable-contemporary-and-real-world-metric-for-managing-optimizing-and-leveraging-customer-experience-and-behavior/
    and http://customerthink.com/emerging_chinks_and_dents_in_the_universal_application_and_institutionalization_armor_of_popula/
