Want (or Need) Higher Customer Satisfaction, Loyalty, and Recommendation Scores? The Real Question Is… Why?

Spoiler Alert: Higher scores can’t be consistently attained by measuring satisfaction, loyalty, and recommendation! Also, these scores don’t tie very well to actual customer behavior.

If the true goals of customer experience optimization and enterprise customer-centricity are to generate more positive stakeholder behavior – higher individual share of wallet, if we are speaking about customers – then popular performance metrics like satisfaction, loyalty, and recommendation are perhaps the least reliable and actionable means of getting there. This may be a controversial statement, but it’s also well-proven.

For my most recent reference point, I’m using The Wallet Allocation Rule, a new book (John Wiley, 2015) by my colleague Tim Keiningham, along with Lerzan Aksoy, Luke Williams, and Alexander Buoye. Very early in the book (pp. 16 and 17), they addressed the analytical value and ability of satisfaction and recommendation questions (via NPS) to interpret share of customer spend. They stated: “Satisfaction (and NPS) is so weakly correlated with the share of spending that customers allocate to the brands that they use, that it is useless as a metric to drive share of wallet.”

How can these authors, who are seasoned senior research professionals and skilled marketing academics, make such a bold claim? Well, to paraphrase Jaggers the lawyer in Dickens’ Great Expectations, they’ve taken everything on evidence, and not on its looks or what is generally accepted by others.

They arrived at this perspective by drawing on results from 250,000 consumer ratings, covering over 650 brands, in more than a dozen countries. As they noted: “…we found that the average variance explained is around 1 percent. In layman’s terms, this means that 99 percent of what is going on with consumer share of category spending is completely unexplained by knowing their satisfaction, i.e. CSAT, level or NPS. Worse still, the effect of the change in satisfaction on changes in share of wallet is even weaker. Our research finds that changes in satisfaction (and NPS) explains a miniscule .4 percent of the change in share of wallet over time.”
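To put that “1 percent of variance explained” figure in perspective: an R² of roughly 0.01 corresponds to a correlation of only about 0.1. Here is a minimal sketch, using simulated data rather than the authors’ dataset, of what that level of (non-)relationship looks like:

```python
# Illustrative only: simulated data, not the authors' dataset.
# Shows what "about 1 percent of variance explained" looks like:
# an R^2 of ~0.01 means a correlation of only ~0.1.
import numpy as np

rng = np.random.default_rng(42)
n = 250_000  # roughly the order of magnitude of the ratings cited above

# Build a hypothetical satisfaction score and share-of-wallet value whose
# true correlation is about 0.1 (so R^2 is about 0.01).
satisfaction = rng.normal(size=n)
noise = rng.normal(size=n)
share_of_wallet = 0.1 * satisfaction + np.sqrt(1 - 0.1**2) * noise

r = np.corrcoef(satisfaction, share_of_wallet)[0, 1]
print(f"correlation r = {r:.3f}")              # about 0.10
print(f"variance explained R^2 = {r**2:.1%}")  # about 1%
```

At that strength of relationship, knowing a customer’s satisfaction score tells you almost nothing useful about how much of their category spending your brand will capture.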

As Keiningham and his co-authors (rightly) concluded: “…this is disastrous. When the relationship is this weak, there is no reliable way to predict financial outcomes from improving satisfaction and NPS.” This information is in Chapter 1 of the book, aptly titled “It’s ‘Oh My God!’ Bad.”

With respect to “loyalty”, as measured through indices (customer loyalty index and/or secure customer index), this summary metric suffers from some of the same application challenges which plague customer satisfaction and recommendation. This index usually contains three measures: customer satisfaction, likelihood to recommend, and future purchase intent. The presence of satisfaction and recommendation question components immediately compromises the value and actionability of the loyalty index.
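As a purely hypothetical illustration (index formulas vary by vendor, and none is specified here), consider a composite loyalty index built as a simple average of those three question scores. The satisfaction and recommendation components flow straight into the summary number, and the single score can mask very different underlying response patterns:

```python
# Hypothetical composite loyalty index: an unweighted average of three
# survey questions on a 0-10 scale. Real vendor indices differ; this only
# illustrates how satisfaction and recommendation scores propagate
# directly into the summary metric.
def loyalty_index(satisfaction: float,
                  likelihood_to_recommend: float,
                  repurchase_intent: float) -> float:
    return (satisfaction + likelihood_to_recommend + repurchase_intent) / 3.0

# A customer with high satisfaction and recommendation scores but weak
# purchase intent still looks "loyal" in the summary number.
print(loyalty_index(9, 9, 3))  # 7.0
```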

So, if not by measuring customer satisfaction, loyalty, or recommendation, how is it possible to generate higher scores for all three metrics? From over a decade of B2B and B2C customer research, conducted in a broad array of verticals around the world, the most consistent methods for understanding share of wallet allocation behavior – i.e. not just correlation, but causation – are customer advocacy and brand bonding.

Advocacy represents the degree of kinship with, favorability toward, and trust of brands; but, principally, advocacy identifies the downstream customer communication and marketplace behavioral effects of word-of-mouth and personal brand experiences. It is straightforward and contemporary, and does not have the application and analytical limitations associated with satisfaction, recommendation, and loyalty metrics:

– Is There A Single Most Actionable, Contemporary, and Real-World Metric for Managing, Optimizing, and Leveraging Customer Experience (and Behavior)?
– Emerging Chinks and Dents in the Universal Application and Institutionalization Armor of Popular Performance Metrics
– http://www.slideshare.net/lowen42/wragg-lowenstein-customer-advocacy

Whether an organization is seeking to understand the impact of image and reputation on behavior (http://customerthink.com/corporate_reputation_and_advocacy_linkage/) or the impact of the granular, i.e. functional and emotional, elements of transactions and experiences on perceived value and future behavior (http://beyondphilosophy.com/trust-really-emotion/), advocacy measurement will provide actionable guidance. It offers both polar and directional insights, making for greater, more real-world reliability. True brand advocates have used products or services more recently, more frequently, and with a higher share of spend than customers who merely report high satisfaction and high downstream likelihood to recommend.

When the original (2004) and refined frameworks for customer advocacy and brand bonding measurement were designed, satisfaction and recommendation question elements were included. However, it was found that the framework, which produces a stratified behavior classification for each respondent (ranging from advocate to saboteur), actually performed more effectively, and produced more actionable results, without these traditional elements.

Moreover, our research has determined that, using advocacy measurement as a base, higher satisfaction and recommendation scores can be achieved than by measuring satisfaction and recommendation alone. The reason for this is simple: for virtually any industry, advocacy provides clear, granular, validated insights into the real dynamics of customer brand decision-making.

Thus, if satisfaction doesn’t always satisfy, recommendation can’t assure growth, and loyalty indices don’t give adequate guidance, the name of the game is understanding and leveraging customer behavior that monetizes. This raises the question – is there one ‘best’ question or one best approach? As some companies, those trying to make sense of a controversial and highly promoted one-question research approach, are beginning to discover in their results, building a metric from any single question – attractive as that might seem – doesn’t neatly translate into real marketplace behavior.

Here’s a suggestion for selecting the most appropriate metrics: use them all, in part because, even when they are all included in a questionnaire, they add very little to interview length or customer research cost. More to the point, these questions have both individual and collective actionability value.

Satisfaction, and especially dissatisfaction, will help identify where a company’s touchpoint and transactional processes are meeting expectations or putting customers at risk (and potential loss), and there’s significant, proven value in knowing that and being able to take targeted, appropriate action. Commitment and loyalty measurement will help determine which specific emotional and rational components of value drive customer behavior.

Customer advocacy scoring brings in the influence of informal, peer-to-peer communication; and advocacy evaluation also helps provide deeper insights into what lies behind satisfaction, loyalty, and recommendation results. Taken together, these measures offer companies a detailed, highly actionable, and user-friendly paint-by-numbers picture of why new, active, and at-risk customers do what they do today, and what they are likely to do tomorrow.

Michael Lowenstein, PhD CMC
Michael Lowenstein, PhD CMC, specializes in customer and employee experience research/strategy consulting, and brand, customer, and employee commitment and advocacy behavior research, consulting, and training. He has authored seven stakeholder-centric strategy books and 400+ articles, white papers and blogs. In 2018, he was named to CustomerThink's Hall of Fame.

4 COMMENTS

  1. Michael,

    Thanks so much for referencing my co-authors’ and my research and our book, The Wallet Allocation Rule: Winning the Battle for Share.

    For those readers interested in the study cited here, it is:
    Timothy L. Keiningham, Bruce Cooil, Edward C. Malthouse, Alexander Buoye, Lerzan Aksoy, Arne De Keyser, and Bart Larivière (2015), “Perceptions Are Relative: An Examination of the Relationship between Relative Satisfaction Metrics and Share of Wallet,” Journal of Service Management, vol. 26, no. 1, pp. 2-43. Available at:
    http://www.emeraldinsight.com/doi/pdfplus/10.1108/JOSM-12-2013-0345

    Also, if readers would like a copy of chapter 1 of The Wallet Allocation Rule that Michael references, it is available for free at: https://www.walletrule.com/

  2. If I can sound a note of caution here on the “use all metrics” paragraph: when it comes to data collection, there are two issues where I think customer service folks get it wrong.

    First, more data is not better data. The problem with collecting too much data without a clear purpose is that it’s impossible to make sense of it. The problem with data is not getting it; it’s making sense of it.

    Second, the more data (i.e., metrics) you collect, the more likely you are to draw conclusions that are actually the result of chance. As an extreme example, if one collects data on a month-to-month basis, then some percentage of that data, and of its changes from month to month, will be noise: chance occurrences that surface when you end up analysing for statistical significance.
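    To make that concrete, here is a minimal, purely hypothetical simulation (random noise, not real survey data): compare 20 unrelated metrics across two months at p < 0.05 and, on average, about one of them will look “significantly” changed even though nothing happened.

    ```python
    # Hypothetical illustration of the multiple-comparisons point above:
    # 20 metrics of pure noise, each compared across two "months".
    # At p < 0.05, about 1 in 20 will look significant by chance alone.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_metrics, n_customers = 20, 200

    false_positives = 0
    for _ in range(n_metrics):
        month_1 = rng.normal(size=n_customers)  # no real change between months
        month_2 = rng.normal(size=n_customers)
        _, p_value = stats.ttest_ind(month_1, month_2)
        false_positives += p_value < 0.05

    print(f"{false_positives} of {n_metrics} noise-only metrics look 'significant'")
    ```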

  3. Robert –

    While I’ve got strong perspectives, based on over a decade of working with clients around the world, regarding the most granular and actionable customer experience metric (http://customerthink.com/is-there-a-single-most-actionable-contemporary-and-real-world-metric-for-managing-optimizing-and-leveraging-customer-experience-and-behavior/), I also recognize that companies have metrics that have been socialized within their individual cultures.

    Agree that more data isn’t necessarily better data. If data are truly actionable, and organizations are comfortable with the metrics coming out of the data, i.e. the metrics have value, then replacing one metric with another doesn’t serve an organization particularly well. I’m not suggesting that metrics should be switched, but rather that they should be summarized and interpreted for application.
