Customer Loyalty 2.0, Part 2: Advocacy, Purchasing and Defection Loyalty

The measurement of customer loyalty has been a hot topic lately. With the latest critiques of the Net Promoter Score coming in from both practitioners and academic researchers, there is much debate on how companies should measure customer loyalty. I wanted to formally write my thoughts on this topic to get feedback from this community of users. Much of what I will present here will be included in the third edition of my book, Measuring Customer Satisfaction. I welcome your thoughts and critiques. Due to its length, I have broken the discussion into several parts, which I will post weekly. Here is Part 2 of the discussion. Read Part 1 and Part 3.

Individual Loyalty Items vs. Composite Loyalty Scores
Customer surveys oftentimes include multiple loyalty questions, and there are different approaches to how these loyalty questions are used. One approach is to use a single loyalty question as the loyalty measure. For example, Reichheld (2006) recommends "likelihood to recommend" as the single best question to use as a measure of customer loyalty. Other researchers use "overall satisfaction" as their key measure of customer loyalty (Fornell, et al., 2006). Another approach is to use a composite score (typically averaging across items) based on several loyalty questions. The question now becomes, "When multiple loyalty items are used in a customer survey, should we use a composite score as our ultimate loyalty criterion or use each item as a unique measure of customer loyalty?"
To answer that question, we need to understand exactly what each loyalty question is measuring. Specifically, we want to know if each of the loyalty questions is an observable indicator of the construct of "customer loyalty." If we can provide evidence that the seemingly distinct loyalty questions are really measuring the same construct, it would be appropriate to create an overall index of customer loyalty. If, however, we can provide evidence that each loyalty question measures something different from the other loyalty questions, it would be appropriate to use each loyalty question as a unique measure of customer loyalty. A statistical technique used to understand the meaning of items is called factor analysis. Factor analysis is a data reduction technique that explains the statistical relationships among a given set of variables using fewer unobserved variables (factors). The output of a factor analysis tells us two things: 1) the number of factors that can explain the relationships among the set of observed variables and 2) which variables are related to which factors. Specifically, for our problem, a factor analysis will help us identify whether the relationships among the many loyalty items can be explained by fewer factors (constructs). It is important to note that an exploratory factor analysis involves some form of judgment when determining the number of factors as well as which variables are related to the smaller set of factors. A full treatment is beyond the scope of this article, but the interested reader can read more about this topic (Hayes, 1997). In the next section, I apply factor analysis to seven loyalty questions.
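To make the first output of a factor analysis concrete, here is a minimal sketch in Python using simulated data (not the actual survey responses). It applies the common Kaiser criterion, retaining factors whose correlation-matrix eigenvalues exceed 1, to seven simulated "items": four driven by one latent variable, two by another, and one standalone item, mirroring the loyalty-item structure discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# Two hypothetical latent constructs (think "advocacy" and "purchasing")
advocacy = rng.normal(size=n)
purchasing = rng.normal(size=n)

noise = lambda: rng.normal(scale=0.5, size=n)

# Seven observed "items": four load on the first factor, two on the
# second, and one is essentially unique (pure noise)
items = np.column_stack([
    advocacy + noise(), advocacy + noise(),
    advocacy + noise(), advocacy + noise(),
    purchasing + noise(), purchasing + noise(),
    rng.normal(size=n),
])

# Eigenvalues of the item correlation matrix, sorted largest first;
# the Kaiser criterion keeps factors with eigenvalues > 1
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigvals > 1).sum())
print(eigvals.round(2), n_factors)
```

In real survey work this step is followed by rotating the retained factors and inspecting the loadings, which is what the factor pattern matrices below report.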
Wireless Service Providers Study (2007)
The survey was fielded in June 2007, asking a sample of 994 general consumers in the United States ages 18 and older about their attitudes toward their current wireless cell phone provider. The survey data for this study was collected by GMI (Global Market Insite, Inc., www.gmi-mr.com), who provided online data collection and consumer panels.
This particular study on wireless service providers included seven (7) loyalty questions and additional quality-related questions. About 44% of the respondents were male. Sixty-one percent of the respondents were 40 years old or younger.
To examine the dimensionality of the loyalty items, a factor analysis was conducted on the seven loyalty items (see Part 1 for the items). The results of the factor analysis suggested that the seven items measure three (3) constructs. The figure below represents the factor pattern matrix. The elements in the factor pattern matrix are called factor loadings and essentially reflect the correlation between each item and the three factors. Higher factor loadings indicate a stronger relationship between the item and the underlying factor.

[Figure: factor pattern matrix for the seven Wireless Service Provider loyalty items]

The items loading on the first factor are the standard loyalty questions typically used in customer loyalty research (e.g., overall satisfaction, choose again for first time, recommend, continue purchasing). The items loading on the second factor are the specific purchasing behavior questions. The remaining loyalty question appears to form its own factor. The factor pattern matrix suggests that, instead of thinking of each item as representing seven distinct variables, the first four items measure one underlying construct, the next two items measure a different underlying construct, and the last item measures yet another construct.

As a general rule, the naming of the factors should encompass the entire set of items that load on the factor. The items that load on the first factor appear to have a strong emotional component to them, reflecting the extent to which customers advocate for the company. Consequently, this factor is labeled Advocacy Loyalty. The items that load on the second factor reflect specific purchasing behaviors. Consequently, this second factor is labeled Purchasing Loyalty. The item that represents the third factor reflects defection and is, therefore, labeled Defection Loyalty. The naming of factors in a factor analysis involves some level of creativity and subjectivity. Other researchers might label the factors with different (but probably similar) words; the underlying constructs being measured, however, remain the same.
PC Manufacturers Study (2007)
This survey was fielded in July 2007, asking a sample of 1058 general consumers in the United States ages 18 and older about their attitudes toward their personal computer manufacturer. All respondents were interviewed to ensure they met the correct profiling criteria and were rewarded with an incentive for filling out the survey. The survey data for this study was collected by GMI (Global Market Insite, Inc., www.gmi-mr.com), who provided online data collection and consumer panels.

This study was slightly different from the prior study. An additional purchasing loyalty question was created to tap another element of the purchasing loyalty construct (likelihood to increase the frequency of purchasing), and the defection question was removed. A factor analysis was conducted on all seven (7) items. The results of this factor analysis suggested that the seven questions measure two underlying dimensions. The figure below reflects the factor pattern matrix of this factor analysis (after rotation).

[Figure: factor pattern matrix for the seven PC Manufacturer loyalty items (after rotation)]

The results of the factor analysis of these items are similar to the results using the Wireless Provider sample. That is, it appears that the seven seemingly disparate loyalty items actually measure two underlying constructs, Advocacy Loyalty and Purchasing Loyalty.

Summary

Based on the results of the two separate factor analyses, we see that the apparently disparate loyalty questions actually reflect a smaller number of distinct loyalty constructs. Rather than thinking of each loyalty item as measuring some unique dimension of customer loyalty, the results indicate that there is much overlap in loyalty questions (at least in how customers respond to these questions). Customers tend to respond to loyalty questions in similar ways and do not make distinctions among general loyalty-related questions. Specifically, given the Advocacy Loyalty questions, if customers rate one loyalty question high, they will likely rate the other loyalty questions high. Conversely, if customers rate a question low, they will likely rate the other questions low. The same can be said for responses to the Purchasing Loyalty questions.
Satisfaction, Recommend and Purchase Same

Of particular interest are three specific loyalty items that load on Factor 1: 1) satisfaction, 2) recommend, and 3) purchase same. The Net Promoter Score (NPS) developers state that the "recommend" question is the best predictor of business growth (Reichheld, 2003, 2006). This conclusion has come under recent attack from other researchers, who have found that the "satisfaction" and "purchase same" questions are just as good as the "recommend" question in predicting business growth (Fornell, et al., 2006; Keiningham, et al., 2007; Morgan & Rego, 2006). The current results cast additional doubt on the conclusions of the NPS camp. The recommend question appears to measure the same underlying construct as these other two loyalty questions. Given that these loyalty questions measure the same thing, we should not expect the "recommend" question to be a better predictor of business metrics compared to the "satisfaction" and "purchase same" loyalty questions.
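For reference, the NPS discussed above is computed from the 0-10 "likelihood to recommend" ratings as the percentage of promoters (ratings of 9-10) minus the percentage of detractors (ratings of 0-6). A minimal sketch with hypothetical ratings:

```python
import numpy as np

# Hypothetical 0-10 "likelihood to recommend" ratings from ten customers
ratings = np.array([10, 9, 9, 8, 7, 6, 10, 3, 9, 5])

promoters = (ratings >= 9).sum()   # ratings of 9 or 10
detractors = (ratings <= 6).sum()  # ratings of 0 through 6
nps = 100.0 * (promoters - detractors) / ratings.size
print(nps)  # 5 promoters, 3 detractors out of 10 -> 20.0
```

Note that the satisfaction and repurchase items could be scored the same way; the factor-analytic results above suggest the resulting metrics would behave very similarly.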
Objective vs. Subjective Measures of Loyalty
It is important that we make the distinction between objective and subjective measures of loyalty. Objective measures are based on observable behavior (e.g., the actual number of recommendations a customer makes, the number of repeat purchases) and have minimal measurement error associated with them. Because these metrics are not subject to interpretation, these objective loyalty metrics have unambiguous meaning. The number of recommendations a customer makes is clearly distinct from the number of repeat purchases that customer makes. I’m not saying that these measures of customer loyalty are unrelated, but that they are measurably different constructs (much as height and weight are different but related constructs: taller people tend to weigh more than shorter people).
Measuring customer loyalty via questions on surveys is an entirely different process; customers’ ratings of each loyalty question (e.g., likelihood to recommend, satisfaction, likelihood to repurchase) become the metric of customer loyalty. Even though we are able to calculate separate loyalty scores from each loyalty question (e.g., NPS, Overall Satisfaction, Likelihood to Repurchase), the distinction among the loyalty questions may not be as clear as we think. Because of the way customers interpret survey questions and the inherent error associated with measuring psychological constructs, ratings need to be critically evaluated to ensure we understand the meaning behind them.
When using questionnaires to measure constructs, we need to be mindful of how customers interpret and respond to the questions. Questions might appear to contain very different content (e.g., recommend, satisfaction, purchase same), yet customers apparently do not make those same distinctions when rating these questions.
Conclusions
Customers’ ratings of a set of loyalty questions suggest that there are two very general loyalty constructs, Advocacy and Purchasing, and a third construct, Defection Loyalty. The present findings suggest that we can create composite loyalty scores. These composite scores and their definitions are:

  • Advocacy Loyalty: reflects the degree to which customers will advocate for the company
  • Purchasing Loyalty: reflects the degree to which customers will increase their purchasing behavior
  • Defection Loyalty: reflects the degree to which customers will switch to a different company
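As a sketch of how such composite scores might be computed, assuming simple averaging of the items that load on each factor (the item names and ratings below are hypothetical, not the actual survey variables):

```python
import numpy as np

# Hypothetical 0-10 ratings for one respondent on the seven loyalty items
ratings = {
    "overall_sat": 9, "choose_again": 8, "recommend": 9, "continue_purch": 8,
    "purchase_additional": 5, "purchase_more_often": 4,
    "switch_provider": 2,
}

# Average the items loading on each factor to form the composite scores
advocacy = np.mean([ratings[k] for k in
                    ("overall_sat", "choose_again", "recommend", "continue_purch")])
purchasing = np.mean([ratings[k] for k in
                      ("purchase_additional", "purchase_more_often")])
defection = float(ratings["switch_provider"])  # single-item factor
print(advocacy, purchasing, defection)  # 8.5 4.5 2.0
```

Averaging across the items that load on a factor tends to yield a more reliable score than any single item, since item-specific measurement error partially cancels out.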
In the next part of Customer Loyalty 2.0, I will explore the differences among the measures of customer loyalty. Toward this end, I will try to identify the different antecedents and consequences of customer loyalty. Rather than thinking of customer loyalty as a one-dimensional construct, this multi-dimensional approach to the measurement of customer loyalty can help companies better understand how to improve growth through both new and existing customers.
You can download a free copy of executive reports on the two studies (Wireless Service Providers and PC Manufacturers) at Business Over Broadway.
References
Fornell, C., Mithas, S., Morgeson, F. V., & Krishnan, M. S. (2006). Customer satisfaction and stock prices: High returns, low risk. Journal of Marketing, 70 (January), 1-14.
Hayes, B. E. (1997). Measuring Customer Satisfaction (2nd ed.). Quality Press: Milwaukee, WI.
Keiningham, T. L., Cooil, B., Andreassen, T. W., & Aksoy, L. (2007). A longitudinal examination of net promoter and firm revenue growth. Journal of Marketing, 71 (July), 39-51.
Morgan, N. A., & Rego, L. L. (2006). The value of different customer satisfaction and loyalty metrics in predicting business performance. Marketing Science, 25(5), 426-439.
Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81 (December), 46-54.
Reichheld, F. F. (2006). The Ultimate Question: Driving Good Profits and True Growth. Harvard Business School Press: Boston.

    4 COMMENTS

    1. Bob

      I congratulate you on the fine job you have done bringing a degree of experimental rigour to your recent blog posts. Not an easy thing to bring across to CustomerThink’s readers.

      The findings of your mobile telco study are interesting. For example, although the questions loading against the Advocacy factor show a higher likelihood of repurchasing than the Purchase factor, they also show a lower likelihood of purchasing more and a higher likelihood of switching to another provider.

      From a customer valuation perspective, the tentative conclusion would be that you need to base customer valuation on both repurchasing behaviour (leading directly to value creation) and recommendation behaviour (leading indirectly through others to value creation). Another conclusion is that high volume purchasers will not necessarily be those most likely to recommend to others.

      These conclusions are similar to the findings of Kumar et al’s recent study on mobile telecoms recommendation and its impact on customer valuation reported in the October edition of the Harvard Business Review.

      I look forward to your third installment.

      Graham Hill
      Independent CRM Consultant
      Interim CRM Manager

    2. When it comes to recommendations, I’ve always wondered about people who find a good product or service provider and don’t want to share that information with anyone. For instance, say I find a great place for unique gifts at great prices. I wouldn’t necessarily want to share that information because I wouldn’t want my friends to know my secret. On a corporate level, if you found a great supplier, you might not want to recommend it for similar reasons; you might not get the same deals if the supplier gets more customers.

      So it makes perfect sense that you could absolutely love a vendor and not want to recommend it at all. The question is how you would answer a survey asking if you would recommend.

      Have you seen this type of “psychological” thing happening?

      Gwynne Young, Managing Editor, CustomerThink

    3. Gwynne

      A recent Forrester survey ‘Demystifying the WOM Consumer’ suggests that only about 60% of US adults give advice to others, receive advice from others or both give and receive advice. About 40% neither give nor receive advice. This is similar across all generations. The vast majority of advice giving and receiving is done off-line, either face-to-face or talking on the phone.

      It will be interesting to see how the wider adoption of mobile telephony in the USA (not exactly at the leading edge of mobile telephone usage) and of mobile social networking changes this.

      Both Bob’s & Kumar’s research suggest that the customers with the highest usage (and thus the highest ‘traditional’ customer lifetime value) are not necessarily the customers with the highest degree of recommendation (and thus the highest customer recommendation value). This has significant implications for how we value customers. Customers need to be valued based on their own usage and on their recommendations that result in others’ usage too. Paradoxically, the best customers could be the ones who have relatively moderate usage themselves, but who recommend products & services widely to others.

      Graham Hill
      Independent CRM Consultant
      Interim CRM Manager

    4. Gwynne,

      FYI: I looked at my Wireless Service Providers survey data to see how many respondents say they would buy again from their vendor (ratings of 6 to 10 on a 0 to 10 likelihood scale) but would not recommend their vendor (ratings of 0 to 4 on a 0 to 10 likelihood scale). Turns out, only about 1% of the respondents do this sort of thing.

      Due to the small sample size (N = 12), I can’t make any reliable statements about this customer segment.
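      For readers who want to reproduce this kind of cross-tab, here is a sketch with simulated ratings (not the actual survey data); the simulation simply assumes the two items track each other closely, which is why the buy-high / recommend-low cell comes out empty here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 0-10 ratings for 994 respondents; "recommend" is assumed to
# track "buy again" within one scale point, mimicking the strong overlap
# between the two items reported in the factor analyses
buy_again = rng.integers(0, 11, size=994)
recommend = np.clip(buy_again + rng.integers(-1, 2, size=994), 0, 10)

# Respondents likely to buy again (6-10) but unlikely to recommend (0-4)
segment = (buy_again >= 6) & (recommend <= 4)
print(segment.sum())
```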

      Bob E. Hayes, Ph.D.
      Business Over Broadway
