What scale should I use for my customer satisfaction surveys?

When selecting the best scale to use when measuring customer satisfaction, the decision should be driven by several key points:

  • What is the methodology for the measurement project?
  • What is the intended use for the results?
  • What are the best analytics to accurately interpret the results?

What is the methodology for the customer experience measurement project?

Research participants must be able to understand and apply the scale easily. Because post-call IVR survey research demands clarity and speed for the respondent, the scale selected should be anchored only at the high and low ends rather than labeling each scale point with a category. Categorical labels must be repeated for every question to ensure respondents apply them correctly, which limits their effectiveness in the post-call IVR methodology, where the goal is to quickly collect responses on as many research variables as the respondent will tolerate.

What is the intended use for the results?

Customer satisfaction research in the contact center industry is typically an element in performance management programs. When individuals or teams are held accountable for customer evaluations, the focus turns to what is “fair” to the agents and the teams. Because agents must achieve a certain score, the scale used becomes a critical element, specifically in the amount of variability the response options allow. The fewer scale points used, the less variability is available in the responses.

Most research projects use 5-, 7-, or 9-point scales, depending on the level of precision required. For comparison across research measurement channels, the scale is normalized using a transformation table. Normalization has no impact on the relationship of the data points to one another, i.e., it does not affect the results. The procedure is outlined below to demonstrate the amount of variability in the response choices and how greater dispersion across the scale benefits the results (precision and “fairness”).

[Chart: survey scale conversions to a 100-point scale]
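To make the conversion concrete, here is a minimal sketch in Python. It assumes a simple linear min-max transformation (the original transformation table is not reproduced in the text), which matches the interval sizes cited below: 25-point steps on the 5-point scale and 12.5-point steps on the 9-point scale.

```python
# A minimal sketch of converting rating scales to a 100-point scale,
# assuming a linear min-max transformation (an assumption; the original
# transformation table is not shown in the text).

def to_100_point(rating: int, scale_max: int, scale_min: int = 1) -> float:
    """Map a rating on a scale_min..scale_max scale onto 0-100."""
    return (rating - scale_min) / (scale_max - scale_min) * 100

# Reproduce the conversion chart for 5-, 7-, and 9-point scales.
for scale_max in (5, 7, 9):
    # Round half up (Python's round() uses banker's rounding, which
    # would turn 12.5 into 12 rather than the chart's 13).
    converted = [int(to_100_point(r, scale_max) + 0.5)
                 for r in range(1, scale_max + 1)]
    step = 100 / (scale_max - 1)  # distance between adjacent scale points
    print(f"{scale_max}-point scale: {converted} (step = {step:.1f})")
```

Running this prints steps of 25.0, 16.7, and 12.5 for the 5-, 7-, and 9-point scales respectively, which is the dispersion argument made below.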

Clearly seen in the scale conversions above is that larger endpoints provide more response choices, which permits more variability in the ratings and directly affects the precision of the results. Research has shown that responses for feelings of satisfaction differ more when respondents are offered more points on the scale (Bendig, 1954; Garner, 1960). On a 5-point scale, little variation between scale points is available: on the 100-point conversion, each rating sits 25 points from the next. Surveys with fewer response options “force” respondents into a category, which causes information loss and renders the results less reliable than those with more variability (Van Bennekom, 2002). Research also highlights a cognitive difference between ratings of 3 and 4 that is smaller than that between 4 and 5 on the 5-point scale (Van Bennekom, 2002). Applying this logic to the agents’ “fairness” test, one can see the difference in securing a 4 versus a 5 as compared to a 3 versus a 4. Considering that only ratings of 5 count toward the desired service level (discussed below), the goal is actually more difficult to achieve with a 5-point scale.

The issue of variability must also be considered alongside reliability. Surveys with more response alternatives are more reliable than those with fewer (Scherpenzeel, 2002; Alwin and Krosnick, 1991). Surveys that use a 7-, 9-, or 10-point scale are the most reliable, based on several studies in the academic community (Andrews and Withey, 1980; Andrews, 1984; Alwin and Krosnick, 1991; Bass, Cascio and O’Connor, 1974; Rodgers, Andrews and Herzog, 1992).

With more scale points, consumers can make finer distinctions about their experience and give a clearer indication of it. On the 9-point scale, the difference between adjacent scale points is 12.5 (rounded to 13 in the chart), permitting respondents to provide a more granular rating of the service experience.

What are the best analytics to accurately interpret the results?

The scale also has implications for the analytics used and for how the results are reported. Returning to the concept of performance management and the goal of 87 on the 100-point converted scale, successful service is defined by research on satisfaction and its relationship to customer loyalty. The zone of affection is reached at 4.3 on the 5-point scale (or 87 on the converted scale).

[Chart: satisfaction-loyalty zones from “Putting the Service-Profit Chain to Work”]

By this definition, only a score of 5 on a 5-point scale is considered a successful service experience. On a 7-point scale, only a score of 7 indicates delight, but on the 9- and 10-point scales the top two boxes (8 and 9, or 9 and 10) can both be considered in the customer delight category (and therefore successfully meeting the goal). The scale used thus shapes the definition of success for contact center agents, as the sketch below illustrates.
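Below is a hedged sketch of how this scoring rule might be computed. The thresholds follow the text (top box on 5- and 7-point scales, top two boxes on 9- and 10-point scales); the ratings are hypothetical illustration data.

```python
# Delight thresholds as described in the text: top box for 5- and 7-point
# scales, top two boxes for 9- and 10-point scales.
DELIGHT_THRESHOLDS = {5: {5}, 7: {7}, 9: {8, 9}, 10: {9, 10}}

def delight_rate(ratings: list[int], scale_max: int) -> float:
    """Share of responses that count as a successful (delighted) experience."""
    delighted = DELIGHT_THRESHOLDS[scale_max]
    return sum(r in delighted for r in ratings) / len(ratings)

# Hypothetical agent results on a 9-point scale: 8s and 9s both count.
print(delight_rate([9, 8, 7, 9, 6, 8], scale_max=9))  # -> 0.667
```

The same six responses scored top-box-only would credit the agent for just the two 9s, which is the “fairness” difference agents perceive between scales.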

Additionally, a scale that allows more dispersion of the ratings makes trends in satisfaction (or dissatisfaction) easier to define. The goal of a post-call IVR survey program is to identify improvement areas, using analysis to quantify the items that have an impact on the overall satisfaction rating, and a scale anchored with high and low endpoints is conducive to this analysis (Van Bennekom, 2002). With fewer response choices and less variability/dispersion in the scores, the inherent clustering of the ratings reduces the ability to identify opportunities, and it can call the reliability of the reported results into question.
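As a minimal sketch of that kind of driver analysis, the snippet below correlates each survey item with the overall satisfaction rating to flag candidate improvement areas. The item names and scores are hypothetical, and a production program would use regression or similar techniques on the full sample rather than a simple correlation on a handful of responses.

```python
# Key-driver sketch: correlate each item rating with overall satisfaction.
# statistics.correlation requires Python 3.10+.
import statistics

overall = [9, 7, 8, 4, 6, 9]          # overall satisfaction, 9-point scale
items = {                              # hypothetical item ratings
    "wait_time":      [8, 5, 7, 3, 5, 9],
    "agent_courtesy": [9, 8, 8, 7, 7, 9],
}

# Items with the strongest correlation to overall satisfaction are the
# likeliest improvement levers; clustered (low-variability) scores would
# shrink these coefficients and hide the signal.
for name, scores in items.items():
    print(name, round(statistics.correlation(scores, overall), 2))
```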

Conclusion

For post-call IVR survey programs, a 9-point scale is the most effective. Customers can easily use the rating schema. Agents feel that performance can be fairly assessed when more rating choices are available, and they appreciate success being defined as a Top Two Box (8 and 9) score rather than a Top Box (5) score. The management team can leverage the analytic techniques appropriate for a scale with high/low anchors rather than the analysis appropriate for categorical scales.


References

Alwin, D.F. and Krosnick, J.A. (1991). The Reliability of Survey Attitude Measurement: The Influence of Question and Respondent Attributes. Sociological Methods and Research, 20: 139-181.

Andrews, F.M. (1984). Construct Validity and Error Components of Survey Measures: A Structural Modeling Approach. Public Opinion Quarterly, 48 (2): 409-442.

Andrews, F.M. and Withey, S.B. (1980). Social Indicators of Well-Being: Americans’ Perceptions of Life Quality. Annals of the American Academy of Political and Social Science, 451: 191-192.

Bass, B.M., Cascio, W.F., & O’Connor, E.J. (1974). Magnitude estimations of expressions of frequency and amount. Journal of Applied Psychology, 59, 313-320.

Bendig, A.W. (1954). Transmitted information and the length of rating scales. Journal of Experimental Psychology, 47, 303-308.

Garner, W.R. (1960). Rating Scales: Discriminability and information transmission. Psychological Review, 67, 343-352.

Rodgers W.L., Andrews, F.M. and Herzog, A.R. (1992). Quality of Survey Measures: A Structural Modeling Approach. Journal of Official Statistics, 8 (3): 251-275.

Scherpenzeel, A. (2002). Why use 11-point Scales? http://www.swisspanel.ch/file/doc/faq/11pointscales.pdf

Van Bennekom, F.C. (2002). Customer Surveying: A Guidebook for Service Managers. Bolton, MA, Customer Service Press.

Republished with author's permission from original post.

Jodie Monger
Jodie Monger, Ph.D. is the president of Customer Relationship Metrics (CRM) and a pioneer in business intelligence for the contact center industry. Dr. Jodie's work at CRM focuses on converting unstructured data into structured data for business action. Her research areas include customer experience, speech and operational analytics. Before founding CRM, she was the founding associate director of Purdue University's Center for Customer-Driven Quality.
