The research proves it



A 2002 Harvard Business Review article reported that, a year after being surveyed about their satisfaction with a service interaction at a financial institution, customers were more than three times as likely to have opened a new account, less than half as likely to defect, and more profitable than customers who had not been surveyed (Dholakia & Morwitz, 2002). The only difference between the two groups was that one was surveyed and one was not; neither group received any direct marketing from the company during the year. The impact of surveying customers was profound because customers want to be acknowledged by the company, and because the company stays top-of-mind when product choices are made: simply asking for a consumer's opinion gives people an occasion to think about your products and services that otherwise might not occur (Dholakia & Morwitz, 2002).

Since surveying customers is so important, how do you ensure that your customer satisfaction program is a profitable business process in the contact center? To increase the value of the initiative, be certain the research is done the right way, not done merely for the sake of surveying customers. Customer feedback results will be used by colleagues regardless of the number of caveats listed on your graphs, so we must be diligent in providing valid and credible customer intelligence.

Surveying is a Science

Little effort is needed to convince your management teams that surveying is necessary, yet this is often where the effort is expended. Instead, the bulk of the effort should go into the design of the survey program itself. Many contact center teams analyze what others are doing – either inside or outside the organization – and then design and implement a measurement program they believe to be best-in-class. But don't forget that surveying is a science not to be taken lightly, so follow the guidelines of that science when developing, executing and reporting the results of your program. Also be aware that companies selling research services do not always apply the science, instead squeezing every project into one mold. Many speak from "their" findings on these project molds, but those findings are often neither replicated nor validated by research scientists, nor are they consistent with the body of academic research on customer measurement.


A more effective approach, used by Customer Relationship Metrics, is to consider every project and every measurement need individually. Not every research need fits a standard mold. We do not build programs based on what we "think" will work, but on what we know has worked, consistent with the body of academic research; neither should you. It is important to identify where the supporting research comes from, the quality of the research project, and the potential bias of the findings before you "hang your hat" on a study. Knowing which pitfalls to avoid lets you hold that list up against your own customer satisfaction research program and ask yourself: where can improvements be made to align with validated science?

When you accept an existing program, or borrow a research plan that is right for another company, rather than designing an effective program based on the fundamentals of science and your own organizational culture, consider the implications. You proceed with an analysis of data that you believe provides an understanding of customers' satisfaction with your service. The results are then disseminated and used by management to change service delivery in an effort to improve customer satisfaction. But rather than providing actionable intelligence to your organization, you are exposing it to a significant loss of customers and revenue. The good news is that most research programs can be repaired or re-applied to the proper place within the organization.

Real-time is the Most Real

As customer service experts, we know that surveys of our callers provide the critical evaluation needed to engineer the best possible service environment. A common pitfall is to conduct follow-up phone calls to gather feedback about the contact center. That research methodology certainly has its place in your company's research portfolio, but it is less effective than point-of-service, real-time customer evaluations for your contact center. If you do proceed with follow-up phone surveys, a fix is available, and it must be used to correct the bias (error) inherent in this type of program.

According to scientific research, analysis of delayed evaluations carries several biases (errors) and is therefore not reliable unless you include the appropriate correction factors. In the interest of brevity, the error introduced by the telephone interviewers is not fully discussed here; interviewer bias is corrected for in the analysis in the same manner as the issues that follow. That different individuals conduct surveys differently is not difficult to accept. The other errors that occur require a more complete discussion.

Fundamentally, you must "correct" your data by quantitatively accounting for the evaluation gap: create an additional weighted variable in your analysis. The "gap" variable accounts for the delay between the service interaction and the service evaluation so you can accurately interpret the results. The greater the gap, the greater the bias, and the greater the adjustment needed to generate actionable intelligence your organization can use with confidence. Essentially, the gap is controlled for within your results rather than treating all data as equal whether the evaluations were gathered 24, 48 or 72 hours later (use the exact number of hours as the data point for each survey).
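To make the idea concrete, here is a minimal sketch of a gap adjustment. The scores and delays are hypothetical, and the simple least-squares slope stands in for whatever regression model your analysts actually use; the point is that the exact delay in hours enters the analysis as its own variable:

```python
# Hypothetical data: each survey records the satisfaction score (1-10 scale)
# and the exact number of hours between the interaction and the evaluation.
surveys = [
    {"score": 9.1, "gap_hours": 72},
    {"score": 8.8, "gap_hours": 48},
    {"score": 8.6, "gap_hours": 48},
    {"score": 8.2, "gap_hours": 24},
    {"score": 7.9, "gap_hours": 24},
    {"score": 7.5, "gap_hours": 0},  # a real-time, point-of-service evaluation
]

def gap_adjusted_scores(surveys):
    """Estimate how much scores inflate per hour of delay (simple
    least-squares slope of score on gap_hours) and remove that trend,
    making every score comparable to a real-time (gap = 0) evaluation."""
    n = len(surveys)
    mean_gap = sum(s["gap_hours"] for s in surveys) / n
    mean_score = sum(s["score"] for s in surveys) / n
    cov = sum((s["gap_hours"] - mean_gap) * (s["score"] - mean_score)
              for s in surveys)
    var = sum((s["gap_hours"] - mean_gap) ** 2 for s in surveys)
    slope = cov / var  # estimated score inflation per hour of delay
    return [round(s["score"] - slope * s["gap_hours"], 2) for s in surveys]

print(gap_adjusted_scores(surveys))  # delayed scores are pulled down toward gap = 0
```

A production program would control for additional covariates (interviewer, channel, customer segment) in a proper regression, but the principle is the same: the delay is modeled, not ignored.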

A delayed evaluation program is based on the caller's recall of the interaction, and anything other than real-time measurement inherently introduces recall bias (error). Many studies have shown that customers asked to recall information are often inaccurate – they tend to either over- or under-report past events, causing bias (Brennan, Chan, Hini & Esslemont, 1996; Sudman, Finn & Lannom, 1984; Sudman & Bradburn, 1974). Over-reporting is the most typical response when consumers are asked to recall events (Brennan et al., 1996). We may be tempted to see this as a good thing, since customers score the interaction higher than it actually was. But the bias positively skews the results and does not accurately represent the reality of the service delivery process. With inflated scores, the management team will not make the changes needed to maximize the customer experience and, ultimately, increase customer loyalty and shareholder wealth. Customer defection, or leakage, will continue without a clear understanding of the cause.

It only seems easy

Collecting customer satisfaction data may seem easy, but the methodology you choose to gather it has a significant impact on the analysis. If your method is not completed immediately after the service interaction, the gap is certainly a factor. When answering survey questions, consumer recall of the experience is affected by several factors, including the time allowed to answer each question (differences across interviewers equal interviewer bias), the customer's involvement with the product or service, the order of the events being recalled, and the presence or absence of comparisons (Sudman & Kalton, 1986; Sudman & Bradburn, 1982). This recall becomes biased and may lead, among other things, to inaccurate assumptions about how consumers will behave (Pearson, Ross & Dawes, 1992) when the results are applied to the business.

Statistical analysis is not always the most "user-friendly" information, so managers and other employees who must use it often make decisions without fully understanding what they are looking at (Brandt, 1999). Explaining the statistical error corrections is not easy either, so we tend not to do it at all. But ignoring the error does not mitigate it. When management concentrates only on the results and no adjustments are made for the inherent errors, the data will not represent the true feelings of your customers. The consequences of using biased results snowball. Management will decide how to invest in training, technology, marketing and/or R&D, and will expect the impact to be quantified with subsequent measurements. With questionable customer intelligence, it is quite possible that the wrong decisions were made on behalf of the customer. Subsequent measurement of the effect of the initiative will also be flawed and may or may not substantiate your ROI case. The plan is actually a guess made with more certainty than is truly possible. Are you prepared to accept this now and face future questions about the reliability, credibility and validity of any results you present?

Your credibility is on the line

The consequences of a poor measurement program and inaccurate reporting will have a profound and far-reaching effect on the organization. To maximize the return on investment in your customer measurement program, and to make sure your credibility is not called into question, back up your data with science. Do it right from the beginning. If you do not want to study the science of creating and interpreting the gap variable from a delayed measurement, set up a program that measures customer satisfaction immediately after the contact center interaction.

One of the main objections to real-time measurement, and a common reason for choosing a delayed survey methodology instead, is the desire to evaluate both the service interaction and fulfillment. But an accurate view of the contact center service experience cannot be combined with the fulfillment evaluation without introducing the gap bias into the results: the bias is created by the delay between the service interaction and the evaluation, a delay that is needed to evaluate fulfillment. To overcome the bias (and fix your program), create two separate measurement programs. Keep the follow-up phone interviews to learn whether the outcome was as expected and successful. Then look at the results of the two measurement programs holistically, as actionable intelligence for implementing positive change in your organization.

In addition to the bias from the interaction-to-evaluation gap, there is also first responder bias (Gendall & Davis, 1993). This bias occurs because, by definition, the program contacts customers within two or three days (sometimes more) of a service interaction. The customers who respond first may differ from your general customer base simply because they were home and answered the phone. Think about the changes in society over the last few years: caller ID, cell phones, single parents, two parents working outside the household. All of these social factors, and more, affect who within your customer base actually answers the call to complete the follow-up survey. Even if you are fortunate enough to get a valid sample within this short window, it may not be representative of your customer base. Are you prepared to accept this for strategic decision-making?

Who are these people?

This is an important factor because research has shown that there are differences between customers contacted on the first call and those contacted on subsequent calls (Gendall & Davis, 1993; Robinson & Lifton, 1991; Ward, Russick & Rudelius, 1985). Gendall & Davis (1993) confirmed that those contacted on the first call tend to be more rural, lower income, less educated, female and older. While that study examined the general public, your results may likewise over-represent demographic categories not representative of your customer base, because the research company calls the numbers on the list until someone answers the phone and completes the survey. Again, though, this can and must be fixed by correcting for the sample demographics in your analytic techniques; you cannot ignore it.
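One standard way to make that correction is post-stratification weighting: each response is weighted by the ratio of the group's share of the customer base to its share of the sample. The demographic category, shares and scores below are hypothetical, chosen only to show the mechanics:

```python
# Known make-up of the customer base, by a single illustrative category.
population_share = {"urban": 0.70, "rural": 0.30}

# Survey respondents reached on early call attempts: (group, satisfaction score).
sample = [
    ("rural", 9), ("rural", 9), ("rural", 8),  # rural respondents over-represented
    ("urban", 7), ("urban", 6),
]

def weighted_mean_score(sample, population_share):
    """Weight each response by population share / sample share so the
    weighted result mirrors the customer base, not whoever answered first."""
    n = len(sample)
    sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                    for g in population_share}
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    total_weight = sum(weights[g] for g, _ in sample)
    return sum(weights[g] * score for g, score in sample) / total_weight

unweighted = sum(score for _, score in sample) / len(sample)
print(round(unweighted, 2), round(weighted_mean_score(sample, population_share), 2))
```

Here the raw average overstates satisfaction because the over-represented group happened to score higher; the weighted average pulls the result back toward the true customer mix. A real program would weight on several demographic variables at once, but the principle is identical.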

Satisfaction programs will increase loyalty, lower transaction costs, reduce failure costs, and help build an organization's reputation in the marketplace (Anderson, Fornell & Lehmann, 1994). But they must be done the right way to realize such results. Do not let the potential biases stop you from surveying your customers or asking them to evaluate their service experience; just be aware of the biases and know how to control or eliminate as many as possible. If you do not control the biases present in different types of measurements and methodologies, the gap in your research may be too great, and the program may do more harm than good.



Anderson, E.W., Fornell, C. & Lehmann, D.R. (1994). Customer Satisfaction, Market Share and Profitability: Findings from Sweden. Journal of Marketing, July, 53-66.

Bearden, W.O., Malhotra, M.K. & Uscategui, K.H. (1998). Customer Contact and the Evaluation of Service Experiences: Propositions and Implications for the Design of Services. Psychology & Marketing, 15, (8), 793-809.

Brandt, R. (1999). Satisfaction Studies Must Measure What the Customer Wants and Expects. Marketing News, 17.

Brennan, M., Chan, J., Hini, D. & Esslemont, D. (1996). Improving the Accuracy of Recall Data: A Test of Two Procedures. Marketing Bulletin, 7, 20-29.

Dholakia, P.M. & Morwitz, V.G. (2002). How Surveys Influence Customers. Harvard Business Review, May, 18-19.

Gendall, P. & Davis, P. (1993). Are Callbacks a Waste of Time? Marketing Bulletin, 4, 53-57.

Pearson, R.W., Ross, M. & Dawes, R.M. (1992). Personal Recall and the Limits of Retrospective Questions in Surveys. In J.M. Tanur (Ed.), Questions About Survey Questions: Meaning, Memory, Expression, and Social Interactions in Surveys. New York: Russell Sage.

Robinson, L., & Lifton, D. (1991). Reducing Market Research Costs: Deciding When to Eliminate Expensive Survey Follow-up. Journal of the Market Research Society, 33 (4), 301-308.

Sudman, S. & Bradburn, N.M. (1982). Asking Questions: A Practical Guide to Questionnaire Construction. Jossey-Bass.

Sudman, S. & Bradburn, N.M. (1974). Response Effects in Surveys. Chicago: Aldine Publishing Company.

Sudman, S. & Kalton, G. (1986). New Developments in the Sampling of Special Populations. Annual Review of Sociology, 12, 401-429.

Sudman, S., Finn, A. & Lannom L. (1984). The Use of Bounded Recall Procedures in Single Interviews. Public Opinion Quarterly, 48, 520-524.

Ward, J.C., Russick, B., & Rudelius, W. (1985). A Test of Reducing Callbacks and Not-at-Home Bias in Personal Interviews by Weighting At-home Respondents. Journal of Marketing Research, 12, 66-73.

Republished with author's permission from original post.

Jodie Monger
Jodie Monger, Ph.D. is the president of Customer Relationship Metrics (CRM) and a pioneer in business intelligence for the contact center industry. Dr. Jodie's work at CRM focuses on converting unstructured data into structured data for business action. Her research areas include customer experience, speech and operational analytics. Before founding CRM, she was the founding associate director of Purdue University's Center for Customer-Driven Quality.

