7 Survey Rules to Live By


How do we ensure that customer satisfaction measurement is a profitable business process in the contact center and elsewhere in the organization? To increase the value of the initiative, be certain that the research is done the right way, not merely done for the sake of surveying customers.  Note that customer feedback results will be used by colleagues regardless of the number of caveats listed in the footnotes, so be diligent in providing valid and credible customer intelligence.  A poor measurement program and inaccurate reporting can have profound and far-reaching consequences throughout the organization.

Put another way, are you guilty of survey malpractice by giving your company faulty information based on inadequate research methods?

Malpractice is a harsh word — it directly implies professional malfeasance through negligence, ignorance or intent.  Doctors and other professionals carry insurance for malpractice in the event that a patient or client perceives a lack of professional competence.  For contact center professionals and other managers, there is no malpractice insurance to fall back on for acts of professional malfeasance, whether they’re intentional or not.  Of course, it is much more likely that one would be fired than sued for bad acts, but that offers little comfort.

Never put yourself in a position where your competence can be called into question.  That’s why so many call center managers are “skating on thin ice” when it comes to their customer satisfaction measurements: there are demonstrable failings with many of the typical practices used by call center managers.  By definition, an ineffective measurement program generates errors from negligence, ignorance and/or intentional wrongdoing.  You have a fiduciary responsibility to your company — and recommendations made based on erroneous customer data do, indeed, meet the definition of malpractice.

Measurement programs must meet certain scientific criteria to be statistically valid with an acceptable confidence level and level of precision or tolerated error.  Without these considerations, you are guilty of survey malpractice.  Defending your program with statements like, “it has always been done this way” or “we were told to do a survey” is not sufficient.  Research guidelines adhered to in academia apply to the business world, as well.  A deficient survey yields inaccurate data and results in invalid conclusions no matter who conducts it.  Unnecessary pain and expense are the natural outgrowths of such errors of judgment.
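
To make "confidence level" and "tolerated error" concrete, here is a minimal sketch of the standard sample-size arithmetic for estimating a proportion. This is a textbook illustration, not a formula from the book; the z-scores and error tolerances below are illustrative assumptions.

```python
import math

def required_sample_size(z: float, margin_of_error: float, p: float = 0.5) -> int:
    """Completed surveys needed to estimate a proportion at a given
    confidence level (expressed as a z-score) and tolerated error.
    p = 0.5 is the most conservative assumption about the proportion."""
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# 95% confidence (z = 1.96) with a +/-5% tolerated error:
print(required_sample_size(1.96, 0.05))  # 385 completed surveys
# Tightening precision to +/-3% nearly triples the requirement:
print(required_sample_size(1.96, 0.03))  # 1068 completed surveys
```

The point of the arithmetic is that precision is a design decision with a cost attached: a program that reports results from a few dozen surveys per period is implicitly tolerating a very wide error band, whether or not anyone states it.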

To maximize the return on investment (ROI) of the external quality monitoring (EQM) customer measurement program, and to ensure that the program has credibility, install the science before collecting the data.  Make sure that the initial program setup is comprehensive.  If there is no research expert on staff, then hire this work out to a well-credentialed expert.  The alternative is to train someone in the science of creating and interpreting the gap variable from a delayed measurement.  Or, better still, engage a qualified expert to design a program to measure customer satisfaction immediately after the contact center interaction.

Before assuming that survey malpractice does not or will not apply to your program, consider the following tell-tale signs of errors and biases; avoiding them is critical to a good program.

1.  Measuring too many things.  Your survey of a five-minute contact center service experience takes the customer 15 minutes to complete and includes 40 questions.  While everyone in your organization has a need for customer intelligence, you should not be fielding only one survey to get all of the answers. 

Should the contact center be measuring satisfaction with the in-home repair service, the accounting and invoicing process, the latest marketing campaign, or the distribution network? Certainly input on these processes is necessary, but don’t try to get it all on a single survey.

2.  Not measuring enough things.  An overall satisfaction question and a question about agent courtesy do not make a valid survey.  Without a robust set of measurement constructs, the survey cannot answer the questions the business is asking.  Three or four questions will not facilitate a change in a management process; nor will they enable effective agent coaching or serve as a valid measure in an incentive or performance plan.

3.  Measuring questions with an unreliable scale.  In school, everyone agreed on what test scores meant: 95 was an A, 85 was a B, and 75 was a C.  Every score in between had its own mark associated with it, as well.  Yet, when it comes to service measurement, we tend to give customers limited responses.  What do the categories excellent, good, fair and poor really mean? Offering limited response options does not permit robust analysis, and statistical analysis is often applied incorrectly, as the sketch below shows.  In addition, using a categorical scale or a scale that is too small (like many typical 5-point survey questions) is not adequate for the evaluation of service delivery.
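
For example, the "average" of a categorical scale depends entirely on the arbitrary numbers assigned to the labels. This is a hypothetical illustration, not an example from the book; the response data and both numeric codings are invented for demonstration.

```python
# Hypothetical responses on a categorical excellent/good/fair/poor scale.
responses = ["excellent", "good", "good", "fair", "poor"]

# Two equally defensible numeric codings of the same four labels.
coding_a = {"excellent": 4, "good": 3, "fair": 2, "poor": 1}       # evenly spaced
coding_b = {"excellent": 100, "good": 85, "fair": 70, "poor": 40}  # school-grade style

mean_a = sum(coding_a[r] for r in responses) / len(responses)
mean_b = sum(coding_b[r] for r in responses) / len(responses)

print(mean_a)  # 2.6  -> between "fair" (2) and "good" (3), closer to "good"
print(mean_b)  # 76.0 -> between "fair" (70) and "good" (85), closer to "fair"
```

The same five customers appear closer to "good" under one coding and closer to "fair" under the other, which is why interval-style statistics computed on categorical labels are so easy to misuse.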

4.  Measuring the wrong things or the right things wrong.  Surveys should not be designed to tell you what you want to hear, but rather what you need to hear.  Constructs that are measured should have a purpose in the overall measurement plan.  Each item should have a definitive plan for use within the evaluation process.  The right things to measure will focus on several overall company measures that affect your center (or your center’s value statement to the organization), the agents and issue/problem resolution.

5.  Asking for an evaluation after memory has degraded.  When we think about time, 24 to 48 hours doesn’t seem that long.  But when you’re measuring customer satisfaction with your service, it’s the difference between an accurate evaluation and a flawed one.  Do you remember exactly how you felt after you called your telephone company about an issue? Could you accurately rate that particular experience 48 hours later, after other calls to the same company or other companies have been made? That’s what you’re asking your customers to do when you delay measurement.  It opens the door to inaccurate reporting and compromised decision-making, and is also an unfair evaluation of your agents.

Conducting follow-up phone calls to gather feedback about the center's performance is a common pitfall.  While this methodology certainly has its place in the company's research portfolio, it is less effective than point-of-service, real-time customer evaluations.

Mail and phone surveys are useful for research projects that are not tactical in nature, but rather focused on the general relationship, product features, additional options, color, etc.

6.  Wiggle room via correction factors.  If you're using correction factors to account for issues in the data or to placate agents or the management team, some aspect of the survey design is flawed.  A common adjustment is to collect 11 survey evaluations per agent and delete everyone's lowest score.  However, with a valid measurement that includes numeric scores, explanations for those scores and a rigorous quality control process, adjustments to the final scores will not be necessary.  Making excuses for the results or allowing holes to be poked in the effort undermines the effectiveness of the program and opens the door to survey malpractice claims.

7.  Accuracy and credibility of service providers and product vendors.  As with any technology or service, the user assumes responsibility both for applying the correct tool and for applying the tool correctly.

There are plenty of home-grown or vendor-supplied tools to field a survey, but, again, if you do not apply the functionality correctly, you will be responsible for the error.  Keep in mind that some service providers are only interested in selling you something that fits into their cookie-cutter approach, and it will not be customized to your specific requirements. 

 ~ Dr. Jodie Monger, President

 

This post is part of the book, “Survey Pain Relief.”  Why do some survey programs thrive while others die? And how do we improve the chances of success? In “Survey Pain Relief,” renowned research scientists Dr. Jodie Monger and Dr. Debra Perkins tackle these and other nagging questions.  Inside, the doctors reveal the science and art of customer surveying and explain proven methods for creating successful customer satisfaction research programs.

“Survey Pain Relief” was written to remedy the billions of dollars spent each year on survey programs that can best be described as survey malpractice.  These programs are all too often accepted as valid by the unskilled and unknowing.  Inside is your chance to gain knowledge and avoid being led by the blind.  For more information, visit http://www.surveypainrelief.com/

 


Republished with author's permission from original post.

Jodie Monger
Jodie Monger, Ph.D. is the president of Customer Relationship Metrics (CRM) and a pioneer in business intelligence for the contact center industry. Dr. Jodie's work at CRM focuses on converting unstructured data into structured data for business action. Her research areas include customer experience, speech and operational analytics. Before founding CRM, she was the founding associate director of Purdue University's Center for Customer-Driven Quality.
