{"id":1015783,"date":"2022-04-19T13:14:09","date_gmt":"2022-04-19T20:14:09","guid":{"rendered":"http:\/\/customerthink.com\/?p=1015783"},"modified":"2022-04-19T13:14:09","modified_gmt":"2022-04-19T20:14:09","slug":"your-contact-center-monitoring-and-coaching-may-be-doing-more-harm-than-good","status":"publish","type":"post","link":"https:\/\/customerthink.com\/your-contact-center-monitoring-and-coaching-may-be-doing-more-harm-than-good\/","title":{"rendered":"Your Contact Center Monitoring and Coaching May Be Doing More Harm Than Good"},"content":{"rendered":"
What is the best indicator of this oversimplified-objective problem? Mixed messages from different data sources. For example, CCMC recently visited a major auto company contact center to review its operations. The head of service quality cheerily announced that the reps were doing very well on monitoring; most received more than 95 percent on the monitoring score. The gloom came when we asked what the customer satisfaction tracking survey score was: 65 percent completely satisfied. In many companies, the satisfaction score from surveys is measured with a question asking whether the customer would recommend the CSR or, worse, the company, and the answer is “no” if the problem was not resolved.

There are two major reasons for the disconnect. First, the supervisor who conducted the call-quality monitoring and evaluation was listening for two sets of items: 1) items important to the customer (such as first-call resolution, clarity of CSR explanations, and empathy), and 2) items important to the company (such as an appropriate greeting, complete logging of the call, adherence to policy, and bridges to cross-selling or up-selling). The disconnect arises when the CSR does very well on the company priorities and less well on the customer priorities. The sum of the two scores may be very respectable (e.g., 85 percent) while the customer’s key needs remain unfulfilled, hence the score of 65 percent from the customer’s perspective. In addition, some dimensions thought important to customers, such as using the customer’s name three times, are based on folklore or personal opinion.

The second reason for the disconnect between evaluation objectives and how the data is collected is that the call mix evaluated includes issues that the CSR is not able to resolve to the customer’s satisfaction. If, out of the ten selected calls, the CSR happens to receive three calls where the customer has a difficult issue, such as repair comebacks (repeated failed attempts to repair the car) or out-of-warranty repairs, there is a high probability that the CSR will have at least 30 percent detractors. That caps the satisfaction score in the 50s or 60s, and therefore guarantees a failed evaluation.
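A quick simulation makes the unfairness of the ten-call sample concrete. This is a minimal sketch under an assumed issue mix; the 20 percent unresolvable-issue rate is invented for illustration, not measured at any real center. It shows how the same CSR's monthly score swings purely with the luck of the draw:

```python
import random

# Assumption for illustration only: 20% of incoming calls involve issues
# the CSR cannot resolve to the customer's satisfaction (e.g., repair
# comebacks or out-of-warranty repairs), and each such call produces a
# detractor no matter how skilled the CSR is.
UNRESOLVABLE_RATE = 0.20
SAMPLE_SIZE = 10

def monthly_score(rng: random.Random) -> float:
    """Percent of a ten-call sample not doomed by an unresolvable issue."""
    doomed = sum(rng.random() < UNRESOLVABLE_RATE for _ in range(SAMPLE_SIZE))
    return 100.0 * (SAMPLE_SIZE - doomed) / SAMPLE_SIZE

rng = random.Random(7)
scores = [monthly_score(rng) for _ in range(12)]
print(scores)
print(f"One year, identical skill: {min(scores):.0f} to {max(scores):.0f} percent")
```

Nothing about the simulated CSR changes from month to month; only the sample does. That variability, not skill, is what a ten-call evaluation mostly measures.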
Finally, the emphasis of CSR evaluation, which is usually the primary, and often the only, objective of monitoring and evaluation, is misplaced. Our experience is that only 20-30 percent of the possible improvement can come from improving CSR skills. The vast majority of customer satisfaction improvement comes from improving contact center processes, response guidance, and empowerment.

Proper Focus of Analysis

In fact, there should be four separate evaluation objectives that require different data and analysis: CSR performance improvement, celebrating great CSR performance, contact center process improvement, and organizational (corporate) improvement. In most companies, only the first objective is addressed. This is the fundamental problem.

In summary, ask which of these four objectives each piece of monitoring data serves; that is what keeps the proper focus on both CSR skills and contact center operations.

Challenging Issues for Using Monitoring and Evaluation Surveys for CSR Evaluation and Motivation

The following issues must be addressed in every monitoring and evaluation process. While there is no perfect solution, each issue should be explicitly addressed rather than glossed over.

What contacts should be monitored?

I believe that the random sample concept is spurious, as mentioned above. If you cluster ten calls per month for three months, thirty calls out of 3,000 still is not a rigorous sample. A better approach is to evaluate ten challenging contacts, ideally with an eye toward improving a particular skill, such as defusing anger, negotiation, or troubleshooting. If HR objects, point out that each CSR gets a random offering of difficult contacts, and that the sample addresses the difficult issues where that particular CSR has challenges. The goal is improvement, and each CSR may need improvement in different areas.

Resolution should be judged by type of issue

The CSR should be scored on the drivers of satisfaction over which they have control, and on issues where they are given flexibility for solutions through flexible solution spaces (FSS), that is, issues with multiple legitimate ways of resolving the problem depending on the circumstances. CSRs should not be penalized for uncontrollable factors, such as whether a car dealer follows through on a repair.

Who does the evaluation?

Supervisors are the usual evaluators, but some organizations use a separate quality team to assure consistency. If the quality-team evaluator is not familiar with a particular CSR, they are less able to focus on where that CSR needs developmental coaching, and the score, being a function of the contact issue mix, is much less credible with the CSR. Supervisors are credible but are very pressed for time and often do not select contacts that support skill development. A better approach, which should be used at least some of the time, is to have a peer do the evaluation. This saves supervisor time: a fellow team member listens to calls or reviews emails and provides feedback, and feedback from peers is less stinging than feedback from supervisors. Also, one auto company has CSRs evaluate their own calls and take the evaluation to the supervisor to discuss; we found that the CSRs were often harder on themselves than the supervisor was.

How is monitoring data collected?

The two options are real-time listening and digital recording. The benefit of real-time collection is that you can immediately give positive feedback to the CSR, which has tremendous value. The advantage of recording is that you can sort through calls and identify several of the same type, so that you can focus on a particular issue or skill. Recordings are also useful when escalations or regulatory complaints are encountered.

How should monitoring data be analyzed?

The most important analyses are done in the aggregate. Annually, there should be an aggregate analysis of how accurately the dimensions measured predict customer satisfaction; a correlation coefficient of less than 0.7 should instigate a re-examination of what you’re measuring. Additional analyses should look at talk time by issue, whether the issue was preventable via self-service or proactive communication, and whether delight was attempted and, if so, whether it worked.
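Here is a minimal sketch of that annual check, assuming you can pair each monitored contact's evaluation score with the same customer's survey rating. The numbers below are invented placeholders, not real data:

```python
from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical paired observations: the score the evaluator gave each
# monitored contact, and the satisfaction rating the same customer later
# gave on the survey (ten-point scale). Real data would come from your
# QA and survey systems.
monitoring_scores = [95, 88, 92, 97, 85, 90, 93, 86, 99, 91]
survey_ratings    = [6, 7, 5, 8, 6, 7, 6, 5, 8, 7]

r = correlation(monitoring_scores, survey_ratings)
print(f"Correlation between monitoring and survey scores: {r:.2f}")
if r < 0.7:
    print("Monitoring dimensions are weak predictors of satisfaction;"
          " re-examine what you are measuring.")
```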
What is the Role of Speech and Text Analytics?

Speech and text analytics tools are still relatively expensive, but they have two big advantages, both stemming from their ability to process a large number of calls quickly: scores can be based on the full range of contacts rather than a ten-call sample, and calls of a particular type can be pulled easily for targeted coaching. This analytical horsepower must still be appropriately guided and interpreted by a human being. This is especially true when going beyond the basic score to develop an assessment of a particular skill and then an action plan for the employee.
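To make that division of labor concrete, here is a toy sketch, emphatically not any vendor's product: an analyst (the human) defines the cues worth finding, the machine scans every transcript, and a supervisor reviews whatever gets flagged:

```python
import re

# Hypothetical cue patterns an analyst might define. Real speech and text
# analytics engines are far more sophisticated, but the division of labor
# is the same: humans decide what matters; machines scan every contact.
CUES = {
    "anger": re.compile(r"\b(ridiculous|unacceptable|furious|fed up)\b", re.I),
    "escalation": re.compile(r"\b(supervisor|manager|attorney|complaint)\b", re.I),
    "comeback": re.compile(r"\b(again|second time|still not fixed)\b", re.I),
}

def tag_transcript(transcript: str) -> list[str]:
    """Return the cue categories detected in one call transcript."""
    return [name for name, cue in CUES.items() if cue.search(transcript)]

# A supervisor could pull all of one CSR's "anger" calls and coach
# specifically on defusing anger, rather than grading ten random calls.
example = "This is the second time I've called and it's still not fixed!"
print(tag_transcript(example))  # ['comeback']
```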
How should the evaluation be fed back to the CSR?

The three key issues are the timeframe, the format, and who transmits the evaluation.

Best Practices for Monitoring Feedback

To recap, the practices above improve how monitoring feedback is delivered: evaluate challenging contacts rather than a token random sample, use evaluators the CSR finds credible (including peers and the CSRs themselves), give positive feedback in real time, and tie coaching to a specific skill such as defusing anger or negotiation.

Recommendations

I will not provide an in-depth review of the last two objectives of monitoring, contact center and corporate improvement, except to say that they often provide bigger opportunities for moving the overall service quality needle than focusing on the CSRs. I will address these issues in another article.

In summary: focus on difficult calls where the CSR can demonstrate their expertise and which can be the basis for fair, substantive coaching and goal setting.

Notes

Contact me for a more detailed article on all of the above. Further information on employee satisfaction and survey best practices is available at https://customercaremc.com/insights.