<h1>Your Contact Center Monitoring and Coaching May Be Doing More Harm Than Good</h1>
<p>Published 2022-04-19 at https://customerthink.com/your-contact-center-monitoring-and-coaching-may-be-doing-more-harm-than-good/</p>

<p><em>Discard the illusion of fairness and actionability of the “random sample of 10 cases”!</em></p>

<p>When I interview contact center supervisors and managers to explore their biggest frustrations, one of the most prevalent issues is the amount of time and hassle associated with customer service rep (CSR) evaluation. Much of the frustration comes from a serious disconnect between customer service objectives, HR-mandated CSR evaluation, and organizational performance improvement.</p>

<p>This article first describes the operational problems, then the disconnect between standard practice and rational objectives, then explains where text and speech analytics may be helpful, and finally suggests a less painful, more practical approach. While I comment on how customer surveys are part of the context for evaluation, the best practices and mechanics of surveys will be left to another article.</p>

<h2>The Problem</h2>

<p>The foundation of contact monitoring and evaluation in most companies is the very common practice of selecting a “random sample” of ten contacts per CSR every month. This foundation is flawed: ten contacts is not a statistically valid sample.</p>
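<p>To see why ten contacts is too few, consider the sampling error on a simple pass/fail quality score. The sketch below (the 80% pass rate and 95% confidence level are illustrative assumptions, not figures from the article) uses the normal approximation for a proportion:</p>

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    proportion, using the normal approximation z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical CSR who "passes" 8 of the 10 monitored contacts:
moe_10 = margin_of_error(0.8, 10)
print(f"n=10:  80% +/- {moe_10 * 100:.0f} points")   # roughly +/- 25 points

# Even quadrupling the sample only halves the uncertainty:
moe_40 = margin_of_error(0.8, 40)
print(f"n=40:  80% +/- {moe_40 * 100:.0f} points")   # roughly +/- 12 points
```

<p>With a margin of error of roughly 25 percentage points, a CSR scored at 80% one month and 60% the next may not have changed at all; the swing is well within sampling noise, which is why month-to-month comparisons built on ten contacts are neither fair nor actionable.</p>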