Would you wear both a belt and suspenders? Probably not: one device should be enough to hold up your pants. So if you have a customer satisfaction survey in place and customers are responding in adequate numbers, should you bother to also put in place a quality monitoring process?
I would say yes. Let’s start by affirming that customer satisfaction surveys are wonderful and irreplaceable, since they capture what really matters: what customers think of your service. Indeed, if your quality monitoring ratings are strikingly different from your customer satisfaction ratings, I would (1) always believe the customer satisfaction ratings over the quality monitoring ratings, assuming the surveys have a reasonably high response rate, and (2) review your quality monitoring checklist, since clearly you are not focusing on the right dimensions. So customer surveys are very precious and should be implemented whenever possible.
But don’t give up on quality monitoring just because you have a survey in place. Quality monitoring can ferret out issues that customer surveys can’t, including:
– process adherence: are the reps following the various guidelines you have in place, whether it’s using the approved greeting or handling refunds correctly?
– troubleshooting technique: are the reps as efficient as they could be when they resolve customer issues?
– tool usage: are the reps logging issues appropriately? Are they escalating systemic issues to their supervisors?
– customer skills: while outrageously good or bad customer skills will be reflected in customer ratings, quality monitoring can look at more detailed aspects
Bottom line: quality monitoring and, most importantly, coaching are good programs to keep in parallel with a formal customer satisfaction survey.
First of all, I would like to thank you for writing on this subject. I should probably have written about it myself, but since you now provide the platform here, I’ll take it.
You are absolutely right that measuring Customer Satisfaction alone is not enough. Measuring Customer Satisfaction on Contact Center experiences is mostly flawed anyway: first, because the measurement comes too soon after the fact (nowadays increasingly via after-call IVR solutions), before the problem or request has actually been resolved; second, because of poor questions that aim not at measuring the outcomes Customers desire but at justifying internal metrics (did you call on this topic in the 6 weeks prior to your last call?). But that’s a different topic entirely.
Quality Monitoring can seriously harm Customer Satisfaction. Why and How? Here’s what I think:
I know COPC is not tremendously popular in the USA; in Europe it increasingly is. Nevertheless, I think most companies that perform quality monitoring on call center agents do it in a way quite similar to the COPC standards. They have a list (sometimes over 20 items!) of “errors” the agent can make, and they “monitor” the agent’s performance against those possible mistakes once a week or at some other frequency. If the agent does not pass this “test”, monitoring frequency is increased and – at best – some coaching and training is provided. COPC “best” practice is to take someone off the phone if one or more “critical errors” have been made in a (monitored) call.
Now think of this: How comfortable would you be in your job?
I think you’d be unhappy most of the time. You would jump at every opportunity to get off the phone and into the “back office”, or to leave the company altogether. And what was the Call Center industry’s biggest challenge just a little over a year ago? Reducing attrition. I’m not surprised, are you?
Another question: If you were a Call Center agent in these circumstances, what would be the first thing on your mind when doing your job every day? What should it be?
Right: Customers it should be, and Customers it’s not! Because agents have to follow the script, not forget to use the Customer’s name 3 times, summarize the conversation, provide the correct answer, “nod” with their voices, register details correctly in the CRM system (while they can’t find the information they’re looking for in the knowledge management system), and what have you.
If agents cannot focus on the Customer and his needs, and if agents leave whenever they get a (better) chance, we have been taking the wrong approach towards Operational Excellence. As a consequence, your Customer Service will never get top ratings, nor will it be a differentiating element in the Customer Experience you are trying so hard to improve.
This is the time to evaluate and re-design your approach towards improving the Customer Service Experience. Doing it after the crisis is over might just be too late.
Please let me know your thoughts.
I completely agree with you that inane criteria for quality monitoring (such as using the customer’s name 3 times during the conversation) lead to poor outcomes for customer satisfaction and employee satisfaction. My point is simply that if you are serious about coaching you should look beyond customer ratings and engage in (well thought-out) quality monitoring. It’s time to use substantial criteria in those quality monitoring checklists!
I agree, monitoring as it is now focuses on the wrong criteria. It seems as though it’s designed so that when reports are compiled, the company looks like it’s doing well and the customers are happy. But if they stopped focusing on mere formalities – like using the customer’s name x amount of times, asking the customer how their day is going, or ID checking and writing notes within a certain time frame – the agent could actually focus on listening to the customer and solving their problem. Studies of multi-tasking show that it is ineffective, yet call centre agents on a call have to do multiple things at once while still engaging with the customer. Let the agent just focus on what’s important: the customer, their problem, and then finding a solution!