I have two scenarios for you regarding contact center quality assurance. In the first, imagine a manager or supervisor listening to a phone call and using a set of criteria to grade that call. They then add up all of the points and deliver that evaluation to the agent who handled the call. While we’d like to think this quality coaching session holds some special transformational power, what’s more likely is that the eyes of both the coach and the agent go straight to the score, and any opportunity to help that agent improve at their job is lost.
Now picture a second scenario where an operations manager is running through a slide deck to talk about the performance of their team over the past month. They arrive at the quality slide and proclaim, “Our average quality last month was 91%.” When you dig a bit further you realize that it’s always right around 91%, and no one has a clue how to improve that number or whether it’s even good. It’s just a number.
There’s been a bit of a debate among my peers in the contact center industry in recent years over the relevance of quality scores, and there are a couple of hotly contested issues. One is whether the knowledge of a score helps or hinders individual agent performance. The other is whether an overall quality score offers any value at all. Is it even worth tracking as a key performance indicator?
While I believe quality scores are indeed important, there’s some balance and thoughtfulness required in our approach to them. In this article I’ll give you four recommendations around quality scoring that will actually help drive what’s really important for your contact center — agent and customer engagement.
1. Simplify your quality scoring method
At its core, a quality assurance process is a set of criteria that’s critical for agents to complete on every customer interaction. Simply put, it’s what they must do to be successful when interacting with customers, and it typically encompasses communication skills, the application of job knowledge, and the ability to follow specific policies and procedures.
What I’ve seen happen many times is that after determining the criteria, leaders spend more time on their elaborate scoring system than they did on the criteria itself. In my opinion it doesn’t really matter if the scores add up perfectly to 100, and that certainly shouldn’t influence the number of questions on the form. Keep in mind also that when creating a scoring schema — let’s say it’s a 5 or 10 point scale — you need to clearly define what’s a 1, versus a 2, versus a 3, and on down the line. That’s time consuming, and it makes it difficult for quality teams to calibrate with one another for consistent grading.
In favor of simplicity and keeping the quality process about quality and not a score, I prefer a simple yes/no scale. The agent either exhibited the behavior at or above the expected standard or they didn’t. Remember that your quality form isn’t a work of art — it’s a tool for helping your team provide better customer service. You’re better served spending more time reviewing interactions and coaching agents.
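To make the yes/no approach concrete, here’s a minimal sketch of how such a form might be scored. The criteria names are hypothetical examples, not a prescribed form; the point is simply that each item is pass/fail and the score falls out of a straight count.

```python
# Hypothetical yes/no quality form: each criterion is either met (True)
# or not met (False) -- no partial credit, no point weighting to debate.

def score_evaluation(results):
    """Return the percentage of criteria met, given {criterion: bool}."""
    met = sum(results.values())
    return round(100 * met / len(results), 1)

# Example evaluation of a single call (criteria names are illustrative).
evaluation = {
    "greeted_customer": True,
    "verified_identity": True,
    "followed_hold_procedure": False,
    "provided_accurate_resolution": True,
    "closed_professionally": True,
}

print(score_evaluation(evaluation))  # 4 of 5 criteria met -> 80.0
```

Because there are no scale definitions to interpret, two evaluators listening to the same call should land on the same marks far more often, which is the calibration benefit described above.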
2. Track performance at the individual question level
If you want to improve the quality of your customer service team you most certainly have to measure it, but a percentage on a slide once a month won’t do. Leaders need to be able to break that percentage down in order to improve it. I recommend looking at team and individual agent performance on each individual question on the form. You’ll then see the areas on the form where the team excels and the areas where they struggle and you can better focus both individual coaching and team-wide training.
When you look at it through this lens it also helps you in building the criteria for your quality form and determining the weight for that criteria. Those behaviors and skills that you really want to track and improve should be included on that quality form. Depending on your industry, there are going to be specific aspects of the customer interaction that carry a critical level of importance. For example, your agents may be required to pause a call recording while taking payments over the phone or announce that a call is being recorded when they place an outbound call. These items need to be tracked so you can minimize, and ultimately eliminate, those errors.
One more thing that’s important to address in this recommendation is tools. There are many great quality tools on the market that can help you get to this level of reporting, but it can also be accomplished with a form and a spreadsheet, though a dedicated quality tool is far better for a number of reasons.
3. Check your alignment with customer perception
In a past article I encouraged customer service leaders to check their alignment between quality and customer satisfaction. This was born out of an interesting study where we found on some of our teams that quality scores were extremely high and customer satisfaction was much lower. Where this is the case, the quality assurance process may not be measuring the right things and this check can be a catalyst for updating and refining your process.
I recommend a couple of steps in this approach. First, it’s a good practice to ask yourself after scoring an interaction if the results align with the way the customer might rate the interaction. Did that interaction you just scored as a 100% look, sound, and feel like 100%? How would the customer have rated this call and the way the agent handled it? What we don’t want is for the agent to check all of the boxes on your quality form but for it to miss the mark with the customer.
Secondly, at a higher level, take your overall quality percentage and compare it to your customer satisfaction percentage. If quality is significantly higher it’s possible you’re measuring the wrong things or being too lenient in scoring. Either way, you’re not aligned with customer perception.
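This second check is simple arithmetic, but it’s worth making it a routine part of reporting. The sketch below flags a gap between quality and CSAT; the 10-point threshold is my own assumption for illustration, and the right tolerance will vary by operation.

```python
# Hedged sketch: flag when quality outruns customer satisfaction enough
# to suggest the form is measuring the wrong things or scoring leniently.
# The 10-point threshold is an assumed value, not an industry standard.

def alignment_gap(quality_pct, csat_pct, threshold=10):
    """Return (misaligned?, gap) comparing quality % to CSAT %."""
    gap = quality_pct - csat_pct
    return gap > threshold, gap

misaligned, gap = alignment_gap(quality_pct=91, csat_pct=74)
print(misaligned, gap)  # prints: True 17
```

A 17-point gap like the one above would be a strong prompt to revisit what the quality form is actually measuring.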
4. Determine when and if you really need to show agents a score
I’ve seen a couple things happen when agents are shown their quality score during a coaching session. The first is that they shut down. They came to that session for their score, and even if it’s a good one, they aren’t receptive to anything you say after that. The second response is that they begin nitpicking and haggling over whether or not they agree with your assessment. “Why did you rate this a 2? I think it should be a 3.” I absolutely believe that quality is a key metric on the agent scorecard and it’s something agents need to be held accountable to but perhaps it doesn’t belong in the coaching session.
At FCR we’ve approached this a couple of different ways. One option is to wait until the end of the coaching session to discuss overall scores, spending the first portion of the meeting discussing what the agent did right and where they can improve, and then practicing the desired behaviors together. Another option I’ve seen is to discuss metrics in a separate meeting with agents, where the quality metric is one of several that they review on a balanced scorecard.
Keep in mind that there will be some change management involved if you change when and where agents see scores. Many of your more tenured agents might say, “Just give me my score so I can get back to work.” Remember that a big function of quality assurance is coaching agents to better support customers. If we’re focused on the score in those coaching conversations, we’ve lost sight of what’s most important.
To conclude, I’ll reiterate that quality scores, as a measure of both agent and team performance, are an essential metric for your operation — but they should be used with great thoughtfulness and care. Never forget that quality is about empowering and equipping your agents to provide better customer service, and the scores will help you see whether or not your efforts are paying off.