The Fundamentals of Quality Calibration

This article was originally published on the FCR blog on February 10, 2017.

Quality assurance is an essential function in any contact center — at least it should be. It’s the primary means of ensuring that the customer service our agents provide consistently meets or exceeds our (and our customers’) expectations. At its best, QA surfaces coaching opportunities that help agents grow and perform at their very best, and it reveals key business insights that help our organizations do the same.

What is QA Calibration?

One essential ingredient to a great QA process is regular calibration sessions. Simply defined:

QA Calibration is where everyone responsible for doing QA in the organization comes together to rate the same interaction(s), ensuring that they are aligned and upholding the same expectations.

In the case of an outsourcer like FCR, there’s an added layer: we not only need to be calibrated internally, we also need to be calibrated with our clients.

The first step to calibration is making sure that we have a clear set of expectations so that everyone who completes a QA review knows what’s expected for each interaction. These expectations are best outlined in a quality definitions guide. This guide is a living, breathing document that ideally should align with your customer service vision and ultimately help your customer service department achieve that vision one interaction at a time.

Out of that definitions guide should come the criteria, or form, by which you measure each customer interaction. Calibration is a time when everyone who does QA practices completing that form together to ensure they rate each interaction the same way.
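
To make that a little more concrete, here is a minimal sketch of what a weighted QA form might look like as a simple data structure. The criteria names, weights, and scoring scale are hypothetical examples of my own, not a prescribed form; your definitions guide should drive the real thing.

```python
# A hypothetical weighted QA form. The criteria and weights below are
# illustrative only; the real form should come out of your definitions guide.
QA_FORM = {
    "greeting":         {"description": "Opens with a warm, branded greeting", "weight": 10},
    "issue_resolution": {"description": "Resolves or correctly escalates the customer's issue", "weight": 40},
    "tone_and_empathy": {"description": "Tone matches the customer service vision", "weight": 30},
    "accuracy":         {"description": "Information given to the customer is correct", "weight": 20},
}

def score_interaction(ratings):
    """Combine per-criterion ratings (0.0 to 1.0) into a weighted 0-100 score."""
    total_weight = sum(item["weight"] for item in QA_FORM.values())
    earned = sum(QA_FORM[name]["weight"] * ratings.get(name, 0.0) for name in QA_FORM)
    return 100 * earned / total_weight

# One reviewer's ratings for a single interaction.
print(score_interaction({"greeting": 1.0, "issue_resolution": 0.5,
                         "tone_and_empathy": 1.0, "accuracy": 1.0}))  # 80.0
```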

Methods for Calibration

It’s important to do regular calibration sessions with your QA team to continuously improve alignment. Typically, when you implement a new QA form or add new members to the quality team, you’ll want to calibrate more frequently — like once a week. As the team grows more aligned, you can space sessions out to every other week.

There are a few different formats for calibration sessions. I’m going to briefly outline them and speak to their benefits.

Method #1: Review and grade first, then discuss

Our recommended method for ongoing calibrations works like this:

  • Each member of the QA team receives the interactions to grade prior to the calibration session.
  • They grade them and submit the scores independently.
  • Someone on the team compiles the scores and highlights the differences.
  • The group then comes together and talks about their differences.
  • The end result should be a calibrated score that everyone agrees on.

There are a couple of benefits to this method of calibration. First, it’s the most efficient way to run the session because everyone has already reviewed the interactions on their own. The goal of the session is to discuss the differences and come to an agreement; there’s no need to waste time talking about things everyone already agrees on. It’s also a great method for avoiding “groupthink” and making sure you get the most honest response from every member of the team.
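
If you’re curious what the “compile the scores and highlight the differences” step can look like in practice, here is a minimal sketch. It assumes each reviewer submits a 0-100 score per interaction before the session; the reviewer names and numbers are made up purely for illustration.

```python
# Hypothetical scores (0-100) submitted independently before the session,
# keyed by reviewer and then by interaction. Names and numbers are made up.
submitted_scores = {
    "Reviewer A": {"call_101": 90, "call_102": 70},
    "Reviewer B": {"call_101": 85, "call_102": 95},
    "Reviewer C": {"call_101": 92, "call_102": 80},
}

def highlight_differences(scores):
    """For each interaction, show every score and the spread, so the group
    knows which interactions deserve the most discussion time."""
    interactions = {i for per_reviewer in scores.values() for i in per_reviewer}
    for interaction in sorted(interactions):
        values = [per_reviewer[interaction]
                  for per_reviewer in scores.values() if interaction in per_reviewer]
        print(f"{interaction}: scores={values}, spread={max(values) - min(values)}")

highlight_differences(submitted_scores)
# call_101: scores=[90, 85, 92], spread=7
# call_102: scores=[70, 95, 80], spread=25   <- the one to spend discussion time on
```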

Method #2: Review together and grade together

The second common method for calibration is as follows:

  • All members of the QA team come together.
  • They review the interactions together.
  • They grade the interactions together.

This method is quite popular for teams that are strapped for time but know they need to calibrate. The session requires minimal prep work beforehand. This is also a great method when you first roll out a new form, giving the team an opportunity to discuss and work through it line by line. The drawback to using this method on an ongoing basis is that it’s difficult to get an accurate gauge of how well-calibrated each of the team members is.

Method #3: Review together with agents

The third method entails the following:

  • Set up a one on one session with an agent you want to review.
  • Review the interactions together with the agent.
  • Grade the interactions together and come to an agreement on the score.

This method is really an alternative way of doing QA rather than calibration per se. Nonetheless, reviewing interactions together with your agents and discussing them openly is a very beneficial exercise, and it’s a great way to get agent buy-in on what makes up an extraordinary interaction.

Measuring Success

At the end of the calibration session, especially with method #1, you should be able to measure its success by calculating the variance in the scores. The best way to calculate the variance is to measure the average difference between each score and the agreed-upon calibrated score.

Another, slightly different way to calculate it is to take the difference between the highest and lowest score. Your goal over time should be to see the variance consistently fall below a 5% difference. This gives you ongoing insight not only into how well the entire quality team is aligned but also into how individual members of the quality team are performing.
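
As a concrete illustration, here is a small sketch of both calculations, assuming a 0-100 scoring scale (the scores are made up):

```python
# Hypothetical scores (0-100) from one calibration session for a single interaction.
reviewer_scores = [90, 85, 92, 78]
calibrated_score = 88  # the score the group agreed on during the session

# Option 1: average absolute difference from the agreed-upon calibrated score.
avg_difference = sum(abs(s - calibrated_score) for s in reviewer_scores) / len(reviewer_scores)

# Option 2: spread between the highest and lowest reviewer score.
spread = max(reviewer_scores) - min(reviewer_scores)

print(f"Average difference from calibrated score: {avg_difference} points")  # 4.75
print(f"High-low spread: {spread} points")                                   # 14
# On a 0-100 scale, the 5% goal means aiming to keep these numbers under about 5 points.
```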

Bonus Tip: Talk about Customer Satisfaction

While you’re at it, have each of your reviewers put themselves in the customer’s shoes and rate whether they were satisfied or not. Feel free to substitute Net Promoter Score or Customer Effort Score here if you measure those in your contact center. Talk about the key drivers that led to that rating. Was it an issue with the service provided by the agent? Was it an issue with the product? This discussion builds awareness of the greater customer experience and helps turn QA into a business insights machine!

Finally, every company’s quality assurance process looks a little different. Whether you have a form with tons of items on it or you have no form at all, it’s essential that every member of your team is aligned (or calibrated) on what a great interaction looks like. That sort of alignment only comes with great communication and practice.

Republished with author's permission from original post.

Jeremy Watkin
Jeremy Watkin is the Director of Customer Support and CX at NumberBarn. He has more than 20 years of experience as a contact center professional leading highly engaged customer service teams. Jeremy is frequently recognized as a thought leader for his writing and speaking on a variety of topics including quality management, outsourcing, customer experience, contact center technology, and more. When not working, he's spending quality time with his wife Alicia and their three boys, running with his dog, or dreaming of native trout rising for a size 16 elk hair caddis.
