When you have data in one hand and unhappy customers in the other, you know it’s time to improve your customer experience (CX). This is especially crucial in startups, where every happy customer is a potential brand ambassador.
In fact, even if your customers are overwhelmingly satisfied, you still want to shore up your customer support practices—especially if you’re preparing to scale.
There are several types of data that will allow you to investigate the efficacy of your team. However, there’s one type of exercise in particular that can give you clear, actionable steps forward: the call calibration.
What are call calibrations?
Call calibrations are the family meetings of the customer service world. They bring agents, supervisors, and quality assurance (QA) vendors together to talk through scorecards from individual interactions. The meetings involve discussions of how the calls went, inconsistencies or triumphs in applying policies and procedures, and perceived errors in QA scoring.
Think of calibrating a scale: you take everything off the scale and tare the empty scale (i.e. tell the scale that the current weight is zero). Then, you can accurately measure whatever is placed on the scale.
Why call calibrations matter
There are several metrics that customer service teams use to measure their efficacy. One is customer satisfaction (CSAT), reported by the customer. This one, taken alone, is one-dimensional — it doesn’t provide any context as to why the customer was or wasn’t satisfied.
Another is first call resolution (FCR), which measures how many customer issues are resolved with just one contact. This one is helpful for identifying areas for growth but requires deeper analysis to decipher whether, and how, the problem was actually solved.
Yet another is average handle time (AHT). AHT is often inversely related to FCR: fully resolving an issue on the first contact tends to take longer, so pressuring agents to shorten calls can push issues into repeat contacts. That tension makes for a delicate dance when it comes to data priorities.
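To make these three metrics concrete, here is a minimal sketch in Python of how each could be computed from a batch of tickets. The ticket fields (`csat`, `resolved_on_first_contact`, `handle_minutes`) and the 4-out-of-5 satisfaction threshold are illustrative assumptions, not the schema of any particular QA platform.

```python
# Hypothetical ticket records; field names are illustrative only.
tickets = [
    {"csat": 5, "resolved_on_first_contact": True,  "handle_minutes": 4.0},
    {"csat": 2, "resolved_on_first_contact": False, "handle_minutes": 11.5},
    {"csat": 4, "resolved_on_first_contact": True,  "handle_minutes": 6.0},
    {"csat": 3, "resolved_on_first_contact": True,  "handle_minutes": 4.5},
]

def csat_percent(tickets, satisfied_threshold=4):
    """Share of customers rating the interaction at or above the threshold."""
    satisfied = sum(1 for t in tickets if t["csat"] >= satisfied_threshold)
    return 100 * satisfied / len(tickets)

def fcr_percent(tickets):
    """Share of issues resolved with a single contact."""
    resolved = sum(1 for t in tickets if t["resolved_on_first_contact"])
    return 100 * resolved / len(tickets)

def aht_minutes(tickets):
    """Average handle time across all tickets."""
    return sum(t["handle_minutes"] for t in tickets) / len(tickets)

print(f"CSAT: {csat_percent(tickets):.0f}%")   # 50%
print(f"FCR:  {fcr_percent(tickets):.0f}%")    # 75%
print(f"AHT:  {aht_minutes(tickets):.1f} min") # 6.5 min
```

Note how the sample data illustrates the point above: the one low-CSAT, unresolved ticket is also the longest call, which is exactly the kind of context a raw average hides.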
At this point, you can probably see that CX data needs to be taken in context in order to determine problem causation and possible solutions. This can happen when reviewers sit down with agents and go through QA scorecards one by one, evaluating what went well, what didn’t, and how to improve for the future.
Common issues with call calibrations
Customer service agents have several ways they can improve. They can look at their own scorecards, if those are kept on an easily accessible QA platform. They can work with a coach. They can review standard operating procedures and knowledge bases. Or they can take part in call calibrations.
However, with their potentially subjective nature, call calibrations can have issues if not carried out thoughtfully.
It’s easy to calibrate a scale when you can clear the whole surface off and see that the scale is holding zero weight. It’s a lot harder when someone puts their finger on the edge of the scale, weighing down the edge and subverting the calibration process.
Unfortunately, call calibrations can have similar issues if company politics or communication issues affect them. If newer agents perceive that more experienced agents don’t have to take any heat, resentment could develop. If negative scores are the only ones reviewed, the calibration can become demoralizing. If reviewers appear to play favorites, the calibration won’t accurately portray the team’s true strengths and weaknesses.
Tips for great call calibrations
Reviewers don’t need to tiptoe around team members’ emotions, but they do need to be fair and honest. You can create that environment for your own team by setting up a few simple parameters that will make all the difference in your call calibrations.
1. Standardize your call calibration model
Your goal is to remove subjectivity from call calibrations and keep graders on the same page. Keep in mind that call calibrations ensure your QA team is aligned on how to interpret the standards of your organization.
When formulating QA scores, each grader should review the same support ticket to get started with call calibrations. The ticket could be related to a phone call, email conversation, or live chat, although it’s essential to provide graders with the entire transcript regardless of the support channel. Graders can review the ticket independently and then join a calibration session for a team-wide discussion.
When choosing a ticket for calibration, QA data management platform MaestroQA offers multiple approaches to consider:
1. Target tickets with low QA scores or low Grader QA scores.
2. Find tickets tied to a problematic topic.
3. Find tickets related to a process that has recently changed.
2. Lead with data
If you’ve ever been told, “Your work looks terrible!” you know the kind of emotional response that criticism can provoke. Bringing data to the table allows you to avoid skewed criticism and instead focus on standard, impersonal metrics.
For example, if a scorecard shows low CSAT, review the rest of the scorecard to determine if there was something the agent could have objectively done better, per your policies and procedures. If nothing pops up, move on to another data point, rather than speculating about the customer’s experience or projecting other experiences onto the call.
3. Set consistent—and wise—expectations
Call calibrations are always going to go better when your agents already understand expectations. Your expectations need to be both consistent and wise.
For example, if you expect every agent to adhere to every company policy 100 percent of the time, you’ll likely end up with unhappy customers. Those are the customers who call in with unique complaints that require flexibility from your agents. If your agents don’t have permission to use their best judgment — or easily escalate requests for exceptions — they can’t provide the kind of CX that keeps your customers coming back.
I just messaged a mammoth retailer after a multipack of an expensive food item arrived partially damaged. I needed the food right away, so I didn’t want to return the entire multipack. The agent had to ask his manager if he could provide me with a partial refund, rather than making me return everything. He was able to quickly provide me with an exception. If that hadn’t happened, the interaction may have gotten a high score in adherence to policies, but a low CSAT score.
That kind of scenario is a good one to discuss during a call calibration. With company expectations in mind, a conversation about this could lead to new policies. It could influence other branches of the company to improve product shipment procedures. It could lead to better training for handling certain complaints. When expectations are clear up front, call calibrations can be more productive for everyone.
Image credit: Yan Krukov; Pexels