Quality Scores: Are They Really Necessary?

Pretty much every contact center on the planet has some sort of quality assurance process, or so I learned when I contracted with an outsourcer a half dozen years ago. Since going to work for an outsourcer, I've witnessed dozens of quality processes and observed many similarities and differences.

At the most basic level, a quality process includes a form listing a set of criteria that must be met on a customer interaction, and this may vary by contact type (e.g. phone, email, chat, social). I've seen forms range in length from thirty or more items to just a few. Some forms even include penalties and auto-fails for major mistakes, and bonus points for going above and beyond with the customer. The latter sounds good in theory but is rarely used.

One thing that's a constant in most quality processes is a score, typically out of 100%. Like other KPIs, this is a metric that's tracked and improved upon from the organizational level all the way down to each individual agent. The idea is that through consistent monitoring and coaching, agents will improve these scores and the quality of their customer service over time.

Batting around an alternative

In recent months, I’ve had conversations with various managers at FCR as to whether or not agents really need to see their quality scores. Sure, they need to know where they’re excelling and where they need to improve, but what good is the grade in and of itself? Here are some of the arguments against showing scores that I’ve heard:

  • When presented with a quality review, agents inevitably look straight at the score and regardless of whether it’s good or bad, don’t hear anything else in the conversation. This is where I typically like to talk about emotional intelligence and the way a negative score can hijack an agent’s amygdala and put them in fight or flight mode. This is not a mindset conducive to learning.
  • Similarly, coaching sessions can quickly turn into haggling over the degree to which someone did or didn’t do a behavior all in the name of negotiating a higher score. The focus should be on the behavior and improvement, not the score.
  • The score simply doesn't matter. It's about whether or not the agent achieved the desired behavior. In fact, on many of our quality forms, we've moved away from a rating scale to a simple "yes you did it" or "no you didn't." If the desired behavior isn't up to standard, it's less about points missed and more about "how do I improve my performance to change that no to a yes?" (There's a rough sketch of what such a yes/no review might look like right after this list.)
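
To make that yes/no idea a bit more concrete, here's a minimal sketch of what a binary quality review could look like as data. The `QualityReview` structure and the behavior names are hypothetical illustrations for this post, not any particular team's actual form:

```python
from dataclasses import dataclass, field

# Hypothetical example: a quality review recorded as simple yes/no checks per
# behavior, with coaching notes, and deliberately no overall score.
@dataclass
class QualityReview:
    agent: str
    interaction_id: str
    behaviors: dict = field(default_factory=dict)  # behavior name -> True ("yes") / False ("no")
    notes: str = ""

review = QualityReview(
    agent="A. Sample",
    interaction_id="case-1042",
    behaviors={
        "Made a human connection": True,
        "Followed policy and security protocols": True,
        "Gave the customer the right answer": False,
    },
    notes="Great rapport; missed the follow-up question about billing.",
)

# Coaching focuses on turning each "no" into a "yes", not on a percentage.
areas_to_improve = [b for b, done in review.behaviors.items() if not done]
print(areas_to_improve)  # ['Gave the customer the right answer']
```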

As I think through the idea of hiding quality scores from agents, I’ve posed this question to a number of our other managers and it’s been met with both intrigue and skepticism. The best argument I’ve heard in favor of showing scores to agents thus far is, “My team members are really motivated by their scores.” I get it — change is hard.

What’s still important?

Whether it's people claiming to own only ten possessions or moving into "tiny houses," minimalism is definitely gaining a lot of publicity in our culture. In similar fashion, I'm seeing a movement toward a more minimalist approach to quality assurance. This means sifting through current processes and keeping only what matters most. Here are the four most important aspects of your quality process:

  • You still need a quality form, but it should be shorter – There are three essential pieces to every customer interaction, regardless of channel, that should be on a quality form. In no particular order: agents should make an authentic, human connection with the customer; they should properly follow all policies, procedures, and security protocols; and they should give customers the right answers, sometimes answering questions the customer didn't know to ask. You might add a couple more depending on your business and goals, but those are my universal essentials for your quality form. Also remember that the longer your quality form is, the more time your team spends filling out a form and the less time they have for coaching agents and doing other activities.
  • Tie it to customer satisfaction – I've said it before but it bears repeating: CSAT (or NPS, etc.) should always be part of your quality process. If your agents are doing everything "technically" right but customers aren't happy with the service, something's wrong with the process. Some things are out of the agents' control, but some aren't, and quality should always be tied back to the impact on the customer.
  • Focus on the coaching – Timely, consistent coaching is critical to quality assurance. At FCR, all of our leaders spend a day learning how to Coach with Compassion. Feedback should be delivered to agents, preferably within 24 hours of the interaction. This is what drives continuous improvement.
  • Track results over time – Whether you show scores to your agents or not, it's still wise to use a tool like Scorebuddy, MaestroQA, or good ole Google Forms to track results. Tracking your average out of 100% month over month may be nice for judging whether overall quality is improving, but I recommend getting a bit more granular. Tracking how the team and individuals are performing on the different areas of the quality form will give you a better idea of where coaching, training, and agent empowerment efforts should be focused (see the sketch after this list).
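
If you do pull your reviews into a spreadsheet or one of the tools above, the granular view I'm describing boils down to a simple aggregation: the share of "yes" marks per behavior, rather than one blended average. A rough sketch, assuming reviews shaped like the hypothetical yes/no records sketched earlier:

```python
from collections import defaultdict

# Hypothetical aggregation: per-behavior "yes" rates across many reviews, which
# points coaching and training at specific behaviors instead of one blended score.
def yes_rates(reviews):
    counts = defaultdict(lambda: [0, 0])  # behavior -> [yes_count, total_reviews]
    for review in reviews:
        for behavior, done in review.behaviors.items():
            counts[behavior][0] += int(done)
            counts[behavior][1] += 1
    return {behavior: yes / total for behavior, (yes, total) in counts.items()}

# e.g. {'Made a human connection': 0.92,
#       'Followed policy and security protocols': 0.98,
#       'Gave the customer the right answer': 0.71}
# The weakest behavior, not the overall average, tells you where to coach next.
```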

What’s next?

As a result of my conversations, we recently gave our managers the option to decide whether or not they want to display quality scores to agents during coaching sessions. We now have a handful of teams hiding scores and purely discussing the areas where agents either excelled or need improvement.

The arguments in favor of hiding scores are compelling, but it's still too early to make an organization-wide change. As we continue to look to maximize coaching and development and to empower our agents to have more fluid, authentic conversations with customers, I'm not sure seeing their scores helps a whole lot. That being said, I'm going to have to share a part two in a future column after I circle back with our managers to understand the positive or negative impacts of this change.

Jeremy Watkin
Jeremy Watkin is the Director of Customer Support and CX at NumberBarn. He has more than 20 years of experience as a contact center professional leading highly engaged customer service teams. Jeremy is frequently recognized as a thought leader for his writing and speaking on a variety of topics including quality management, outsourcing, customer experience, contact center technology, and more. When not working, he's spending quality time with his wife Alicia and their three boys, running with his dog, or dreaming of native trout rising for a size 16 elk hair caddis.

3 COMMENTS

  1. Great article Jeremy. Quality scoring has long lived past its day of usefulness. I’ve experimented with so many different systems and scoring methodologies but have found that what drives improved team member behavior is just having conversations around what took place, absent a score. I’ve built some terribly complex quality programs in my career (see: https://www.linkedin.com/pulse/hexahedron-quality-matt-beckwith/) but have since moved to a much simpler model. Listen to calls, take notes on what went well and what didn’t and then have a conversation with the team member. Of course, we aggregate those comments to spot trends and guide training but there isn’t a numeric score. Over time, it has helped reduce the instances of bad or ineffective behavior and increased the instances of good and effective behavior.

  2. Thanks for sharing your insights, Jeremy.
    Personally, I've witnessed many quality processes over the years that yielded no results, so I'm a bit skeptical about them.
    As for the point of full disclosure with service agents, I think putting everything on the table is a good policy, including the final score and personal KPIs/metrics.

  3. @Matt, I like the way you talk about the importance of still aggregating the comments to spot trends. Definitely very important.

    @Benny, good point about putting everything on the table. I think we’re moving toward devaluing the score itself but definitely tracking improvement of key behaviors. If that makes sense.

    Thanks both of you for your comments!

    Jeremy
