When it comes to developing a scale by which you measure your company’s phone calls, there are a number of ways to approach it. I always encourage clients to start with a few foundational questions:
- “What is our goal, and what do we want the outcome to be?”
- “What are we trying to achieve?”
- “Who are we primarily serving with the scale/scorecard/form/checklist?”
The process of deciding what behaviors you will listen for, what you expect from your Customer Service Representatives (CSRs), and how high you set the standard can be a complex web. When you involve many voices from within the organization who have their own agendas and ideas, the task can slide into conflict and frustration very quickly. By defining up front what you want to accomplish, you can always take the conflict about a particular element back to the question “How is this going to help us achieve the goal we established?”
Let me summarize some general observations about QA scorecards and programs I’ve seen, each representing a different organizational goal.
- Reaching for the stars. Some companies set a high standard in an effort to be the best of the best. They know that their CSRs are human and will never be perfect, but they set the bar at a level which will require conscious effort to achieve. CSRs are expected to continuously improve their service delivery. In these cases, the behaviors measured by the QA form can be exhaustive and ideal.
- Maintaining the standard. Some organizations don’t care about being the best of the best; they just want to maintain what they’ve deemed an acceptable standard. The scale rewards the vast majority of CSRs with acceptable scores while identifying the relatively few CSRs who could hurt the organization and likely need to find another job.
- Motivating the troops. CSR motivation and encouragement is the focus of some QA programs. Scorecards designed in these situations tend to look for and reward any positive behaviors the CSRs demonstrate on a consistent basis while minimizing expectations or negative feedback. In these cases, the elements of the QA form gravitate towards easily identifiable and reward-able behaviors.
- Customer centric. Some call centers really focus on designing their QA evaluation form around what their customers want and expect. They use research data to identify the behaviors which drive their customers’ satisfaction. The quality assessment checklist is designed to create a snapshot of how the individual or team is performing in the customers’ minds. These types of evaluations can vary depending on the market and customer base.
- Going through the motions. I’ve encountered some companies who really don’t care what they are measuring or how they are measuring it. They just want to have a program in place so that they can assure others (senior management, shareholders, customers, etc.) that they are doing something about service quality. In this case, the scorecard doesn’t really matter.
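To make the customer-centric approach concrete, here is a minimal sketch of a weighted, behavior-based scorecard. The behaviors, weights, and ratings below are entirely hypothetical illustrations, not drawn from any real program; in practice the weights would come from your own customer research.

```python
# Hypothetical weighted QA scorecard. Behaviors and weights are
# illustrative only; real weights should come from research data
# on what actually drives customer satisfaction.
WEIGHTS = {
    "greeting": 0.10,
    "active_listening": 0.30,
    "accurate_resolution": 0.40,
    "professional_close": 0.20,
}

def score_call(ratings: dict) -> float:
    """Combine per-behavior ratings (0-100) into one weighted score.

    Behaviors missing from the evaluation count as zero.
    """
    return sum(weight * ratings.get(behavior, 0.0)
               for behavior, weight in WEIGHTS.items())

# Example evaluation of a single call (hypothetical ratings).
call = {
    "greeting": 90,
    "active_listening": 80,
    "accurate_resolution": 75,
    "professional_close": 95,
}
print(round(score_call(call), 1))  # 82.0
```

The design choice worth noting is that the weights, not the list of behaviors, encode what the customer values most; two companies could listen for identical behaviors yet produce very different scorecards.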
Some quality programs and scorecards struggle because they haven’t clearly defined what they want and what they are trying to achieve. Different individuals within the process have competing goals and motivations. Based on my experience, I recommend some approaches more than others and have my own beliefs about which are best. Nevertheless, I’ve come to accept that most of the differing approaches can be perfectly appropriate for certain businesses in particular situations (the exception being “Going through the motions,” which I’d never recommend, as it tends to waste time, energy, and productivity). The key is to be honest about your intentions and clear in your approach. It makes the rest of the process easier on everyone.