This page was created in part for the Customer Insights to Action (CIA) Bulletin Newsletter; register here to subscribe.
Do you remember when your school teachers broke the class into groups for a project? If you’re like me, you inevitably ended up doing the lion’s share of the assignment (to make sure it was done right!), only to have the laziest group member step up and take the credit. Or everyone did different amounts of work, yet everyone got the same grade. Sadly, this sort of thing still happens to adults when our performance isn’t evaluated correctly or fairly.
The Quality Program in contact centers is fraught with the possibility of unfairness. One highly visible hazard is common with post-call IVR surveys: scores can easily be linked to the wrong agent. Callers are instructed to evaluate the last agent they spoke with, but they may decide to rate the first agent instead, or the one from yesterday or last week. In these cases, it’s not the agent taking credit for themselves like my lazy group member; instead, it’s the customer assigning an evaluation that was intended for someone else.
Almost everyone has had a customer service experience where a simple call turned into anything but simple. You speak with an agent who tries to answer your question but realizes, after you have told your story, that you must be transferred to someone else. After telling your story yet again, this person realizes that you really need to talk to someone in a different department to get your issue resolved. In total, you’ve spoken with three people at the company before you take the post-call IVR survey. The person who actually helped you was very nice and extremely helpful, but the second person was rude during your brief interaction. You decide to evaluate the second contact center agent because it is evident that this person needs some serious coaching. In one of the comment opportunities, you clearly state that your ratings are based on the second person you spoke with, so the scores do not end up counting against the third, helpful representative. The scores are intended for the second agent, not the last or even the first, but they are linked to the last agent anyway. Now the third agent will be penalized for something he or she had no control over.
Now what do you do – throw out the evaluation or leave it attached to the wrong agent? You make it a point to value your customers’ opinions; otherwise, you wouldn’t be surveying them in the first place, right? How will you defend a low customer satisfaction score to the agent who didn’t earn it? By throwing out the evaluation entirely, you are skewing the overall customer opinion of your contact center. Keeping the survey attached to the wrong agent sends the message that the employee is not valued. You need employees to trust the survey process, but they will lose faith if they see it as blatantly flawed, handing out scores that don’t belong to them.
I’ve been preaching this for a while now, about twenty years – think beyond just surveys to External Quality Monitoring (EQM). An EQM system avoids this hazard through its Survey Calibration process. Survey Calibration ensures that accurate customer satisfaction ratings are provided for each of your contact center agents. Each survey is reviewed to confirm that it is linked to the correct agent. If the correct agent cannot be identified, the score stays with the center but is not left with an agent who did not earn it. You will no longer have to worry about defending the scores when performance ratings are provided. Being able to stand behind clean data allows your company to maintain performance accountability with a sense of fairness among contact center agents and supervisors. If only there were a survey calibration process for our school projects!