Will automating the transfer to post-call IVR surveys prevent agents from cheating?

Why is this a problem?

Automated is not the same as fool-proof. Dictionary.com defines automated as "to apply the principles of automation to a mechanical process, industry, office, etc." Nowhere in that definition does it promise a perfect, fool-proof process. In this case, automated means that customers who answer "yes" when prompted by the IVR at the start of the call will be transferred to complete the survey once the agent disconnects. Sounds fool-proof and simple enough, right? Not so much.

Remember, the system is designed to transfer the customer after the agent disconnects from the call. That means the agent has to hang up before the customer for the transfer to happen. Think about the call that just ended: the customer is not pleased that they can't get what they want. They have just spent 10 minutes asking for something five different ways, only to find out that their request simply isn't feasible. The agent can tell from the way the call has gone that the customer is unlikely to give them a top-box score on anything. It is reasonable to suspect that this agent may not allow a customer to complete a survey that they feel pretty confident will only add to a pile of lackluster scores. So the agent decides to stay on the line until the customer hangs up. What? You didn't think the agents would figure this out?

Let’s also not forget about the accidental cheat. This agent doesn’t intentionally prevent customers from completing the survey; they are simply providing stellar customer service.

Agents could unintentionally bias the survey data simply by doing what has been ingrained in them as a fundamental of customer service: never hang up on a customer. Any agent who has been in the customer service industry for any length of time has heard this statement more times than they can remember. How are they now supposed to violate that fundamental rule and hang up on a customer? Since the agent has no idea whether the customer agreed to participate in the post-call survey, they would essentially have to hang up on every customer. Yikes! Some agents will adapt; many will struggle, and at inconsistent rates.

Think About the Customer

If the customer is hung up on, they may call back and ask to be connected to the survey so their feedback can be submitted. But now the survey is attached to a different agent (assuming that second call is even selected for the survey).

What if the customer selects "no," they do not want to participate, and then changes their mind? This one could go either way. The customer may have become irritated during the call and decided afterward that they wanted to participate after all. Now they must call back, and the survey gets attached to the wrong agent (again, only if that second call is selected for the survey). What if they were wowed and changed their mind? If they call back, the survey still gets attached to the wrong agent.

What if the customer forgets? Relying on customer memory is a risky proposition. If the call requires interactive dialogue between the customer and the agent (most do), the customer can easily forget to stay on the line to participate. If they call back, again, wrong agent assignment.

You may say, "Don't assign it to any agent." Case studies show that would be a very bad decision. Review the ebook Improving First Contact Resolution with Agent Accountabilities.

The Solution

The solution here is simple: remember that there is no fool-proof method to prevent agents from cheating or creating bias, and you need to think about what customers do as well. Every method has strengths and weaknesses; success comes from acknowledging them and building them into your design plan. Regardless of whether we use an automated, semi-automated, or manual methodology, we employ a stringent, consistent calibration process on every survey collected to ensure that each survey is attached to the correct agent and that the feedback should be owned by that agent, and you need to do this as well. We design customer experience Voice of the Customer measurement programs to be the best possible research study given the technical makeup of a contact center.

Republished with author's permission from original post.

Jodie Monger
Jodie Monger, Ph.D. is the president of Customer Relationship Metrics (CRM) and a pioneer in business intelligence for the contact center industry. Dr. Jodie's work at CRM focuses on converting unstructured data into structured data for business action. Her research areas include customer experience, speech and operational analytics. Before founding CRM, she was the founding associate director of Purdue University's Center for Customer-Driven Quality.
