Last year, my ISP decided to upgrade its network in the weeks before Christmas. Needless to say, it was an upgrade in name only, and I found myself calling every day for 10 consecutive days just to connect. It’s a good thing my ISP offers a toll-free support line, because each call took 45 minutes to resolve. The only good news was that at the end of each call I was online.
Toward the end of my last call, I expressed my dissatisfaction and indicated my view that the company must have a real problem with customer satisfaction. “On the contrary,” replied the technician, “we actually have a very high satisfaction rate.”
I wondered how the company measured this “rate” and held back from saying, “Says who?” I mean, no one from the ISP ever asked me.
It’s common for businesses with contact centers to measure all sorts of performance metrics. These “key performance indicators” (KPIs) may include first-call resolution (FCR), call length, and the time it takes a customer to reach a live person. Because call center software provides a plethora of such metrics, call center managers often assume they can infer customer satisfaction from agent performance against them.
In my case, 10 consecutive FCRs must have meant I was a super-satisfied customer! Not quite.
Performance metrics are, by definition, trailing indicators of business performance because they indicate what has happened in the business. We can extrapolate into the future to predict performance, but as the investment banking world makes sure to tell us, past performance is no guarantee of future results.
So how can a call center manager reliably predict future performance? By asking customers what they thought of their most recent call. While satisfaction isn’t necessarily predictive of customer loyalty (somewhat surprisingly, I’m still a customer of that dreaded ISP), soliciting the right type of feedback will give you a way to understand why your performance indicators are behaving the way they are and how to know when they might slip.
Egg, the largest internet bank in the United Kingdom, regularly obtains feedback from its customers after they’ve spoken to contact center agents. The integration of its enterprise feedback management (EFM) system with its CRM system generates a high response rate to feedback requests.
Such integration allows for personalization of surveys so that Egg’s customers know why they’ve been asked for feedback. It allows for event-driven invitations, so the feedback is requested in context. And it means that surveys can be shorter because the company doesn’t have to ask questions it already knows the answers to.
In one example, in addition to asking about satisfaction, Egg asked its customers to indicate their level of agreement with a series of emotive statements about their most recent call. These statements included:
- I quickly got to a live person.
- I thought my time was well spent.
- The agent was helpful.
- The agent understood what was important to me.
Because management relied on internal metrics, the expectation was that the first statement, quickly getting to a live person, would correlate most closely with satisfaction.
Had the company acted on that statement, it would have allocated resources accordingly, either by increasing staffing at the center (representing an increased cost) or by incentivizing agents to end calls early (or transfer callers) so they could pick up new calls.
As it turned out, the statement “I thought my time was well spent” correlated most closely with customer satisfaction. Further analysis showed that raising 50 percent of respondents’ scores on that statement would increase satisfaction by 6 to 7 percent.
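The kind of analysis described above, ranking survey statements by how closely agreement with each tracks overall satisfaction, can be sketched in a few lines. This is purely illustrative: the respondent scores below are made up, not Egg’s data, and a real analysis would use far larger samples and more careful statistics.

```python
# Illustrative sketch (hypothetical data): which survey statement
# correlates most closely with overall satisfaction?
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up 1-5 agreement scores from six respondents per statement,
# plus each respondent's overall satisfaction score.
satisfaction = [3, 5, 2, 4, 4, 1]
statements = {
    "I quickly got to a live person":   [4, 4, 3, 5, 3, 2],
    "I thought my time was well spent": [3, 5, 2, 4, 4, 2],
    "The agent was helpful":            [2, 4, 3, 5, 3, 3],
}

# Rank statements by correlation with satisfaction, strongest first.
for text, scores in sorted(statements.items(),
                           key=lambda kv: pearson(kv[1], satisfaction),
                           reverse=True):
    print(f"{pearson(scores, satisfaction):+.2f}  {text}")
```

With these invented scores, “I thought my time was well spent” comes out on top, mirroring the pattern Egg found; the point is the method, not the numbers.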
As a result, Egg’s management allocated resources a bit differently. It created a new role in the contact center: customer voice analysts, who work closely with agents to help them end each call with a satisfied customer.
The program has had a significant side effect, with agent absenteeism down 40 percent since its inception.
Egg now has an attitudinal metric to go along with its performance metrics. And it’s not the satisfaction score itself; it’s the level of agreement with the statement “I thought my time was well spent.” Management can be confident that if the score on that statement slips, customer satisfaction will slip along with it.
By tracking this “key attitudinal indicator” continuously, in relation to specific customer interactions or events, Egg can follow responses to that statement over time. This allows the company to establish a benchmark and to correlate it with its business performance indicators.