
Get Rid of Average Thinking, Make Every Experience Count 

Bill Price | Dec 13, 2015


Customer service and customer experience teams love to use averages to explain performance and to plan capacity. How many of these averages do you use?1 How many others do you use?

  • ABA (abandonment rate) %
  • AHT (average handle time), in seconds
  • ASA (average speed of answer), in seconds
  • ATT (average talk time; AHT = ATT + ACW (after-call work)), in seconds
  • CES (the new Customer Effort Score)
  • Contacts to resolve (number of customer-initiated contacts to get something fixed), usually with two decimal places
  • Containment rate (for IVR calls that can provide automation) %
  • Conversion rate %
  • CPH (contacts handled per hour), usually with one decimal place
  • C-Sat % (sometimes “top 2 boxes” out of 5)
  • FCR (first contact resolution, including conferencing and transfers) %
  • FPR (first point resolution with no conferencing or transfers) %
  • NPS (as high as 80, and can be <0)
  • Sales per hour, also usually with one decimal place
  • Snowballs (repeat contacts not handled earlier, like a snowball rolling downhill) %
  • Take up rate (customers’ attempt to use IVR instead of going straight to an agent) %

But we miss a lot by boiling performance down to averages. In our recent book Your Customer Rules! Delivering the Me2B Experiences That Today’s Customers Demand (Wiley 2015)2 we profiled a short list of “Me2B leaders,” companies like Apple, Nordstrom, and Yamato Transport that toss out “B2C” or “B2B” orientations placing the business first and, instead, always put the customer “Me” in charge. They also follow what I like to call “segments of 1,” a riff on Don Peppers’ and Martha Rogers’ “One to One Marketing.” Here’s how we put it in the final Chapter 10, “The Foundations of Me2B Success”:

Successful Me2B leaders avoid success metrics that present customer experience averages. It sends a negative message if, for example, a company achieves certain service goals by averaging the poor and the good service periods; for example, contact centers might rejoice about hitting 80/20 (80 percent of calls answered in the first 20 seconds), but what if 5 percent of the callers have to wait ten minutes or longer before they can speak to an agent? This sends the message that some customers and some experiences don’t matter. A true customer culture only develops when a company starts to show that each and every experience counts.
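The 80/20 trap in that passage is easy to reproduce. Here is a minimal sketch, using an invented answer-time sample, of a queue that hits the 80/20 service level while 5 percent of callers still wait ten minutes or more:

```python
import statistics

# Invented answer-time sample (seconds) for 100 calls: most are answered
# quickly, but a small tail of callers waits ten minutes or longer.
answer_times = [5] * 80 + [45] * 15 + [630] * 5

within_20s = sum(t <= 20 for t in answer_times) / len(answer_times)
long_waits = sum(t >= 600 for t in answer_times) / len(answer_times)

print(f"Service level (answered within 20s): {within_20s:.0%}")   # 80%
print(f"Callers waiting 10+ minutes:         {long_waits:.0%}")   # 5%
print(f"Average speed of answer (ASA):       {statistics.mean(answer_times):.0f}s")
```

The headline 80/20 number, and even a modest-looking ASA, say nothing about the five callers stuck for ten-plus minutes; only the full distribution reveals them.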

Let’s start with that last clause: “… show that each and every experience counts.” When we boil anything down to averages, we miss rich data that can help interpret everything from customer retention to likely complaints to State Attorneys General. My colleagues at Antuit (a “Big Data” provider where I spend half of my time as the Partner designing and applying a portfolio of “Customer Experience analytics”) say that you have to include the entire range of performance, feedback, and other data; clean them; perhaps assign them to clusters or buckets; and only then determine what is actually driving behavior and business success.
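A hedged sketch of what that looks like in practice, using an invented batch of raw handle-time records (the missing entry and the negative value stand in for the kind of noise that needs cleaning before any summary is trustworthy):

```python
import statistics

# Invented raw handle times (seconds), including junk to clean out.
raw = [182, 240, 199, None, 3605, 210, -5, 2400, 195, 220, 188, 2900]
clean = [x for x in raw if isinstance(x, (int, float)) and x > 0]

# Report the range, not just the mean: the mean alone hides the tail.
print(f"mean    = {statistics.mean(clean):.0f}s")
print(f"median  = {statistics.median(clean):.0f}s")
print(f"min/max = {min(clean)}s / {max(clean)}s")
```

Here the mean (over 1,000 seconds) and the median (215 seconds) tell very different stories; the handful of multi-thousand-second calls is what actually deserves attention.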

Now let’s spend a little time with FCR or FPR (personally I prefer first point resolution, equipping the first agent to get the contact to handle it completely). Many companies struggle to figure out how to calculate FCR or FPR, sometimes resorting to “did the customer not contact us again within 72 hours?” and sometimes asking the customer (see the Amazon story later in this column). FCR is usually reported in the low 70s, meaning that ~72% of the time, on average, the issue is resolved the first time. But this also means that, on average, there is a 28% “failure rate” that produces a second, third, or more contacts to resolve, hence the snowball effect. What if you discovered that your five best customers had FCR of 40%, 45%, 52%, 55%, and 56%? Maybe they buy the new and not-entirely-tested products or software, maybe they know more than your reps and want the right solution; in any case, wouldn’t you want to park FCR “on average” and look very closely at “the long tail” (to riff on Chris Anderson’s popular book)3 to determine what to do to increase these best customers’ resolution rates?
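Computing FCR per customer instead of in aggregate is what makes that long tail visible. A minimal sketch with an invented contact log (the customer names and resolution flags are hypothetical):

```python
from collections import defaultdict

# Invented contact log: (customer_id, resolved_on_first_contact).
contacts = [
    ("acme", True), ("acme", False), ("acme", False), ("acme", True), ("acme", False),
    ("zen", True), ("zen", True), ("zen", True), ("zen", True),
    ("bolt", False), ("bolt", True), ("bolt", False),
]

totals = defaultdict(lambda: [0, 0])  # customer -> [resolved, total]
for customer, resolved in contacts:
    totals[customer][0] += resolved
    totals[customer][1] += 1

overall_fcr = sum(r for r, _ in totals.values()) / sum(t for _, t in totals.values())
per_customer = {c: r / t for c, (r, t) in totals.items()}

print(f"Overall FCR: {overall_fcr:.0%}")
# Surface the long tail: customers well below the aggregate number.
for customer, fcr in sorted(per_customer.items(), key=lambda kv: kv[1]):
    print(f"  {customer}: {fcr:.0%}")
```

The aggregate FCR (58% here) looks like one number to manage; the per-customer view shows exactly who is stuck at 33% and 40% and worth investigating first.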



Take, for example, average number of contacts to resolve (defined again as the number of customer-initiated contacts to get something fixed). While you might start to celebrate that the average has come down from 1.52 to 1.35, how would you react if the maximum number of contacts to resolve has risen from 8 to 9, and 6% of your customers have to make 9 contacts to get something fixed? If you capture c-sat or CES or NPS during and after these many attempts you might see critically low results, but my guess is that these customers won’t even bother to respond and, instead, will simply find another provider.
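The same pattern in numbers, assuming an invented distribution of contacts-to-resolve counts: the average looks respectable while the worst-off customers need many contacts.

```python
import statistics

# Invented contacts-to-resolve counts for 100 issues.
contacts_to_resolve = [1] * 78 + [2] * 12 + [3] * 4 + [9] * 6

mean = statistics.mean(contacts_to_resolve)
worst = max(contacts_to_resolve)
tail_share = contacts_to_resolve.count(worst) / len(contacts_to_resolve)

print(f"Average contacts to resolve: {mean:.2f}")        # 1.68
print(f"Maximum contacts to resolve: {worst}")           # 9
print(f"Share of issues at the max:  {tail_share:.0%}")  # 6%
```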

Finally, let’s look at the nexus of AHT and CPH, FCR, and c-sat, whether conducted post-interaction or via a panel, automated or using live agents to collect the data. A recent article in my local Seattle paper about Amazon (where I spent some quality years as its first WW VP of Customer Service) points out the fallacy of traditional speed metrics and notes Amazon’s sole c-sat-like metric, NRR:

“What most call centers are judged by is call efficiency,” said Forrester Research analyst Kate Leggett. “It drives the behavior of trying to get the customer off the phone quickly.” Amazon, though, doesn’t measure its reps by the speed with which they dispatch calls. The metric it uses is “negative response rate,” or NRR. At the end of each call, reps ask if they’ve resolved a problem. Each “no” counts as a negative response.4

Amazon pores over each and every negative response rather than resting on averages.

This is the crux of Me2B thinking: every experience counts, and averages get in the way. So get rid of “average” thinking and maybe you, too, can achieve NPS above 70 with huge re-purchase rates and high share of wallet!


1. Some of these are defined in Appendix B, Glossary, in our first book The Best Service Is No Service: How to Liberate Your Customers from Customer Service, Keep Them Happy, and Control Costs (Wiley 2008).

2. Here are the 7 Customer Needs that Lead to a Winning “Me2B” Culture:

  1. “You know me, you remember me”
  2. “You give me choices”
  3. “You make it easy for me”
  4. “You value me”
  5. “You trust me”
  6. “You surprise me with stuff that I can’t imagine”
  7. “You help me better, you help me do more”

3. The Long Tail: Why the Future of Business is Selling Less of More, Chris Anderson; Hachette Press, revised 2008

4. “Customers’ confusion, rage all in a day’s work,” Jay Greene, Seattle Times, November 30, 2015, page A1




5 Responses to Get Rid of Average Thinking, Make Every Experience Count

  1. Michael Lowenstein December 13, 2015 at 10:41 pm

    Issue/complaint resolution level (positive, neutral, or negative), an important metric that I’ve rarely seen applied, has a profound impact on both customer emotion and customer memory. As such, it contributes directly to downstream customer behavior. Incidentally, would suggest that the first listed Customer Need (“You know me….”) be expanded to include unexpressed complaints, representing a significantly higher percentage of the existing volume of customer complaints than the ones actually expressed and addressed.

  2. Chip Bell December 14, 2015 at 9:22 am

    Great thinking and a solid plea for excellence. A CEO of a large hospital interrupted her marketing director’s glowing presentation on their patient sat averages, among them their ABA (call abandonment rate). The CEO’s position was similar to yours. “While we know that perfection may not always be possible, we must focus always on the pursuit of excellence. Averages are for mediocre organizations. Average thinking leads one to ask questions like ‘What is our acceptable number of dropped babies per year?’ If we tolerate a small percentage of abandoned calls, would it matter if one of those callers was our largest donor? Digging deeper tells a vital story otherwise hidden by speaking of averages.”

  3. Gautam Mahajan December 14, 2015 at 10:26 pm

    You are speaking my language. It is not the metrics that matter but whether customers are pleased. Keep up the good work!

  4. Graham Hill December 15, 2015 at 6:24 am

    Hi Bill

    An interesting take on the plethora of information hidden in ‘averages’.

    Rather than going the whole hog and trying to be all things to all customers, wouldn’t it make more sense initially to expand the range of statistics, e.g. to include minimum, maximum, median, standard deviation, etc., to capture more of the variation around the average, and to increase the granularity of the statistics, e.g. by reporting them for different multi-factor segments? Of course, all customers benefit from the focus on improvement, even if it is initially targeted at a small group of them.

    The challenge of the vast majority of businesses lies not so much in trying to be excellent for everyone but in trying to be excellent to those customers who have the greatest natural fit with them. As the lifetime value data shows, there is often much more value to be gained by focusing on improving the outcomes for specific groups of customers than for everyone.

    The devil is in the detail.

    Graham Hill
    @grahamhill

  5. Andrew Rudin December 16, 2015 at 2:36 pm

    I agree with your colleagues at Antuit. Unless the entire range of results is considered, averages of anything can create erroneous conclusions. The often-cited, poignant comparison is the statistician who drowns crossing a river because he knew that, on average, the depth was three feet. As Graham pointed out, other statistics are important to track. A call center manager might get lulled into thinking that her service performance is acceptable because the average hits a specific target. But the range of results could reveal serious problems when the outcomes are volatile.

    These issues were covered in a 2002 Harvard Business Review article titled The Flaw of Averages, by Sam Savage, who wrote a book with the same title. I cited this in an article, Don’t Mangle What You Measure.
