Customer Effort Score: How Hard is it to be Your Customer?

Are you familiar with the Customer Effort Score (CES)? It is rapidly gaining converts as a way to measure the transactions that make up your customer experience.

The Net Promoter Score, or NPS, measures your overall customer experience. But it doesn’t show where to focus to improve your results. Imagine telling your store manager, B2B sales team, or director of your call center only that “Your NPS scores are low. Fix them!” Where do they begin?

Transactional measurements show which segments of your experience impact customer loyalty. Some companies have tried to use NPS to measure transactions, but it was never designed for this. Asking “Would you recommend your call center rep?” doesn’t work, as most customers have no desire to call your call center in the first place. Similarly, “Would you recommend [Company]’s website?” causes confusion – are your customers recommending the company behind the website, the design, the functionality, or all three? This is where the Customer Effort Score shines.

When customers have to expend more effort than they expect, they leave. High effort equals low customer loyalty. The CES helps you monitor this.

The CES was not around when I managed the customer experience for a Health Savings Account (HSA) provider, but we did know that the leading driver of our customer experience was the difficulty of logging into our website. When people had trouble logging in, they deposited less money and renewed less often. The CES would have been a leading indicator of financial success.

You can find more details on the Customer Effort Score here. It is typically just one question: “How much effort did you personally have to put forth to handle your request?” For example, “How much effort did you personally have to put forth to log in to your HSA website?” While the article calls for a 5-point scale, you could certainly use the same scale as is used for the rest of your survey.
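To make the arithmetic concrete, here is a minimal sketch (in Python, with made-up responses) of how you might tabulate that single question, assuming a 5-point scale where 1 means very low effort and 5 means very high effort:

```python
from statistics import mean

# Hypothetical responses to the single CES question, e.g.
# "How much effort did you personally have to put forth to log in
# to your HSA website?" -- on an assumed 5-point scale where
# 1 = very low effort and 5 = very high effort.
login_effort_responses = [4, 5, 3, 4, 5, 2, 4]

# The simplest summary: average reported effort.
ces = mean(login_effort_responses)

# A useful companion figure: the share of customers reporting high effort (4 or 5).
high_effort_share = sum(r >= 4 for r in login_effort_responses) / len(login_effort_responses)

print(f"Average effort (CES): {ces:.2f}")
print(f"Share reporting high effort: {high_effort_share:.0%}")
```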

At SMS Research Advisors we have found that a second question, “How did this effort compare to your expectations?”, provides much-needed context. Expectations frame the effort. If your customer expects a transaction to take little effort and it takes a lot, you are at risk, as with our HSA website. Customers do not expect to spend much energy logging in, so that effort reduces loyalty. Alternatively, if customers expect high effort, then meeting that reality has far less impact on loyalty. At least until a competitor makes it easy for them…
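As a rough illustration of how the expectation question adds context – a sketch with hypothetical transaction names and data, not SMS Research Advisors’ actual scoring method – you can look at the gap between reported and expected effort for each transaction:

```python
from statistics import mean

# Hypothetical paired answers per transaction: (reported effort, expected effort),
# both on the same assumed 5-point scale (1 = very low, 5 = very high).
responses = {
    "hsa_website_login":  [(5, 1), (4, 2), (5, 1), (4, 1)],
    "support_phone_call": [(4, 4), (3, 4), (4, 3), (5, 4)],
}

# Average gap between actual and expected effort. A large positive gap
# (effort well above expectations) flags an at-risk transaction, like the
# HSA login example above; a gap near zero means effort roughly matched
# what customers expected.
for transaction, pairs in responses.items():
    gap = mean(actual - expected for actual, expected in pairs)
    print(f"{transaction}: average effort gap = {gap:+.2f}")
```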

I really enjoy my Customer Experience Professionals Association (CXPA) membership. Among other benefits, we have regular calls where members of an industry segment discuss their challenges as customer experience leaders and ask for advice from their peers. I attended a financial services call last month where we discussed the Customer Effort Score. Members did not give permission for me to share their names, so I cannot disclose their companies. But these were very large national firms whose names you would instantly recognize.

Multiple attendees shared their positive experiences using the Customer Effort Score as a complement to NPS. One even went so far as to say that the CES was a better predictor of loyalty than NPS. All who used the CES agreed that it helped them see where to focus improvements better than their previous methods did.

At SMS Research Advisors, we have found the CES is also a strong input to customer journey mapping. Journey mapping helps you understand how your customer navigates your customer experience. Effort plays a large role, so it is important to include this in your map. You can see an example of our maps incorporating the CES in this white paper.
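To illustrate the idea at a very high level – a minimal sketch with hypothetical journey stages, not HoC’s or SMS Research Advisors’ actual mapping method – you can aggregate CES responses by journey stage and rank the stages to see where effort concentrates:

```python
from statistics import mean

# Hypothetical CES responses grouped by journey stage (assumed 5-point scale,
# higher = more effort). In practice these would come from transactional surveys.
effort_by_stage = {
    "research plans":    [2, 3, 2, 2],
    "open account":      [3, 3, 4, 2],
    "log in to website": [5, 4, 5, 4],
    "make a deposit":    [3, 2, 3, 3],
}

# Rank stages from most to least effortful to see where the journey hurts.
ranked = sorted(effort_by_stage.items(), key=lambda kv: mean(kv[1]), reverse=True)

for stage, scores in ranked:
    print(f"{stage:<20} average effort = {mean(scores):.2f}")
```

The ranked output gives you the high-level scan: the stages at the top of the list are the ones worth mapping and studying in more depth.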

Your customers are busy. You are very fortunate that they spend the time to be your customers. Finding ways to minimize that effort is your most efficient way to make sure they remain loyal. Consider adding the Customer Effort Score to your transactional surveys. Your customers will thank you for your consideration.

———

Are you using the Customer Effort Score today? Share your experience in the comments, or send me an email at [email protected].

Thanks!

Jim

Republished with author's permission from original post.

Jim Tincher
Jim sees the world in a special way: through the eyes of customers. This lifelong passion for CX, and a thirst for knowledge, led him to found his customer experience consulting firm, Heart of the Customer (HoC). HoC sets the bar for best practices and is emulated throughout the industry. He is the author of Do B2B Better and co-author of How Hard Is It to Be Your Customer?, and he also writes Heart of the Customer’s popular CX blog.

9 COMMENTS

  1. As an overall response to your blog, I’m not a strong supporter of CES, for the reasons I’ll enumerate; however, that said, your add-on expectations question provides a critical measure for assessing the level of importance attached to the level of effort. You and I also agree on challenges associated with endeavoring to use NPS for anything more granular than aggregated performance interpretation: http://www.customerthink.com/article/customer_advocacy_behavior_personal_brand_connection

    CES, for those who may be unfamiliar with the term, was originally introduced in mid-2009 by the Customer Contact Council (CCC) of the Corporate Executive Board, in a presentation titled “Shifting The Loyalty Curve: Mitigating Disloyalty by Reducing Effort”. A client asked me to review it at the time (when I was a Senior Vice President and Senior Consultant in Stakeholder Relationship Management at Harris Interactive); and among my three pages of comments were:

    “There is no holistic view of customer experience in CCC’s conclusions represented in the CES or effort reduction/mitigation focus. With specific respect to the multiple CES methodological challenges, we (Harris senior methodologists and I) feel that a customer effort score is too one dimensional to capture the overall customer experience or, more narrowly, the customer service experience. Again, customer experience means looking at the overall perception of value through use or contact. It involves the entire system. CCC, for instance, is using callback tracking as a ‘standard proxy for customer-exerted effort’; and very much like NPS, building their case on a single question (“How much effort did you personally have to put forth to handle your request?”, on p. 71 of the presentation), and then taking it to the next level by having a CES Starter Kit (p. 73). Our approach is to validate the impact of customer service within the overall experience and, as well, more around actual behavior than anticipated behavior.”

    If we’ve learned anything from the Kano Model since the 1980s and early 1990s, it’s a recognition that dissatisfiers can hurt loyalty behavior and enhancers can help drive more positive downstream customer action. Those receiving customer service will not be particularly energized by having their problem solved or questions answered, because these are table stakes and basic expectations. Positive service differentiators, though, can have a beneficial impact on customer experience and brand perception, informal peer-to-peer communication (offline and online word-of-mouth), share of wallet, etc.

  2. I think CES could work as one of several metrics in a kind of balanced scorecard. Customer effort (too much of it) is a dissatisfier that needs to be mitigated.

    So for companies that can’t get the basics right, CES could help. But it won’t drive real loyalty, because removing dissatisfiers is not the same as creating delight.

    After taking a close look at CES, I wrote a rebuttal in this article: KEEP Trying to Delight Your Customers, While Also Meeting Basic Service Expectations.

    One key point: the CEB seems confused about the difference between delighting customers and placating them. Here’s an excerpt from my article:

    The CEB study found that 89 of 100 customer service heads wanted to exceed customer expectations, with extras like “offering a refund, a free product, or a free service such as expedited shipping.” Yet 84% of customers didn’t feel their expectations were exceeded. As a result: customers were “only marginally more loyal than simply meeting their needs.”

    In my view, these are examples of placating customers, not delighting them. Consumers get offered extras when the company has screwed up. Personally, I don’t want to make customer service calls, but if I have to, I want my problem solved pronto. Any extras aren’t really “exceeding my expectations,” they are making up for the fact that my expectations weren’t met.

    If you don’t exceed expectations (as customers perceive it), you’re not actually delighting customers, are you? And therefore, it’s hardly surprising that there’s little loyalty impact.

    I think the CEB suffered from NPS envy and tried to create another “one” metric. CES isn’t it.

  3. As Tom Hanks famously said in “A League of Their Own”, “…if it was easy, everyone would do it.” We’re focused on what drives behavior, and getting the right measure has been a Holy Grail quest. Thus, we see the endless search for a simple metric, a single question which provides all the analytical detail and actionability anyone could want.

    So, does placation through meeting service expectations create delight and added value? Simple answer: Nope. As you point out “If you don’t exceed expectations (as customers perceive it), you’re not actually delighting customers, are you? And therefore, it’s hardly surprising that there’s little loyalty impact.” There’s only minimal value, and certainly no value-add, in simply reducing effort. To reinforce your conclusion, expectations are about levels of touchpoint and experience importance. Just doing the basics, and not exceeding expectations (overpromise and overdelivery, as I wrote in one of my blogs from a few years back), won’t leverage loyalty behavior.

  4. CES is a good research technique and probably a good start at understanding customer behavior. The real magic of customer experience comes in design. Yes, make it easy for the customer, but also make it inspirational. Like most forms of design, a true customer experience model responds well to human behavior and encourages interaction. We all crave a story we can identify with and a product we can interact with intuitively. If you design an inspirational story around the product and an experience that is intuitive for the customer, you will be well on your way.

  5. Based on a similar LinkedIn conversation, it appears that I need to be clearer! While there may be some in the “One question to rule them all” camp, I’m not there.

    I definitely agree, Bob, that the CES works best as part of a larger approach. And, as you both argue (in my view, correctly), it only covers half of the issue.

    Similar to the Satisfaction-NPS-Engagement continuum, CES is ideal for low-engagement interactions – call centers, etc. It is not appropriate for measuring every interaction. However, I would always use it the first time I investigate a situation, in order to understand the context. That’s why we have had really good luck using it in experience mapping (see http://www.heartofthecustomer.com/customer-journey-map-white-paper/).

    It’s not perfect – but no measure is.

  6. Agreed – Design is critical to creating a great customer experience. And really hard to do.

    The Customer Effort Score often comes in when an existing experience needs to be redesigned. It helps you understand where the existing experience is going wrong – so you can unleash your designers on the problem!

  7. …since when are call centers ‘low engagement’ with regard to potential impact on future customer action? Repeated customer service studies by RightNow Technologies (now part of Oracle) and Harris Interactive – http://www.customerthink.com/article/negative_word_of_mouth_customer_alienation_and_sabotage – have demonstrated the strong capability of the service experience to influence downstream customer behavior. I would submit, and argue, that the Customer Effort Score, and the concept underpinning it, contributes little to help understand customer expectations around service, and would be insight-challenged and actionability-challenged when evaluating any transactional interaction.

  8. Sorry, I’m using shorthand. Call centers are typically low on the emotional engagement scale. Customers rarely wish to call the call center, so the focus is on solving the problem with as little pain as possible. This is separate from a sales-oriented call center, such as a catalog operation, where the potential for emotional engagement is much higher.

    Where the CES, simplicity, and similar measures contribute to the understanding is in identifying the greatest opportunity. Areas with high effort and a low expectation of effort are the areas that need to be studied in more depth. That’s why we use it for such activities as mapping – high-level overviews to understand the current state of your experience. The CES could tell you that logging into a website, ordering refills, or scheduling an appointment are high-effort activities. It would never tell you what part of logging in, ordering, or scheduling is causing the problem. That’s for more granular tools in follow-up studies.

    You can’t conduct an in-depth study of every part of your experience at one time. The CES and its counterparts provide that high-level scan to help you know where you need to dive deeply.

  9. An area of high importance and high difficulty in achieving resolution is more of a priority for problem mitigation or elimination. That’s simple resource allocation, with a focus on optimizing customers’ downstream behavior. If it’s an area of low importance to customers, but relatively higher in terms of difficulty in achieving resolution, then this still gets attention; however, it is a lower priority for action. As Deming said, companies shouldn’t diffuse their resources by ‘chasing hot rabbits.’ So, for example, if any of the areas you identified – logging onto a website, ordering refills, appointment scheduling, etc. – are found to be both high effort and high importance (open issues until quantified), they can all be addressed (and, btw, on a granular basis within the same transactional study, without follow-up research) through threshold analysis, looking at the relative effect on desirability and potential market actions such as future purchase intent.
