Please, Sir, May I Have a 10?: The Sad State of Customer Satisfaction “Research”

I stayed last week at a hotel near a major airport. Following an efficient check-in, I went to my room and began to unpack.

As I began to set up my laptop, I noticed on the desk a rather large white card with bold blue lettering, propped against the desk lamp. It read “On a Scale of 1-10 it is our desire that your stay be a perfect 10. When you receive your survey after check out, we would like you to give us a 10. If we fall short of this score during your stay, please stop by the front desk or dial zero and allow us the opportunity to make it right.”

Also on the desk was a letter personally addressed to me (I am, after all, a gold member of their frequent guest program) from the hotel’s General Manager. Following a most cordial welcome to the hotel, an offer of a 20% discount at the restaurant, and information about the fitness center and business center, the GM ended his letter by repeating the request that I rate the hotel a “10”, in this case on all attributes in the survey. He went on to ask that I contact him personally if there was anything about my stay that would prevent me from providing such a score.

This experience was notable in part because it was the second time in a few weeks that I had been asked directly to provide the highest score on a customer satisfaction survey.

I had bought a not-inexpensive new car. In turning over the keys and instructing me in how to manage my way through the gadgetry, the salesman also kindly filled out the customer satisfaction form for me. He said “You will be getting a survey and we’d like you to complete it like this.” He had checked all “10s”. He went on to explain that a great deal was riding, both for him and the dealership, on whether I gave them top box scores.

So, what’s actually going on here? What lies behind these blatant requests of customers to provide the highest scores on customer satisfaction surveys? Well, when your bonus or access to inventory depends on your success in the customer satisfaction “survey”, apparently you will stop at nothing to get the scores you need.

But, let’s stop and ask why companies started doing customer satisfaction research in the first place. I can remember when they were genuinely interested in knowing what customers thought of them, so they could make improvements, and hopefully receive higher scores next time.

Things began to go sideways when firms started to link bonuses and other incentives to scores received on customer sat surveys, and were able to connect the customer’s score to the person who actually served her. Over time, I’ve seen the objective of customer satisfaction research in many firms shift from a genuine interest in making service better to a focus on getting the highest bonus possible. Scores are eagerly awaited, not so that managers can see how much improvement has been made, but to see how much of a bonus they will receive this quarter.

So, just how useful is customer satisfaction research when employees are blatantly asking customers to give them “top box” scores? Nothing short of a 10 will do. Such behavior is organizationally sanctioned because the auto dealer and the hotel GM are also rewarded or punished on the basis of their customer satisfaction score. Research purists among us are crying “foul.” There was a time when such attempts to influence research results would be considered bias in the extreme. There is no way that results from such research can be considered to represent an accurate reflection of how customers really feel.

So, let’s think long and hard about why you are measuring customer satisfaction in your firm. Is it so that you can really listen to customers and make improvements in how service is delivered? Or, is it simply intended to be your way of injecting a customer-based measure into how you evaluate your sales personnel and managers? If it’s the latter, and if your employees are out there begging for 10s, then don’t pretend that you are getting a true picture of what customers think of you.

19 COMMENTS

  1. Jim

    It is indeed a sad state of affairs. One automotive manufacturer I worked with has taken customer satisfaction away from its dealers entirely because of the problems you describe. The satisfaction survey is now administered at random by telephone by an external agency. Results are fed back to the dealers concerned and they are expected to act on improvement areas. Even so, the process is far from perfect, and customers can still be nobbled by dealers by the offer of a free ‘first service’ or other appeals to customers’ baser instincts.

    The problems stem from using a small number of overly simplistic measures rather than an integrated scorecard of measures. Satisfaction is a complex construct that is not well understood and still doesn’t have any universally accepted definition, despite what Claes Fornell and the ACSI would like us to believe. And if you think customer satisfaction is over-egged, it is nothing in comparison with the hyper-promoted but now discredited Net Promoter Score.

    The whole ‘satisfaction game’ also leads to non-congruent behaviour though. I will never forget attending a celebration of customer satisfaction scores increasing at a (different) automotive manufacturer some years ago. Just down the hall was an acrimonious workshop berating the manufacturer’s marketing agencies for their inability to reduce customer defections from its vehicles. Satisfaction’s up, so loyalty’s down! That makes sense only if satisfaction as measured has nothing to do with the customer’s definition of satisfaction. Now there’s a novelty.

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

  2. Spot on, Graham

    You obviously share my frustrations. One problem with the current state of affairs is that managers typically want to oversimplify what is in fact a very complex concept: that of customer satisfaction or, even more importantly, customer loyalty. There are other concepts central to marketing and customer strategy that are equally complex (e.g. quality, value, service), but we’ll leave that discussion for another time.

    We see managers (and, increasingly, researchers) being drawn into a situation where they measure such complex concepts using a single number (viz. NPS) or tie bonuses, promotions and other rewards to a single score (e.g. percentage giving top box ratings). But life is not that simple. This is human behaviour and psychology that we are dealing with here.

    Customer satisfaction is but one component in a comprehensive view of how customers view a firm and of how predisposed they are to continue dealing with it. The objective, it seems to me, should be to make things better for the customer, not to drive the highest bonuses possible. That leads to what you describe as non-congruent behaviour, an example of which I encountered a few weeks ago. Faced with lower “top box” scores than they would have liked (and lower bonuses as a result), one firm decided that the criterion for awarding bonuses would henceforth be the top two boxes! There, that problem is solved. Take-home pay is secure. And the customer? Oh yes, the customer: tell me again why we asked him to rate us.
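
    To make the sleight of hand concrete, here is a minimal sketch in Python (the ratings are entirely hypothetical) of how the same responses yield a very different headline number depending on whether one box or two count as “top”:

        # Hypothetical ratings on a 1-10 scale; no real survey data behind these.
        ratings = [10, 9, 9, 8, 7, 10, 6, 9, 10, 5]

        # "Top box" counts only perfect 10s; "top two boxes" counts 9s and 10s.
        top_box = sum(1 for r in ratings if r == 10) / len(ratings)
        top_two_box = sum(1 for r in ratings if r >= 9) / len(ratings)

        print(f"Top box (10 only):    {top_box:.0%}")      # 30%
        print(f"Top two boxes (9-10): {top_two_box:.0%}")  # 60%

    Nothing about the customers changed; only the yardstick did.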

    Jim Barnes

  3. In my research last year about measuring customer satisfaction and loyalty, I asked numerous experts and practitioners how to measure and reward employees. The general advice was that no more than 20% of compensation should be focused here, because of the “gaming” problem.

    I’ve had the same experiences with dealers negotiating or pleading with me for a survey result. In one case it was from a rep who otherwise did an outstanding job. Instead of giving an excellent and heartfelt score, I withheld the survey because I didn’t like getting my arm twisted.

    I can only conclude that car manufacturers have gone overboard on either the rewards for achieving a certain score (e.g. bonuses) or penalties for failure (e.g. loss of inventory).

    Anyone who has managed sales forces has learned that compensation systems are tricky business. If you’re not careful, you’ll get what you ask for, no matter how much it hurts.

    Further reading: Find the “Ultimate” Loyalty Metric To Grow Your Business.

    Bob Thompson, CustomerThink Corp.
    Blog: Unconventional Wisdom

  4. Isn’t it interesting that we tend to get the kind of behavior that we are rewarding? But is it the behavior that we really want or need? I guess what surprises me most in the scenarios that we have been discussing is the fact that the companies involved condone or even encourage the behavior that customers encounter on the ground: the overt solicitation of top box scores. Clearly, the automakers and hotels and others involved know what’s going on; the correspondence is printed on their letterheads, after all. Yet, they appear to accept the results of such research as an accurate reflection of how satisfied their customers are with the service they have received. Who’s running these programs? How can they possibly accept the results as anything other than extremely biased?

    I encourage companies to build into their employee evaluation programs an element of customer feedback. But, please, let’s make sure that the data we are using are accurate and truly reflect how customers feel toward us and the service they have received.

    Jim Barnes

  5. I do a lot of customer surveys in projects for clients and therefore feel somewhat obliged to fill in a survey form every time someone puts one in front of me. I’m always interested to try and work out their objectives in running the survey and exactly what the company is trying to get out of it. Sometimes this is pretty hard, and it’s obvious that it is a tick-in-the-box exercise – if you will forgive the pun. Other times the questions are so slanted that it’s difficult not to give high ratings.

    I did a project for a client some time ago where the numbers had been extensively gamed for years by sales and others who got bonuses based on customer sat scores. The task was to put in a customer satisfaction survey that worked, where nobody directly benefited from the raw scores but everyone benefited from the knowledge gained. We set the tone by stating that anyone with a score over 8 should be growing revenues faster than average. I told my client that if a branch’s scores are going up and its revenue/profits are not, someone needs to get fired: either they are manipulating the customers or the questions, or, something even worse, not acting upon information that customers are giving them. While this was a very crude weapon, my client passed this message verbatim to their sales force. I’m certainly not recommending this approach universally, but for this one rather unusual client it had the desired effect.
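
    As a rough sketch (the branch names and figures below are purely hypothetical), that rule amounts to a simple cross-check of two trend lines:

        # Flag any branch whose satisfaction scores rose while revenue did not.
        # All figures are invented for illustration only.
        branches = {
            "North": {"sat_change": +0.6, "rev_change": +0.04},
            "South": {"sat_change": +0.9, "rev_change": -0.02},  # suspicious
            "West":  {"sat_change": -0.2, "rev_change": +0.01},
        }

        for name, change in branches.items():
            if change["sat_change"] > 0 and change["rev_change"] <= 0:
                print(f"Investigate {name}: satisfaction up, revenue flat or down.")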

    Malcolm Wicks

  6. I don’t know about you, but I hate it when I feel someone is twisting my arm to get me to do something. Maybe it brings out the teenager in me. But when I get those requests to please rate the salesperson or service rep with a “10,” no matter how well I was treated up until then, I suddenly don’t feel as though I’m getting served well. The mere fact of the wheedling and pleading brings the score down. And I rate the people exactly that way.

    At some level, as a consumer, I always know people are getting paid and rewarded for doing their jobs, but at another level, when I’m dealing with businesses, I like the feeling that the people are being nice to me because, well, they’re nice people and they care about me.

    When I get those scorecards, that whole façade disappears. The curtain draws back from the wizard, and I know that they’re just paid employees and they don’t really care.

    Gwynne Young, Managing Editor, CustomerThink

  7. How odd that automotive manufacturers could compensate for the Bad Customer Experience culture they created (through saturating territories with dealers, and dealers with inventory) by creating another remuneration scheme for their salesforces to game! Surveys tied to bonuses are simply lipstick on a pig. Until surveys can be conducted and analyzed without direct connection to compensation, there will always be manipulation. As the other comments have expressed, the outcome is counter to the objective of improving the overall customer experience.

  8. Great discussion about a sad state of affairs. The automobile and hotel industries have taken customer satisfaction to an all-time low with their arm-twisting tactics. No matter how you look at it, there is no patented process for providing excellent customer experiences; consistent service excellence demands commitment from the business owner and staff, patience, flexibility, and a genuine desire to provide the very best of service. Sounds daunting, doesn’t it?

    The difference between doing something because you know you have to and doing something because you want to is, in my mind, the simple difference between a poor and an excellent customer experience. As a customer, it is this difference that determines my experience with a business. As a business owner, creating harmony between profit margin and customer care requires a constant state of awareness, a sense of worth, and continually asking the question “how would I feel in my customer’s shoes?”

    Now that I have reiterated what many of you have said before me, and looking at the overall picture of customer experience/satisfaction, what is the definitive answer? How do you influence the mindset of a business owner who sees customer care as a have-to, not a want-to? Re-reading this, I see I have veered off topic, but it feels good to get it off my chest. Comments?

  9. There is real research, and then there is this kind of “non-research research”: that is, research under the guise of wanting to satisfy customers.

    Much of this discussion centers on the bogus use of research to increase customer satisfaction measurements, and on how outrageous it is for a marketer to ask customers to give a top rating for good customer service.

    I don’t see it that way. To me, when a marketer blatantly asks for a top-box 10 on a customer satisfaction survey, it is a way of telling the customer they are trying to do everything possible to provide a superior experience. It is a marketing tactic…and it is one that places the marketer under unusual scrutiny.

    Would it be better if the marketer wasn’t using the false presumption of a real survey here? The purist in me would probably lean toward saying Yes!

    But I’m not outraged that non-research research is being used as a technique to motivate employees to do better. What would be outrageous is if the scores from such a bogus approach were the only ones that a marketer might use for measuring true customer satisfaction and handing out employee bonuses.

    Indeed, the only way of generating valid and projectable customer satisfaction data, on which employee bonuses might be calculated, is via a random sampling procedure administered by an unbiased researcher.

    So, the next time you see a plea from a marketer to give them a top box rating for customer service, think about the pressure those marketers are putting on themselves. What they are really saying is that their goal is 100% customer satisfaction and, in trying to achieve such a lofty goal, they are setting up customers to be overly critical of their performance. I regard that as an honorable pursuit and a reasonable tactic for raising the bar on customer satisfaction.

    Bob Kaden
    Author of Guerrilla Marketing Research
    [email protected]

  10. Robert

    I wonder what the difference between real research and non-research research really is?

    Most customer satisfaction research that I have seen in the automotive industry is carried out with the best of intentions toward the whole customer base. There are any number of methodological difficulties with customer satisfaction research, as you point out, but these are common to much real research. For example, the supposedly real research into the intentions of voters in the early democratic caucuses wouldn’t have been so wrong if it had not been based upon simplistic and flawed assumptions about sampling. And this by supposedly professional market researchers. What goes around comes around, as they say.

    Your suggestion that the hotel manager and dealer principal Jim describes were just setting out their commitment to 100% satisfaction by asking for a 10 is an ‘unusual’ interpretation of events, to put it mildly. Putting hotels to one side, automotive dealers are directly compensated (by extra margin) and indirectly compensated (by being eligible for sales contest trips to exotic locations) for top-box scores. They have long used a number of cheats to get customers to give them top-box scores, ranging from bribes (reductions in prices), through bribes in kind (free initial service), all the way to using Cialdini’s ‘contrast’ persuasion trick, as Jim describes.

    Asking for a 10 is a sad sign that it is easier to ask than to do all the stuff in the background to deliver. It is a sure sign that the managers have absolutely no commitment to 100% satisfaction. Otherwise they wouldn’t need to ask in the first place! Would they?

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

  11. I’m with Graham on this one. It’s a very charitable interpretation indeed to cast such behavior as a commitment of the company to a superior customer experience. In fact, as Gwynne has pointed out, the effect is often the opposite — the experience that may actually have been a positive one is soured by the pleading for the top box score.

    To concede a point to Bob, even if the manager or dealer principal is genuinely interested in satisfying customers, the system that the automaker or corporate head office has put in place encourages this perversion of research. It’s not research, plain and simple. It’s a tool that encourages the behaviors that we’ve all apparently experienced because it ties monetary and other rewards to results, and the results are totally lacking credibility.

    Yet, the corporations involved couch it in the guise of research by engaging seemingly reputable research firms to administer the surveys by phone or online. But the customer has already been primed to provide “all 10s.” I’d welcome some comment from auto companies and others who use such a system to explain their rationale and how they may actually use other forms of research to capture a “true” interpretation of customer satisfaction.

    Do automakers and others operate parallel research programs to get at real customer attitudes and satisfaction levels?

    Jim Barnes

  12. Perhaps it would be better to reward customers first for their actual recommendations before rewarding the Sales/Marketing Executives for their well-solicited ratings. The reward for the sales executive who produced a customer who made an actual recommendation (referral) should come later, when the referral makes contact with the company. Using this approach at the dealership level of the automotive business might make more sense than its high-investment counterparts. Legitimate satisfaction presents itself in the willful act of a customer recommending the company to other people. Perhaps the first question Sales Executives should ask persons entering a showroom is whether a friend had told them how satisfied they were… then let the company reward THAT “friend” and eventually the Sales Executive who made the same person satisfied. Just an opinion.

  13. After reading the great discussion here, a thought occurred to me. Maybe the debate shouldn’t be about whether these surveys are effective research, part of a compensation program, or a signal that a business is actually performing well. Even if a business receives all 10s, assumptions are being made that a customer is truly satisfied and is loyal to the business or brand.

    The opportunity that is apparently missed, in my own “survey” experience as well, is how a less-than-ten is managed. The truly effective Customer Relationship Managers set themselves apart by how they handle events that don’t go well, not by always making everything right. We’ve already seen how bland that can be, especially when businesses are populating the satisfaction surveys for us.

    Some examples of what wasn’t done when I gave my last car dealer a less-than-stellar rating (by the way, they never even followed up!):

    * the car wasn’t clean when delivered? Offer six free passes for the dealer’s car wash.

    * the salesperson didn’t answer all of my questions? Lunch with the General Manager at a fine restaurant to discuss concerns.

    * had to wait too long for service? “Premium” express change service at no charge for the next six visits.

    * etc.

    Bonuses and “high-fives” are legitimately earned when the shortcomings are fixed. In addition, if the business genuinely cares, future problems can be avoided.

  14. Andrew

    An interesting observation.

    Assuming that customer satisfaction scores are roughly normally distributed, with some skew towards higher scores, there is no reason to expect all customers to score 10. Or even 9. Indeed, if they all start scoring 10, or 9, I would start asking serious questions about the survey design and its representation of dealership reality.
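
    As a rough illustration (the distribution parameters below are invented, not fitted to any real data), even scores skewed toward the high end leave plenty of responses below 10, so a wall of perfect scores should prompt questions:

        import random

        random.seed(42)
        # Simulate 1,000 responses: opinions centred high (mean 8, sd 1.5),
        # rounded and clipped to the 1-10 scale.
        scores = [min(10, max(1, round(random.gauss(8, 1.5)))) for _ in range(1000)]

        # Print a crude text histogram of the simulated score distribution.
        for value in range(1, 11):
            share = scores.count(value) / len(scores)
            print(f"{value:>2}: {'#' * round(share * 100)} {share:.1%}")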

    There is a lot of woolly talk about needing to score a 10 for every customer, which, although it may be desirable for individual customers, is probably not essential to retain them. And certainly not essential to retain all customers in the dealership’s customer population.

    It also ignores the competitive reality of how well your dealership is doing against others the customer might consider using. A dealership is wasting money over-delivering to customers if it is consistently scoring, say an 8, when all its competitors can only manage a 5 at the best of times.

    Perhaps the real problem is not failing to score 10s all the time, but being caught in the hard-to-defend middle: not scoring high enough, not being technically competent enough, or not being cheap enough compared to the competition.

    Of course, that doesn’t mean that we can’t learn from the Black Swans of extremely good scores or extremely bad ones.

    Just a thought.

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

  15. Picking up on Andy’s comments, what I find when I take surveys is that they never ask me about what I want to tell them. Occasionally, there is room for open-ended comments, but it’s always in response to the question the company wants to ask.

    So I wait for my opportunity to say that, although I gave the web site high marks for design, it was too slow. Or that, although the guy who sold me my van was super nice and went over everything and gave me a good price, I couldn’t believe spending six hours to pay cash for a van. Or that, although I loved the clothing selection, I was less than thrilled when I didn’t find out that half of the items I “bought” were out of stock until I got the purchase confirmation email. And on and on.

    What I think surveys should always ask at the end is this: “Is there anything you’d like to tell us?”

    Gwynne Young, Managing Editor, CustomerThink

  16. Gwynne

    BINGO!

    You hit the nail on the head. Customer satisfaction surveys should be designed through a two-stage process. First, customers should be surveyed to find out how they really assess satisfaction with a company, its products, services and experiences. This initial survey should look at soft factors, such as feelings about interactions with the company, as well as harder factors, such as the outcomes customers perceive.

    When you know how customers assess satisfaction, this should be used to design the final survey used to measure customer satisfaction. There is no reason why additional questions cannot be added that are interesting to the company, but to base the entire survey on these company-centric questions is to invalidate it as a measure of customer satisfaction.

    But beware. The results can be challenging for management. A group of us in the Aviation Practice at PricewaterhouseCoopers developed such a two-stage customer satisfaction survey for the airline industry, with the help of Prof. Bob Westbrook of Rice University. The survey was administered to all of PwC’s 45,000 worldwide consulting staff, who filled in a survey each time they took a flight. To put this in perspective, when we ran this survey PwC was the second biggest corporate airline customer in the world, with an annual air travel budget of almost $600 million. That’s quite an interesting audience for airlines. It was, too; the only trouble was that the results were too challenging for public release. It was felt that the ‘road-warrior’ consultants dishing the dirt on the miserable experiences they had suffered at the hands of most airlines would have been too tough a message to hear, even for tough airline executives. Reluctantly, we decided to use the survey results only with airline clients who requested them for improvement projects (which most did).

    Which airlines had the highest satisfaction scores? Southwest and Singapore Airlines, the recognised industry leaders, with blue-chip European airlines like Lufthansa a short way behind.

    For many companies it is convenient not to hear the truth about customer satisfaction. But it is there for the asking if they use the right questions and ask customers about what they really want to say.

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

  17. With new tools to analyze unstructured text, some companies are mining the blogosphere and other sources to hear what customers are saying, on their own terms.

    There is still work involved to train the categorization engine to make sense of the text, but in some cases this could be much faster than conventional structured research design, or could provide insights from customer comments where there is no real structure (blogs/forums) and no opportunity to even field a survey.
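
    As a toy illustration of the idea (the categories and keywords below are made up, and a real, trained categorization engine is far more sophisticated), even simple keyword rules can sort raw comments into themes:

        # Hypothetical theme dictionary; a trained engine would go much further.
        CATEGORIES = {
            "wait time": ["wait", "slow", "hours", "queue"],
            "staff": ["salesperson", "rep", "rude", "helpful"],
            "price": ["price", "expensive", "discount", "cost"],
        }

        def categorize(comment):
            """Return the list of themes whose keywords appear in the comment."""
            text = comment.lower()
            matches = [cat for cat, words in CATEGORIES.items()
                       if any(word in text for word in words)]
            return matches or ["uncategorized"]

        comments = [
            "The rep was helpful but I had to wait two hours.",
            "Great price, and the discount sealed the deal.",
        ]
        for c in comments:
            print(categorize(c), "-", c)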

    A couple of vendors active in this area (there are many more):
    Clarabridge
    Island Data

    Further reading:
    http://www.customerthink.com/blog/text_analytics_listen_to_customers

    Bob Thompson, CustomerThink Corp.
    Blog: Unconventional Wisdom

  18. Bob

    That is a great suggestion. The blogosphere, or better still, the Web2-o-sphere, should be monitored for breaking news that needs to be managed. Just think what might have happened if Dell had spotted Jeff Jarvis’ original ‘Dell Hell’ post before it became national news.

    Having said that, monitoring the Web2-o-sphere is susceptible to all the sampling biases that structured market research works hard to remove. It is full of black swans like Jeff Jarvis’ Dell Hell. It should therefore be seen as an addition to market research, rather than as an alternative to it.

    In addition to the two vendors you suggest, check out some of the Web2-o-sphere monitoring services, like Nielsen’s BuzzMetrics in the US or James Cherkoff’s Collaborate Marketing in the UK. Monitoring the Web2-o-sphere is probably a bit too difficult for the neophyte company to get right first time.

    Graham Hill
    Independent CRM Consultant
    Interim CRM Manager

    Further Reading:

    Collaborate Marketing
    http://www.collaboratemarketing.com/about/

    Blog Monitoring Services
    http://www.e-consultancy.com/news-blog/362569/blog-and-user-generated-content-monitoring-services.html

  19. Well said. I received an email last week from a Wells Fargo loan officer asking me to complete a customer satisfaction survey with an “exceeded expectation” rating. The loan officer did the job I expected from a well-functioning bank; that’s why I chose Wells Fargo. I thought his e-mail was cheeky. Instead of completing the survey, I contacted the CEO of WFHMC. I am happy to report that his office responded with dismay. I refuse to fill out C-Sat surveys where specific ratings are solicited. It’s insincere, and I feel cajoled.
