I know you get asked to take surveys all the time, because I do. Even the shortest business trip results in at least 5 surveys: Delta wants to know about your flight; Hilton wants to know about your stay; Enterprise asks about your car rental and on and on. But the most prevalent of all surveys is that one at the bottom of your sales receipt, the request from Apple, Kohl’s, Nordstrom, Target and virtually all retailers to “tell us how we did.”
So last fall, two of my analysts and I set out to measure the quality of those point-of-purchase surveys (Point-of-Purchase Survey Study). We thought it would be interesting to know what level of science and engagement the nation’s largest retailers bring to their surveys. The surveys say they want to know about our experiences as a customer, but do they really want to know? Or, is this just PR spin?
Well, friends, unfortunately… it’s PR spin. The nation’s largest retailers run tragically poor customer satisfaction surveys: they’re bad for customers, bad for companies, and a waste of time and money all the way around.
So what are these big retailers, like Amazon, Apple, Wal-Mart, Kohl’s, and Target, doing wrong? Are there lessons that can be learned from their mistakes? And how can you make your survey better than some of the biggest companies in the world?
Let’s look at the two main problems: First, the vast majority of the surveys were riddled with biases, so we can’t imagine they provide anything but highly skewed data. And second, most of the surveys failed to show any real care for customers and the experiences they had.
Let’s look at the problem of bias. There were five types of biases in these surveys, each negatively affecting data accuracy in different ways.
- Leading Questions— Known within psychology as priming, leading questions are designed to elicit a particular response. Ace Hardware asked: “How satisfied were you with the speed of our checkout?”
This question is phrased in a way that assumes the customer is at least somewhat satisfied.
- Forced Wording—The Gap asked customers: “Rate your agreement with the following statement: The look and feel of the store environment was very appealing.”
“Appealing” is a weird word. It’s probably not how customers think about their experience in a store like Gap. They’d be more likely to think “it’s a mess,” “that was fun,” or “it’s well-organized.” Furthermore, the question would seem to have an agenda behind it—as in Gap executives want to hear that their store environment was very appealing.
- Faulty Scales—Wal-Mart asked its questions on a 1-10 scale. This scale introduces two problems. First, there is an even number of selections and therefore no true midpoint: selecting a 5 would imply a lower-than-neutral score, while selecting a 6 would imply a higher-than-neutral score.
The second problem with Wal-Mart’s scale is that there is no zero, and some experiences are just that: zeroes. Not sort of poor, but plain old bad.
- Double-Barreled Questions—This is where one question asks about multiple topics, usually two or more questions compressed into one. Lowe’s asked customers: “Were you consistently greeted and acknowledged in a genuine and friendly manner by Lowe’s associates throughout the store?”
Here, we see four questions in one. Yikes! Does Lowe’s want to know if the customer was greeted OR acknowledged? And was that greeting/acknowledgement friendly OR genuine?
Imagine Lowe’s finds that 85% of customers say “No,” they were not consistently greeted/acknowledged in a genuine/friendly manner. Obviously they need to make improvements—but what? Their greetings or their acknowledgements? How friendly they are or how genuine they are?
The best survey questions provide clear and actionable insights. To improve, Lowe’s should instead divide this question into four, or even better, consider what they really want to know and devise a clearer way to ask it.
- Question Relevance—Ace, Gap, JC Penney, and O’Reilly Automotive all asked about their associates’ product knowledge (e.g., “Please rate your satisfaction with the employee’s knowledge of parts and products”)—and none of these retailers offered an N/A option. It’s likely that a large portion of shoppers never asked an associate a question and so had no way of providing accurate feedback.
There are two ways to ensure questions are relevant to the customer. One way is to use survey logic and gating questions, such as “Did you ask an associate for help?” Only customers who respond “Yes” are then asked about the associate’s product knowledge.
Another way is even simpler: offer an N/A option. That way, when the question is irrelevant, you won’t have bogus responses clogging up your data.
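To make the gating idea concrete, here’s a minimal sketch of survey logic in Python. The question names and wording are hypothetical, invented for illustration; the point is simply that a follow-up question is only shown when the gating answer makes it relevant.

```python
# Minimal sketch of survey gating logic. Question names are hypothetical;
# a follow-up question appears only when the gating question makes it relevant.

def follow_up_questions(answers):
    """Return the follow-up questions to show, given the answers so far."""
    questions = []
    if answers.get("asked_associate_for_help") == "Yes":
        questions.append(
            "How satisfied were you with the associate's product knowledge?"
        )
    return questions

# A shopper who never spoke to an associate is never asked the
# product-knowledge question, so no bogus answers enter the data.
print(follow_up_questions({"asked_associate_for_help": "No"}))   # []
print(follow_up_questions({"asked_associate_for_help": "Yes"}))  # one question
```

Real survey platforms implement this with built-in skip or display logic, but the principle is the same: irrelevant questions never reach the customer.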
On top of the myriad data accuracy issues, our Point-of-Purchase Survey Study showed that retailers have little regard for their customers.
For example, Walmart asked 4 introductory questions irrelevant to the customer’s experience, and required the input of 2 receipt codes. Really? That’s a hassle.
But the biggest, most consistent engagement mistake? Many of the surveys were just too long: the average length was 23 questions. A survey should certainly never take longer than the interaction itself. In fact, it should take less time.
Family Dollar asked a whopping 69 questions in its survey. At 10 seconds a question, that’s over ten minutes spent reflecting on items that cost a buck.
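The arithmetic behind that estimate, assuming roughly 10 seconds per question (an assumption, not a measured figure), is a one-liner:

```python
# Back-of-envelope survey length, assuming ~10 seconds per question.

def survey_minutes(n_questions, secs_per_question=10):
    return n_questions * secs_per_question / 60

print(survey_minutes(69))  # 11.5 minutes for a 69-question survey
```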
Designing a quality customer satisfaction survey is a process, requiring multiple edits to reach the best version. Throwing in every question is how NOT to design a survey. Think about what you want to know, and carefully craft your questions.
It’s also important to set expectations at the outset, communicating how long the survey will take, and then meeting that expectation. Nordstrom advertised its survey as taking 2 minutes, but with 25 questions it took closer to 5.
Most retailers didn’t provide any estimate of survey length, and instead simply let their customers click into the abyss.
To execute a customer satisfaction survey that’s better than just about every major retailer, get serious about accuracy and engagement:
- Ensure your survey collects accurate and actionable data. Eliminate biases such as leading questions, forced wording, and faulty scales.
- Make every question clear and relevant to the customer.
- Show the customer that you respect and value their time by designing a survey that only asks what’s necessary and that states at the outset how long it will take.
If you follow even a few of the guidelines we’ve provided here, your survey will be leagues ahead of the biggest companies in the world. For additional hints on improving the quality of your customer feedback, get our Genius Tips. And if you’re interested in learning more about the first-of-its-kind Point-of-Purchase Survey Study, check out the 2-minute video or ask us for the complete report.
Finally, as always, if you have questions about your own customer satisfaction survey design, say hello, we’re happy to help.
Great analysis, Martha. I might also add another potential issue with POS receipt surveys: they run around a 1-2% response rate. While that doesn’t guarantee non-response bias, there is a pretty good chance that those who choose to respond to these types of surveys are systematically different from those who don’t.
Thanks, Dave, good point. Certainly companies could be running scientific customer satisfaction surveys, or surveys that engage customers and yield robust qualitative insights, or possibly both.
The question is, why don’t they? Why are the top retailers producing lousy POP surveys? My hypothesis is that it’s a pervasive cultural problem: companies view tasks and priorities in light of their own interests, not the interests of their customers. Ironically, when it comes to research, privileging one’s own company over the customer is a fast route to bad data, zero insight, and no benefit to the company. But thinking positively, if the problem is cultural, it’s a problem that can be fixed. Do you have any ideas about what’s holding these retailers back? If so, please circle back!