How’d this happen?

It’s always fun to play “how’d this happen” when I see examples of bad survey design. Trying to unpack the thought process that led to an obviously broken survey is not only entertaining, it also helps me avoid making the same mistakes.

Today I have two examples, both courtesy of Friday’s entry in the blog Daily WTF.

The first one, from a survey about datacenter virtualization, asks how much of a company’s datacenter is physical compared to virtual. The problem is that the options don’t match the question.

What probably happened here is that the question was originally worded to ask what percentage of the datacenter is physical (or maybe virtual; it’s hard to tell) and the options were developed for that question. But at some point the wording of the question changed, probably because someone (ironically) thought the question was confusing. The options didn’t get changed, however, and nobody ever caught it.

I know that proofreading is hard work, especially on the tenth or twentieth time you read the same thing. In this case, showing the options as a pop-up list probably didn’t help, since the mistake isn’t visible until the user clicks on the list of options. This is why I strongly recommend having someone pretend to be a real customer and actually take the survey, start to finish, and answer every question before approving the final survey. Doing this would very likely have caught this mistake in time.
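One way to make that kind of dry run easier is to dump every question with its options fully expanded, so nothing stays hidden behind a pop-up list until review time. Here is a minimal sketch in Python; the survey structure and the option wording are hypothetical stand-ins, since the actual survey text isn’t reproduced here:

```python
# Minimal sketch of a survey "dry run" review script.
# The data structure and option text below are hypothetical, used only to
# illustrate expanding pop-up lists so a reviewer sees questions and options together.

survey = [
    {
        "question": "How much of your datacenter is physical vs. virtual?",
        "options": ["Less than 25%", "25-50%", "51-75%", "More than 75%"],
    },
    # ...more questions...
]

def dry_run(survey):
    """Print every question with its options expanded, the way a respondent sees them."""
    for number, item in enumerate(survey, start=1):
        print(f"Q{number}: {item['question']}")
        for option in item["options"]:
            print(f"    [ ] {option}")
        print()

dry_run(survey)
```

Even then, nothing replaces having a person click through the live survey from start to finish.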

The second example comes from a Dunkin’ Donuts customer satisfaction survey, and asks the customer to select 3 “for data quality purposes.”

Questions like this pop up from time to time, and the purpose is usually to make sure the customer is really reading the questions and not just answering blindly.
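For illustration, this is roughly how such an attention check is typically applied when the responses are cleaned afterward. The field names and the expected answer here are hypothetical; the survey itself only says to select 3:

```python
# Minimal sketch of how an attention-check ("trap") question is typically used
# when cleaning survey data. Field names and the expected answer are hypothetical.

responses = [
    {"respondent": "A", "attention_check": 3, "overall_satisfaction": 5},
    {"respondent": "B", "attention_check": 1, "overall_satisfaction": 5},  # ignored the instruction
]

def drop_inattentive(responses, expected=3):
    """Keep only respondents who answered the trap question as instructed."""
    return [r for r in responses if r["attention_check"] == expected]

clean = drop_inattentive(responses)
print(f"Kept {len(clean)} of {len(responses)} responses")
```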

There are a lot of problems with this, and I generally consider the presence of this type of question an indicator of overall poor survey design. Usually what happens is that the survey is much longer and more cumbersome than it should be, but there’s some reward for finishing it (such as free donuts).

Naturally there’s a concern that customers aren’t really providing feedback; they’re just trying to get through the survey as fast as possible to collect their incentive. This is a real problem, but instead of streamlining the survey, the company sticks in a non-question just to make sure the customer is awake.

While that might reveal a small number of customers who aren’t paying attention, it makes the root problem worse and adds a whiff of insult to the process by telling the customer you don’t really trust him to read the survey questions.

A much better idea is to cut the survey back to the 5-10 questions you really need, put them all on a single page so the customer can see how long the form is, and maybe ditch the incentive so you don’t have to worry about the customer’s motivations. The result will be a better, more focused survey with more honest feedback, fewer abandonments, and cleaner data. And no stupid “data quality” questions.

Republished with author's permission from original post.

Peter Leppik
Peter U. Leppik is president and CEO of Vocalabs. He founded Vocal Laboratories Inc. in 2001 to apply scientific principles of data collection and analysis to the problem of improving customer service. Leppik has led efforts to measure, compare and publish customer service quality through third party, independent research. At Vocalabs, Leppik has assembled a team of professionals with deep expertise in survey methodology, data communications and data visualization to provide clients with best-in-class tools for improving customer service through real-time customer feedback.
