Channel Bias in Surveys

One of the key decisions in designing a survey is which channel to use: e-mail, IVR, interviews, pencil-and-paper, or something else. Often there are cost and practical reasons for choosing one channel over another. It's also important to keep in mind that the choice of channel will bias the survey results in two important ways.

The first is that customers' responses will change somewhat depending on whether there's a human interviewer, and also on whether the interviewer is perceived as a neutral third party. People naturally want to please the interviewer, and will tend to shade their responses towards what they think the interviewer wants to hear.

The other, which in my experience is much more important, is that the channel has a very strong effect on whether customers take the survey at all. Response rates tend to be highest with a live interview; more automated and more annoying channels generally see a substantial drop-off.

When the overall response rate is lower, the pool of participants skews towards customers who have stronger opinions, and towards customers who are more engaged with the brand.
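
To make the effect concrete, here's a minimal simulation with entirely made-up numbers. It assumes that customers with stronger opinions and higher engagement are more likely to respond, and shows how the respondent pool ends up skewed relative to the customer base as a whole:

```python
import random

random.seed(0)

# Hypothetical population: each customer has a satisfaction score (1-5)
# and an engagement level (0-1). All figures are invented for illustration.
population = [
    {"score": random.randint(1, 5), "engagement": random.random()}
    for _ in range(100_000)
]

def response_probability(c):
    # Assumption: customers with stronger opinions (scores far from the
    # neutral 3) and more engaged customers are more likely to respond.
    opinion_strength = abs(c["score"] - 3) / 2          # 0.0 to 1.0
    return 0.02 + 0.10 * opinion_strength + 0.08 * c["engagement"]

respondents = [c for c in population if random.random() < response_probability(c)]

def share_extreme(group):
    return sum(c["score"] in (1, 5) for c in group) / len(group)

def mean_engagement(group):
    return sum(c["engagement"] for c in group) / len(group)

print(f"Response rate:                {len(respondents) / len(population):.1%}")
print(f"Extreme scores, population:   {share_extreme(population):.1%}")
print(f"Extreme scores, respondents:  {share_extreme(respondents):.1%}")
print(f"Mean engagement, population:  {mean_engagement(population):.2f}")
print(f"Mean engagement, respondents: {mean_engagement(respondents):.2f}")
```

Customers with extreme scores and high engagement are over-represented among respondents, even though nothing about the survey questions changed.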

Sometimes the channel itself will introduce a strong bias in who takes the survey. For example, many companies struggle to maintain accurate e-mail lists of their customers. If one segment of your customer base is less likely to provide an e-mail address, you will bias against those customers. One of my clients has a particular customer segment where they almost never manage to get a valid e-mail address, and that segment is also the most likely to churn. This makes the client's e-mail surveys a very poor measure of their overall performance.
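
A back-of-the-envelope calculation shows how badly this kind of coverage gap can distort the numbers. The segment sizes, e-mail coverage rates, and satisfaction scores below are invented for illustration, not my client's actual figures:

```python
# Two hypothetical customer segments; all figures invented for illustration.
segments = {
    # name: (share of customer base, valid e-mail coverage, mean satisfaction)
    "stable":     (0.80, 0.90, 4.2),
    "high-churn": (0.20, 0.05, 2.5),
}

# What satisfaction actually looks like across the whole base.
true_mean = sum(share * score for share, _, score in segments.values())

# What an e-mail survey can see: each segment weighted by e-mail coverage.
surveyed_weight = sum(share * cov for share, cov, _ in segments.values())
surveyed_mean = sum(
    share * cov * score for share, cov, score in segments.values()
) / surveyed_weight

print(f"True mean satisfaction across the base:       {true_mean:.2f}")
print(f"Mean satisfaction among e-mailable customers: {surveyed_mean:.2f}")
# The e-mail survey barely sees the high-churn segment, so it overstates
# overall satisfaction.
```

Even though the high-churn segment is a fifth of the base in this sketch, it's nearly invisible in the survey, so the measured average runs well above the truth.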

Finally, the more automated the survey process, the more brittle it tends to be. A human interviewer will let you know if there's a problem in the script and can work around a technical glitch; you don't get that level of resilience with an e-mail or IVR survey. If the survey is badly written, customers may abandon it or give nonsense answers; and if something breaks, customers will simply hit errors until someone notices. I've seen companies with broken surveys, oblivious to the fact that they were collecting worthless responses. Automated surveys require constant monitoring; you can't just set them and forget them.
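
As a sketch of what that monitoring might look like: the check below flags a survey whose completed-response volume suddenly drops, which is often the first visible symptom of a silent breakage. The function, counts, and threshold are all hypothetical; in practice you would pull the numbers from your survey platform.

```python
from statistics import mean

def check_survey_health(recent_daily_counts, todays_count, min_ratio=0.5):
    """Flag a possible silent breakage when today's completed-response
    volume falls well below the recent baseline."""
    baseline = mean(recent_daily_counts)
    if todays_count < min_ratio * baseline:
        return (f"ALERT: only {todays_count} completed responses today "
                f"vs. a baseline of about {baseline:.0f}")
    return "OK"

# A sudden drop to near zero is a classic sign that the e-mail trigger,
# survey link, or IVR flow has quietly broken.
print(check_survey_health([210, 195, 220, 205, 198], todays_count=12))
```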

Republished with author's permission from original post.

Peter Leppik
Peter U. Leppik is president and CEO of Vocalabs. He founded Vocal Laboratories Inc. in 2001 to apply scientific principles of data collection and analysis to the problem of improving customer service. Leppik has led efforts to measure, compare and publish customer service quality through third party, independent research. At Vocalabs, Leppik has assembled a team of professionals with deep expertise in survey methodology, data communications and data visualization to provide clients with best-in-class tools for improving customer service through real-time customer feedback.
