In my last couple of posts, I’ve hammered away on what I think are some pretty big issues with the way most companies are using online surveys. I don’t want to leave the impression that online surveys are all there is to building a Customer Intelligence System (CIS). I love online surveys because they are so flexible. You can create an online survey for many, many different types of research and almost any type of question or situation. But “many, many” different types isn’t quite the same thing as all types, and there are some important problems for which online surveys are ill-suited.
After all, every research method has limitations and in this case, many of the most fundamental limitations are inherent to the basic mechanisms of online surveys. These mechanisms limit your ability to target certain kinds of samples, to isolate your research from your marketing, to understand certain classes of problems which exist outside the Web, to tackle operational problems, and to find ideas outside your known research program.
I want to tackle each of these in turn to show why integrating multiple sources of VoC data into a CIS is essential and to make it clear that even the most comprehensive online survey program will necessarily miss a great deal that’s important.
Targeting Certain Kinds of Samples
When you use online surveys, you’re working with a specific population – your Website visitors. Ten years ago, that population was not particularly representative of your entire customer base, limiting the uses you could make of online surveys. Today, it might be a very good representation. But even if your Web population is a near-perfect match for your overall population, it is almost certainly NOT a perfect match for your entire Prospect Audience. If you want to explore Prospect attitudes and opinions, online surveys will limit your sample to the group of people giving you enough consideration to visit your Website. That’s always an important limitation, and it can be a crippling one depending on your research question. If you’re Samsung looking to compare Prospect attitudes toward the iPhone and the Galaxy, intercept surveys on your own properties aren’t worthless, but they are missing a big chunk of the story. Even if you explore segmentation based on respondents who say they’ve visited the Apple site as well, you’re missing respondents who never considered the Galaxy or didn’t consider it seriously. Remember the old adage in real estate – location, location, location. In ALL Voice of Customer research, it’s sample, sample, sample. Your ability to control and define your sample determines what type of research you can do. For early-stage prospect research, traditional offline surveys, off-site online surveys, and social media research are the right pieces to use in a CIS.
Isolating Research from Marketing
The dirty little secret of online surveys is also bound up with sampling. Unlike pretty much ALL offline research, the sample for online intercept surveys is directly tied to your digital marketing and Web functionality. When you change your digital marketing, you change the shape of the visitor population and you change the distribution of visit types on the site. Since visit type and population are two huge drivers of high-level scores like NPS or Satisfaction, these numbers are heavily influenced by changes in the underlying sample. Because of this dependency, it’s much, much harder to insulate your online survey results from trends and changes in marketing than it is in the offline world. This isn’t such a big issue if you are stratifying your sample and are focused on site-usage questions. In such cases, as long as you look below the site-wide numbers you’re probably okay. But if you’re running intercepts inside public content and you have active marketing campaigns, you’ll find it very difficult indeed to trend that research over time. Social media doesn’t work well for this either; to isolate your research from your marketing efforts, you’ll have to rely on off-site intercept surveys and traditional offline VoC studies.
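The mix-shift effect is easy to see with a little arithmetic. In this sketch (all segment names, scores, and visit shares are hypothetical numbers I’ve invented for illustration), every segment’s NPS holds perfectly steady, yet the sitewide number still moves when a campaign changes the visitor mix:

```python
# Hypothetical illustration: per-segment NPS never changes, but the
# sitewide score moves because marketing shifts the mix of visit types.

def sitewide_nps(segment_nps, visit_mix):
    """Weighted average of per-segment NPS by each segment's share of visits."""
    return sum(segment_nps[s] * visit_mix[s] for s in segment_nps)

# Invented per-segment scores (constant before and after the campaign).
segment_nps = {"buyers": 40, "researchers": 10, "support": -20}

# Visit mix before a promotion: mostly research traffic.
before = {"buyers": 0.2, "researchers": 0.6, "support": 0.2}
# Visit mix after the promotion pulls in more buying traffic.
after = {"buyers": 0.4, "researchers": 0.4, "support": 0.2}

print(round(sitewide_nps(segment_nps, before), 1))
print(round(sitewide_nps(segment_nps, after), 1))
```

With these made-up numbers the sitewide score climbs from 10 to 16 even though no customer’s attitude changed – which is exactly why trending a site-wide intercept score across active campaigns is so treacherous, and why looking below the site-wide number matters.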
Understanding Some Non-Web Problems
I’m a firm believer that you can use your online survey program to understand and research issues that go far beyond the Website. In my last post, I argued for a fairly deep re-write of the content of surveys to do exactly that. I suggested, for example, that online survey research was an appropriate place to delve into customer drivers of choice around product and competitive set. Not every type of problem is equally accessible, however. Surveys focused on drivers of choice are highly relevant to the consumer in the Web channel. But asking questions about your stores, your call-center experiences, or your mobile app is going to be much less effective. I don’t think this is controversial, and it’s not something I see anyone really getting wrong. On the other hand, what I do see people missing is the implication that VoC research needs to be tightly embedded in-channel to be maximally effective. Trying to do mobile app research on the Web (or, far worse, offline) just doesn’t make sense. This doesn’t mean you can’t email a survey to app users. It does mean that trying to randomly find users of Channel X by surveying in Channel Y is unlikely to work and unlikely to be interesting. Embedded research and email surveys are the right VoC techniques for most channels.
Tackling Operational Problems
The big drawback to online survey research when it comes to operational problems is sampling. But it isn’t the nature of the sample – it’s the very existence of a sample. Many operational problems on today’s Websites exist for very small sub-populations and in relative “edge” cases. An operational cart problem may only affect 1 in every 10,000 site visitors – but still be an important defect. From the survey perspective, however, it’s difficult to catch such situations and difficult to track them down when they occur. You’ll have to collect a lot of surveys before any pattern emerges – so many, in fact, that from a statistical perspective the signal will always be buried in noise. Of course, it’s quite possible to study the specific behaviors of negative responders. But this can be a thankless task. You’ll have negative responders for many reasons, most of which aren’t operational problems. Making matters worse, most survey instruments aren’t set up (rightly so) to research operational issues. So they often leave the user with little room or incentive to detail the real problem. So I’m not exactly against using online survey research to find operational problems. I am against using it as your only mechanism. Tools like OpinionLab are really better suited to this role. Though it may sound dismissive, most operational research is, effectively, anecdotal. You’re not looking for general problems, you’re looking for specific ones. By providing an immediate and direct venting mechanism focused on customer care, OpinionLab provides a more reliable source of operational defect discovery than almost any intercept-based survey program. Most of our clients get this right, but some still treat OpinionLab and online survey programs as if they were competitors and make either/or decisions about which to use. Ninety-nine times out of a hundred, the right answer is both.
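The back-of-the-envelope math makes the point concrete. In this sketch (traffic volume and response rate are hypothetical numbers chosen only for illustration), a 1-in-10,000 defect yields only a couple of affected respondents even in a very large survey program:

```python
import math

# Hypothetical numbers: a cart defect hitting 1 in 10,000 visitors,
# on a site with a million monthly visitors and a 2% survey response rate.
defect_rate = 1 / 10_000
monthly_visitors = 1_000_000
response_rate = 0.02

surveys = monthly_visitors * response_rate      # completed surveys per month
expected_hits = surveys * defect_rate           # respondents who hit the bug

# Poisson approximation: the chance a whole month's surveys contain
# zero affected respondents at all.
p_zero = math.exp(-expected_hits)

print(f"{surveys:,.0f} surveys, ~{expected_hits:.0f} affected respondents")
print(f"chance of zero reports in a month: {p_zero:.0%}")
```

Under these assumed numbers, 20,000 completed surveys surface roughly two affected respondents a month, and there’s still a meaningful chance of seeing none at all – and any reports that do arrive are scattered among thousands of unrelated negative responses. That’s the statistical hole that an always-on feedback mechanism like OpinionLab fills.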
Discovery Outside Your Research Program
Survey research is great for finding out what you don’t know – just as long as you know you don’t know it! That isn’t a paradox. Most survey instruments are much more effective with closed-ended questions than open-ended ones. I take quite a few surveys and I probably qualify as both a critic and a know-it-all, but I rarely end up filling out open-ended questions. As a respondent, open-ended questions are just too much work! So while it’s possible to build a survey instrument to study almost anything, it’s not so easy to build something that helps guide you to new research programs or illuminates issues you haven’t thought about. For discovering new research questions, there are behavioral techniques and there are other VoC methods; in particular, Focus Groups and Social Media Research. While Focus Groups can suffer from some of the same problems as surveys, they provide a more open-ended forum where a skilled facilitator will often surface unexplored and novel customer attitudes and issues. Even richer is Social Media. Because Social Media captures unguided and unfettered conversation, it can be a great vehicle for exploring outside your research paradigm. This takes real work, though. New research questions don’t just fall out of social media reporting. To effectively mine this data, you’ll need real analysts specifically targeting this type of research.
Most of what I’ve called out in this post isn’t startlingly new. Much of it is obvious, and a good portion of that is already standard practice. The really important point is that if you’re trying to construct a comprehensive warehouse and reporting system around customer attitudes, you are necessarily in a multi-source world. Site surveys are central to that world, but they have limitations that simply can’t be worked around. Fortunately, we don’t lack for sources. From online feedback systems to social media to Focus Groups, there are ways to explore VoC data that can dovetail perfectly with an aggressive site survey program.
Speaking notes: I have some speaking gigs coming up in April – two of which are around Tag Management. The first is an AMA Workshop in Atlanta on Tag Management. If you’re going or in the area, check it out. I’ve created the content, and it’s both a pretty wide-ranging survey of Tag Management and a pretty decent deep-dive too. Not long after, I’ll be providing a short overview of Tag Management Systems in a free webinar with Tatvic.