Re-Thinking Your Survey Research – Asking the Right Questions

In this series on doing measurement that matters, I've been focusing on one of my main themes for Semphonic's clients this year – that it's time to fundamentally re-think Voice of Customer and customer research programs. I started the series with a look at how the 2012 Obama campaign broke from traditional survey research to create a massive customer feedback cycle that not only allowed them to better predict battleground states but let them rapidly test and experiment with campaign messaging targeted toward key states. The use of massive survey collection to supplement behavioral modeling, and the integration of VoC with field experiments, were radical alternatives to traditional small-sample opinion research. In the next post, I showed how much more sophisticated enterprises are with behavioral and transactional data warehousing than with customer research data, and I argued for similar attention to data quality, centralization, standardization, and dashboarding. In last week's post, I showed how survey research can fill a critical (and largely unnoticed) gap in digital marketing programs – allowing the marketer to understand whether targeting or creative was responsible for performance. Today, I want to dive right into the heart of online customer research and spend some time on the types of questions that should make up an online survey.

With all due respect to site usability, I think it's fair to suggest that in 99 cases out of 100, site usability is the LEAST important factor in a customer's decision-making process. I can count on one hand the number of times in the past year that a site's usability actually discouraged me from transacting with it. It does happen, and yes, we as analysts do have more control over site usability than over product quality, pricing, or branding. But if you're limiting yourself to a small set of incremental cases, you're simply not doing measurement that really matters. Good research should include the Website as a tool, but there's no reason to limit yourself to a narrow range of possible improvement.

For several years now, I’ve been telling our clients to stop asking dozens of questions in their online surveys rating site navigation, site images, and site functionality and start asking customers about drivers of choice – the things that make customers decide to purchase or not. Often enough, they give me a blank stare and ask “Like what?”

The answer, of course, is highly specific to each business. But there are some general themes that emerge as Semphonic increasingly tackles this type of research. Let's start at the top. One of the most basic site survey questions is "visit intent" – the "Why did you visit the site today?" question.

Nearly all of our clients ask this question in some form or another, and the available answers usually look something like this:

A) To Buy or Research a Product

B) To Check the Status of my Order

C) To Get Customer Support

D) To find out more about the Company

E) Other

If the client has a few broad business lines, they’ll often follow up with a question like:

Which type of product were you interested in?

A) Product Line A

B) Product Line B

C) Product Line C

D) Product Line D

E) Other

Then it’s off to the races with a bunch of questions about their site experience.

There’s nothing wrong with these two questions except that, in practice, they are nearly useless. Useless because in most cases there are strong, clear behavioral proxies for the intent and because they live at a level of generality that makes it impossible to use them to understand any deeper behaviors.

In every site I've ever looked at, the overwhelming majority of visitors who say they came to the Website because they were interested in buying Product Line B went and looked at…drumroll…Product Line B pages! What exactly is this telling me that I didn't already know from the behavior? You may claim that there's something interesting in looking at the behavior of people who said that and then didn't look at those pages, but most of the time there just isn't. Sure, in some less common use-cases, there is real interest in what we call behavioral mismatches – cases where stated intent and demonstrated behavior don't match. But at levels as broad as this, such mismatches are rarely frequent enough to be worth studying or even interesting.
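To make that intent-vs.-behavior check concrete, here's a minimal sketch in pandas. The DataFrames and column names (responses, pageviews, stated_product_line, and so on) are hypothetical stand-ins for whatever your survey tool and analytics warehouse actually produce.

```python
# Minimal sketch: how often does the product line a respondent says they came for
# match the product-line pages they actually viewed? All names are illustrative.
import pandas as pd

def intent_behavior_match(responses: pd.DataFrame, pageviews: pd.DataFrame) -> pd.Series:
    """Share of respondents who viewed pages for the product line they said they came for."""
    lines_viewed = (
        pageviews.groupby("respondent_id")["product_line"].apply(set).rename("lines_viewed")
    )
    merged = responses.join(lines_viewed, on="respondent_id")
    merged["lines_viewed"] = merged["lines_viewed"].apply(
        lambda v: v if isinstance(v, set) else set()  # respondents with no product pageviews
    )
    merged["match"] = [
        stated in viewed
        for stated, viewed in zip(merged["stated_product_line"], merged["lines_viewed"])
    ]
    # In practice this comes back overwhelmingly True, which is exactly why the
    # broad intent question adds so little on its own.
    return merged.groupby("stated_product_line")["match"].mean()
```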

So what should you be asking?

When it comes to understanding customer behavior on site, you have to go at least one level deeper. Let’s take Product Research and Purchase as an example. One of the most important things I’d like to know is where in the sales-cycle the visitor is when they arrive on the site. This is something very few survey instruments tackle well. But it’s vitally important when assessing site behavior to know whether the visitor arrived on the Website “ready to buy”, “shopping for price”, or “researching the brand.”

It’s powerful to structure questions around what the customer knows when they arrive at the site. If they are interested in Product Line B, did they arrive having a specific product(s) in mind? If so, we’re they trying to decide between 2-3 products? Or were they shopping for the best price on a single product? Had they decided on a product but were unsure whether or not now was the time to buy? These are questions that, once understood, can help illuminate the site (and site content’s) role in actual persuasion. Not only are they powerful in and of themselves in understanding your customer base, they are far more useful than the big broad questions in deciding whether or not your site is doing the necessary work in any given customer use-case.

Once you understand where the customer is in the sales-cycle, you can start to tune your research to drill down even further.

For customer’s without specific products in mind, you want to drill down into what features, price points, and brands are potentially interesting. For customer’s trying to choose between products, you can explore what the key decision points are. Then you can match those decision points to product lines and products actually viewed and/or purchased. If a customer is doing price comparison, you can explore what other sites they are going to or have already checked out. You can cross-tabulate this with the products viewed and conversion rates to decide how your prices need to stack up to the competition to get sales. Do you need to be cheaper? By how much? Can you be at the same price as Amazon? Can you charge more? This is vital research for any eCommerce shop and online opinion research is the lynchpin of a research program to decide where and when you need to adjust your pricing relative to the competition.

Is the customer trying to decide between your product and a competitor's? You can explore the competitive set in the customer's mind. You can ask about the key decision points. Then you can match that to your visitors' actual behavior to see which of your products they considered and the differential rates of conversion based on their competitive set. This type of research can help you build a profile of when you have a competitive advantage or disadvantage relative to any given competitive set and feature ordering. That's vital in knowing whether or not you're likely to win with a given type of customer – and whether or not you need to offer a discount, can re-market effectively, or will likely win without any loss of margin.
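A similar sketch works for the competitive profile, again under assumed column names: the competitor the respondent said they were considering, the product family they actually viewed, and whether they purchased.

```python
# Sketch: respondent counts and conversion ('win') rates by stated competitive set
# and by the product family actually viewed. Column names are illustrative.
import pandas as pd

def win_rate_by_competitive_set(df: pd.DataFrame) -> pd.DataFrame:
    return (
        df.groupby(["competitor_considered", "product_family"])["purchased"]
        .agg(respondents="count", win_rate="mean")
        .reset_index()
        .sort_values("win_rate")
    )
```

The low win-rate segments are the ones where a discount or re-marketing test might be worth running; the high win-rate segments are where you probably don't need to give up margin.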

Or you can ignore all this and continue to ask customers how they like the left-navigation on your Website.

It’s not that I mean to diss any questions about site functionality. But even when it comes to Website functionality, most survey instruments are deeply flawed. Let’s say you want to understand whether or not the search functionality is working well on your site. The wrong way to do this is to just include a question on the survey to rate search functionality from 1 to 10 (hint – this is what everybody does).

Why is this a bad approach? First, a significant number of folks who rate search didn’t use search. We’ve done this behavioral match many times and it’s not unusual for 20-30% of the folks who give an opinion NOT to have used the actual functionality they are rating.
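A minimal sketch of that behavioral match, assuming hypothetical ratings and events DataFrames keyed by a shared visitor id, with an illustrative internal_search event type:

```python
# Only analyze the search rating for respondents whose sessions actually contain a search.
import pandas as pd

def search_ratings_from_actual_users(ratings: pd.DataFrame, events: pd.DataFrame) -> pd.Series:
    searched = set(events.loc[events["event_type"] == "internal_search", "visitor_id"])
    used = ratings["visitor_id"].isin(searched)
    print(f"{(~used).mean():.0%} of respondents rated search without ever using it")
    return ratings.loc[used, "search_rating"]
```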

Even worse, this simple rating doesn’t explore the customer experience enough to understand the nature of the feedback. Here’s a real-world example we found a couple of years back and have since seen duplicated many times. Search was one of the lowest rated functional components of a client Website. When we drilled down into the behavior, we found that search scored very differently when it was used early in the session vs. later in the session. Early session searchers had a much more favorable opinion of search functionality. That’s a clue, and a very common one. The tool doesn’t get worse the longer someone sits in a session! Further research showed that there was a fairly significant use-case where the site simply didn’t have the desired functionality. Users would thrash around looking, try a search, then give up. They were very dissatisfied with search in the resulting opinion research (it was the last thing they tried and it failed) and that experience generated a strong correlation between search-usage and site dissatisfaction. A correlation that had, obviously, almost nothing to do with search functionality.
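Here's a sketch of that early-vs.-late drill-down. It assumes a hypothetical events DataFrame with an ordered step number per visit, used to tag whether each respondent's first search came early or late in the session; the cutoff of three steps is purely illustrative.

```python
import pandas as pd

def rating_by_search_position(ratings: pd.DataFrame, events: pd.DataFrame,
                              early_cutoff: int = 3) -> pd.Series:
    """Average search rating for early-session vs. late-session searchers."""
    searches = events[events["event_type"] == "internal_search"]
    first_search = searches.groupby("visitor_id")["step"].min()
    position = (first_search <= early_cutoff).map(
        {True: "early-session search", False: "late-session search"}
    )
    merged = ratings.join(position.rename("search_position"), on="visitor_id")
    # If late-session searchers score search much worse, the culprit is usually
    # missing content or functionality, not the search tool itself.
    return merged.groupby("search_position")["search_rating"].mean()
```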

So my advice for site functional research is quite similar to my advice around customer drivers of choice. If you want to understand search performance, only ask people who used search. Focus your research and use branching and pre-sets to make sure you only ask useful questions. Don’t just ask for a rating. Explore the drivers of the attitude – not just the attitude itself. For search, you can save yourself a lot of time by finding out why they used search. Did the user start with search? Did they use Search because they couldn’t find something in navigation? Or did they use search because where to start in the navigation wasn’t clear? It makes a huge difference in understanding their subsequent behavior and opinions.

If a visitor was using Search to find a product, how much did they know about the product? Did they have a model number? Did they know if it was no longer sold? Did they have the product to hand?

Every use case should have its own set of questions, its own unique path. A survey without branches is about as useful as a tree without branches: a barren stump not much good to the world and very hard to use productively.
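To show what branching looks like in practice, here's a toy walk of the follow-up map sketched earlier with the purchase-stage question. It surfaces only the next question that a respondent's own answers call for; it is not any particular survey platform's API.

```python
def next_question_id(questions: dict, answers: dict, start: str = "purchase_stage"):
    """Return the id of the next unanswered question on this respondent's branch, or None."""
    current = start
    while current in answers:
        follow_ups = questions[current].get("follow_up", {})
        current = follow_ups.get(answers[current])
        if current is None:
            return None  # this branch is exhausted -- stop asking
    return current

# A respondent who said they're shopping for price gets the competitor-sites
# follow-up next; every other branch stays out of their path.
questions = {"purchase_stage": PURCHASE_STAGE_QUESTION}
print(next_question_id(questions, {"purchase_stage": "price_shopping"}))  # -> competitor_sites
```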

I want to hammer home this demand that the survey instrument be responsive to your individual business. Survey design is a bad place to copy what everyone else is doing.

Here’s a common example of indiscriminate copying that also illuminates a failing of survey research for certain types of problem. For ecommerce sites, it’s certainly important to understand whether or not product images provide enough detail to help users pull the trigger.

Because it’s important for some sites, it’s become a kind of standard question that all sorts of sites use – the “Please rate the images/pictures on the Website” question. Even Financial Services sites where the products don’t have ANY physical appearance at all will often ask this question. Of course, those sites do have graphics, so they probably think the question still applies.

It doesn’t.

It’s one thing to explore whether or not the product images on a Website provide enough visual information to support a purchase decision. It’s true that you can measure this behaviorally very well, but you can supplement that knowledge with targeted questions to help explain the underlying attitudes. Customers will tell you whether or not they thought the product images gave them enough information.

This is much less practical with general site graphics.

Like any research tool, survey research has its strengths and limitations. There are things you just can’t explore directly in opinion research. It would be stupid, for example, to ask a question like:

“Were the calls to action on the Website aggressive enough?”

It’s an important question. But it’s a case where your goals and the customer’s perspective don’t necessarily align even with perfect information. In such cases, opinion research methods break down.

It’s similarly ineffective to ask the user a question about your general site graphics and images. The user doesn’t really care, isn’t responsible for, and isn’t particularly cognizant of the psychological impact of your site’s general images. Knowing the user thinks your images are nice, ok, or even a little ugly will tell you next to nothing about the impact of those images on their behavior.

If you really want to use opinion research for this type of question, you need to approach the topic elliptically. You might, for instance, ask a series of brand-perception questions of test groups to see whether more aggressive calls to action (or different pictures) influence the respondents. This is much more meaningful than a direct ask. A user may love the graphics on your site but be left with a brand impression that is quite different than you'd really like. A user may really appreciate the genuine restraint of your calls to action but fail to convert precisely for that reason.

There's a whole range of cases where the direct approach just doesn't work well, and if you don't recognize those cases, you can waste a lot of time and energy on useless research.

I started this discussion by admitting that finding the right questions for online research is necessarily specific, not general. There is no one right set of survey questions for an enterprise. Good opinion research should be more finely tailored than the gowns worn on the red carpet at the Academy Awards. The questions you ask need to fit your business and your research questions – not some arbitrary standard drawn from other sites. What's more, the work of building and using survey instruments is never done. A static survey (any static survey) will outlast its original function. As your knowledge of customers grows and deepens, your survey instruments need to adapt to explore ever deeper areas of interest.

At a high level, however, the message should be clear. Online survey research is an amazingly flexible, inexpensive tool for conducting all kinds of customer research. And it’s easy to decide if you have a good program or a bad program. Just look at the questions you’re asking.

If that’s not enough, here’s a set of basic principles to use when evaluating or designing a Voice of Customer program:

  1. A good online survey instrument uses branching to deeply explore drivers of choice while keeping total questions to a minimum
  2. A good survey instrument won’t ask questions to which the answers can be reliably inferred from behavior
  3. A good online survey instrument doesn’t ask about things the user hasn’t used
  4. A good online survey instrument isn’t static
  5. A good online survey instrument isn’t just about the Website
  6. A good online survey instrument isn’t one instrument – you can’t investigate search functionality with the same survey you use to understand competitive choice.

Precision, brevity and depth are the hallmarks of good opinion research as they are of good writing. And though – when it comes to writing – I regularly fail in at least one of these, at least it’s not for want of trying.

Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.

1 COMMENT

  1. Just to add, the problem with customer surveys isn't getting the data. It's interpreting the data, and that ties in directly with your title. One of the tricks to getting interpretable results is to work towards that when you build the questions, playing through the process for each item: if we get response A, what will that mean in terms of our future actions? If we get response B, what will our actions be? What will each mean?

    Having worked in social science research, I find that even the survey research from major firms, and oft reported here, ends up misinterpreting the data because they aren't asking the right questions. See http://www.customerthink.com/blog/holding_customer_research_firms_accountable_for_misleading_research.
