Best Practices in Customer Surveys: Does your Survey Add Value?



Most companies survey their customers. Whether it's a periodic "relationship" or Net Promoter-type survey, an ongoing "transactional" survey that requests feedback after a customer interaction, or even a market research study, companies seem to love surveys!

But let's face it – surveying your customers "just because you want a score" isn't exactly a sound customer strategy. And besides, if your customers are giving you the gift of their time and insight, shouldn't you do something with it? Customers are far too valuable to be treated otherwise.

Here are 5 areas to assess your company’s survey process (Net Promoter or otherwise), oriented toward B2B companies:

1. What is the purpose of the survey – are there clearly stated objectives? For example, is it used only to present a Net Promoter Score (NPS) or satisfaction score to leadership, or are there actions expected to come out of it? Our clients generally leverage a survey process in two ways – "1:1" for an account team to drive growth in individual accounts (by activating promoters, as one method), and "1:Many" to assist with prioritizing the right improvement initiatives (e.g. product, support, consulting, etc.). The survey objectives generally fall into these two areas. Are there specific outcomes expected of the survey?

2. Many companies like to tie survey scores to MBOs and employee metrics. Therefore the survey results need to accurately represent the portion of the business being measured (we often use the words "trustworthy" or "representative" to describe this). Does your company have a method in place for determining how "trustworthy" the scores are? That is, does the feedback truly represent the segment (e.g. region, product, account manager/team) being assessed?

3. Many companies like to trend scores from survey results, and often see scores that are flat over time or perhaps generally trending up. Separate from any margin-of-error calculations (discussed in this earlier post):

    If the scores are improving, what do you attribute those improvements to (i.e. why are the scores improving)? Is knowing this important to your company (for example, so you can replicate these “bright spots” to other parts of the business)?
    If the scores are flat, why is this acceptable? Doesn’t the company want to see some ROI (results) on the investment in time, energy, and resources? Don’t you want to have “career building” measurable results for the effort?
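To make the margin-of-error caveat above concrete, here is a minimal sketch, assuming simple random sampling and a proportion-style metric (such as the share of promoters in a segment); the function name and example numbers are hypothetical, not from the original post:

```python
import math

def margin_of_error(p, n, pop_size=None, z=1.96):
    """Approximate 95% margin of error for a survey proportion.

    p        -- observed proportion (e.g. share of promoters)
    n        -- number of responses
    pop_size -- total contacts in the segment; enables the
                finite-population correction (optional)
    """
    se = math.sqrt(p * (1 - p) / n)
    if pop_size and pop_size > n:
        # Finite-population correction: the error shrinks when a
        # large share of the segment actually responded.
        se *= math.sqrt((pop_size - n) / (pop_size - 1))
    return z * se

# Hypothetical example: 60% promoters from 50 responses out of
# 200 contacts in the segment.
moe = margin_of_error(0.60, 50, pop_size=200)
```

With numbers like these the margin of error is on the order of ±12 points, so a quarter-over-quarter move smaller than that may be noise rather than a real trend.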

4. Related to #3: Are you confident that scores are really increasing (or flat), or could selection bias (cherry-picking contacts) or respondent selection bias (for example, only newer contacts respond) be influencing scores? This is especially prevalent in B2B firms, where account teams can have a large impact on determining who gets a survey and/or how the survey process is communicated to the account. We’ve written about this with research results. Is understanding any bias in your data collection process important to the company?
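One simple way to probe the respondent selection bias described in #4 is to compare the make-up of your responders against the make-up of everyone invited. This is a hedged sketch with hypothetical contact records and a made-up "tenure" grouping; any segmenting field (region, role, relationship age) could stand in for it:

```python
from collections import Counter

def composition(contacts, key):
    """Share of each group (e.g. contact tenure) in a contact list."""
    counts = Counter(key(c) for c in contacts)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical contact records: (name, tenure_bucket, responded)
contacts = [
    ("a", "new", True), ("b", "new", True), ("c", "new", True),
    ("d", "tenured", False), ("e", "tenured", False),
    ("f", "tenured", True), ("g", "tenured", False),
]

invited = composition(contacts, key=lambda c: c[1])
responders = composition([c for c in contacts if c[2]],
                         key=lambda c: c[1])

# Large gaps between invited and responder shares flag possible
# respondent selection bias (e.g. only newer contacts answering).
gaps = {g: responders.get(g, 0) - invited[g] for g in invited}
```

In this toy data, "new" contacts make up well under half of those invited but three-quarters of the responders, so a score computed only from responders would over-weight newer relationships.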

5. Is there a process in place for distributing targeted customer feedback to the account teams (and others) and for following up on the feedback? If not, should there be — wouldn't an account team benefit from understanding the sentiment of the customers they serve? And, in the event of negative feedback, would an account manager know how to handle it (and how do you know)? Similarly, in the event of positive feedback, would the account team know how to handle it?

Survey programs that fail to enable action should be abandoned. They are a waste of time and resources, and they erode customer trust. They mis-set expectations and allow your competitors to gain a foothold in your accounts.

Think DIALOG and ACTION, not “survey.” If there’s no plan to act on the findings, why survey at all? And if there are no findings, then you need to fix that before you waste any more of your customers’ time.

Steve Bernstein
Steve is an executive focused on Customer Experience, with over 20 years of experience developing leading strategies with hands-on execution. Prior to founding Waypoint Group, Steve was responsible for Solutions Development at Satmetrix, the co-developer of Net Promoter®, where he assisted clients with implementing customer success and loyalty programs based on Net Promoter while also running Satmetrix's own Net Promoter program as a showcase of best practices and real results.