Research surveys and survey reports have become important marketing tools for many kinds of B2B companies, including those that offer marketing technologies and various kinds of marketing-related services. Many B2B companies are conducting or sponsoring surveys, and they feature survey reports in their marketing programs. As a result, many B2B marketers are now both producers and consumers of research-based content.
Survey reports can be valuable sources of information about business trends and practices, emerging technologies, customer attitudes and a wide range of other subjects. But survey results and reports can also be unreliable and/or misleading.
The availability of free or inexpensive and user-friendly survey tools has made it easier for marketers to create and conduct surveys. Unfortunately, these same tools also make it easy to design and conduct surveys that don't produce reliable results.
As producers, marketers obviously want potential customers to view their survey results and reports as credible and reliable. And as consumers, marketers are increasingly using the results of surveys when making important decisions, so it's important for them to carefully evaluate the survey reports they encounter. It's always a good idea to approach any survey report with a critical eye because, as Mark Twain wrote, "There are three kinds of lies: lies, damned lies, and statistics."
In my work, I review lots of survey reports. I make extensive use of survey results and other research studies when I'm developing content for clients, and I frequently discuss survey findings in this blog. Over the years, I've developed a mental checklist of things I look for when reviewing a survey report.
I'm planning to devote three posts to this topic. In this post, I'll discuss some of the basic things I look for when I'm reviewing a survey report. My next two posts will discuss issues that can affect the validity of survey findings and/or the credibility of survey reports.
My Starting Mindset
Whenever I begin reviewing a survey report produced or sponsored by a business enterprise, I assume the survey was conducted to support a marketing agenda. Having a marketing purpose doesn't necessarily mean the research is flawed, but it does put me on alert for indications of bias in the design of the survey and/or in the presentation of the findings.
Many survey reports will briefly describe the research in the introductory section of the report, but a thorough report will also include a detailed description of the methodology used in the research.
For a survey of business professionals relating to a business subject, I look for the methodology description to include at least the following:
- Sample size (the number of responses the survey received)
- When the survey responses were collected
- How the survey responses were collected (e.g. online, telephone)
- How potential survey participants were selected
- If appropriate, how survey respondents were qualified
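Sample size matters because it sets a floor on how precise a survey's estimates can be, even under ideal conditions. As a rough illustration (the function name and defaults below are my own, and the standard formula applies only to genuine random samples), the margin of error for a reported percentage shrinks with the square root of the number of responses:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Approximate margin of error for a proportion from a simple
    random sample of size n, at 95% confidence (z = 1.96), using
    the worst-case proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of 400 respondents yields roughly a +/-4.9% margin
# of error; quadrupling the sample to 1,600 only halves it to +/-2.45%.
print(f"{margin_of_error(400):.1%}")
print(f"{margin_of_error(1600):.2%}")
```

This is why a survey with a few dozen respondents can't support fine-grained percentage comparisons, no matter how carefully it was fielded.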
Most of the survey reports I encounter describe the demographic attributes of survey respondents at least to some extent. When I'm reviewing a survey of business professionals regarding a business topic, I usually want to see a breakdown of the following demographic characteristics:
- Job role/job function
- Industry verticals/types of companies represented
- Company sizes represented
- Geographic locations of respondents
Respondent demographics can be important for interpreting survey findings and assessing the relevance of those findings. For example, I recently published a post describing the findings of a survey by Gartner regarding marketing data and analytics. In that survey, 83% of the respondents were with companies having $1 billion or more in annual revenue. So the findings from this survey may be highly relevant for marketers in large enterprises, but somewhat less instructive for marketers in small and mid-size companies.
Use of a "Representative Sample"
Surveys are frequently used to capture insights about a defined population of individuals by collecting data from a small sample of that population. This approach only works, however, if the survey respondents constitute a representative sample of the larger population.
Survey sampling is a complex topic, and it's impossible to describe it fully in a blog post. The most important thing to remember is this: If the survey respondents aren't a representative sample, the findings of the survey cannot be "projected" to the larger target population. In essence, the findings are only valid for the group of people who responded to the survey.
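The distortion a non-representative sample can introduce is easy to see in a toy simulation. In the sketch below, the population, the company-size split, and the adoption rates are all hypothetical numbers I chose for illustration; the point is only that sampling from one segment produces an estimate far from the population's true value:

```python
import random

random.seed(42)

# Hypothetical population: 80% small companies (20% have adopted some
# tool) and 20% large companies (70% have adopted it).
# True overall adoption rate: 0.8*0.20 + 0.2*0.70 = 30%.
population = (
    [("small", random.random() < 0.20) for _ in range(8000)]
    + [("large", random.random() < 0.70) for _ in range(2000)]
)

true_rate = sum(adopted for _, adopted in population) / len(population)

# A non-representative sample: 500 respondents, all from large companies.
biased_sample = [a for size, a in population if size == "large"][:500]
biased_rate = sum(biased_sample) / len(biased_sample)

print(f"population adoption rate: {true_rate:.0%}")   # close to 30%
print(f"biased-sample estimate:   {biased_rate:.0%}")  # close to 70%
```

The biased sample's estimate is perfectly valid for the respondents themselves, but projecting it to the whole population would more than double the true adoption rate.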
Therefore, when evaluating survey findings, it's always important to determine whether the survey respondents constitute a representative sample. When a survey uses a representative sample, the survey report should include a detailed explanation of the sampling process in the survey methodology description.
Very few of the surveys I review are based on representative samples. Such surveys can still be useful, but they can be misleading when report authors ignore this limitation. A well-prepared survey report will make the limitation clear, as Gartner did in its Marketing Data and Analytics Survey 2020 by including the following language:
"Disclaimer: Results from this study do not represent global findings or the market as a whole but reflect sentiment of the respondents and companies surveyed."
In my next two posts, I'll be discussing other issues that can affect the validity of survey findings and the credibility of survey reports.
Image courtesy of Marco Verch via Flickr (CC).