Hear What the Customer Really Thinks by Eliminating the Flawed Feedback of Online Surveys that Use Table Matrix Questions

Businesses need to understand what their customers think, and they have turned to online surveys for quick and easy answers. To speed surveys up, they often ask questions in a space-efficient table matrix format (questions in rows, response scale in columns) rather than one by one. Multi-variable table matrix questions that test hypothesized answers let researchers probe for deeper insights without dealing with the unstructured natural-language data of open-ended “why” questions. Unfortunately, studies show that this approach compromises both the depth and the accuracy of insights, which has led online researchers to leverage machine learning and crowdsourcing techniques to bypass table matrix questions and hear directly from the customer.

Top Table Matrix Challenges

The first challenge with the table matrix format is that it skews answers to the mid-range. As our study shows, it may also encourage “straight-lining,” or the phenomenon of selecting the same response for all rows of the matrix. The format may even prime respondents to answer in a certain way.
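As a concrete illustration, here is a minimal sketch of how a researcher might flag straight-lining in matrix data, assuming responses sit in a pandas DataFrame with one row per respondent and one column per matrix item (all column names and values below are made up):

```python
import pandas as pd

# Hypothetical matrix responses: one row per respondent,
# one column per matrix item, answers on a 1-5 scale.
responses = pd.DataFrame({
    "ad_is_memorable":   [3, 5, 3, 2],
    "ad_is_trustworthy": [3, 4, 3, 5],
    "ad_is_relevant":    [3, 2, 3, 1],
})

# A respondent "straight-lines" when every item in the matrix
# gets the same answer, i.e., one unique value across the row.
straight_liners = responses.nunique(axis=1) == 1
print(f"Straight-lining rate: {straight_liners.mean():.0%}")
```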

A second study tested whether the problems with the table matrix format could be replicated in a different set of questions while controlling for the direction of the scales, the complexity of the matrix attributes, and the length of the matrix question. Results showed that longer matrices seemed to move average scores closer to the mid-point of the scales, perhaps due to fatigue or annoyance. Matrices that demanded more effort to read and comprehend also took respondents longer to complete.

Another challenge of the table matrix format is that it muffles the customer voice. No multi-variable table matrix question can deliver what an open-ended question can. Most researchers intuitively understand this, but a big limitation has been the text box used for open-ended questions. It produces raw, unorganized data that are difficult to use in practice without further cleaning and processing. Even when ex-post free-text analytics tools are applied, the answer “coding” is not robust enough to deliver the deepest and most accurate insights possible.
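To see why naive coding falls short, consider a hypothetical keyword-based pass of the kind an ex-post free-text tool might approximate (the codebook and function below are illustrative, not any vendor's actual method):

```python
# Hypothetical keyword "codebook" for open-ended ad feedback.
CODEBOOK = {
    "length":  ["too long", "lengthy", "drags"],
    "message": ["confusing", "unclear"],
}

def code_answer(text):
    """Tag a free-text answer with every matching code; returns [] when
    nothing matches, which is exactly where naive coding breaks down."""
    text = text.lower()
    return [code for code, keywords in CODEBOOK.items()
            if any(kw in text for kw in keywords)]

print(code_answer("The ad was way too long"))       # ['length']
print(code_answer("Felt like it went on forever"))  # [] -- a paraphrase the codebook misses
```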

Users don’t like the text box, either. Many skip over it, forcing the researcher to forgo data from those respondents. This was tested with a survey evaluating user perceptions of several ads, in which each respondent could either record unaided feedback in a text box or leave it blank. Both the quantity and the quality of answers suffered: of 3,061 possibly valid free-text answers, only 2,243 useful answers were received, or approximately 73 percent. Roughly one quarter of the potential answers were lost to blanks and useless responses.
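The arithmetic behind those figures is easy to verify from the counts reported above:

```python
total = 3_061   # possibly valid free-text answers
useful = 2_243  # answers still usable after cleaning

print(f"usable: {useful / total:.1%}")      # 73.3%
print(f"lost:   {1 - useful / total:.1%}")  # 26.7% -- roughly one quarter
```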

Bypassing the Table Matrix

Researchers can now use a combination of machine-learning algorithms and crowd-sourced intelligence to do away with the table matrix entirely. They can instead ask respondents an open-ended question, collect unaided answers, and then marshal everyone’s participation in validating those answers. After a respondent answers each question, algorithms present statements based on other respondents’ cleaned-up answers to the same question. The respondent agrees or disagrees with each statement, and the step is repeated five to ten times.
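The article does not describe GroupSolver's actual algorithm, so the following is only a minimal sketch of the general agree/disagree validation loop, with hypothetical function and variable names throughout:

```python
import random

def ask_agree_or_disagree(statement):
    """Stand-in for the survey UI; a real system would record the
    respondent's actual agree (True) / disagree (False) click."""
    return random.random() < 0.5

def validation_round(new_answer, answer_pool, votes, n_shown=7):
    """One step of the loop: store a respondent's open-ended answer,
    then have them vote on a sample of other respondents' answers."""
    answer_pool.append(new_answer)
    peers = [a for a in answer_pool if a != new_answer]
    for statement in random.sample(peers, min(n_shown, len(peers))):
        votes.setdefault(statement, []).append(ask_agree_or_disagree(statement))

def support(statement, votes):
    """Share of voters who agreed -- a higher score means the answer
    was validated by more of the crowd."""
    ballots = votes.get(statement, [])
    return sum(ballots) / len(ballots) if ballots else 0.0

# Each new respondent both contributes an answer and validates others'.
pool, votes = [], {}
for answer in ["too long", "catchy jingle", "too long", "confusing offer"]:
    validation_round(answer, pool, votes)
for statement in sorted(set(pool)):
    print(f"{statement!r}: support {support(statement, votes):.0%}")
```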

Even if each participant simply weighs in on others’ answers, the information is a valuable contribution during the “ideation” phase of the survey. This crowd-sourcing process also improves the evaluation phase of the study, since answers have been validated by a high number of participants and there is greater confidence in the natural language answers.

Respondents often report that they enjoy participating in this crowd-sourced process. They feel more engaged evaluating others’ answers than simply filling in a text box. Researchers benefit, too – the cleaned, coded, and organized survey output is available immediately because the algorithm runs in real time. Plus, the statistically validated qualitative data can feed more traditional quantitative analyses, including segmentation, pricing, or NPS studies, in which the natural-text answers become categorical variables in a quantitative model.
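For example, once each respondent is tagged with the validated statement they endorsed, those tags can enter a model like any other categorical feature; a minimal pandas sketch (column names and data are hypothetical):

```python
import pandas as pd

# Hypothetical output of the crowd-validation step: each respondent
# tagged with a validated statement, plus their NPS score.
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "validated_answer": ["too long", "catchy jingle", "too long", "confusing offer"],
    "nps": [6, 9, 4, 7],
})

# Turn the validated free-text answers into categorical dummy variables...
dummies = pd.get_dummies(df["validated_answer"], prefix="says")

# ...which can then feed segmentation, pricing, or NPS driver models.
model_input = pd.concat([df[["nps"]], dummies], axis=1)
print(model_input)
```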

Computational and machine-learning advances are moving online research closer to a free-flowing, natural conversation with the customer, at scale, while seamlessly combining open-ended and choice questions. This approach can effectively replace traditional table matrix questions, avoiding their limitations and letting survey participants freely ideate and validate their own hypotheses on the fly. Such unfiltered, organic “voice of the customer” information is simultaneously validated, labeled, and organized by the underlying algorithm, making data analysis as easy as that of a traditional matrix question.

Rastislav Ivanic
Rasto Ivanic is a co-founder and CEO of GroupSolver®, a market research tech company. GroupSolver has built an intelligent market research platform that helps businesses answer their burning why, how, and what questions. Before GroupSolver, Rasto was a strategy consultant with McKinsey & Company, and later he led business development at Mendel Biotechnology. Rasto is a trained economist with a PhD in Agricultural Economics from Purdue University, where he also received his MBA.
