Not Another Word Cloud—Please!


Geoff Colvin writes in ‘The Simple Metric That’s Taking Over Big Business’ (Fortune, May 2020) that the star of Net Promoter is its follow-up question, usually stated as something like ‘How could we improve?’ or ‘Why did you give that score?’

Colvin cites Marc Stein, Senior VP at Dell Technologies, who says, “The real gem and actionable insights (from the Net Promoter question) come from the verbatim transcripts.” And Deborah Campbell, VP of insights at Verizon, adds, “It’s not about chasing the number (NPS), it’s about understanding what our customers want and need from us.”

Unfortunately, I’ve seen many companies throw their verbatims out, do a word cloud, or let their verbatims accumulate for years because they didn’t have a scientific way to analyze them.

Root of the Problem

When it comes to verbatims, the root of the problem is that in the same way customer experiences are varied and complex, comments are messy and unpredictable.

Comments can be brief or lengthy, vague or technical, general or specific. Some comments stay on-topic; others trail off. And of course, customers refer to similar issues in different ways.

And this applies not just to survey comments but also to other sources of customer data, like reviews, chats, and interviews, all of which can give insight into why customers feel, think, and rate the way they do. While unstructured data like this may appear to defy quantification, that’s not actually the case.

All data can and should be quantified because without quantification, you only have anecdotes. Not only are anecdotes not actionable, they can be misleading.

AI-based Text Analytics

So what do you do with the verbatims? Well, if you are a sizable consumer company with a large, consistent data set to match, then AI-based Text Analytics are at least part of your solution.

However, if you’re a small or mid-sized company, budget could be an issue, but the deeper issue is that it’s unlikely you’ll have enough verbatims for AI to learn from—self-learning being the hallmark of good AI. And if you’re a B2B company, your customers may be using specialized phrasing covering widely divergent topics, a level of nuance that tends to stump AI.

In these cases, applying research intelligence to your verbatims using classic social science techniques is the only reliable way to extract insights—more about this in a minute. First, here’s a quick rundown of non-solutions that companies often employ to deal with their verbatims.

Machine-Assigned Sentiment Scores

Much like the word cloud, these are inexpensive tools that analyze verbatim sentiment, but what you’re really getting is a re-rendering of your Net Promoter Score. Customers who give you a Net Promoter rating of 0-2 are generally angry, while those who rate you a 10 are impressed. But so what? You still don’t know how to improve. Besides, for complex comments, there are cases in which machine-based sentiment scoring is more often wrong than right.

Thumbs Up, Thumbs Down

Some companies simply end their survey findings report with a few examples of thumbs-up and thumbs-down comments, or with a word cloud to summarize customers’ verbatims.

About those Ubiquitous Word Clouds

The reason many companies turn to word clouds is that they are the default chart for survey platforms. Even if you’re not using a platform, you can upload your verbatims to a free tool and get a representation like this:

But what does this tell you? Well, for one, that customers mention ‘Alice’ more often than ‘dormouse.’ But do they see Alice in a positive or negative light? And if you want the dormouse to make more of an impression, how would you do that?

Then There’s the Reading Method

In the same Fortune article cited above, California Closets CEO Bill Barton explains that he starts his day reading the previous day’s verbatims because it’s grounding. While reading verbatims is important, the problem is that the brain’s working memory starts cutting off after about seven items. So even if you read thousands of comments, you simply can’t remember and synthesize all that information in a quantifiable way.

Research Intelligence

But if word clouds and reading don’t provide actionable insights, what’s the best approach? Again, for companies with large, consistent data sets, AI should be a strong consideration for at least part of the solution. But even if you are implementing AI, you still need research analysts to establish and test the initial coding framework. And ideally, analysts continue to monitor your AI to ensure you are getting accurate, updated analysis.

Research Intelligence Unpacked

Since the beginning of social science, anthropologists, psychologists, and sociologists have been coding narrative data. For many companies, coding remains the most effective way to extract meaning from their survey verbatims and their other unstructured data like interviews and reviews. The best term I’ve seen to describe this activity is research intelligence.

With research intelligence, multiple analysts work together to build a coding framework. The framework is built iteratively, going through multiple cross-checks. Then, that framework is used to classify comments by their various elements in descending order of specificity.

Like a taxonomy chart that starts with ‘kingdom’ and ends with ‘species,’ a coding framework starts broadly and narrows into specific categories.
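To make the ‘kingdom to species’ analogy concrete, here’s a minimal sketch in Python of how such a framework might be represented. Every category name below is a hypothetical example of my own, not any client’s actual codes:

```python
# A coding framework as a nested taxonomy, ordered broad-to-specific.
# All category names are hypothetical examples, not a real client's codes.
coding_framework = {
    "substantive": {
        "repairs": {                                # class: owning department
            "improve": {                            # type: complaint vs. keep-as-is
                "status updates": ["hvac engine"],  # tag -> product tags
                "turnaround time": [],
            },
        },
        "billing": {
            "improve": {"invoice clarity": []},
            "keep": {"autopay": []},
        },
    },
    "non-substantive": {},  # too vague to code objectively
}

def code_path(labels):
    """Return True if a sequence of labels is a valid path through the framework."""
    node = coding_framework
    for label in labels:
        if label not in node:
            return False
        node = node[label]
    return True

print(code_path(["substantive", "repairs", "improve", "status updates"]))  # True
```

Because every comment is coded against the same fixed hierarchy, two analysts working independently can be checked against each other level by level.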

Theme Quantification

Once all the coding is in place, it’s important to quantify the codes to reveal themes in rank order priority. Quantification is a crucial step because now you can correlate your verbatims’ themes with Net Promoter and other outcome scores.

By quantifying your themes, you learn how customers think. Moreover, you learn how to boost loyalty and every other KPI critical to your organization.
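As a sketch of what quantification can look like, here’s a small Python example with entirely invented data: it ranks themes by frequency, then pairs each theme with the mean Net Promoter rating of the customers who mention it:

```python
from collections import Counter

# Hypothetical coded verbatims: each carries its tags plus the
# respondent's 0-10 Net Promoter rating. All data here is invented.
coded_verbatims = [
    {"tags": ["status updates"], "nps": 3},
    {"tags": ["status updates", "turnaround time"], "nps": 5},
    {"tags": ["technician courtesy"], "nps": 10},
    {"tags": ["status updates"], "nps": 4},
    {"tags": ["technician courtesy"], "nps": 9},
]

# Rank themes by how often they come up...
theme_counts = Counter(tag for v in coded_verbatims for tag in v["tags"])

# ...and relate each theme to the outcome score it travels with.
def mean_nps(theme):
    scores = [v["nps"] for v in coded_verbatims if theme in v["tags"]]
    return sum(scores) / len(scores)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} mentions, mean NPS {mean_nps(theme):.1f}")
```

Even this toy version shows the payoff: the most frequent theme (status updates) travels with the lowest scores, which is exactly the kind of rank-ordered priority a word cloud can’t give you.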

A Typical Customer Comment

Here’s a typical comment that’s been edited to protect client confidentiality. Like most customer comments, the writing is choppy and fails the Hemingway test.


What follows is a breakdown of how to code the above comment. For accurate coding, every client must have its own codes and classification systems, but in a general way, the classification that follows applies to many kinds of companies.

  • Substantiveness: The top level of classification, substantiveness, refers to whether the comment is intelligible enough to be coded objectively. The comment above is substantive; the customer has a point that can be understood.
  • Class: This identifies which department the comment belongs to. Here, the comment is about a repairs experience and gets directed to the repairs team.
  • Type: This shows whether the customer is voicing a complaint or explaining why they want to keep things the way they are. In this case, the verbatim is marked ‘improve.’
  • Sentiment: This is usually the easiest category for researchers to determine. But because this comment starts off on a favorable note before turning critical, it could throw some AI-based text analytics off. Researchers, however, quickly understand that while this customer is usually impressed, this time, they’re frustrated.
  • Tag: This is the most important part of the entire classification system because it’s where the researcher dials in on what the comment is about. In this instance, the comment is about status updates. However, keep in mind that tags need the rest of the classification system in order for them to have actionable meaning.
  • Sub & Product Tags: These are used to clarify which products or services the comment is about. In this case, the waterways internal combustion unit refers to a part of the client’s HVAC engine line.
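Pulled together, the classification levels above might be represented like this. The field names and values are my own illustrative assumptions, since every client needs its own codes:

```python
from dataclasses import dataclass, field

# One coded comment, mirroring the classification levels described above.
# Values are illustrative stand-ins, not a real client's coding scheme.
@dataclass
class CodedComment:
    text: str
    substantive: bool            # intelligible enough to code objectively?
    comment_class: str           # which department owns the comment
    comment_type: str            # "Improve" vs. "Keep"
    sentiment: str
    tag: str                     # what the comment is about
    product_tags: list = field(default_factory=list)

coded = CodedComment(
    text="Usually great, but this repair dragged on with no word from anyone.",
    substantive=True,
    comment_class="Repairs",
    comment_type="Improve",
    sentiment="Frustrated",
    tag="Status updates",
    product_tags=["Waterways internal combustion unit"],
)
print(coded.tag)
```

Note how the tag only becomes actionable in context: ‘Status updates’ on its own is ambiguous, but paired with the class, type, and sentiment, it points the repairs team at a specific fix.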

The Team Removes Subjectivity

Using a team of analysts when coding is critical because when multiple analysts examine the data independently and arrive at the same codes, you have a replicable system from which subjectivity has been removed. Ensuring this kind of objectivity is a crucial pillar of science and is what makes for good research.
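One classic way to check that independent analysts really are arriving at the same codes is an inter-rater agreement statistic such as Cohen’s kappa. Here’s a minimal sketch with invented labels; kappa near 1 means strong agreement, near 0 means agreement no better than chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    Assumes the raters use more than one code (otherwise expected
    agreement is 1 and kappa is undefined).
    """
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's code frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example: two analysts coding six comments as improve/keep.
a = ["improve", "improve", "keep", "improve", "keep", "keep"]
b = ["improve", "improve", "keep", "keep",    "keep", "keep"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

A kappa this high on a pilot batch would suggest the framework is working; a low one sends the team back to refine their code definitions before coding the full data set.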

Seth Godin, Close the Loop

Through verbatim analyses, we’ve learned about training issues that were eminently solvable, although not a quick fix. Other issues can be solved in short order; for example, you might learn that you need to apologize to a customer for an overdue report, and email it pronto.

Whether it’s a quick fix or not, verbatims identify gaps; and when acted upon, verbatim analysis closes those gaps. As Seth Godin has said: “If you’re not going to read the answers and take action, why are you asking?”

But having a closed-loop system where you learn about customers’ issues and then resolve them is not the only reason to have a verbatims discipline.

Two Types of Insights

Closed-ended rating questions like ‘How satisfied are you?’ or ‘How likely are you to recommend us to a friend?’ are limiting because they only tell you how customers feel within the limitations of the questions you thought to ask. Customer verbatims, on the other hand, are a window into customers’ thoughts.

With verbatims, customers speak for themselves, so you learn about their expectations versus their perceptions. And you also learn how well your survey matches the customer experience.

Identifying gaps between your survey and the customer experience gives you two kinds of insights, the first about your survey, the second having broad application for your entire company.

For example, if customers consistently mention issues that are not addressed in your survey’s rating questions (e.g., documentation quality), then those topics can be turned into rating questions to improve the usefulness of future surveys.

Other times, you might learn there’s a language disconnect between you and your customers. For instance, customers might comment about missing inventory while you describe this as a supply chain issue.

Recognizing this disconnect gives you the opportunity to adjust your word choice to match the words customers use.

This clarifies not only your survey but also your websites, instruction manuals, and every other kind of direction and description you use.

Thanks Geoff Colvin!

If I had one thing for you to take away, it’s that customers’ thoughts are messy and complex, but that’s exactly why they’re interesting and valuable! Thanks, Geoff Colvin, for clarifying that it’s the follow-up question that makes Net Promoter sing. The verbatim responses it elicits are one of your most valuable data assets, but only if you rise above the word cloud! Toward more meaning and fewer clouds!

The post Not Another Word Cloud—Please! appeared first on Interaction Metrics.

Republished with author's permission from original post.

Martha Brooke
Martha Brooke, CCXP + Six Sigma Black Belt is Interaction Metrics’ Chief Customer Experience Analyst. Interaction Metrics offers workshops, customer service evaluations, and the widest range of surveys. Want some ideas for how to take your surveys to the next level? Contact us here.

