In the game of roulette, the wheel has black slots and red slots, and gamblers can place their bets anywhere they choose. On so-called even-money bets (red/black, high/low, and even/odd), the odds of winning would seem to be exactly that: 50:50. Yet given enough time, the casino always wins. That’s because the casino has a subtle edge that tips the odds in its favor: the green zero and double-zero slots. So, in reality, the odds aren’t 50:50 at all.
By the same token, when it comes to voice-of-the-customer data, most people jump to the conclusion that there are only two states of consumer sentiment: positive and negative. If told they’re achieving a 50% accuracy rate on their sentiment analysis efforts, their immediate reaction may be to think: “This is no better than flipping a coin.” That reaction, however, would be based on a false premise.
In reality, consumer sentiment is not binary. There are actually three potential states: positive, negative and neutral. That being the case, a truly random guess would be right only about 33% of the time. A 50% accuracy rate is therefore well above chance, though only moderately useful. Next-generation capabilities can determine sentiment along a whole continuum of gradients within the positive and negative states.
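The three-class baseline is easy to verify with a quick simulation. The sketch below (hypothetical labels and a uniform-guess assumption; real data rarely has perfectly balanced classes) shows that random guessing over three sentiment states converges to roughly one-third accuracy, not one-half.

```python
import random

# Three sentiment states, guessed uniformly at random (an assumption
# for illustration; skewed class distributions change the baseline).
LABELS = ["positive", "negative", "neutral"]

def random_baseline_accuracy(n_samples: int, seed: int = 42) -> float:
    """Fraction of correct answers from a uniform random guesser."""
    rng = random.Random(seed)
    truth = [rng.choice(LABELS) for _ in range(n_samples)]
    guesses = [rng.choice(LABELS) for _ in range(n_samples)]
    hits = sum(t == g for t, g in zip(truth, guesses))
    return hits / n_samples

print(random_baseline_accuracy(100_000))  # converges toward 1/3
```

With enough samples the result settles near 0.33, which is why a 50% sentiment accuracy rate is meaningfully better than "flipping a coin" in this setting.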
Of course, sentiment analysis has limited value if the scores fail to reflect reality. Accuracy is the degree to which a measurement agrees with the actual or true value of the quantity being measured. A closely related property, repeatability, is the degree to which repeated measurements under unchanged conditions produce the same results. Both are key concepts when it comes to automated sentiment analysis: repeated analysis of the same tweet, or the same verbatim in a customer feedback form or call center transcript, should always yield the exact same sentiment score.
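The repeatability requirement can be illustrated with a toy scorer. The lexicon and `sentiment_score` function below are hypothetical stand-ins (real systems use far richer models), but the property is the same: a deterministic pipeline must return an identical score every time it sees the same verbatim.

```python
# Hypothetical word-weight lexicon, for illustration only.
LEXICON = {"love": 1.0, "great": 0.8, "slow": -0.5, "terrible": -1.0}

def sentiment_score(text: str) -> float:
    """Average the weights of known words; 0.0 means neutral."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

verbatim = "great product but terrible support"
# Repeated analysis of the same verbatim yields the exact same score.
assert sentiment_score(verbatim) == sentiment_score(verbatim)
```

If a scoring pipeline involves any randomness (sampling, nondeterministic model inference), that source of variation has to be pinned down before repeated runs can be compared this way.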
Text analytics as it pertains to listening to the voice of the customer has rapidly evolved in recent years. This has given marketing, market research and other industry practitioners a high degree of confidence in their ability to conduct sentiment analysis at a granular level, to the point of homing in on conversations taking place around very specific aspects of a particular product or service.
At the same time, it’s important to keep in mind that sentiment by itself doesn’t tell much of a story. Some people are happy with a product or service. Others aren’t. The question is: So what? Why is consumer sentiment trending up or down? What specific stimulus is driving the sentiment in the first place? Text analytics can be used to identify emerging patterns and outliers, making it possible to pinpoint the comments that are truly relevant, put the conversations in context, and ultimately paint a more complete picture of what people are saying and why it matters.
Generating actionable insights from voice-of-the-customer data from social media, customer satisfaction surveys, call center notes and various other sources is the focus of Thursday’s webinar, hosted by Clarabridge. I look forward to sharing my perspective on how top-performing companies are using next-generation capabilities to automatically listen to – and act upon – the voice of the customer, no matter where it originates, in a holistic and systematic fashion. I’ll also be sharing some of Gleanster’s latest research findings on best practices in omnichannel listening.