Understanding the voice of your customer is key in today’s competitive business landscape, as is developing a customer-centric management style focused on creating and maintaining compelling, positive, high-quality experiences for your customers.
Internet and intranet communication allows organizations to hold ongoing conversations with the people they serve. This gives them access to an enormous amount of potentially valuable information. Natural language understanding and deep learning are key to tapping into this information and to revealing how to better serve their audiences.
In this blog, I will discuss the different ways that deep learning can take you to the next level of understanding the voice of your customer, including: the importance of qualitative data (unstructured feedback); the role of text analytics in the analysis of qualitative data for VoC; and the role and promise of deep learning for applications (including AI assistants).
The importance of qualitative data (unstructured feedback) in enhancing the Voice of the Customer
Today, surveys remain one of the most widely used methods for collecting customer feedback, simply because they lend themselves well to structuring information. Closed questions, rating scores, and NPS (net promoter score) are all analyst-friendly ways to quantify customer satisfaction. Statistics, averages, and trends can easily be calculated from quantitative survey answers to generate reports.
Unfortunately, surveys are getting less popular among customers. Survey response rates are declining, in part, because customers are solicited to answer long, complex surveys that may not even focus on what matters most to them.
We are also seeing unsolicited feedback, such as customer feedback shared on social media and review platforms, become more valuable as a source of information for businesses, because it offers unbiased, free-form text where opinions are shared in customers’ own words.
Unstructured customer feedback, unlike structured feedback, is hard to compile into numbers. Free-format text can be messy and ungrammatical, conveying opinions and sentiment that are hard to translate into numbers, difficult to normalize onto a one-size-fits-all rating scale, and difficult to convert into a structured representation that could be aggregated into statistics and business key performance indicators.
Why natural language understanding is a complex task
Natural language understanding is one of the most challenging areas of artificial intelligence because it involves reproducing cognitive tasks based on information expressed in language, whether as text or speech.
Text is a challenging medium of communication because the same information can be expressed in different words and different grammatical forms.
The meaning of words and sentences is influenced by information that appears at the word level (e.g. suffixes, prefixes, negation), at the sentence level through grammatical structure (e.g. verb-subject inversion in questions, conjunctions), at the semantic level (e.g. a person’s name vs. a location), or at the discourse level (e.g. a particular social context, such as the medical domain vs. the automotive domain).
The role of text analytics in the analysis of qualitative data for VoC
First rule: not all text analytics are equal.
Text analytics plays a central role in converting unstructured data into structured data. It’s a technology that has been used in commercial applications since the 1990s. The complexity inherent to natural language may be addressed in various ways, such as keyword-based approaches or machine learning (including deep learning) approaches.
Keyword-based approaches
The most common approach relies on keyword lookup to find sentences that might be relevant. Counts of positive and negative keywords found in sentences are used to classify customer feedback as satisfied or unsatisfied.
Ontologies and lexicons are the cornerstone of these approaches. They define the vocabulary that models a specific industry or business. However, keywords provide limited insight and cannot cover complex constructions such as the conditional sentence “if rooms were bigger, I would have rated the hotel with 5 stars”. Matching the keyword “5 stars” would generate the wrong sentiment score, as the customer actually didn’t give 5 stars.
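To make the limitation concrete, here is a minimal sketch of keyword-based scoring in Python. The lexicons are made-up examples, not a real VoC vocabulary:

```python
# Minimal sketch of keyword-based sentiment scoring.
# The lexicons below are hypothetical examples, not a real industry vocabulary.
POSITIVE = {"great", "excellent", "friendly", "5 stars"}
NEGATIVE = {"dirty", "rude", "slow"}

def keyword_sentiment(sentence: str) -> int:
    """Return (#positive - #negative) keyword hits in the sentence."""
    text = sentence.lower()
    pos = sum(1 for kw in POSITIVE if kw in text)
    neg = sum(1 for kw in NEGATIVE if kw in text)
    return pos - neg

# The conditional sentence above fools the matcher: "5 stars" is counted
# as positive even though no 5-star rating was actually given.
print(keyword_sentiment("If rooms were bigger, I would have rated the hotel with 5 stars"))  # → 1
print(keyword_sentiment("The staff was rude and the room was dirty"))  # → -2
```

The first call illustrates exactly the failure described above: the keyword match produces a positive score for a sentence that is actually a complaint.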
Building linguistic resources to support these approaches tends to be a time-consuming task, and defining an exhaustive vocabulary that models domain knowledge is hard to achieve. These approaches tend to suffer from three issues:
- They do not take into account the context of the sentence, which impacts the accuracy of the analysis.
- Acronyms (“LOL”), emoticons (“:-)”), and other special symbols define a whole new set of vocabulary that varies across generations (millennials, young adults, older people). It is difficult to cope with all this variety using hand-built vocabularies.
- It is very difficult to anticipate what type of vocabulary will be used by customers when conveying problems.
Machine learning approach
In response to these challenges, statistical machine learning algorithms emerged as an answer, as they tend to be more robust and less demanding in terms of linguistic resources (vocabularies). They offer a way to avoid the extensive cognitive work, and the potential human errors, involved in hand-coding linguistic resources.
Statistical models are mathematical models that capture patterns binding data points (such as words) as observed in text. Those patterns may be grammatical relations, or other types of relations such as semantic or discursive relationships. For more than fifteen years, statistical models such as Markov models and conditional random fields, among others, were used to perform text analytics. These algorithms tend to capture “simple” patterns well.
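Markov models and conditional random fields are too involved for a short example, but the core idea, learning word/label associations from data rather than hand-coding a lexicon, can be sketched with an even simpler statistical model: naive Bayes. The tiny training set below is invented for illustration:

```python
import math
from collections import Counter

# Illustrative sketch: a naive Bayes text classifier, one of the simplest
# statistical models. It learns word/label associations from labeled
# examples instead of relying on a hand-built lexicon.

def train(examples):
    """examples: list of (text, label) pairs. Returns the fitted model."""
    word_counts = {}          # label -> Counter of words
    label_counts = Counter()  # label -> number of documents
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    return word_counts, label_counts

def predict(model, text):
    word_counts, label_counts = model
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the probability.
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Made-up hotel-review snippets, purely for illustration.
model = train([
    ("the room was clean and the staff friendly", "pos"),
    ("great location and excellent breakfast", "pos"),
    ("the room was dirty and the staff rude", "neg"),
    ("slow checkin and noisy room", "neg"),
])
print(predict(model, "friendly staff and clean room"))  # → pos
```

The model has no lexicon at all; which words signal satisfaction is inferred from the labeled examples, which is exactly what makes this family of approaches less demanding in linguistic resources.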
Deep Learning for Text Analytics
Modelling the linguistic levels that define the intended meaning of a text is challenging for statistical models. Some models perform well at modelling simple linguistic relations, but in order to take into account all the linguistic phenomena that impact text meaning, richer statistical models need to be considered. Deep learning algorithms are based on multilayered neural networks, a family of statistical models capable of modelling different patterns and the relationships that exist between them. This capability allows the model to learn complex patterns observed in text.
Sense ambiguity (the semantic level) and domain knowledge (the discourse level) are better modelled using deep learning thanks to its multi-layer representation capability.
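As an illustration of why extra layers matter, here is a minimal pure-Python sketch (not a production NLP model). A network with one hidden layer is trained on a negation pattern that no single-layer linear model can separate: with the invented features [contains "good", contains "not"], the label is positive exactly when one of the two is present, an XOR pattern ("good" → 1, "not good" → 0, "not bad" → 1, "bad" → 0):

```python
import math
import random

# Invented toy data: features [has "good", has "not"], label = XOR of the two.
# A keyword counter or any linear model cannot fit this; a hidden layer can.
DATA = [([1, 0], 1), ([1, 1], 0), ([0, 1], 1), ([0, 0], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
HIDDEN = 4
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]  # input->hidden (+bias)
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]                  # hidden->output (+bias)

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    out = sigmoid(sum(w2[i] * h[i] for i in range(HIDDEN)) + w2[HIDDEN])
    return h, out

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in DATA)

initial_loss = loss()
lr = 0.5
for _ in range(5000):
    for x, y in DATA:
        h, out = forward(x)
        # Backpropagation for squared error with sigmoid activations.
        d_out = 2 * (out - y) * out * (1 - out)
        for i in range(HIDDEN):
            d_h = d_out * w2[i] * h[i] * (1 - h[i])  # use w2[i] before updating it
            w2[i] -= lr * d_out * h[i]
            w1[i][0] -= lr * d_h * x[0]
            w1[i][1] -= lr * d_h * x[1]
            w1[i][2] -= lr * d_h
        w2[HIDDEN] -= lr * d_out

final_loss = loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

The hidden layer gives the network an intermediate representation in which the negation pattern becomes separable; real deep learning models stack many such layers to capture the semantic and discourse phenomena discussed above.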
The promise of deep learning for VoC
Complex statistical models such as deep learning are capable of capturing language subtlety as well as providing fine-grained insights with a high level of accuracy. It means that text analytics technology will be able to move from a shallow type of NLU (natural language understanding) that focuses on broad categorization, such as happy and unhappy customers, to a more granular and deeper understanding of the customer motivations that drive loyalty and churn.
Capturing language subtlety and providing fine-grained insights with high levels of accuracy are crucial to allowing text analytics technologies to move from reporting on customer experience toward predicting it and prescribing recommendations to improve CX.
In the near future, we will be witnessing the proliferation of CX-AI assistants powered by prescriptive analytics models capable not only of understanding customer feedback but also of generating recommendations and assessing business performance through benchmarking and comparison.