Customer Experience is a science. CX isn’t a “hard” science like chemistry or physics because people don’t behave like chemicals or atomic particles. But CX – or at least CX done right – is systematic in measurement, design, and application and sits squarely in the domain of the social sciences, alongside such fields as psychology, economics, and sociology.
This isn’t just academic babble. Approaching CX as a science, as opposed to a feel-good activity or a practice driven by “gut” judgments, has significant implications for practitioners and companies striving to deliver great experiences. Science is methodical and logical; has demanding standards of measurement and analysis; and relies on observation, testing, and experimentation.
CX is the science of how customers respond to the experiences they encounter when considering, shopping for, buying, and using a brand or product. It entails the systematic study of customer perceptions, intentions, and behaviors. The objective is to use this knowledge to explain or predict customers’ future attitudes and behaviors, and to deliver the experiences that increase the likelihood that customers engage in the behaviors that create value for the firm.
Studying the behavior of people is inherently messier than the “hard” sciences, as human behavior is far more complex than the behavior of inanimate objects. Physics doesn’t have the problems of sampling error, cultural differences, or response bias, let alone the innumerable differences between people: each and every oxygen atom behaves the same under the same conditions. Not so for people. This is why we talk about the “likelihood” or probability of a specific customer outcome as opposed to a certainty – which is no different in psychology, economics, and the other social sciences.
Human behavior isn’t merely a linear mechanical “A” causes “B,” where every time “A” is present “B” is the result. People are different and complex and change over time. As such, they perceive, process, and respond to “A” in various ways. These differences notwithstanding, customer attitudes and behaviors toward an interaction or relationship with a company are best understood as reactions to the stimuli they receive – that is, customers react or respond to the experiences they have with the firm.
While perhaps seemingly academic, this premise leads to some critical corollaries for managing customer experiences.
1. Customer ratings and behaviors are a reaction to their perceptions of the experiences a firm delivers.
2. Changes in the experience that are perceived by customers will have an effect on customer ratings and behaviors.
3. Changes that customers do not perceive will have zero impact on their attitudes or behaviors.
In other words, it’s all about customer perceptions. And perceptions are inherently subjective, can vary over time for the same individual, and vary between people.
Measure Twice, Cut Once
This axiom of any craftsperson or tradesperson applies equally to CX (how’s that for a segue?). Not to get lost in the depths of research and measurement issues, but accurate, reliable, and consistent measurement is foundational. Questions. Scales. Feedback channel. Sample. Mode of data collection. Timing. KPIs. These and countless other items that demand consideration when collecting customer feedback can render you numb. But they are critical.
Sub-standard measurement in a VoC/CX program is the functional equivalent of a sub-standard foundation for a building. While the most accurate measurements by no means guarantee that a company will deliver a great customer experience, sub-standard measurement does guarantee that a firm will not have reliable information for fact-based decision making.
Science is empirical and is based on reliability in observation, experimentation, and measurement. And “without data, you’re just another person with an opinion” (W. Edwards Deming).
The Science of CX in Practice
What should CX practitioners start doing, stop doing, and do differently?
STOP treating measurement as a side issue: measurement is not “the” issue, but it is central to making decisions on all the issues practitioners need to address.
START taking a critical eye to every aspect of your program: re-evaluate standing assumptions that may no longer apply (or may never have been valid in the first place).
STOP defaulting to the “we have always done it this way” mentality or the assumption that maintaining historical trendlines is critical: there is no justification for not doing the best work for the organization going forward, regardless of history.
START by positing business objectives and outcomes and designing the program to achieve those results. The measurement or survey is not the objective; it is a component of a program for realizing some set of objectives.
START connecting stimulus and response: how and to what extent are customer perceptions of the experience shaping their assessment of the overall experience or relationship with the company?
LOOK DIFFERENTLY at your KPIs and what you are measuring: theory is fine, but does the data validate your assumptions? Are you measuring the “right” things? Are your KPIs optimizing your ability to explain or predict your stated objectives?
LOOK DIFFERENTLY at your scores: tracking scores is important, but only as an intermediate input into evidence- and fact-based recommendations.
START testing hypotheses: build an environment that encourages testing and learning as part of the process.
STOP treating data silos as fixed and impervious and START thinking about possible solutions that might be achieved by integrating data from disparate sources.
START thinking like a scientist, while appreciating the inherent limits in our ability to measure and explain human behavior.
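To make the “START testing hypotheses” point concrete, here is a minimal sketch of one common approach: a two-proportion z-test comparing a KPI before and after an experience change. The scenario (promoter-level ratings before and after a hypothetical checkout redesign) and all the numbers are illustrative assumptions, not data from this article – the point is only that a perceived change in the experience should show up as a statistically detectable change in the response.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test.

    Tests whether the share of 'successes' (e.g., promoter-level
    ratings) differs between a control group and a test group.
    Returns the z statistic and the two-sided p-value.
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 420 of 1,000 promoter-level ratings before a
# checkout redesign vs. 470 of 1,000 after.
z, p = two_proportion_z(420, 1000, 470, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value falls below a pre-registered threshold (conventionally 0.05), the change in the KPI is unlikely to be sampling noise – which is exactly the kind of evidence- and fact-based reasoning the list above calls for, in place of “gut” judgment about whether a redesign worked.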