From time to time, my more traditional market research brethren accuse customer experience (CX) professionals of being too “soft” with their data analysis. Sure, verbatim comments with text analytics and quantitative dashboards are great, they say, but it’s not enough. “Real” researchers do hard-core analytics. While I don’t disagree, I’d like to add another wrinkle to the debate – how the survey itself changes the outcome of the research. In other words, it’s time to explore the physics of customer experience.
Physicists (and customer experience researchers) wrestle with a measurement challenge called the “Observer Effect”. In a nutshell, the Observer Effect means your measurement system itself is affecting the results of the measurement. In quantum physics, the particles are so small that just inserting a measurement tool makes the particles bounce around and change energy because they’re interacting with the tool. To use an everyday example, imagine that instead of shooting a radar beam at a baseball to measure its speed on its way to home plate, you shot a basketball at it. You’d still be able to measure the speed of the baseball by how far the basketball bounced back, but your measurement system would have totally altered the path of the baseball. Although no one will confuse CX with quantum physics, CX professionals and more traditional market researchers have the same challenge.
In customer experience, we conduct focus groups and distribute Voice of the Customer surveys to measure our customers’ experience and their thoughts, attitudes and emotions associated with that experience. Our goal is to get a true, actionable picture of our customers’ perspectives, but we must also consider how our measurement system is affecting our customers’ perspective of the overall experience.
Imagine you just bought the coolest new tech on the market – like a new gaming system with virtual reality glasses. The company then sends you a survey, but a glitch directs you to an inoperable link. Before you even take the survey, what just happened to your opinion of the company? Do you still think of them as a leader in the tech space, or did your perception of them just take a step backward? I’m guessing it’s the latter.
At the opposite end of the spectrum, imagine you just had a visit from your cable company (a cohort that rarely scores well on CX surveys) to install a new cable box with enhanced features. Immediately afterward, you get a survey that includes a picture of your installer, already knows exactly what kind of cable box you had installed, asks you questions specific to your experience, and gives you access to a senior person at the cable company should you have any concerns. When you see how the cable company views you and your experience as unique, wouldn’t your opinion of them inch up a little just based on the survey experience itself?
Where physicists and CX professionals diverge, however, is in how they use the output of their measurement. In physics, researchers want as pure a measurement as possible so they can learn more about the physical world in which they operate. The research is about learning and then applying those learnings. Two steps.
In CX, it doesn’t have to be two steps. What if we more consistently created research tools that not only measured the customer experience, but improved it at the same time? We would still get a clear understanding of our customers’ thoughts, attitudes and emotions. We’d still identify areas to improve how we interact with our customers, but we’d also use the measurement tool itself to make the experience better.
There is no doubt we are affecting our customers’ perception of us via our surveys, so there is no “neutral”. We’re either making the customers’ experience better or we’re making it worse.
In physics, the Observer Effect is a bad thing. In CX, it doesn’t have to be.