Data is the Foundation for Managing Experiences. How Solid is Yours?

Great experiences — from the brand, product, and digital experience to the customer, employee, and user experience — are not a simple byproduct of great data. Those experiences — good, bad, or otherwise — are the result of the decisions companies make and the actions they take through their people, policies, processes, and culture. Measurement doesn’t move the needle. Changes stem from action.

“Fact-based” or “data-driven” decisions are the mantra of the day. In the rush to take and applaud action, however, we all too often denigrate the importance of the data upon which informed decisions are based. If you can only manage what you measure (to paraphrase Peter Drucker), a logical corollary is that poor measurement will lead to poor management and misinformed decisions. Accurate, reliable data and measurement are the foundation for educated decisions. If that foundation isn’t rock-solid, decisions based on such data most likely will be misguided at best. This is true regardless of the source of the data or the type of decision.

Generative AI, which promises comprehensive, objective analysis of data at scale and at warp speed, doesn’t solve this problem; it exacerbates it. Users expect the output from AI to be a “pure” analysis of the information. Fed crummy data, however, generative AI will spit out a crummy answer built on that flawed input. Generative AI is the ultimate GIGO (Garbage In, Garbage Out) machine: if the inputs aren’t solid, the output won’t be either.

There’s Data and There’s Data

Data are not homogeneous. Some data elements are “hard” facts: the customer completed the transaction and spent $XX, the employee quit, a tech support call costs the company $YY, and ZZ% of online shoppers abandoned their carts before purchase. Behavioral and operational outcomes typically can be measured with precision and little or no error. Experiences, on the other hand, are in the eye of the beholder. That is, the “Voice of the Customer” and “Voice of the Employee” are inherently subjective, and their measurement, as such, is imprecise. Understanding and explaining why people feel and respond the way they do, and predicting their future behavior, rely on customer-provided feedback and data collection tools, which introduce the possibility of various types of noise and error in how and what is measured.

Some five years ago, one of the senior leaders at Clarabridge, a pioneer in the world of text analytics, told me they thought there was no longer any need to ask customers anything: just listen to what they say on their own and you can answer all of your questions. Wishful thinking, but not realistic. We still need to ask for customer and employee feedback. (Clarabridge subsequently was acquired by Qualtrics, which boasts of conducting more than 1 billion surveys annually, so I guess asking for direct customer input is still necessary.)

The continued need to collect customer/employee/user feedback, to listen to the customer’s and employee’s voice, brings us to the land of the unsexy: surveys… and therein lie the data challenges and problems. The challenges stem from the science of statistics; sampling error, confidence levels, and probabilities, however, can be calculated and managed accordingly.
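
To make the “managed” part concrete, here is a minimal sketch of the textbook margin-of-error calculation for a sample proportion. The satisfaction rate and sample size are hypothetical, and the Python is purely illustrative, not tied to any particular survey platform.

    import math

    def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
        # Margin of error for a sample proportion; z = 1.96 corresponds
        # to a 95% confidence level.
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # Hypothetical example: 62% of 400 surveyed customers report satisfaction.
    moe = margin_of_error(0.62, 400)
    print(f"62% +/- {moe:.1%} at 95% confidence")  # roughly +/- 4.8%

The point is that this kind of error is quantifiable and shrinks predictably as sample size grows; the human errors listed below are neither.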

The thornier problems arise from human errors of judgment regarding the collection, analysis, and reporting of the data, such as:

  1. Failure to clearly conceptualize the objectives,
  2. Inappropriate design to realize the objectives,
  3. Bias in sampling and project design,
  4. Problems with methodology and execution,
  5. Poorly worded questions and survey design,
  6. Insufficient attention to the respondent experience,
  7. Inadvertent (or intentional) biasing of responses,
  8. Poor sample management,
  9. Weak analysis of the data, and
  10. Misinterpretation of the results.

While the challenges cited earlier are inherent to the laws of statistics, the problems noted above stem from human decisions (or indecisions) regarding customer and employee feedback programs.

The Survey Foundation

In construction, the foundation needs to carry the “load” of the building that rests on it. Less literally, surveys need to carry the load of leadership’s objectives and fact-based decision-making. Whether it’s because surveys are considered too mundane, simplistic, or boring; are delegated to more junior or inexperienced personnel; are misunderstood; or are just taken for granted, many companies pay insufficient attention to this critical component that governs so much of the experience feedback they collect. And it shows.

Questionnaire and survey design are like advertising copy and brand positioning: everyone has an opinion and thinks they can create professional-quality materials by themselves. Well, they can’t.

DIY tools, ironically, may have contributed to the problem: in promising to “democratize” the collection, analysis, and reporting of experience data, these tools further lured companies into thinking that research is just another plug-and-play component and that everyone can be a research pro. A science kit doesn’t make someone a scientist, Intuit doesn’t make its users accountants, and DIY feedback tools don’t make their users researchers.

For now, at least, generative AI doesn’t solve this problem, as it bases its recommendations on the already flawed body of surveys out on the Internet. Moreover, AI still needs someone to direct it in terms of business objectives and other parameters.

The Research/CX Connection

As recently as ten years ago, the majority of CX research programs were housed in a company’s Market Research group. Today, that is the exception: CX programs (from measurement, analysis, and reporting through action and implementation) more often than not reside outside of, and independent of, Market Research. (By contrast, Employee Experience work still usually resides in HR.)

There are pros and cons to every organizational configuration regarding experience research. One positive consequence of this organizational reshuffling is that CX typically has far more visibility and access to senior leadership.

I don’t feel particularly strongly about the turf issues. What I do feel strongly about, however, is the need to maintain professional caliber research and measurement at the core of a company’s experience program, regardless of organizational structure. In moving CX out from the Market Research Department, many firms, unfortunately, failed to bring along or hire additional research talent or consultants to manage and continue to enhance the customer feedback programs that form the foundation of their larger CX efforts. Hence the problems regarding collecting, analyzing, and reporting customer feedback.

Best-in-class experience programs are built on professional caliber experience research and measurement. I don’t mean to sound backward-looking or to suggest that customer experience feedback should be buried back in Market Research and lose its connection to the larger organization. But you don’t want me laying the concrete for the foundation of a building any more than you should want non-researchers running the research that is the basis for customer and employee experience programs.

Howard Lax, Ph.D.

Supporting better-informed decision making with technology, research, and strategy. With a focus on CX/VoC/NPS, Employee Engagement, and emotion analytics, Howard's domain is the application of marketing information and SaaS platforms to solve business problems and activate CX programs that drive business objectives.
