“You can only manage what you measure.”
A natural corollary to this truism is that mismeasurement will lead to mismanagement. To a large extent, this is a restatement of the garbage in, garbage out (GIGO) axiom: The wrong inputs or measures will produce erroneous results and lead to misguided conclusions.
While we take this for granted with regard to computer code, directions, formulas and numbers, it amazes me how little attention is often paid to this fundamental issue when it comes to capturing and analyzing voice-of-the-customer (VOC) data.
Perhaps this bothers me more than it should, but a recent excerpt from Faster, Cheaper, Better (Hammer and Hershman, 2010) on the “seven (deadly) sins of corporate measurement” really hit home, as I have seen all of these sins committed in various customer loyalty and customer experience programs conducted by major corporations. Mismeasurement in many VOC programs threatens to undermine those programs’ very efforts to strengthen customer loyalty and improve the customer experience.
Here are Hammer and Hershman’s seven sins, with examples of how they have undercut VOC programs.
Picking metrics that are easy to hit and that make managers look good.
I’ve seen companies, unhappy with the percentage of top scores they receive, treat any nonnegative rating as a positive indication of loyalty or use scales that make anything short of the most egregious service failure look like success. This may make the dashboard results look better, but the illusion of excellence isn’t excellence.
Asking customers questions along organizational lines or using internal jargon that has no meaning to them.
Operational definitions may seem mundane, but plain descriptions make more sense to customers than expecting them to appreciate nuanced internal distinctions between service management and service delivery, or between tellers and platform personnel.
Measuring from the company’s perspective, rather than from the customer’s.
On countless occasions I’ve had companies insist that customers were wrong about timing measures. The underlying issue is often different starting points. The firm would track the time to resolve a problem from the point when a service tech contacted the customer or opened a service order. OK, but customers begin marking time when they first call or log the issue (or even from the first moment they experience a problem). Of course, perceived time – even if inaccurate – is ultimately what matters to customers anyway (hence the Disney “magic” of turning wait time into part of the experience).
Assuming that the company knows what matters to customers better than customers do.
I battled with a major mortgage investment player that insisted on evaluating its performance on those criteria it “knew” should matter most to customers. The company postulated a corporate advantage number that did not reflect what was most important to customers. Not surprisingly, the advantage number had little to do with customer loyalty or satisfaction with the firm.
Taking too narrow a view of a larger issue.
Asking customers about the geographic footprint of their cell service, for example, scarcely captures their sense of the quality and reliability of the service.
Losing sight of the consequences of measurement.
If you measure and highlight the number of rooms housekeeping cleans in an hour or the call center turns in a shift, don’t be surprised if the numbers you are tracking improve but the guest or customer experience deteriorates.
Failing to take measurement seriously.
This category is where I place many of my concerns about mismeasurement and poorly designed VOC research, including social media and text analysis. The who (sample or population), what (content), when (timing), how (mode of data collection) and why (type of analysis) of measurement need to be clearly understood and driven by business objectives. (Note: “We need to do a survey” is not a business objective.) These aren’t simply technical issues for the data wonks; they are the critical parameters that determine the application and utility of the results. In other words, they are the issues that guard against the garbage-in part of the GIGO problem.
Who is not an existential question. Rather, it is the practical issue of the people (or households, or companies, etc.) that are included in the data and the underlying key question: What larger population is the data representative of or projectable to? Is it representative of all customers? Online users only? Those who post comments online only? Customers who came into the store and who made a purchase and paid with a store-branded credit card? Or (gulp), do you have no idea how the who is defined?
Content may seem easy, but how you ask what you ask is anything but. Are respondents answering the questions you intended to ask in a consistent, reliable manner? Do you have the right breadth and depth of inquiry? The what issues often bump up against practical concerns about survey real estate and the need to limit the length of the questionnaire.
The when is often ignored, but the time of day, week, month or year can have a significant impact on the customer experience, as well as on the response rate. This becomes particularly important when it comes to trending.
While you might not have many options with regard to the how of data collection, there is a mode effect (i.e., how you collect the data will affect the data). The mode of data collection also affects how much you can ask, what you can ask and how you should ask it.
Ultimately, the why is all about application. How do you plan to analyze and use the data? This is where the research meets the business objectives. In a well-conceptualized engagement, the why is specified up front and determines many of the who, what, when and how issues.
Note: This piece first appeared in Quirk’s E-newsletter February 28, 2011.