Do You Mangle What You Measure? Eight Pitfalls to Avoid

“You can’t manage what you can’t measure.”

Grrrrrr! It sounds authoritative. Catchy, too. I like it!

I wanted to learn who originated this hallowed maxim, and my search led to none other than W. Edwards Deming, the quality guru. He must have known what he was talking about. Thanks to Deming’s pioneering work in statistical process control, the seals on my car doors don’t leak when it rains, and when I put the vehicle in drive, it goes forward, not into reverse.

But it turns out that Ed never made that statement. In fact, his original thought got garbled over the years. What Deming did say was “The most important figures that one needs for management are unknown or unknowable . . . but successful management must nevertheless take account of them.” That’s too long to Tweet, and it looks clunky on a PowerPoint slide. No wonder his verbose sentence mutated into those seven presentation-friendly words.

Deming’s circumspect voice has spawned a font of finger-wagging spinoffs:

  • “You treasure what you measure.”
  • “You can expect what you inspect.”
  • “You can’t control it, unless you measure it or model it.”

Measurement and metrics are smoking hot right now, with nearly 63 million search results when I last checked a few seconds ago – more than double the number I got when I entered “crime prevention.” And for good reasons:

  • uncertainty and risk confront us every day
  • humans love explanations
  • we have boundless capabilities for keeping track of stuff

Not to mention, we have pragmatic business needs. As Deb Calvert wrote in a recent blog post, “With the emphasis on results, plus the accolades and rewards you’ve received for producing results, you may be singularly focused on the numbers, the volumes, the productivity, and the bottom line.” But even wonky Deming cautioned that running a company on visible figures alone is one of the seven deadly diseases of management. Danger, Will Robinson! You might mangle what you measure!

Steer clear of these pitfalls:

1) Measuring what’s not meaningful.
Number of outbound calls made. Number of demos given. Quantity of Facebook likes. Satisfaction indices. For managing revenue risk, do these measurements matter? For some companies, maybe. But for others, no. The problem is compounded when employee compensation is based on measurements unrelated to delivering value. At one company I worked with, the district manager announced to the sales force that he was going to track Windshield Time Ratio, or WTR, which he defined as monthly revenue divided by total miles driven. The salesperson who covered suburban Philadelphia had a more efficient ratio than the rep who covered the entire state of Nebraska. Much more. But so what?
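
To see why, here is a minimal sketch with invented revenue and mileage figures (not the actual company’s numbers), showing that WTR mostly rewards geography:

    # Windshield Time Ratio (WTR) = monthly revenue / total miles driven.
    # All figures below are invented for illustration.

    reps = {
        "Suburban Philadelphia": {"revenue": 80_000, "miles": 900},
        "Statewide Nebraska":    {"revenue": 80_000, "miles": 6_500},
    }

    for territory, t in reps.items():
        wtr = t["revenue"] / t["miles"]
        print(f"{territory}: WTR = ${wtr:,.2f} of revenue per mile")

    # Identical revenue, wildly different "efficiency." The metric mostly
    # measures territory geography, not anything the salesperson controls.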

2) Succumbing to the flaw of averages.
“The Pollyanna way of forecasting the future is to take averages from the past and project them forward,” Steve Parrish wrote in Forbes (October 3, 2012), adding, “It is not necessarily wrong to use averages in making financial decisions, but it is dangerous to rely on this measuring tool alone. Computers are powerful tools; let’s put them to work. Why not look at various assumptions and scenarios to get a feeling for possible outcomes?”
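
A toy simulation makes Parrish’s point concrete. The sketch below, my own illustration with made-up growth rates, compares a single average-based projection against a spread of resampled scenarios:

    import random

    random.seed(42)

    # Made-up historical annual growth rates -- the "averages from the past."
    history = [0.08, -0.03, 0.12, 0.05, -0.10, 0.15]
    average = sum(history) / len(history)

    start = 1_000_000  # illustrative starting revenue

    # Pollyanna projection: compound the single average forward five years.
    pollyanna = start * (1 + average) ** 5

    # Scenario view: resample the historical rates 10,000 times.
    outcomes = []
    for _ in range(10_000):
        value = start
        for _ in range(5):
            value *= 1 + random.choice(history)
        outcomes.append(value)

    outcomes.sort()
    print(f"Average-based projection: {pollyanna:,.0f}")
    print(f"10th percentile scenario: {outcomes[1_000]:,.0f}")
    print(f"90th percentile scenario: {outcomes[9_000]:,.0f}")
    # One tidy number conceals a wide spread of plausible outcomes.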

3) Putting too much credence in “what the numbers say.”
Otherwise known as Dashboard Love. “Monitor individual departments, multiple websites and anything else using dashboards,” one website proclaims. Anything else? Impressive, if it were possible.

4) Not questioning the accuracy of the measurements. Or their origin.
“66% of the buying process is complete by the time a salesperson gets involved.” (I’ve also read 60% and 67%.) “Two thirds of all marketing content is never used by salespeople.” A separate article squawks, “70% of sales enablement content is never used.” Hmmmmm. We bandy these numbers about as if they were de facto standards, without defining terms or clarifying the context in which these measurements were allegedly made.

In current public policy debates, such as those over racial profiling and law enforcement, we hew to metrics that support our points of view. But, as The Wall Street Journal reported in December 2014, “It is nearly impossible to determine how many people are killed by the police each year . . . Three sources of information about deaths caused by police – the FBI numbers, figures from the Centers for Disease Control and data at the Bureau of Justice Statistics – differ from one another widely in any given year or state.”

5) Injecting biases: confirmation bias, anchoring, the bandwagon effect.
Biases are always present when metrics are used for decision making. According to Bob Hayes (The Hidden Bias in Customer Metrics), “generally speaking, better decisions will be made when the interpretation of results matches reality” – an outcome no one should ever take for granted. He illustrated with an example: “we saw that a mean of 8.5 really indicates that 64% of the customers are very satisfied (% of 9 and 10 ratings); yet, the [Customer Experience] professionals think that only 45% of the customers are very satisfied, painting a vastly different picture of how they interpret the data.”
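
Hayes’s mean-versus-top-box gap is easy to reproduce. In this sketch (two invented rating distributions, not his data), the same mean of 8.5 corresponds to quite different shares of “very satisfied” customers:

    from statistics import mean

    # Two invented distributions of 0-10 satisfaction ratings.
    clustered = [8, 8, 8, 8, 8, 9, 9, 9, 9, 9]        # everyone near the mean
    polarized = [10, 10, 10, 10, 10, 10, 5, 6, 7, 7]  # delighted plus lukewarm

    for name, ratings in [("clustered", clustered), ("polarized", polarized)]:
        top_box = sum(r >= 9 for r in ratings) / len(ratings)
        print(f"{name}: mean = {mean(ratings):.1f}, "
              f"very satisfied (9-10) = {top_box:.0%}")

    # Both means are 8.5, yet the top-box percentages differ --
    # reading only the mean paints the wrong picture.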

6) Observer effect.
The measurement perturbs the results. The idea is often traced to 1927, when physicist Werner Heisenberg formulated his uncertainty principle; in popular usage it has come to stand for the notion that the act of observing a system changes how the system behaves. The same thing happens in business development. It is one reason, for example, that opt-in and opt-out online surveys yield different results: with opt-in, respondents may be motivated by the prospect of a reward. That can create a fundamental misperception of reality, a point that James Mathewson made in an article, How to Measure Digital Marketing with Observer Effects. He wrote, “we have to accept that digital marketing management is not all science. It’s almost as much art as science. If we embrace the art, we can get better information on our users, and ultimately serve them better without violating their privacy.”
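
A small simulation shows how the survey mechanism itself perturbs the measurement. The mechanism assumed below – that less satisfied customers are more motivated to opt in – is just one illustrative possibility, with invented numbers throughout:

    import random
    from statistics import mean

    random.seed(1)

    # Invented population of true satisfaction scores on a 0-10 scale.
    population = [min(10, max(0, random.gauss(6, 2))) for _ in range(100_000)]

    # Opt-out style: a simple random sample of everyone.
    random_sample = random.sample(population, 1_000)

    # Opt-in style (assumed mechanism): response probability rises as
    # satisfaction falls, so unhappy voices are over-represented.
    opt_in = [s for s in population if random.random() < (10 - s) / 10 * 0.2]

    print(f"True mean:          {mean(population):.2f}")
    print(f"Random sample mean: {mean(random_sample):.2f}")
    print(f"Opt-in sample mean: {mean(opt_in):.2f}")
    # The act of measuring -- who chooses to respond -- changes the result.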

7) Compiling faulty indices.
Many indices are flawed and easy to manipulate, The Economist reported in an article, How to Lie with Indices (November 8, 2014). Its sardonic exposé reveals some commonly used tricks. “Above all, remember that you can choose what to put in your index – so you define the problem and dictate the solution.”
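
That trick is easy to demonstrate. In the sketch below (invented companies, scores, and weights), the same raw data produces opposite rankings depending on what the index builder chooses to emphasize:

    # Invented component scores for two hypothetical companies.
    scores = {
        "Company A": {"growth": 9, "margin": 3, "retention": 5},
        "Company B": {"growth": 4, "margin": 8, "retention": 7},
    }

    def build_index(weights):
        return {name: round(sum(weights[k] * v for k, v in parts.items()), 1)
                for name, parts in scores.items()}

    # Weight growth heavily and Company A "wins"...
    print(build_index({"growth": 0.6, "margin": 0.2, "retention": 0.2}))
    # ...weight margin heavily and Company B "wins." Same data, opposite story.
    print(build_index({"growth": 0.2, "margin": 0.6, "retention": 0.2}))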

8) Embedding unneeded complexity.
Seth Goldman, co-founder of Honest Tea (now part of Coca-Cola), wrote in an article, Way Too Many Metrics, “At Honest Tea, we’ve pondered many different metrics when trying to quantify the impact of our mission-driven business, including:

  • The reduced calories for each bottle, can or drink pouch we sell
  • The increase in organic acreage fueled by our expanded demand, which helps support a less chemical-intensive approach to agriculture
  • The community investment dollars that we are able to generate with our Fair Trade premiums, such as support for schools, ambulances or eye care for villagers in a tea garden
  • The influencer/ripple effects of our success, when we create pressure for competitors to expand their low/no-calorie or organic options

But instead we keep it simple – we evaluate the impact of our mission by counting the number of bottles we sell.” At the time he wrote the article, he knew the number: 930,601,802 bottles since inception.

There’s unlimited room for mangling what’s measured. The faulty insight that yields the misdirected strategy or tactic. The hubris that results after statistical output is anointed as having predictive validity. “The threat is that we will let ourselves be mindlessly bound by the output of our analyses even when we have reasonable grounds for suspecting something is amiss,” wrote Viktor Mayer-Schönberger and Kenneth Cukier in their 2013 book, Big Data. Nicholas Carr, author of a recent book, The Glass Cage, offers a similar caution: “. . . templates and formulas are necessarily reductive and can too easily become straitjackets of the mind.”

4 COMMENTS

  1. In Act 1 of Shakespeare’s “Julius Caesar”, Cassius says to Brutus: “The fault, dear Brutus, is not in our stars, But in ourselves, that we are underlings.” Bringing the sentiment to more current times, cartoonist Walt Kelly’s Pogo Possum said: “We have met the enemy, and he is us.” For the purposes of measurement, especially around customer experience and stakeholder behavior, this suggests to me that one of the additional ways data and analysis can get mangled is that researchers can get complacent, even ‘stuck’, and take little or no responsibility for this. In other words, too often the pitfalls, which have been known for years and years, occur . . . because of us.

    We, the researchers, too readily accept, or choose to look away from, the kinds of issues you’ve identified, plus others, such as metrics that don’t really measure what they claim to measure. Many of the metrics and KPIs in active use today have design problems that range from moderate to serious, plus methodological flaws, insight inaccuracies and other analytical limitations. The researcher who doesn’t challenge them, and seek more real-world actionability, is doing his employer, or his client, a disservice.

    Like potholes in the road, most of these research pitfalls can be avoided, or at least reduced, with a little more energy, discipline and focus.

  2. Michael – thanks for your comment. I am a fan of Pogo, so I will have to look up your example. I’ve been “collecting string” for this article for several months. One writing challenge was paring down the examples – it seems not a day goes by that I don’t read about a bad strategy, or a poor or calamitous result, stemming from the problems I highlighted. My belief is that there’s way too much groupthink in companies, and that even though there are many employees who really do know better, their opinions are squelched by others who have a vested interest in maintaining their departmental or functional power.

  3. There’s a great quote from Andrew Grove, the longtime CEO of Intel: “Complacency breeds failure. Only the paranoid survive.” After working with so many researchers and analysts for so many years, my summary observation is that they become complacent and risk-averse. This impairs their ability to be contemporary in their thinking and their use of measures, and it also stunts the actionability of their tactical and strategic studies. So, it’s often not just the measures that are mangled, but the thinking and acceptance behind them.

    Cases in point that you’ve identified: Use of mean (and not polar) data, indices that provide minimal (or inconsistent) direction, acceptance or ignorance of biases. And one that I’d add: Failure to apply metrics that are contemporary and real-world.

    Deming’s writings, like Out of the Crisis, make clear that he understood the value of good market and customer experience research. He also knew that rigid and/or constrained systems and cultures could hold an organization down. Especially with regard to metrics and their application, he saw firsthand that the right measures could be applied to positive effect and that impaired metrics could do just the opposite.

  4. Agreed – thanks for the addition. Many managers are self-satisfied with the KPIs and predictive analytics they receive about their companies, and are content to accept measurements that are flawed or no longer useful. As one CEO I worked with put it, when it comes to introducing new ideas and thinking into the executive suite, “there’s too little oxygen in the room.”
