# Metrics – Lots & Lots of Metrics

Readers of this blog have seen a lot about the indispensable value of metrics. First, ya’ gotta’ have a defined process. Then ya’ gotta’ measure it. That’s the only way to know if improvement has occurred or not, and at what rate. It’s the only way you can prove your dedication to continuous improvement. Show me the data! If you don’t have data, you’re just blowing smoke.

OK, hard to debate, but… It’s also crucial to have a lot of different metrics. Without a variety of statistical perspectives, it’s easy to misinterpret what a given metric really means. You might put some action plans in place that are incomplete, a bit off target or downright wrong-headed.

Here’s an example from the social/political world to illustrate the danger of using only one metric. Consider income/wealth inequality. One widely cited chart shows that the richest 1% of Americans have 1/3 of the money, and the poorest 50% have only about 2.5% of it. In other words, for every \$1 someone in the poor group has, someone in the rich group has \$676. Or you could say \$100 vs. \$67,600; or \$10,000 vs. \$6,760,000. One might conclude that such a big difference is just wrong, and that something should be done about it. Like maybe taxing the rich group & subsidizing the poor one???
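The per-capita arithmetic behind that \$1-vs-\$676 comparison can be sketched in a few lines. This is a minimal illustration, not real data: the wealth shares below are the approximate figures implied by the ratio quoted above.

```python
# Per-capita wealth ratio: dollars held by the average member of the
# richest group for every dollar held by the average member of the
# poorest group. Shares here are illustrative approximations.

def per_capita_ratio(rich_share, rich_pop, poor_share, poor_pop):
    """Average wealth per person in the rich group divided by
    average wealth per person in the poor group."""
    return (rich_share / rich_pop) / (poor_share / poor_pop)

# Richest 1% hold ~1/3 of the wealth; poorest 50% hold ~2.5% of it.
ratio = per_capita_ratio(rich_share=1 / 3, rich_pop=0.01,
                         poor_share=0.025, poor_pop=0.50)
print(round(ratio))  # ≈ 667, in the neighborhood of the $676 figure
```

The exact output depends on the shares you plug in, which is precisely the point of the paragraph above: small changes in how the groups are defined swing the headline number dramatically.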

Let’s take a look at a different metric regarding this income disparity thing. The poorest 5% of Americans are richer than 68% of the world’s inhabitants. Compare the US numbers to India’s. America’s poorest are, as a group, about as rich as India’s richest! Is that also just plain wrong? Should somebody do something about that too? Like maybe taxing all Americans & shipping gigantic barrels of cash to everyone in India???

My point is not to debate what should or should not be done by whom regarding income and wealth disparity. My point is that the way the question is framed and supported by data can have a really, REALLY big impact on the resulting action plan. Different sets of numbers about the same thing can tell a radically different story, and lead to radically different decisions. We need to think, discuss, debate, discover and learn just exactly what the numbers are telling us.

How do you objectively judge – using data, of course – the performance of a sales rep? Total sales? Sell cycle time? Profitability? Sales of old products to new accounts? Sales of new products to old accounts? Average sales size? Number of sales to new accounts? Number of sales to existing accounts? (I could add the other 700 or so possible sales metrics I’ve collected over the years, but I’ll spare you!)
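Several of those candidate metrics can be computed side by side from the same raw deal records, which is what makes the "which one matters?" debate possible in the first place. Here's a minimal sketch; the field names and sample records are hypothetical, for illustration only.

```python
# Compute a handful of candidate sales metrics from one set of deal
# records. Field names and sample data are hypothetical.
from statistics import mean

deals = [
    {"rep": "A", "amount": 12000, "cycle_days": 45, "new_account": True},
    {"rep": "A", "amount": 8000,  "cycle_days": 30, "new_account": False},
    {"rep": "A", "amount": 20000, "cycle_days": 60, "new_account": True},
]

total_sales = sum(d["amount"] for d in deals)                  # 40000
avg_sale_size = mean(d["amount"] for d in deals)               # ~13333
avg_cycle_time = mean(d["cycle_days"] for d in deals)          # 45 days
new_account_deals = sum(1 for d in deals if d["new_account"])  # 2

print(f"Total sales:        ${total_sales}")
print(f"Average sale size:  ${avg_sale_size:.0f}")
print(f"Average cycle time: {avg_cycle_time:.0f} days")
print(f"New-account deals:  {new_account_deals}")
```

Note that the same three deals support four different headline numbers, and a rep could look strong on one metric and weak on another — which is the argument for collecting many metrics and debating them rather than ranking reps on a single figure.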

Is it one or two or three or fifty key indicators? For every rep in every territory? Regardless of rep experience or if it’s a newly penetrated geography or segment? Is it some way cool index the MBA-toting, outside expert consultant cooked up? Is it whatever the company president chewed out the sales VP for yesterday?

This may be a shock… I don’t think the answer to that last set of questions matters all that much from a “continuous improvement of my sales process” standpoint. The truly powerful value of lots of metrics is the discussion about them.

It’s the series of deep, intelligent, challenging, painful, heated, gloriously rewarding coaching conversations that matters.

Collect the data. Analyze the data. Debate the daylights out of what the data really means. Then go sell more. Go sell it faster!

Republished with author's permission from original post.

Todd Youngblood
Todd Youngblood is passionate about sales productivity. His 30+ year career in Executive Management, Sales, Marketing and Consulting has focused on selling more, better, cheaper and faster. He established The YPS Group, Inc. in 1999 based on his years of experience in Sales Process Engineering – that is, combining creativity and discipline in the design, implementation and use of work processes for highly effective sales teams.