Most people don’t humble-brag about revenue forecasts.
“Our #revenue #forecasts have been 2% off for 7 straight qtrs. Can’t make them accurate. #annoyed”
But some express honest anger. When I search online for the phrase bad sales forecasts, I get around 2,300 results. Nothing better than a search box for discovering sensitive nerve endings.
The phrase Forecasts suck and its semantic siblings yield a cumulative 3,900 results. Other searches turn up a trove of moaning, griping, and hand-wringing over “bad numbers.” Forecasting has a learning curve, but for many companies, the trajectory points south. Clairvoyance: it’s a tricky game.
But numbers aren’t intrinsically bad. The processes that produce them are. Charismatic consultants remind us of that daily with a tool honed for shaming: a PowerPoint slide titled The Definition of Insanity.
No need to look up the definition’s originator, or the definition. It’s Einstein (apocryphally), and “doing the same thing over and over again and expecting different results.” The saying has been beaten to death. But allow me one more use, because this topic just screams out for it. Then, I’ll retire this bombastic, finger-wagging admonishment from my writing forever. I promise.
Everyone, it seems, commits forecasting mistakes. Check your newsfeeds for daily updates. Some are monstrous: “Just this January, the Congressional Budget Office projected that enrollment [for coverage under the Affordable Care Act] would be 12 million for 2015. And a year ago, it had forecast 13 million . . . A little bit of math shows that sign-ups in 2015 came in 22% below the CBO’s earlier forecast,” according to Investors.com. Throwing fairy dust into the air would have been a better use of taxpayer money. Write your representative.
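The shortfall arithmetic is easy to check: a 22% miss against the earlier 13 million forecast implies actual sign-ups of roughly 10.1 million. A minimal back-of-the-envelope sketch, where the “actual” figure is derived from the article’s own 22% claim rather than independently sourced:

```python
# Back-calculating the CBO miss described above.
# The "actual" figure is implied by the quoted 22% claim, not sourced.
earlier_forecast = 13_000_000   # CBO forecast made a year ahead
shortfall_pct = 0.22            # "22% below the CBO's earlier forecast"

implied_actual = earlier_forecast * (1 - shortfall_pct)
print(f"Implied 2015 sign-ups: {implied_actual / 1e6:.1f} million")  # 10.1 million

error = (earlier_forecast - implied_actual) / earlier_forecast
print(f"Forecast error: {error:.0%}")  # 22%
```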
When forecasts miss, accusations swirl in the wake. “Our salespeople don’t have a clue.” “Our basic assumptions were way off.” “Marketing never anticipated that regulators could discover our emissions cheating . . .”
Such speculation gets fuel from five myths:
1. Above all, revenue forecasts must be accurate. “The accuracy of most forecasts depends on decisions being made by people rather than by Mother Nature. Mother Nature, with all her vagaries, is a lot more dependable than a group of human beings trying to make up their minds about something,” Peter Bernstein wrote in his book, Against the Gods: The Remarkable Story of Risk.
2. Salespeople forecast unrealistic sales figures because they are “overly optimistic.” There’s no credible research that ties false optimism to salespeople any more than to lawyers, accountants, pilots, or programmers – though it’s hard to imagine hiring a sales candidate who says, “I have to think probabilistically about whether I can make goal.” So managers must accept their contribution to this problem. After all, they select people who demonstrate a rabid can-do attitude.
3. Only a leading-edge CRM solution makes more accurate sales forecasts possible. CRM systems are repositories for past events, but they have huge weaknesses for predicting the future. “Since we never know exactly what is going to happen tomorrow, it is easier to assume that the future will resemble the present than to admit that it may bring some unknown change,” Bernstein wrote. Others agree. “At their best, SFA/CRM systems give a comprehensive view of the pipeline, as well as detailed drill-downs on the state of play for any specific deal. Unfortunately, few CRM customers can really depend on (or even use) the forecast that the system produces. Most of the time, executives must second-guess the CRM data, making judgment calls that may not be consistent week-to-week and are rarely recorded anywhere. Worse, everyone’s first reflex is to call the rep if they need to find out what’s really going on with any account. As a result, the CRM data is seldom authoritative,” David Taber wrote in a 2012 article, Accurate Sales Forecasts and other CRM Fantasies.
4. Forecasts would be more accurate if salespeople were better at closing. Like many myths, this carries a shred of truth. Salespeople influence buying outcomes. But a good forecast model must account for what’s outside a salesperson’s control as much as it accounts for what’s within it. A forecasting system should include external forces and events that upend sales opportunities, such as new laws and regulations, personnel attrition, project delays, mergers and buyouts, changes in a prospect’s strategic objectives, supply chain disruptions, and fluctuations in international exchange rates.
5. If two or more people agree on a forecast outcome, their forecast is probably right. “Agreement among forecasters is not related to accuracy – and may reflect bias as much as anything else,” Nate Silver wrote in his book, The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t.
People often use the word accuracy flippantly in conversations about forecasting. Here’s a clarification of three often-used terms:
Accurate forecast – predicted revenue equals actual revenue. An impossibly high standard. The goal of an accurate forecast is to eliminate variability.
Quality forecast – the forecast predicts [outcomes] well, with the available information, and according to specific objective and/or subjective criteria. The goal of a quality forecast is to reduce variability – not to eliminate it.
Valuable forecast – a forecast that facilitates better decisions. A forecast for recurring monthly revenue can be accurate when a customer has a contractual obligation for the purchase. But its value is not high.
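The distinction between an accurate forecast and a quality forecast can be made concrete by scoring a forecast on the size and spread of its errors rather than on exact hits. A minimal sketch, with quarterly figures invented purely for illustration:

```python
from statistics import mean, pstdev

# Hypothetical quarterly forecasts vs. actuals ($ thousands), invented for illustration.
forecast = [100, 110, 95, 120]
actual   = [104, 108, 99, 113]

# The "accurate" standard demands forecast == actual every quarter.
exact_hits = sum(f == a for f, a in zip(forecast, actual))

# A quality lens looks at the size and spread of the errors instead.
errors = [f - a for f, a in zip(forecast, actual)]
mape = mean(abs(f - a) / a for f, a in zip(forecast, actual))

print(f"Exact hits: {exact_hits} of {len(forecast)}")   # 0 of 4
print(f"Mean absolute % error: {mape:.1%}")             # 4.0%
print(f"Error spread (std dev): {pstdev(errors):.1f}")  # 4.6
```

By the accuracy standard, this forecaster fails four quarters out of four. By a quality standard, the errors are small and bounded, which is what planning actually needs: variability reduced, not eliminated.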
I empathize with people who desire accuracy. After all, variability is the arch-enemy of planners. We crave certainty and consistency. Why not? Orderly decision-making and predictability go hand-in-hand. Unfortunately, strong entropic forces act on business decision making, which makes demanding forecast accuracy so unreasonable. “There’s no case in history where we’ve had a complex thing with lots of variables and lots of uncertainty, where people have been able to make econometric models or any complex models work,” forecaster Scott Armstrong told Nate Silver. “The more complex you make the model the worse the forecast gets.” Few would argue that B2B business decisions are linear and straightforward, especially ones that require collaboration, as most do.
“What’s the solution?” people ask me. “If we’re not after accuracy, should we just give up on forecasting?” No. “The first action is to not ask for a forecast,” Ken Thoreson wrote in a blog post, Why You Can’t Get an Accurate Forecast. He recommends asking for a revenue “commitment.” He makes a good point. If accurate forecasts are unattainable, quit hounding people for them, or quit griping when they’re wrong.
I believe planning works best when people concentrate on forecast quality, which continually seeks the combination of information that best predicts when revenue will be realized, and how much it will be. Most important, forecast quality doesn’t penalize the forecast provider for being wrong. That’s inevitable. Rather, it focuses on refining information so that variability can be reduced. One characteristic that distinguishes information from data is that information reduces uncertainty. And good information reduces uncertainty better than mediocre information.
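That last point can be illustrated numerically: conditioning a forecast on an informative signal narrows the spread of possible outcomes. A minimal simulation, where the deal stages and amounts are invented for illustration:

```python
import random
from statistics import pstdev

random.seed(42)

# Simulated deals: close amount depends partly on deal stage (an invented model).
deals = []
for _ in range(1000):
    stage = random.choice(["early", "late"])
    base = 50 if stage == "late" else 10   # late-stage deals close bigger
    deals.append((stage, base + random.gauss(0, 5)))

amounts = [amt for _, amt in deals]

# Without information: one forecast for everything -> wide spread of outcomes.
uninformed_spread = pstdev(amounts)

# With stage information: forecast within each group -> narrower spread.
late  = [amt for stg, amt in deals if stg == "late"]
early = [amt for stg, amt in deals if stg == "early"]
informed_spread = max(pstdev(late), pstdev(early))

print(f"Spread without stage info: {uninformed_spread:.1f}")
print(f"Spread with stage info:    {informed_spread:.1f}")  # smaller: uncertainty reduced
```

The spread within each stage group is far smaller than the spread across all deals, which is exactly what “information reduces uncertainty” means in forecasting terms.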
The pursuit of forecast quality over accuracy represents a subtle, but critical shift in thinking. Forecast quality keeps high predictive value at the forefront. And it keeps The Definition of Insanity from being mentioned whenever forecast revenue and actual revenue don’t perfectly match.
Andy: great points.
IMO, there’s enormous potential value in de-risking future revenues, rather than trying to better predict them.
Much to be learned on this from points Nassim Taleb makes. As summarized here: https://www.salesforce.com/blog/2013/12/b2b-sales-performance-gp.html
Trust this adds some value. – John
Andy, the Vodafone Managing Director stated that he was able to predict market share a quarter out to 1% accuracy, using Customer Value data. This implies measuring Customer Value and using it as a predictive index. Then we can predict revenues better.
Hi Gautam: Vodafone’s claim is a curious one. It’s among the top three telecom companies in the world. As best I could determine, their services are sold in 61 countries, and they have an extensive indirect channel. In a market as fluid as telecom services, how they can make a solid claim such as this defies my understanding. For one thing, it assumes consensus on the size of the global telecom market.
My experience has been that it’s easy to make claims, but quite another to prove them. And it’s even harder to find agreement on the “proof.” If Vodafone has the clairvoyance that you mentioned, more power to them. They’ve got a capability that no other company on the planet seems to have mastered.
Andy, I am quoting Vodafone’s Managing Director for Australia, Graham Maher. I assume he was basing this on fact.
Hi Gautam: Maher made this claim in 2008, and he died in 2010. I have not read of any similar claim before – or since. He said this in the context of CVM – customer value management, and how it enabled predictive accuracy. While he might have taken facts into consideration, I don’t believe his accuracy claim was even close to factual.
CVM has strong business value, but his claim was gratuitous. CVM does not account for external factors that are volatile and influence customer buying decisions: competitive price maneuvers, availability of new technology, obsolescence of old technology, regulatory forces, economic forces, and channel partner sales efficacy. Nor does it account for the evolution of concurrent technology, such as mobile software development, upon which telecommunications sales depend. Not to mention that predicting telecommunications market share to within one percent requires knowing the size of the telecommunications market, which in 2008 was $297 billion in the US alone. No company can credibly make such a predictive claim.
I agree, Andy. But at a point in time it gives very good indications. The next day there could be a terrorist attack, and yesterday’s predictions will go up in smoke.
A very thoughtful article.
We are well advised to recognise that all forecasts are models of reality and as such are simplifications, sometimes gross simplifications. As your comment about Vodafone suggests, there are any number of factors, known and unknown (as Donald Rumsfeld so infamously put it) and even unknowable, that influence the accuracy of these models. I often find it more useful to provide a crude estimate complete with the assumptions upon which it is based, rather than a purportedly accurate forecast with spurious margins of error.
The devil is in the details. And the details are often complex, adaptive and systemic. Not exactly conducive to accurate forecasting.
Thanks, Graham. Your comment about simplifications reminded me of Philip Tetlock, who identified a thought pattern that appeared to facilitate better predictions: considering multiple points of view. He called the people who approached prediction that way ‘foxes,’ and compared them to ‘hedgehogs’. Hedgehogs take a single idea – I think we use shiny object in today’s business vernacular – and extrapolate it to darn near everything. If anything, I think hedgehogs are prone to oversimplification.