Most people don’t humble brag about revenue forecasts.
“Our #revenue #forecasts have been 2% off for 7 straight qtrs. Can’t make them accurate. #annoyed”
But some express honest anger. When I search online for the phrase, bad sales forecasts, I get around 2,300 results. Nothing better than a search box for discovering sensitive nerve endings.
The phrase, Forecasts suck, and its semantic siblings yield a cumulative 3,900 results. Other searches yield a trove of moaning, griping, and hand-wringing over “bad numbers.” Forecasting has a learning curve, but for many companies, the trajectory points south. Clairvoyance: it’s a tricky game.
But numbers aren’t intrinsically bad. The processes that produce them are. Charismatic consultants remind us of that daily with a tool, honed for shaming – a PowerPoint slide titled The Definition of Insanity.
No need to look up the definition’s originator, or the definition. It’s Einstein, and “doing the same thing over and over again and expecting different results.” The saying has been beaten to death. But allow me one more use, because this topic just screams out for it. Then, I’ll retire this bombastic finger-wagging admonishment from my writing forever. I promise.
Everyone, it seems, commits forecasting mistakes. Check your newsfeeds for daily updates. Some are monstrous: “Just this January, the Congressional Budget Office projected that enrollment [for coverage under the Affordable Care Act] would be 12 million for 2015. And a year ago, it had forecast 13 million . . . A little bit of math shows that sign-ups in 2015 came in 22% below the CBO’s earlier forecast,” according to Investors.com. Throwing fairy dust into the air would have been a better use of taxpayer money. Write your representative.
When forecasts miss, accusations swirl in the wake. “Our salespeople don’t have a clue.” “Our basic assumptions were way off.” “Marketing never anticipated that regulators could discover our emissions cheating . . .”
Such speculation gets fuel from five myths:
1. Above all, revenue forecasts must be accurate. “The accuracy of most forecasts depends on decisions being made by people rather than by Mother Nature. Mother Nature, with all her vagaries, is a lot more dependable than a group of human beings trying to make up their minds about something,” Peter Bernstein wrote in his book, Against the Gods: The Remarkable Story of Risk.
2. Salespeople forecast unrealistic sales figures because they are “overly optimistic.” There’s no credible research that ties false optimism to salespeople any more than to lawyers, accountants, pilots, or programmers – though it’s hard to imagine hiring a sales candidate who says, “I have to think probabilistically about whether I can make goal.” So managers must accept their contribution to this problem. After all, they select people who demonstrate a rabid can-do attitude.
3. Only a leading-edge CRM solution makes more accurate sales forecasts possible. CRM systems are repositories for past events, but they have huge weaknesses for predicting the future. “Since we never know exactly what is going to happen tomorrow, it is easier to assume that the future will resemble the present than to admit that it may bring some unknown change,” Bernstein wrote. Others agree. “At their best, SFA/CRM systems give a comprehensive view of the pipeline, as well as detailed drill-downs on the state of play for any specific deal. Unfortunately, few CRM customers can really depend on (or even use) the forecast that the system produces. Most of the time, executives must second-guess the CRM data, making judgment calls that may not be consistent week-to-week and are rarely recorded anywhere. Worse, everyone’s first reflex is to call the rep if they need to find out what’s really going on with any account. As a result, the CRM data is seldom authoritative,” David Taber wrote in a 2012 article, Accurate Sales Forecasts and other CRM Fantasies.
4. Forecasts would be more accurate if salespeople were better at closing. Like many myths, this carries a shred of truth. Salespeople influence buying outcomes. But a good forecast model must account for what’s outside a salesperson’s control as much as it accounts for what’s within it. A forecasting system should include external forces and events that upend sales opportunities, such as new laws and regulations, personnel attrition, project delays, mergers and buyouts, changes in a prospect’s strategic objectives, supply chain disruptions, and fluctuations in international exchange rates.
5. If two or more people agree on a forecast outcome, their forecast is probably right. “Agreement among forecasters is not related to accuracy – and may reflect bias as much as anything else,” Nate Silver wrote in his book, The Signal and the Noise: Why So Many Predictions Fail – But Some Don’t.
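The point in myth #4, that a good forecast must account for forces outside a salesperson's control, can be sketched as a simple expected-value calculation. The deal values and probabilities below are hypothetical, invented purely for illustration; a real model would estimate them from pipeline history.

```python
# A minimal sketch (hypothetical numbers): discount each deal both by
# the probability the rep closes it (within the rep's control) and by
# the probability no external event - a regulatory change, a merger,
# a project delay - disrupts it (outside the rep's control).

deals = [
    # (deal value, P(rep closes), P(no external disruption))
    (100_000, 0.70, 0.90),
    ( 50_000, 0.50, 0.95),
    (250_000, 0.30, 0.80),
]

# Expected revenue sums value * P(close) * P(no disruption) per deal.
expected = sum(value * p_close * p_ext for value, p_close, p_ext in deals)
print(round(expected))  # 146750
```

Note that ignoring the external column (treating every `p_ext` as 1.0) inflates the forecast, which is one mechanical reason forecasts skew optimistic even when the reps' own close estimates are honest.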
People often use accuracy flippantly in conversations about forecasting. Here’s a clarification of the terms in play:
Accurate forecast – predicted revenue equals actual revenue. An impossibly high standard. The goal of an accurate forecast is to eliminate variability.
Quality forecast – the forecast predicts [outcomes] well, with the available information, and according to specific objective and/or subjective criteria. The goal of a quality forecast is to reduce variability – not to eliminate it.
Valuable forecast – a forecast that facilitates better decisions. A forecast for recurring monthly revenue can be accurate when a customer has a contractual obligation for the purchase. But its value is low, because it informs no decision that wasn’t already made.
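The gap between an accurate forecast and a quality forecast can be made concrete with a standard error measure. The quarterly figures below are hypothetical; the point is only that a quality forecast tolerates small, bounded misses rather than demanding zero error.

```python
# Illustrative sketch (hypothetical revenue figures): an "accurate"
# forecast has zero error - an impossibly high standard - while a
# "quality" forecast keeps the error small and bounded.

def mean_abs_pct_error(forecasts, actuals):
    """Average absolute percentage miss across periods."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

actuals    = [100, 110, 95, 120]   # hypothetical quarterly revenue
accurate   = [100, 110, 95, 120]   # every quarter exactly right: 0% error
quality    = [ 98, 113, 92, 118]   # consistently close: roughly 2% error

print(mean_abs_pct_error(accurate, actuals))  # 0.0
print(mean_abs_pct_error(quality, actuals))   # small, nonzero
```

Judging the second forecast a failure because its error isn’t zero is exactly the accuracy trap the definitions above describe; judging it on whether that error shrinks quarter over quarter is the quality view.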
I empathize with people who have a desire for accuracy. After all, variability is the arch-enemy of planners. We crave certainty and consistency. Why not? Orderly decision-making and predictability go hand-in-hand. Unfortunately, there are strong entropic forces in business decision making, which makes demanding forecast accuracy so unreasonable. “There’s no case in history where we’ve had a complex thing with lots of variables and lots of uncertainty, where people have been able to make econometric models or any complex models work,” forecaster Scott Armstrong told Nate Silver. “The more complex you make the model the worse the forecast gets.” Few can argue that a B2B business decision is linear and straightforward, especially ones that require collaboration, as most do.
“What’s the solution?” people ask me. “If we’re not after accuracy, then should we just give up on forecasting?” No. “The first action is to not ask for a forecast,” Ken Thoreson wrote in a blog post, Why You Can’t Get an Accurate Forecast. He recommends asking for a revenue “commitment.” He makes a good point. If accurate forecasts are unattainable, quit hounding people for them, or quit griping when they’re wrong.
I believe planning works best when people concentrate on forecast quality, which continually seeks the combination of information that best predicts when revenue will be realized, and how much it will be. Most important, forecast quality doesn’t penalize the forecast provider for being wrong. That’s inevitable. Rather, it focuses on refining information so that variability can be reduced. One characteristic that distinguishes information from data is that information reduces uncertainty. And good information reduces uncertainty better than mediocre information.
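That last claim, that information reduces uncertainty, can be shown with Shannon entropy, the standard measure of uncertainty in a probability distribution. The probabilities below are invented for illustration, not drawn from any real pipeline.

```python
# Hypothetical sketch: uncertainty (Shannon entropy, in bits) about
# whether a deal closes, before and after adding qualifying
# information such as a confirmed budget and an identified sponsor.
import math

def entropy(probs):
    """Shannon entropy in bits; lower means less uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior     = [0.5, 0.5]   # before: a coin flip on whether the deal closes
posterior = [0.9, 0.1]   # after: qualifying information sharpens the odds

print(entropy(prior))      # 1.0 bit
print(entropy(posterior))  # about 0.47 bits
```

The better the information, the further the entropy drops, which is the sense in which good information reduces uncertainty better than mediocre information.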
The pursuit of forecast quality over accuracy represents a subtle, but critical shift in thinking. Forecast quality keeps high predictive value at the forefront. And it keeps The Definition of Insanity from being mentioned whenever forecast revenue and actual revenue don’t perfectly match.