“General Electric estimates that the Industrial Internet—its name for the world of interconnected, sensor-equipped, data-generating machines and other objects more commonly known as the Internet of Things—will deliver as much as $15 trillion in economic value in the next twenty years. Cisco, perhaps having learned the lesson that round figures don’t convey precision, puts that number at $14.4 trillion— but it sees that economic value coming within just seven years, not twenty,” according to a recent InformationWeek article, Hyper-inflated Tech Stats Just Got 217% Crazier.
GE and Cisco, both publicly traded companies, have reason to pump oodles of sunshine into their revenue estimates. But how an industry analyst can posit a numerical outcome like $15 trillion in revenue and provide a timeframe for the result amazes me. With numbers that size, you can estimate almost anything without fear of rebuttal. How about “$9 trillion by 2038”? Sounds good to me! By then, if I’m wrong, people won’t inform me through email, but through a nano-chip embedded in my brain—something I’m not sure I’m looking forward to.
For some analysts, a trillion dollars here or there represents an estimation rounding error—hardly worth obsessing about. But many of us sweat over much smaller estimates. We’re accountable for forecast accuracy, project lead-times, and budget-to-actual variances. And we’re often in hand-wringing mode. “Can’t anyone around here estimate correctly? We can’t plan anything when our numbers are all over the place!”
What, exactly, is an estimate? One definition, “the most optimistic prediction that has a non-zero probability of coming true,” comes from a widely respected book about project management, Rapid Development, by Steve McConnell. I know—I had to think about that one, too. But hold the idea, because I’ll get back to it.
While you’re ruminating, try this quick estimation exercise:
Without consulting online search, write down your estimated Low Value and High Value for the following items. At the end of this blog, you’ll have the opportunity to compare your estimated range to the correct answer.
1. The year of Napoleon’s birth
2. Length of the Nile River in miles
3. Maximum takeoff weight in pounds for a Boeing 747-300
4. Number of seconds for a radio signal to travel from the earth to the moon
5. Number of minutes for a space shuttle to complete one full orbit of the earth
6. Height in feet of the Great Pyramid
7. Number of bones in the adult human body
No pressure, but you will be evaluated on the total average variance of your estimates. The point of the exercise, of course, isn’t to find out how well you paid attention in school. It’s to expose the challenges involved in estimation, particularly when we’re already familiar with what’s being estimated.
The first time I tried this exercise, I found that for some questions, the correct answer was included within the range I estimated. But for other questions, I missed the correct answer completely, even when my low and high values were conservatively broad. Sure, people regularly game estimates by providing generous risk buffers to ensure their estimates cover results, but that compromises the estimate’s usefulness for forecasting, planning, and budgeting.
What distinguishes an estimate from a guess-timate from a SWAG*? Very little, actually. Estimates generally involve logic, reasoning, and the use of related facts. Guess-timates are more thinly supported. By how much? Who knows?
And SWAGs are just SWAGs. They are only marginally more credible when the S stands for Scientific, rather than Simple. In any case, by keying on the Wild-Assed-Guess part, you’ll retain an appreciation for the true quality of the resulting number. Caveat Plan-or!—Planner, beware!
McConnell connects his definition of estimate, “the most optimistic prediction that has a non-zero probability of coming true,” with its ultimate goal, which he says is “to seek a convergence between your estimates and reality.” Ah—now everything makes sense. Here are some tips for creating convergence:
1. Avoid seat-of-the-pants estimates. Though there’s wonderful lore about how a perfunctory, back-of-the-napkin calculation spawned a high-growth company or two, avoid the temptation to wing it. Instead, use an algorithm, estimation software, or a benchmark from a comparable project or program.
2. Request input from those who will be directly involved in delivering the actual results. If someone protests and says, “but Sales is too close to this to offer an objective view . . .” smack him.
3. Estimate results at a detailed level first, then aggregate the estimates. “A 10% error on one big piece is 10% high or 10% low,” McConnell writes. “The 10% errors on fifty small pieces will be both high and low and will tend to cancel each other out.”
4. When justifying the estimation range, explain why it’s unreasonable to estimate a higher value, and do the same for the lower value.
5. Identify and explain the key assumptions involved in the estimate.
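McConnell’s point in tip 3—that errors on many small pieces tend to cancel, while an error on one big piece does not—can be checked with a quick simulation. This is just an illustrative sketch (the function name and the uniform ±10% error model are my assumptions, not anything from Rapid Development):

```python
import random

def avg_relative_error(pieces, trials=10_000, error=0.10):
    """Estimate a project as `pieces` equal chunks, each with a random
    +/-10% estimation error, and return the average error of the total
    relative to the true value. Each chunk's true cost is 1.0."""
    total = 0.0
    for _ in range(trials):
        estimate = sum(1.0 + random.uniform(-error, error) for _ in range(pieces))
        total += abs(estimate - pieces) / pieces
    return total / trials

# One big piece: the total is off by roughly 5% on average.
# Fifty small pieces: the per-piece errors partially cancel,
# and the total is typically off by well under 1%.
print(avg_relative_error(1), avg_relative_error(50))
```

The aggregate of many small estimates is more accurate not because any single estimate improves, but because independent errors rarely all land on the same side.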
All estimates include at least implied probabilities, even if the estimator doesn’t realize it. While software has made forays into probability assessment, humans still have a leg up, making estimation a blend of both science and art. The science part involves software tools and theoretical numbers, while the art involves heuristics and abstract thinking.
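One way to make those implied probabilities explicit is to treat a low/high estimate range as a symmetric confidence interval and back out the distribution it implies. A minimal sketch, assuming the estimator means the range as a 90% interval and that a normal distribution is a reasonable fit (both of those are my assumptions, not claims from the post):

```python
from statistics import NormalDist

def implied_distribution(low, high, confidence=0.90):
    """Interpret (low, high) as a symmetric confidence interval around
    the mean and return the normal distribution it implies."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # ~1.645 for 90%
    mean = (low + high) / 2
    sigma = (high - low) / (2 * z)
    return NormalDist(mean, sigma)

# An estimator who says "30 to 50 days" with 90% confidence is
# implicitly saying there's about a 5% chance it runs past 50 days.
d = implied_distribution(30, 50)
print(round(1 - d.cdf(50), 3))
```

Reading the range this way turns a vague “low to high” into a statement a planner can actually act on, such as the probability of blowing past a deadline.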
I estimate that humans will have a role in estimation for at least another thirty years. After that, the only safe job title will be Algorithm.
* SWAG: Simple—or sometimes, Scientific—Wild-Assed Guess
Answers:
1. Napoleon was born in 1769
2. The Nile River is 4,132 miles long
3. The maximum takeoff weight of a 747-300 is 833,000 pounds
4. It takes about 1.3 seconds for a radio signal to reach the moon from the earth
5. It takes 90 minutes for the space shuttle to complete one full orbit of the earth
6. The Great Pyramid is 481 feet high
7. There are 206 bones in the adult human body
Hey Andy, great post. I’m an analytics guy, and I certainly agree with your perspective. I’d only add that sometimes you have to make do—or make what I call a GEFN decision… Good Enough For Now… based on the data you have available or the timeframe you have to work with.
Context matters, too, right? For example, when are you counting those bones in the human body? If you’re counting at birth, there are somewhere around 270-350, depending on which source you believe. If you count in an adult, it’s usually 206, unless something didn’t fuse properly over time. Sometimes, you just have to make averages work. In other cases, you need to dig in closer to a specific instance (such as doing an MRI to determine the exact bone count of one person). To me, the magic is knowing when you can rely on the averages, and when you need a very specific, 100% accurate answer.
Of course, that’s where your tips for creating convergence come in, and I really enjoyed those.
Stay the course,
Mike
Mike: thanks very much for taking the time to comment. I’m glad you mentioned GEFN—I have found that many companies estimate with that notion, accepting the risks that the estimate might be flawed or that the expanded range of the estimate might cost them more for implementation. (E.g., if anticipated product demand over the lead time is highly uncertain, more safety stock must be carried—hence a higher Cost of Risk.)
The issue you brought up about needing a 100% accurate answer is an important one. If such a high level of certainty is required, the correct approach isn’t an estimate–which is inherently probabilistic–but rather, a specification.
Even though the test of a good estimate is reality convergence, most planners would not sign up to estimate anything if 100% accuracy were required—they would just hand the problem to engineering.
Andy,
Good point about the specification. 100% was a stretch… moving too fast; mea culpa. In most of the analysis work I do, when “accurate” answers are needed, I’m using a 95% confidence interval. I’m not building bridges or writing code for medical devices. But your point is well-taken and accurate. In either case, good post.
In terms of “now” on GEFN, that’s been variable, in my experience. In many cases, for me specifically, it just meant coming back around to programs or initiatives to fine-tune them, after hitting a challenging launch target. Mostly a “continuous improvement” approach, incorporating field feedback as well, along the way.
Mike