“Things are not always what they seem.”
We can thank Phaedrus, a Roman fabulist of the first century AD, for that insight. He’s also credited with coining a curious metaphor: “opportunity has hair in front, but is bald behind.” Phaedrus isn’t the gung-ho, up-tempo spirit I’d want on my innovation team. But he’s my kind of guy. Say what you will, but Phaedrus didn’t have his head stuck in the sand.
Consider two sales scenarios of interest to modern-day Phaedrai:
- Scenario 1: XYZ Company bought a 40-seat software license from CorporateSoft, an ERP software developer. The transaction closed sixteen months later than originally forecast. CorporateSoft had projected selling 1,000 seats, but XYZ kept scaling back its project. And instead of buying CorporateSoft’s premium service, XYZ approved only the basic level included in the license fee. CorporateSoft’s CFO determined that selling and support costs for this opportunity far exceeded total revenue.
- Scenario 2: ABC Enterprises, a global company, solicited bids for a multi-million-dollar purchase of outsourced programming services and consulting, stating that one vendor would be awarded the contract. Each prospective vendor was required to provide a statement of qualification and attend a bidders’ conference at ABC’s headquarters. CorporateSoft and four competitors pursued the opportunity. Sixty days later, ABC informed CorporateSoft that its offering was “non-compliant” and it would no longer be considered. The four remaining companies continued, spending more than $500,000 apiece. But before choosing a vendor, ABC abruptly announced it was merging with a competitor and scuttled the project.
For CorporateSoft, which scenario produced the more favorable outcome? Which was the “win,” and which was the “loss”? If your answer begins with a preamble like “it depends…,” you’re not alone.
As a salesperson, I’ve had more Pyrrhic victories like Scenario 1 than I care to recount. And I’ve felt schadenfreude watching rivals go down in flames dealing with demonic clients who, luckily for me, decided against my proposal. Thank you, thank you, thank you! Sometimes, things really do turn out for the best!
These mixed outcomes are undoubtedly familiar to sales veterans. Yet, since time immemorial, sales managers have taken a binary view, categorizing sales opportunities that close as wins, and those that don’t as losses. That creates misunderstandings and raises revenue risks. Sales outcomes are as varied as human phenotypes, making them fiendishly hard to label. And using categories like win or loss taints analysis straight out of the gate.
When people reflect on an opportunity anointed as a win, they really want to know one thing: “Why did we win?” From there, they look for anecdotes and observations that conform to the label. And when data points are lacking, they are wicked-easy to invent:
- We just did a better job at selling than Competitor X.
- Our demo team was better prepared and showed the prospect exactly what they wanted to see.
- We did a superb job of establishing trust.
- The sales rep did an excellent job of connecting with the right decision makers.
- Our proposal was submitted on time and we stayed within the client’s budget.
When reflecting on opportunities classified as losses, the objective is the same: find what conforms to the label. If that’s difficult, make something up. Most loss reports I’ve read contain observations that cluster around rep deficiencies, followed by product effectiveness and competitors’ pricing maneuvers.
- Rep failed to meet with the real decision makers.
- Rep was unable to sell the client on the value of our flagship offering.
- Our solution didn’t fully address the client’s needs.
- Competitor discount levels could not be matched by our company.
- Vendor selected was the incumbent.
The winning vendor might well have been the incumbent. What’s less certain is whether incumbency had any bearing on the customer’s choice, and if so, how much. Still, I commonly see this artifact in Loss reports.
I’ve won sales opportunities despite tripping over myself all the way to the finish line. And I’ve lost in tight competition when my strategy was sound and my execution was borderline flawless. This exposes the opportunity cost of analyzing sales engagements through the lens of Win/Loss. When compiling Win reports, my antennae are tuned to the reasons my customer bought, leaving me prone to overlooking areas for improvement – some of them glaring. Likewise for Loss reports: even when a sales engagement doesn’t produce a purchase order, it may still yield innovations or discoveries worth carrying into future engagements.
Around 2005, I abandoned the term Win/Loss analysis, replacing it with the more generic After Action Review. I did not invent this appellation. Rather, I adopted it from the US military, which, unlike a sales team, faces a dire spectrum of life-or-death outcomes. Success/Failure or Win/Loss endpoints aren’t compatible with the analytical rigor the US military uses to drive improvement.
When it comes to performing retrospectives in the military, the hubris embedded in Win, and the tail-tucked contrition embedded in Loss, aren’t welcome. I’m certain that following the initial elation over taking out Osama bin Laden in Abbottabad, Pakistan, Navy SEAL Team 6 took a critical look at what went right and what went wrong. No doubt there was plenty of both. Anything less candid would jeopardize every mission to follow.
After Action Reviews focus on answering six core questions:
1. What was the intended outcome?
2. What was the actual outcome?
3. What was learned?
4. What do we do now?
5. Who should we tell?
6. How do we tell them?
Every sales opportunity, every business-development project, and every marketing campaign, regardless of the result, should be subjected to the same interrogative gauntlet. No pandering by playing back what managers want to hear, or reflexively confirming what they already profess to know. The After Action Review’s hierarchy of questions is key to finding the most effective path to improvement. Questions 1–3 are designed to expose gaps, both positive and negative, and help analysts reflect on likely causes. Questions 4–6 focus on ranking the urgency of recommendations and figuring out how to communicate them. Too many times, strategic and tactical priorities never reach the managers capable of implementing them.
In the case of Scenario 1 above, the After Action Review would push the analysis to explain the gap between the original expectation of 1,000 seats and the final purchase of 40. The findings uncovered are often invaluable for preventing estimation errors and other problems in future selling opportunities.
What’s missing from both Win/Loss Reviews and After Action Reviews is accountability. Once recommendations reach the right people, then what? Neither approach ensures that someone who needs to act won’t be complacent and sit on the recommendations. That’s a risk companies must address through their own change management processes.
Sales outcomes are too varied and too nuanced to simply pigeonhole as Wins or Losses. Shouldn’t our analyses give managers and sales reps more than confirmation of what they already believe? They can. After Action Reviews offer a better way for organizations to learn, and to act on what they discover.