Don’t Misuse Proof of Concept in System Selection

Call me a cock-eyed optimist, but marketers may actually be getting better at buying software. Our research has long shown that the most satisfied buyers base their selection on features, not cost or ease of use. But feature lists alone are never enough: even if buyers had the knowledge and patience to precisely define their actual requirements, no set of checkboxes could capture the nuance of what it’s actually like to use a piece of software for a specific task. This is why experts like Tony Byrne at Real Story Group argue instead for defining key use cases (a.k.a. user stories) and having vendors demonstrate those. (If you really want to be trendy, you can call this a Clayton Christensen-style “job to be done”.)

In fact, use cases have become something of an obsession in their own right. This is partly because they are a way of getting concrete answers about the value of a system: when someone asks, “What’s the use case for system X”, they’re really asking, “How will I benefit from buying it?” That’s quite different from the classic definition of a use case as a series of steps to achieve a task. It’s this traditional definition that matters when you apply use cases to system selection, since you want the use case to specify the features to be demonstrated. You can download the CDP Institute’s use case template here.
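To make that traditional definition concrete, here is a minimal sketch of a selection use case expressed as a series of steps, each tied to the features a vendor would have to demonstrate. This is a hypothetical illustration, not the CDP Institute template; the field names and the example scenario are assumptions.

```python
# Hypothetical sketch of a selection use case: a task broken into steps,
# each tied to the features a vendor must demonstrate. Field names are
# illustrative only, not taken from the CDP Institute template.
from dataclasses import dataclass, field

@dataclass
class UseCaseStep:
    action: str                    # what the user does
    features_required: list[str]   # capabilities the vendor must show

@dataclass
class UseCase:
    name: str
    actor: str
    goal: str
    steps: list[UseCaseStep] = field(default_factory=list)

abandoned_cart = UseCase(
    name="Abandoned-cart follow-up",
    actor="Campaign manager",
    goal="Email shoppers who left items in their cart within 24 hours",
    steps=[
        UseCaseStep("Identify carts abandoned in the last 24 hours",
                    ["web event ingestion", "identity resolution"]),
        UseCaseStep("Build the audience segment",
                    ["segment builder", "real-time profile updates"]),
        UseCaseStep("Send the segment to the email platform",
                    ["native ESP connector", "scheduled exports"]),
    ],
)

for step in abandoned_cart.steps:
    print(step.action, "->", ", ".join(step.features_required))
```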

But I suspect the real reason use cases have become so popular is that they offer a shortcut past the swamp of defining comprehensive system requirements. Buyers in general, and marketers in particular, lack the time and resources to create complete requirements lists based on their actual needs (although they’re perfectly capable of copying huge, generic lists that apply to no one).  Many buyers are convinced it’s not necessary and perhaps not even possible to build meaningful requirements lists: they point to the old-school “waterfall” approach used in systems design, which routinely takes too long and produces unsatisfactory results. Instead, buyers correctly see use cases as part of an agile methodology that evolves a solution by solving a sequence of concrete, near-term objectives.

Of course, any agile expert will freely admit that chasing random enhancements is not enough. There also needs to be an underlying framework to ensure the product can mature without extensive rework. The same applies to software selection: a collection of use cases will not necessarily test all the features you’ll ultimately need. There’s an unstated but, I think, widely held assumption that use cases are a sampling technique: that is, that a system which meets the requirements of the selected use cases will also meet other, untested requirements. It’s a dangerous assumption. (To be clear: a system that can’t support the selected use cases is proven inadequate. So sample use cases do provide a valuable screening function.)

Consciously or subconsciously, smart buyers know that sample use cases are not enough. This may be why I’ve recently noticed a sharp rise in the use of proof of concept (POC) tests. These go beyond watching a demonstration of selected use cases to actually installing a trial version of a system and seeing how it runs. This is more work than use case demonstrations, but it gives much more complete information.

Proof of concept engagements used to be fairly rare. Only big companies could afford to run them because they cost quite a bit in both cash (most vendors required some payment) and staff time (to set up and evaluate the results). Even big companies would deploy POCs only to resolve specific uncertainties that couldn’t be settled without a live deployment.

The barriers to POCs have fallen dramatically with cloud systems and Software-as-a-Service. Today, buyers can often set up a test system with just a few mouse clicks (although it may take several days of preparation before those clicks will work). As a result, POCs are now so common that they can almost be considered a standard part of the buying process.

Like the broader application of use cases, having more POCs is generally a good thing. But, also like use cases, POCs can be applied incorrectly.

In particular, I’ve recently seen several situations where POCs were used as an alternative to basic information gathering. The most frightening was a company that told me they had selected half a dozen wildly different systems and were going to do a POC with each of them to figure out what kind of system they really needed.

The grimace they didn’t see when I heard this is why I keep my camera off during Zoom meetings. Even if the vendors do the POCs for free, this is still a major commitment of staff time that won’t actually answer the question. At best, they’ll learn about the scope of the different products. But that won’t tell them what scope is right for them.

Another company told me they ran five different POCs, taking more than six months to complete the process, only to discover later that they couldn’t load the data sources they expected (but hadn’t included in their POCs). Yet another company let their technical staff manage a POC and declare it successful, only to learn later that the system had been configured in a way that didn’t meet actual user needs.

You’re probably noticing a dreary theme here: there’s no shortcut for defining your requirements. You’re right about that, and you’re also right that I’m not much fun at parties. As to POCs, they do have an important role but it’s the same one they played when they were harder to do: they resolve uncertainties that can’t be resolved any other way.

For Customer Data Platforms, the most common uncertainty is probably the ability to integrate different data sources. Technical nuances and data quality are almost impossible to assess without actually trying to load each source into the system. Since these issues have more to do with the data sources than the CDP, this type of POC is more about CDP feasibility in general than about CDP system selection. That means you can probably defer your POC until you’ve narrowed your selection to one or two options – something that will reduce the total effort, encourage the vendor to learn more about your situation, and help you learn about the system you’re most likely to use.
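As a hypothetical illustration of the feasibility questions this kind of POC answers, the sketch below profiles sample extracts before anything is loaded: null rates on identity fields, the match rate between two sources, and duplicate keys. The file and column names are assumptions made for the example, not anything from the post.

```python
# Hypothetical pre-POC feasibility check: profile sample extracts to see
# whether the identity fields needed for integration are usable.
# File names and column names are assumptions for illustration only.
import pandas as pd

crm = pd.read_csv("crm_sample.csv")         # e.g. customer records
web = pd.read_csv("web_events_sample.csv")  # e.g. clickstream events

# Null rates on the fields a CDP would use to stitch identities.
for col in ["email", "customer_id"]:
    print(f"crm.{col} null rate: {crm[col].isna().mean():.1%}")

# How many web events carry an email that matches a CRM record?
crm_emails = set(crm["email"].dropna().str.lower())
web_emails = web["email"].dropna().str.lower()
match_rate = web_emails.isin(crm_emails).mean()
print(f"web events matching a CRM email: {match_rate:.1%}")

# Duplicate keys are a common surprise that only shows up when you try to load.
print("duplicate customer_ids in CRM:", crm["customer_id"].duplicated().sum())
```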

The situation may be different with other types of software. For example, you might want to test a wide variety of predictive modeling systems if the key uncertainty is how well their models will perform. That’s closer to the classic multi-vendor “bake-off”. But beware of such situations: the more products you test, the less likely your staff is to learn each product well.
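If you do run a modeling bake-off, the fairest comparison is the same holdout data and the same metric for every candidate. Here is a minimal sketch, with scikit-learn models standing in for the vendors’ tools; in a real bake-off each vendor would score the same holdout file with its own system.

```python
# Minimal bake-off sketch: score competing models on one shared holdout set.
# scikit-learn models stand in for vendor tools; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.3, random_state=0)

candidates = {
    "Vendor A (logistic)": LogisticRegression(max_iter=1000),
    "Vendor B (boosting)": GradientBoostingClassifier(),
}

# Every candidate is judged on the identical holdout sample and metric.
for name, model in candidates.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
```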

With a predictive modeling tool, it’s obvious that user skill can have a major impact on results. With other tools, the impact of user training on outcomes may not be obvious. But users who are assessing system power or usability may still misjudge a product if they haven’t invested enough time in learning it.  Training wheels are good for beginners but get in the way of an expert. Remember that your users will soon be experts, so don’t judge a system by the quality of its training wheels.

This brings us back to my original claim. Are marketers really getting better at buying software? I’ll stand by that and point to the broader use of tools like use cases and proofs of concept as evidence. But I’ll repeat my caution that use cases and POCs must be used to develop and supplement requirements, not to replace them. Otherwise they become an alternate route to poor decisions rather than guideposts on the road to success.

Republished with author's permission from original post.

2 COMMENTS

  1. Hear, hear! Coming from a Sales Executive at a large software vendor, I couldn’t agree more. I agree with Mr. Raab’s statement that Cloud Software can be accessed very quickly, and he’s also 100% right in saying that there ARE NO SHORTCUTS! Whether it takes 1 day to set up or 1 month doesn’t change _what_ the prospective software can do OR how different the functionality is from what the customer is currently using. In fact, I would posit that the longer it’s been since a company selected a particular software, the _longer_ the evaluation would need to be for replacing it. Software changes so quickly; things that had to be custom 5 years ago are typically basic table stakes today. New software is neither harder nor easier, just SO different that quite a bit of time is needed just to understand the new landscape. If you’ve ever wondered why vendors do _not_ like to demo first, I’ll tell you: the first vendor (or two) spends MOST of their time educating the prospective customer on the current state of the technology, so most of the evaluation team can’t actually evaluate the functionality of the software! It’s only once there is a base level of understanding of how things work nowadays that the team can start to understand what can work for them. POCs don’t help that and, I would argue, the more complex the functionality is, the less a POC will help (barring a significant expenditure of time to LEARN how to use it). Expense software – pretty straightforward. ERP – not so much.

  2. Thanks Burton. The point about people getting trained on software during a POC is especially important — as you say, very few do, and without that training they won’t learn much. The same problem applies to limited-time free trials, which often run out before the trial user gets around to playing with the tool.
