Infer Keeps It Simple: B2B Lead Scores and Nothing Else

I’ve nearly finished gathering information from vendors for my new study on Customer Data Platform systems and have started to look for patterns in the results. One thing that has become clear is that the CDP vendors fall into several groups of systems that are similar to each other but quite different from the rest. This makes sense: most of the existing CDP systems were built to solve specific problems, not as general-purpose data platforms. Features will probably converge as vendors extend their products to attract more clients. But right now the groups are quite distinct.

One of these categories is systems for B2B lead scoring. I found three CDPs in this group: Lattice Engines (which I reviewed in April), Mintigo (reviewed in June), and Infer, which I’m reviewing right now.

Like the others, Infer builds a proprietary database of pretty much every company on the Internet by scanning Web sites, blogs, social media, government records, and other sources for company information and relevant events. It then imports CRM and marketing automation data from its clients’ systems, enhances the imported records with information from its big proprietary database, and builds predictive models that score companies and individuals on their likely win rate, conversion rate, deal size, and lifetime revenue.
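To make the enrich-then-score pipeline described above concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, weights, matching key, and the toy logistic model are invented for illustration, since Infer's actual data and modeling are proprietary.

```python
# Hypothetical sketch of an enrich-then-score pipeline: merge a client's
# CRM record with vendor-side firmographic data, then score the result.
# All names and weights are illustrative, not Infer's actual internals.
import math

ENRICHMENT_DB = {  # stands in for the vendor's proprietary company database
    "acme.com": {"employee_count": 250, "has_free_trial": True, "funding_rounds": 2},
}

def enrich(crm_record):
    """Merge an imported CRM record with matched firmographic attributes."""
    extra = ENRICHMENT_DB.get(crm_record.get("domain"), {})
    return {**crm_record, **extra}

def score(record):
    """Toy logistic model: turn enriched features into a 0-100 lead score."""
    z = (
        -3.0
        + 0.004 * record.get("employee_count", 0)
        + 1.2 * record.get("has_free_trial", False)
        + 0.5 * record.get("funding_rounds", 0)
    )
    return round(100 / (1 + math.exp(-z)))

lead = {"email": "pat@acme.com", "domain": "acme.com"}
print(score(enrich(lead)))  # an unmatched domain would score near zero
```

A real system would of course match on fuzzier keys than an exact domain and train its weights on historical win/loss outcomes rather than hard-coding them.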

The models are applied to new records as they enter a client’s system, creating scores that are returned to marketing automation and CRM to use as those systems see fit. The most typical application is deciding which leads should go to sales, be further nurtured by marketing automation, or be discarded entirely. But Infer customers also use the scores to prioritize leads for salespeople within CRM, to measure the quality of leads produced by a marketing program, to assess salesperson performance based on the quality of leads they received, and even to adjust paid search campaigns based on the quality of leads generated by each source and keyword.
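The three-way routing decision described above (sales, nurture, or discard) could be sketched as a simple threshold rule. The thresholds and function names here are hypothetical, not Infer's API; each client would tune such cutoffs to its own lead volumes.

```python
# Hypothetical score-based routing of the kind described above:
# the thresholds and the three-way split are illustrative only.
def route_lead(score, sales_threshold=70, nurture_threshold=40):
    """Decide what a CRM/marketing-automation system might do with a scored lead."""
    if score >= sales_threshold:
        return "send_to_sales"   # high scorers go straight to reps
    if score >= nurture_threshold:
        return "nurture"         # middling leads stay in marketing automation
    return "discard"             # low scorers are filtered out

print(route_lead(85), route_lead(55), route_lead(20))
```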

Infer differs from its competitors in many subtle ways: the scope of its data sources, its matching processes to assemble company and individual data, the exact types of scores it produces, its modeling techniques, and reporting. It also differs in one very obvious way: it returns only scores, while competitors return both scores and enhanced profiles on individual prospects. Infer gathers the individual detail needed for such profiles, but has decided so far not to make them available. Its reasoning is that scores provide the major value of its system and that profiles would detract from them, perhaps because salespeople might ignore the scores in favor of profile data. Focusing on scores alone also makes Infer simpler to set up, operate, and understand.

Infer might be right, but it’s hard to imagine they’ll stick with this position once they start selling directly against competitors that offer scores plus profiles. They will surely lose many deals for that reason alone. On the other hand, Infer’s initial clients have been companies where free trial versions generate huge lead volumes, including Box, Tableau, NitroPDF, Zendesk, Jive and Yammer. Scores that accurately filter non-productive leads are more important to those companies than individual lead profiles. Perhaps there are enough such firms for Infer to succeed by selling only to them.

Whether or not Infer expands its outputs, it faces another challenge: convincing buyers that its scores and data are better than its competitors’. This might well be true: based on the information I’ve gathered, Infer seems to have a richer set of data sources and more sophisticated identity matching than at least some competitors. But my impressions may be wrong, and most buyers won’t dig deeply enough to form an opinion. Instead, their eyes will glaze over when the vendors start to get into the details, and they’ll simply assume that everybody’s data, matching, and modeling are roughly equivalent.

The only real way to measure relative quality is through competitive testing of which scores work better. Each buyer needs to run her own tests since results may vary from business to business. How many buyers will take the time to do this, and which vendors will agree to cooperate, is a very open question.

That said, I did speak with some current Infer users, who were quite delighted with how easy it had been to deploy the system and with results to date. This is hardly a random sample – these were pioneer users (the system was only launched about a year ago) and hand-picked by the vendor. But their experience does confirm that performance is solid.

Infer pricing is based on the number of records processed and connected systems. The vendor doesn’t reveal the actual rates but did say it is looking at options to make the system more affordable for smaller clients.

Republished with author's permission from original post.

