Pepsi vs Coke: Why Marketers Shouldn’t Be Fooled By the Technology ‘Taste Test’

When it comes to competition, no two brands have fought a fiercer, longer-standing head-to-head battle than Coke and Pepsi.

Historically, they've had many famous clashes, but one in particular stands out: Pepsi, in an attempt to undercut Coke, launched its infamous "taste test" campaign, which drew a lot of attention and delivered short-term gains.

The basic premise of this challenge is a blind taste test. Each participant gets two small cups, one filled with Pepsi and the other with Coke. Once they finish drinking, they choose the one they liked better.

Pepsi brought this event to public locations and found that people preferred its product. However, there's a critical flaw that undermines the effectiveness of this marketing strategy: Pepsi is sweeter, so the first sip is more appealing than Coke. Over the course of an entire cup, that sweetness becomes less desirable, and Pepsi ends up quenching your thirst less effectively than its competitor.

If Pepsi had asked its participants to drink two full cans instead of sipping out of a small cup, the results would be much different.

Applying These Lessons to Evaluating Marketing Technology

When you’re researching potential technology vendors, be wary of traps like the one Pepsi created. A “taste test” may seem tempting and simple; however, selecting a solution based on your first sip rather than on the long-term marketing and business benefits will ultimately leave you unsatisfied.

Predictive technology presents an even greater challenge. With more than 62% of B2B marketers implementing predictive and most following a “taste-test” method, there is a critical need to change the way we buy predictive.

So, what’s the alternative?

The best way to compare predictive vendors is to…

Focus on the outputs specific to your company’s pain point and use case, as well as each provider’s ability to partner effectively with your requirements, technology ecosystem, and team.

If you don’t consider all of these elements, you risk investing significant resources in the wrong technology. Unfortunately, to date, buyers have limited their evaluation criteria to model accuracy, resulting in poor selections and higher-than-expected churn due to misaligned expectations.

“Most marketers judge model accuracy, which is not enough.” – Kerry Cunningham, Senior Research Director at SiriusDecisions

To address this widespread issue, we partnered with Kerry from SiriusDecisions to deliver a new, simple five-step framework that every potential buyer should follow. Kerry has guided hundreds of organizations toward the best solution, and we’re confident that investing 45 minutes in reviewing his framework will save you many days and headaches throughout the buying process. Hear Kerry unveil the five-step framework and download his presentation for yourself.

So what makes the technology evaluation for predictive unique and, if you’re not careful, complex and risky?

Unique Challenges with a Predictive Evaluation

What makes predictive so powerful – the data, the many applications, the range of vendor options – also exposes buyers to risks. There are three challenges that every buyer should be aware of throughout the evaluation process:

1) Timeliness of the predictive provider’s dataset is devalued

You run into a few problems with the traditional “taste-test” vendor evaluation process. Training a model on historical data devalues the timeliness of the predictive provider’s dataset, and frequently updated data, such as intent and event signals, gets undervalued during a bake-off.

You only see a small sample of the product’s performance. What happens when you want to run a large-scale model? Is the system you’re looking at going to crash and burn as you make your way through the entire can?

2) You only evaluate one out of many predictive use cases

An assessment may limit itself to only one of many predictive use cases. Your organization’s needs shift over time, and the ideal application for one team may be completely different for another. A thorough evaluation is necessary to know whether you’re actually getting what you need.

For example, if you’re evaluating predictive to help you better understand prospects and customers, and apply new insights to improve conversions, then the predictive segmentation use case is for you. A vendor’s ability to deliver on predictive segmentation could be assessed by several criteria, such as the ability to:

  • Demonstrate relevant predictor variables
  • Deliver complete market coverage
  • Increase campaign conversions with hypersegmented leads

However, if your organization is focused on growing customer value and retention with improved upsell and cross-sell campaigns, then your predictive solution must deliver on customer insights and scoring use cases. Your evaluation criteria will change to:

  • Predictive models are able to integrate and ingest customer data
  • Account and customer success teams approve high-scored customers
  • Programs tested against model-sourced customers in given segments see upsell lift

Determining your use cases, and the related success criteria a vendor should meet, is essential to selecting the right solution.
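
To make this concrete, here is a minimal sketch of one way to score vendors against use-case-specific criteria, assuming a simple weighted scorecard. The criteria names, weights, vendor names, and scores are all hypothetical placeholders; substitute your own from whichever framework you settle on.

```python
# Minimal vendor scorecard sketch (hypothetical criteria, weights, and scores).
# Weight the criteria for your chosen use case, score each vendor on every
# criterion, and compare the weighted totals.

# Example: criteria for the predictive segmentation use case, weighted by importance.
criteria_weights = {
    "relevant_predictor_variables": 0.40,
    "market_coverage": 0.35,
    "campaign_conversion_lift": 0.25,
}

# Example scores (1 = poor, 5 = excellent) gathered during the evaluation.
vendor_scores = {
    "Vendor A": {"relevant_predictor_variables": 4, "market_coverage": 3, "campaign_conversion_lift": 5},
    "Vendor B": {"relevant_predictor_variables": 5, "market_coverage": 4, "campaign_conversion_lift": 3},
}

def weighted_score(scores, weights):
    """Return the weighted total score for one vendor."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for vendor, scores in vendor_scores.items():
    print(f"{vendor}: {weighted_score(scores, criteria_weights):.2f}")
```

The point of the exercise is not the arithmetic; it is that the criteria and weights are set per use case before you talk to vendors, so the comparison reflects your needs rather than a first impression.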

3) Your expectations don’t match your buying process

While we all want surefire bets when making an investment, many buyers expect vendors to show them bulletproof ROI and business cases in unrealistic time frames prior to purchasing. Understanding what a vendor can realistically prove within a given buying process is critical, both for determining which buying process you should follow and for knowing what success criteria you will have available to base your decision on.

For instance, as Kerry puts it, “The shortest form is a trial, or what I like to call a ‘Sniff Test’.” A vendor should be able to show how their product can solve your pain points with product demos, case studies, and customer references. This process could be as short as one week.

Most organizations take the middle road, which involves some form of proof of concept (POC), or what Kerry calls a ‘Taste Test’. Conducted correctly, a POC will allow you to see the results of initial predictive models built around your use case. Some vendors can deliver a POC in several days; others can take months.

The last option is the ‘Sample Meal’, or pilot. Here vendors build and deploy models, and buyers test those models by running campaigns and delivering model-sourced leads to small teams. A pilot is the only option if you want to see conversions or impact on the bottom line, and pilots are typically paid. Length can vary greatly and should be based on your sales cycle and average campaign response curve. Expect at least 30 days, but some teams run pilots over the course of several months.
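
To illustrate why the pilot is the only stage that can show conversions, here is a small, hypothetical read-out comparing model-sourced leads against a baseline group worked over the same period. All lead counts and conversion numbers below are invented for illustration only.

```python
# Hypothetical pilot read-out: compare conversion rates of model-sourced leads
# against a baseline (non-model) group over the same pilot window.
# All numbers are placeholders, not real results.

baseline_leads, baseline_conversions = 500, 25   # leads from your usual sources
model_leads, model_conversions = 500, 45         # leads scored/sourced by the vendor's model

baseline_rate = baseline_conversions / baseline_leads
model_rate = model_conversions / model_leads

# Relative lift: how much better the model-sourced group converted versus baseline.
lift = (model_rate - baseline_rate) / baseline_rate

print(f"Baseline conversion rate:      {baseline_rate:.1%}")
print(f"Model-sourced conversion rate: {model_rate:.1%}")
print(f"Relative lift:                 {lift:.0%}")
```

A read-out like this only makes sense once the pilot window has covered at least one full sales cycle; otherwise late-converting leads will understate the model’s impact.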

Keep an Eye on the Long-Term Objectives

You need to base your predictive vendor evaluation on your long-term needs.

Predictive lets you target your best prospects with a personalized message and enables you to build a relevant customer journey. But in order to be successful, you need a platform that is capable of delivering on its promised value for many years to come.

If you want to learn more about the predictive vendor selection process, or need help identifying a suitable vendor for your business, watch our webinar with Kerry Cunningham from SiriusDecisions.

Republished with author's permission from original post.

John Hurley
Modern marketing leader defining a new category of enterprise technology. Startup mentor, former founder, and life-long marketer. Joined Radius founding team 6+ years ago.
