Creating a Comprehensive Digital Measurement Strategy: The Data Science Roadmap

In my last two posts, I described two foundational steps in building a comprehensive digital measurement strategy. These two steps, an objective assessment of the state of your digital measurement system and the creation of a high-level model of the digital channel, comprise the “here” and the “there” of a good strategy. The model describes the digital channel and, at a high level, what’s required to measure and optimize it. The objective assessment describes the current state of the system and the tools and resources available to tackle the missing pieces.

So what’s missing? We still need a plan for getting from here to there. In the next three steps, I’ll show how the process we’ve created fills in that route map, creating a truly comprehensive strategic plan.

Building a Measurement Framework

The first of these three steps combines our measurement framework (a Two-Tiered Segmentation and its corresponding success framework) with a fairly detailed Data Science Roadmap. Both are vitally important.

I’ve written so extensively about the Two-Tiered Segmentation that I’m reluctant to reprise those discussions here. However, I do want to touch lightly on why a measurement framework is strategic. In the model above, some activities (like Prospect Acquisition) are easily described from a measurement perspective. We may not know some of the details (such as how to calculate a Predicted Lifetime Value), but we know that we need measures of cost, reach, conversion, and value. Other activities aren’t so simple.

One aspect of retention, for example, may be Customer Support. When a customer visits a digital channel for Customer Support, we need to know what the appropriate measure of success is. At a high level, the measure might be retention and share-of-wallet growth, but it’s unrealistic to expect page or visit measurement to show statistically significant differences in these types of variables. You can’t optimize your existing customer support site with a retention metric.

How you choose to measure success is going to have a significant impact on your digital measurement strategy. If the best measurement of success requires a page-level rating system, then you’ll need to have the technology, people and processes for using that data. If the best measurement of success is pre-post experience sampled research, you’ll need a different set of people, process and technology to handle that. In laying out a strategy, we want to account for these decisions and provide a framework for prioritizing them appropriately. Our goal is to tie EVERY single technology and resource decision to a specific set of digital measurement problems we intend to solve.

In previous posts, I’ve described how we create a two-tiered segmentation and then elaborate a full measurement system by tying business goals to customer intent and then creating a strategy for measuring the success of those business goals for each type of visit.

This is our measurement foundation and it’s a fundamental piece of a really good measurement strategy.

Creating a Data Science Roadmap

It isn’t, however, a complete map of the route from “here” to “there” in digital measurement. For that, you also need a Data Science plan. The Data Science Roadmap is an analytics plan. Its purpose is to describe the analysis projects that need to be accomplished before the business can truly understand the digital system. In our Acquisition model there was at least one obvious example: calculating a Predicted LTV:

[Figure: Building a Model of the Business - Process Detail 5]

In many acquisition systems, you simply can’t do an effective job of optimization unless you have a good model for predicting Lifetime Value. If you’ve invested hundreds of thousands of dollars in creating an attribution system but haven’t bothered to find out if the campaigns you’re so carefully crediting source fundamentally different types of customers, you’ve missed the boat.

So it’s a good bet that, for this particular system, creating a Lifetime Value model would be a core part of the Data Science plan. And just as decisions about success measurement in the foundation drive decisions about people, process and technology, so too do decisions about Data Science projects. Creating a predicted LTV model for digital customers requires combining digital campaign data, digital behavioral data, customer cost and value data, and, often, customer demographics or third-party data. If you have to do this analysis, you need a platform (not Web analytics), you need the core data sources integrated, you need the tools necessary to solve the problem, and you may need third-party data enrichment.
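To make the “different campaigns source different customers” point concrete, here is a minimal sketch with made-up campaign names and numbers. It uses a naive per-campaign average of net customer value, not a real predictive LTV model, but it shows why joining campaign data to customer cost and value data is the prerequisite:

```python
from collections import defaultdict

# Hypothetical, hand-made records: which campaign acquired each customer,
# and the lifetime revenue and acquisition cost observed for that customer.
acquisitions = {"c1": "brand_search", "c2": "display", "c3": "brand_search", "c4": "display"}
customer_value = {"c1": 420.0, "c2": 55.0, "c3": 380.0, "c4": 70.0}
customer_cost = {"c1": 40.0, "c2": 25.0, "c3": 40.0, "c4": 25.0}

def naive_ltv_by_campaign(acquisitions, value, cost):
    """Average net lifetime value of the customers each campaign sourced."""
    totals, counts = defaultdict(float), defaultdict(int)
    for cust, campaign in acquisitions.items():
        totals[campaign] += value[cust] - cost[cust]
        counts[campaign] += 1
    return {camp: totals[camp] / counts[camp] for camp in totals}

ltv = naive_ltv_by_campaign(acquisitions, customer_value, customer_cost)
# In this toy data, brand_search customers are worth roughly ten times
# display customers, so an attribution system that credits both channels
# per conversion without this view would badly misallocate budget.
```

A production model would replace the averaging with an actual prediction over behavioral and demographic features, but the data-integration requirement is the same.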

Knowing that you have to do the analysis just IS the justification for all of those things. The model explains why the analysis is necessary. The Data Science plan describes the data, techniques and resources you’ll need to accomplish the analysis.

A Predicted LTV model isn’t the only type of analytics project that might flow out of a system like this.

We often, for example, recommend a technique we call variation analysis. It’s designed to study and isolate sources of variability in a Pay-Per-Click (PPC) Program. By studying the relationship between spend, traffic, audience, and conversion over time, this analysis seeks to identify the causes of variation in a PPC system. Why is that important?

If you know what drives variation in a system, you almost always have significant opportunities for program optimization. In one of my favorite examples, we found that for an online traffic site, weather systems were a source of dramatic variation. Big surprise, right? Super-storms, blizzards, and general bad weather drive enormously greater interest in and use of online traffic. So if you’re an online traffic site, tying your buying strategies to regional weather events can significantly improve the use of your budgets, both intra- and inter-day. It’s obvious that weather impacts usage of online traffic, but it also impacted the quality of visitors and the potential acquisition opportunities. Optimizing buying for variation takes more work (and it’s something the average agency might not bother with), but it’s a critical factor to account for in creating your digital measurement strategy.

We’re not done yet. The pass-off from Website to Call-Center introduces an obvious and all too common gap in the measurement system. You can’t measure true conversion of campaigns unless you can track from Web to Call. And if you can’t measure, you can’t optimize.

Adding this level of measurement doesn’t just fill a basic gap in the measurement system, it also enables several potentially valuable Data Science projects.

By tuning the placement and highlighting of phone numbers, you have a significant amount of influence on how many visitors will pick up the phone and call. Finding the optimal balance between Web and call-center is tricky. It often requires a combination of controlled testing and regression analysis to identify the best campaign- and page-level strategy for providing phone drives. It’s not dissimilar, in many respects, to the merchandising analytics methods I described in a previous post and whitepaper.

For this data science project, you’ll need online campaign information, the intercept survey (audience) data, Web analytics data, and call-center data. Ideally, you’ll need visitor-level integration of this data to be able to track patterns of conversion.
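The controlled-testing half of this project can be sketched quickly. The example below uses hypothetical session and tracked-call counts for two phone-number placements and computes a standard two-proportion z statistic; it stands in for the fuller regression analysis the project would actually involve:

```python
import math

# Hypothetical A/B test: (sessions, tracked calls) for two phone-number
# placements, one prominent in the header and one buried in the footer.
variants = {"header": (5000, 400), "footer": (5000, 250)}

def two_proportion_z(a, b):
    """z statistic for the difference between two call-through rates."""
    (n1, x1), (n2, x2) = a, b
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(variants["header"], variants["footer"])
# |z| > 1.96 means the placement difference is significant at the 5%
# level: grounds to roll the prominent placement into the page strategy.
```

Note that this only works at all if Web sessions and calls are joined at the visitor level, which is exactly why the data integration above is a prerequisite.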

This, in many ways, is a classic site-side analysis and it’s the type of thing that’s bread-and-butter for Semphonic. It’s a mistake, however, to only think about site-side analytics.

In any lead-generation system, one of the most important analytic projects you can do is to create a model for matching leads to operators.

In some systems, this is largely about classifying leads by quality and optimizing for performance. However, that’s not always the best way to think about the problem. Most operators are better with certain types of callers. I’ll always remember a time, many years back, when I was working for a company writing real-time trading software. We’d hired a German intern, and when a company from Germany called and he answered the phone, we got one of our earliest sales. That’s pure chance, of course, but most operators will do better with certain demographics and certain regions. By matching lead types to operator skills, you can significantly improve call-center conversion. This call-matching is the fourth Data Science project I’d recommend for this Acquisition system.
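Lead-to-operator matching is, at bottom, an assignment problem. Here is a deliberately tiny sketch with invented operators, lead types and close rates, solved by brute force over all pairings; a real system would estimate the rates from call-center outcome data and use a proper assignment algorithm at scale:

```python
from itertools import permutations

# Hypothetical historical close rates by (operator, lead type). In practice
# these come from call-center outcomes joined to lead profiles.
close_rate = {
    ("anna", "enterprise"): 0.30, ("anna", "smb"): 0.10,
    ("ben", "enterprise"): 0.12, ("ben", "smb"): 0.22,
}
operators = ["anna", "ben"]
leads = ["enterprise", "smb"]

def best_assignment(operators, leads, score):
    """Exhaustively pair operators with lead types, maximizing expected conversions."""
    best, best_total = None, float("-inf")
    for perm in permutations(leads):
        total = sum(score[(op, lead)] for op, lead in zip(operators, perm))
        if total > best_total:
            best, best_total = dict(zip(operators, perm)), total
    return best, best_total

match, expected = best_assignment(operators, leads, close_rate)
# Routing each lead type to the operator who historically closes it best
# beats routing purely on lead quality.
```

Even this toy version captures the post’s point: the optimal routing sends each lead type to the operator with the best fit, not simply the “best” operator overall.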

Bringing it all Together

These four projects might constitute the Data Science Roadmap for the Acquisition system. Keep in mind that there are five different systems in this business (which is about the minimum). So it’s not unusual for the Data Science Roadmap to contain 20-30 different analysis projects.

These projects are a REAL plan. They constitute the work of many months and, usually, multiple years. They are also the driving force behind the selection and prioritization of technology, the resourcing requests, and the data integration plan that flesh out a good digital strategy.

What’s particularly compelling about this approach is that the Data Science Roadmap provides a clear connection between what you are asking for and what you are going to deliver. If you get the people and the technology you’ve asked for in an area, these are the business questions you are stepping up to answer.

In my next post, I’ll show how we take the Data Science Roadmap and translate it into a Technology Stack and then a Resourcing Plan. With those steps we’ll almost (but not quite) have a complete digital measurement strategy.

End Note: The Road (much) Less Traveled

Webinarmaggedon is over, but I haven’t gone completely quiet. I’m doing a more personal presentation; it’s most definitely not a Web analytics presentation. This is a “softer” lecture series internal to a single large enterprise, in which I’m just one of many participants. I ended up creating an offbeat presentation that covers a blend of modern analytic philosophy, my own take on how ethics and modern neuroscience might more fruitfully meet, and some reflection on how this might actually apply to life-thinking. I’m not sure the deck is “scrutable” on its own, but if you’re interested, drop me a line. I’m happy to pass it along.

Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.
