Creating Virtuous Cycles in Digital Analytics

Consistent analytics. Cadence. Testing. Communication. I described these as the essential elements in building virtuous cycles in analytics. Virtuous cycles are all about feedback loops – and feedback loops need consistent application. You have to be able to analyze, test, and re-analyze continuously, or else the operational teams that drive the actual changes can’t be kept consistently engaged and busy. For most organizations, the answer to this demand for continuous testing has been to disassociate analytics and testing. By creating open testing intake processes and encouraging experimentation, many organizations find they can generate quite a high volume of testing ideas. There’s nothing wrong (and actually much that is good) in having this type of test generation and intake process. There is, however, a great deal that is wrong in also removing analytics from the cycle. Analytics shouldn’t be the only driver of testing, but an effective analytics program should be a significant driver of testing and a critical part of the evaluation and prioritization of all testing proposals.

In the vast majority of cases, analytics doesn’t have that role. And the reason isn’t primarily organizational or structural (even though I’ve complained about the common structural problem of separating analytics and testing); it’s methodological.

The vast majority of cases where our clients have been successful in creating analytics cycles share a common feature – analytics vets winners but doesn’t suggest new alternatives. Landing pages – the classic front-line of MVT testing – are one example. Product detail pages on ecommerce sites are another good example. Digital Marketing Campaigns are yet another example.

It works like this:

Company X deploys 10 PPC campaigns. Analytics are used to measure the CPA of each campaign. Investment dollars are shifted from the lowest performing campaigns to the highest and new campaigns are created. The process iterates.
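To make the mechanics concrete, here’s a minimal sketch of that winnowing loop in Python. The campaign names, spend, conversion counts, and the 20% reallocation rule are all invented for illustration, not a prescription:

```python
# Hypothetical campaign figures for illustration only.
campaigns = {
    "brand_search":     {"spend": 12000.0, "conversions": 300},
    "generic_search":   {"spend": 18000.0, "conversions": 220},
    "display_retarget": {"spend":  9000.0, "conversions": 45},
}

# CPA (cost per acquisition) = spend / conversions.
for c in campaigns.values():
    c["cpa"] = c["spend"] / c["conversions"]

# Rank from best (lowest CPA) to worst (highest CPA).
ranked = sorted(campaigns.items(), key=lambda kv: kv[1]["cpa"])
best_name, best = ranked[0]
worst_name, worst = ranked[-1]

# A simple winnowing rule: move a slice of budget from the worst
# performer to the best, then repeat next period.
shift = 0.20 * worst["spend"]
print(f"Move ${shift:,.0f} from {worst_name} (CPA ${worst['cpa']:.2f}) "
      f"to {best_name} (CPA ${best['cpa']:.2f})")
```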

There’s no doubt this process works. But by leaving analytics out of the creation process, a hugely valuable piece of the virtuous cycle is left unclosed. You aren’t learning what makes for successful campaigns – you’re just winnowing the least successful campaigns out of the list. It’s improvement, but it’s too unguided to be optimal.

This basic optimization story doesn’t just apply to campaigns. It’s much the same when it comes to Websites or Mobile applications. Unguided change generation is paired with winnowing by success to create a process of evolutionary improvement. As with Darwinian evolution, however, a process driven by unguided change will need a long time (and a lot of money) to converge on the best solutions.

It’s probably unfair to describe the change generation process as random. Suggestions for change are usually generated by intuitions about business or design opportunities – often from seasoned professionals who may have fairly good odds of guessing right about where problems reside. What’s more, as results from tests start to emerge, there’s often a convergence phenomenon as new tests tend to concentrate on areas of the site and problems that demonstrate tractability to testing. In other words, we tend naturally to focus new tests on the same areas that were productive in the previous round.

It would be wrong to dismiss business intuition and convergence as strategies for driving test generation and intake, but it should be fairly apparent that they lack rigor. Both strategies are highly likely to over-focus the organization on a small set of problems. It’s not uncommon for an organization to have a few stakeholders who are deeply attuned to potential testing opportunities. These individuals are likely to dominate ideation, and while their intuitions may be great, they often represent only a portion of the company’s business or only a segment of its audience.

I want, particularly, to hammer home this point about segmentation. Testing ideation usually involves some form of identification and empathy – the ability to understand a customer need and recognize that it isn’t being addressed. In the nature of things, most of us are only able to do that with specific segments – often segments we are personally connected to. By relying on business intuition to drive testing ideation, you’re highly likely to over-focus on market segments that just happen to be the ones your key people resonate with.

Interestingly, you can see that the convergence phenomenon will exacerbate not alleviate this tendency. Convergence tightens the testing focus around a process that already tends to narrow testing opportunities too much.

Given this, the goal of analytics around testing ideation should be to help broaden out the focus to more segments and functions and to sharpen the process of pinpointing potential areas of improvement. So how do you do it?

Over the years, we’ve explored and used many different methods around digital analytics. Back in 2005 or thereabouts, I was working on Functionalism – surely one of the earliest attempts to craft a cohesive Web analytics method. We still use many of the concepts originally embedded in the Functional approach, but while Functional analysis does tend to create a fair number of testable recommendations for improvement, it isn’t particularly suited to driving a continuous cycle of improvement. It lacks a means of prioritizing or valuing opportunities and it lacks a foundation in segmentation; both these are critical to supporting iterative testing.

Other methods we’ve developed around funnel analysis, site topography, and even segmentation are similarly flawed.

Digital segmentation, for example, is a foundational exercise in analytics. Creating a segmentation can be tremendously impactful, and a good segmentation can be deeply embedded in reporting and subsequent analytics. But it simply doesn’t make sense to constantly iterate a segmentation. Indeed, a segmentation gets most of its value from the very fact of its being static. Marketers have to know and understand a segmentation to use it effectively, meaning it can’t change too often or it won’t be useful.

The one analysis that stands head-and-shoulders above the rest when it comes to ideation for testing and iterating a program is Use-Case Analysis, particularly when paired with re-survey techniques.

Use-Case analysis is based on our two-tiered segmentation. The ideas behind it are quite straightforward. We identify the different types of visits on the Website. We identify the types of visitors that typically engage in each type of visit (this is the two-tiered segmentation). We use statistical methods to understand the behaviors that determine each visit type. We validate those models with online survey data. For sites without definitive business success outcomes online (most sites that aren’t eCommerce), we use re-survey techniques to isolate and analyze true business value. These methods allow us to statistically determine (even – maybe especially – for non-ecommerce sites) all of the following:

  1. The actual value of an average visitor in each Use Case
  2. The metrics or behaviors that correlate with downstream actions and business value
  3. The actual impact of improving Website performance in terms of real business value
  4. The value of improving customer satisfaction in any given use case

Finally, you examine the behaviors inside each use-case to determine where potential problems exist.
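We don’t tie the method to any particular tooling, but a rough sketch of the behavioral side might look like the following. The visit features, the choice of k-means, and the dollar values are illustrative assumptions, with the downstream value standing in for what the re-survey step would actually supply:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative visit-level behavioral features, one row per visit:
# [pages_viewed, minutes_on_site, used_search, viewed_support_content]
rng = np.random.default_rng(0)
visits = rng.random((500, 4))

# Tier 1: cluster visits into a handful of visit types (use cases).
visit_model = KMeans(n_clusters=4, n_init=10, random_state=0)
visit_type = visit_model.fit_predict(visits)

# Stand-in for re-survey-derived downstream value per visit
# (in practice this comes from matching survey respondents to visits).
downstream_value = rng.gamma(shape=2.0, scale=15.0, size=len(visits))

# The value of an average visitor in each use case (item 1 above).
for k in range(visit_model.n_clusters):
    mask = visit_type == k
    print(f"Use case {k}: {mask.sum():3d} visits, "
          f"avg value ${downstream_value[mask].mean():.2f}")
```

The second tier would layer visitor segments on top of these visit types, and the validation step would check the clusters against what surveyed visitors say they came to do.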

This data is essential for a huge variety of core digital analytics tasks. It allows us to understand which use-cases/visit-types are the most important (where we should focus). It allows us to understand which use-cases are least successful (where we need improvement). It allows us to measure the true impact of changes to the Website even in non-ecommerce or long lead-time environments (whether or not tests work). It helps us identify the specific places in the user journey where a use-case is broken (what to test). It even, by providing a segmentation framework, allows us to understand who is doing each visit type, who is most prone to failure, and what the value of each visitor/visit-type is (what creative brief should drive testing).

I hope it’s clear why this is an incredibly useful framework for creating a test plan and then iterating it. By constructing a framework for accurately measuring the importance and true business value of EVERY visit type on ANY kind of Website, Use-Case analysis provides data-driven answers to the questions that ultimately should drive testing ideation and prioritization:

  • Which types of visits are most valuable?
  • Which types of visitors do those visits?
  • Which types of visits are most problematic / least successful?
  • Which types of visitors in those visit types tend to fail most?
  • What is the business dollar value of improving the performance of a Visitor-Visit Type?
  • Which behaviors distinguish unsuccessful visits by Visitor-Visit Type?
  • What do visitors with those behaviors tell us about their needs / goals?
  • When we tested a change, did we impact the identified Web behaviors that indicate success?
  • When those identified measures changed, was there subsequent downstream success?

It’s a big list because the method spans the entire testing process and provides the critical answers at every step. It helps isolate where to focus tests. It provides critical data about which visitors struggle within any given experience. It allows you to understand those struggles in both behavioral and attitudinal terms. This isn’t quite test ideation, but it’s everything you should need to actually ideate meaningful tests and to prioritize external testing ideas.
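To make one of those questions concrete, the dollar value of improving a Visitor-Visit Type reduces to simple arithmetic once the earlier measurements are in hand. The figures below are invented placeholders:

```python
# Invented placeholder figures for one Visitor-Visit Type combination.
visits_per_quarter = 40000    # volume of this visit type for this visitor segment
value_per_success = 180.00    # re-survey-derived downstream value of one success

# Expected value of lifting this cell's success rate by one point.
lift = 0.01
incremental_value = visits_per_quarter * lift * value_per_success
print(f"One point of lift is worth about ${incremental_value:,.0f} per quarter")
```

That single number is what lets competing test ideas be compared and prioritized on a common scale.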

Even better, the process allows for the creation and execution of multiple simultaneous tests (because tests can be targeted at individual combinations of Visitor-Visit Type) and for accurate measurement of the impact of tests, even when there is no definitive online outcome. What’s more, it measures true outcomes, not subjective proxies. You may think, for example, that measuring lead generation is an adequate proxy for testing success on a lead form. Lots of our experience suggests otherwise, since lead quality can easily be impacted by changes in the content or marketing experience. Because Use-Case analysis with re-survey ties direct web behaviors (like lead generation) to long-term business value (like actual sales), that becomes much less of an issue.
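Here’s a small worked illustration, with invented numbers, of why the raw lead count can mislead: a variant that lifts lead volume but attracts leads that close at a lower rate is actually worth less once the re-survey ties leads to closed business.

```python
# Invented figures for illustration only.
control = {"leads": 1000, "close_rate": 0.12, "avg_deal": 5000}
variant = {"leads": 1300, "close_rate": 0.08, "avg_deal": 5000}

def downstream_value(cell):
    # True business value = leads that actually close x deal size,
    # where close_rate comes from re-survey / CRM matching, not web data.
    return cell["leads"] * cell["close_rate"] * cell["avg_deal"]

print("Control:", downstream_value(control))  # 600,000
print("Variant:", downstream_value(variant))  # 520,000
```

The variant “wins” on lead volume by 30% yet loses $80,000 of downstream value, which is exactly the kind of false positive a behavior-only success metric can’t catch.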

When you iterate a Use-Case analysis, you don’t typically repeat most of the steps. A full Use-Case analysis includes all of these steps:

  1. Identify the types of visits on the Website
  2. Identify the types of visitors on the Website
  3. Use statistical methods to create behavioral patterns that identify each visit type
  4. Validate the methods with online survey data
  5. Isolate and study each use-case for behavioral break-downs
  6. Match break-downs against online survey and other VoC data
  7. Re-survey to measure downstream conversion and choice

Steps #1-#4 don’t need to be iterated. So when you repeat a Use-Case analysis, the subsequent analytic efforts are much smaller. The segmentations that underlie each use-case are simply re-run and studied in terms of their current success – usually with a particular focus on the impact of changes driven by the last cycle.
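As a sketch of how light that iteration can be, assume the visit-type model from the initial analysis was persisted and each new cycle simply re-scores fresh visit data. The file names and success flag here are hypothetical:

```python
import numpy as np
from joblib import load

# Assumes the fitted visit-type model was saved during the initial analysis,
# e.g. joblib.dump(visit_model, "visit_type_model.joblib").
visit_model = load("visit_type_model.joblib")

# New cycle of visit-level features (same columns as the original fit)
# plus a per-visit success flag; both file names are hypothetical.
new_visits = np.load("visits_q3.npy")
success = np.load("success_q3.npy")

# Steps 1-4 are not repeated: the existing model simply assigns
# each new visit to its use case.
visit_type = visit_model.predict(new_visits)

# Compare success rates by use case against the previous cycle to see
# whether the changes driven by the last round of tests moved anything.
for k in np.unique(visit_type):
    mask = visit_type == k
    print(f"Use case {k}: success rate {success[mask].mean():.1%}")
```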

Our very best clients have established a regular cadence of analysis based on the Use-Case and re-survey techniques. That cadence often varies by property – ranging from quarterly to annually. But the important point is that the method continually drives focus on the best opportunities across the entire Website, is a rich driver of ideation for new testing ideas, and is an ironclad way to drive data-driven decision-making about which tests to actually prioritize and implement.

Of course, Use-Case analysis isn’t the only method around for driving continual improvement. Methodologies like Six Sigma have been around for years and have internalized the process of continuous problem identification and resolution. Cadence is as much a part of Six Sigma as is error identification. But while Six Sigma is perfect for many types of continuous improvement, it’s not particularly applicable to most marketing problems – digital or otherwise.  The reduction of process error on which Six Sigma is based just isn’t the best lever for iterating improvement in marketing.

Use-Case, with its incorporation of segmentation and attitudinal data to measure satisfaction and business outcomes, is a better foundation for digital and marketing analytics. It captures the right improvement focus and generates the right kind of metrics to steer and measure improvement.

If you’re interested in creating virtuous cycles in digital analytics, Use-Case analysis with re-survey is the best methodology I know. It works. It works well. And it works continuously.

Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.
