Site-wide Customer Satisfaction: It Isn’t Interesting and It Isn’t Comparable Across Sites

If there’s one question in Digital Measurement that I genuinely hate, it’s this: “How does my ‘x’ rate compare to my competitors?” – where “x” might be conversion rate, bounce rate, shopping cart abandonment rate, or any other fairly important metric. I hate the question because it is unanswerable. I’ve written before about how movement in any single site-wide metric is uninterpretable in the sense of being action-guiding and, for similar reasons, it’s virtually impossible to meaningfully compare any single site-wide metric between two or more competitive sites. If a metric doesn’t mean anything when applied to your own site, why would you expect it to mean something when compared to another site?

If you are doing this, you’re misleading (I originally wrote “lying to” instead of “misleading” but perhaps not everyone actually knows better) your stakeholders.

It’s just that simple.

Your site-wide conversion rate (for example) is a function of your site design, your visitor population, your marketing program and your brand (and probably a bunch of other stuff too). It cannot be meaningfully compared to even the most accurate benchmark set at the site level.

Is it any different in the world of opinion research? Is survey data any more comparable than Conversion Rate?

As a site-wide metric, the answer is clearly no. Site-wide Satisfaction is a function of your site design, your visitor population, your marketing program, and your brand (and a bunch of other stuff too). It takes unified data collection, meaningful segmentation, and the ability to hold constant a large number of potentially influencing factors before any comparison can be done – and that comparison will NEVER, EVER be at the site-wide level.

So comparing your Overall Site Satisfaction to any competitive set, be it true competitors, industry leaders, world-class Websites or Websites featuring clown pictures, is just not useful. There is simply no learning you can take from a comparison of overall Site Satisfaction. Note that I’m not saying this because of the (very substantial) difficulties in building a valid competitive set. Those difficulties are real and legion – but my criticism of site-wide satisfaction holds even when we all agree that the benchmark set is perfect.

At a deeper level, however, survey data CAN help compare your performance to competitors in a way that behavioral numbers cannot. Survey data can indeed establish competitive benchmarks; however, their utility is entirely dependent on the type of sample you use for the job.

In the online world, the typical online survey sample is bound up with your site visit population – a population that is heavily driven by day-to-day changes in your marketing program. Under such circumstances, the one thing you can be sure of when you measure your total site satisfaction and compare it to other sites (using their intercept survey results) is that YOU ARE NOT MEASURING THE LIKELY SATISFACTION OF A RANDOM CONSUMER VISITING EACH SITE AND THEN RECORDING THEIR SATISFACTION WITH THE EXPERIENCE.

So you cannot use a traditional site-based online intercept approach to the survey sample if your intent is to create a valid competitive benchmark.

To illustrate why this is so, consider an example of four companies that are exactly alike in their business and are, therefore, 100% comparable. Each company has five visit types. Satisfaction for these visit types ranges from 59% to 71%. Each site is completely identical in structure and design except for the name of the company, and each visit type has identical satisfaction scores on every site.

I hope you’ll agree that this benchmark set is an implausibly perfect best case that you would never find in the real world. Whatever problem there is with this benchmark, it is certainly not in the chosen competitive set.

Suppose, however, that the distribution of visit types to these identical sites is as follows:

                                    ------------- % of Visits -------------
Visit Type    Visit Satisfaction    Site 1      Site 2      Site 3      Site 4
1             59%                   20%         15%         25%         30%
2             63%                   20%         18%         25%         25%
3             67%                   20%         20%         20%         20%
4             69%                   20%         22%         15%         15%
5             71%                   20%         25%         15%         10%
Site Total Satisfaction             66%         67%         65%         64%

Even with four perfectly identical sites, the average satisfaction across all visitors ranges from 64% to 67% in our hypothetical example. In other words, with four completely identical sites, five completely identical visit types, and the exact same satisfaction for every visit type on every site, a small difference in the distribution of visit types can produce a significant variation in overall site satisfaction when measured across the entire visitor population.
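
To make the arithmetic concrete, here is a minimal sketch (Python, with the visit-mix percentages and satisfaction scores lifted directly from the table above) showing that the 64%–67% spread is nothing more than a visit-mix-weighted average of identical scores:

```python
# Per-visit-type satisfaction - identical on every site (from the table above).
visit_satisfaction = [0.59, 0.63, 0.67, 0.69, 0.71]

# Share of visits by visit type - the ONLY thing that differs across sites.
visit_mix = {
    "Site 1": [0.20, 0.20, 0.20, 0.20, 0.20],
    "Site 2": [0.15, 0.18, 0.20, 0.22, 0.25],
    "Site 3": [0.25, 0.25, 0.20, 0.15, 0.15],
    "Site 4": [0.30, 0.25, 0.20, 0.15, 0.10],
}

for site, mix in visit_mix.items():
    # Site-wide satisfaction is just the visit-mix-weighted average.
    overall = sum(share * sat for share, sat in zip(mix, visit_satisfaction))
    print(f"{site}: {overall:.0%}")

# Output:
# Site 1: 66%
# Site 2: 67%
# Site 3: 65%
# Site 4: 64%
```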

Such a difference might easily result, for example, from differences in SEO programs where external site link building generates different page rankings and skews the visit types in one direction or another.

Any decision-maker, looking at Site Total Satisfaction comparison, will believe that Site 4 is worse than Site 2 – even though the sites are COMPLETELY IDENTICAL IN EVERY RESPECT INCLUDING CUSTOMER SATISFACTION BY VISIT TYPE AND CUSTOMER SATISFACTION ACROSS EVERY SINGLE MEANINGFUL VARIABLE.

I can’t help but think that showing a decision-maker this one number is a gross misrepresentation of reality.

This effect is not limited to visit type. It is true for every single measured variable and it is true for any effort to “trend” the data.

Imagine the implications for the real world, where “comparable” sites are actually dramatically different in marketing drives, structure, function and audience.

Sampling your Website visitors in no way constitutes sampling a single, broad consumer population for ANY business, and concatenating multiple Websites, each with very different audience biases, does not solve the problem. As I pointed out in my last post, when you gear up your PPC program, improve your SEO, or take down your Display advertising, you are changing the population you are sampling on your Website and changing your top-line numbers. Nine times out of ten, that’s what you’re measuring when your top-line Satisfaction scores change.

So the critical point about benchmark samples is that – to be valid – the numbers have to come from a single controlled sample that is independent of site marketing efforts. If your competitive set comes from numbers collected by other sites in the same fashion as you collect yours, then keep in mind that every member of your competitive set is changing their sample in ways that you simply cannot measure or understand. Adding your bad sample to their bad sample doesn’t make for a good sample. And comparing your bad sample results to a number produced by a good (independent) sample doesn’t solve the problem either. The limitations on your site sample make effective site comparison over time essentially impossible.

It’s true, of course, that in the example above we could – by drilling down into additional variables – show that the sites are identical in all but Visit Type distribution. To do this requires full access to the ENTIRE set of competitive data. The simple top-line comparison is meaningless. What’s more, only the extreme simplicity of my example makes it possible for drill-down on additional variables to easily succeed.
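
It’s worth spelling out what that drill-down amounts to. A minimal sketch of the idea – re-weighting each site’s per-visit-type scores to a single reference visit mix (a simple direct standardization; the equal-weight reference mix below is my own assumption for illustration, not part of the original example) – shows the four sites collapsing back to the same number:

```python
# Direct standardization: score every site against the SAME reference visit mix.
# With identical per-visit-type satisfaction, the apparent differences vanish.

# Per-visit-type satisfaction observed on each site (identical by construction).
site_satisfaction = {
    "Site 1": [0.59, 0.63, 0.67, 0.69, 0.71],
    "Site 2": [0.59, 0.63, 0.67, 0.69, 0.71],
    "Site 3": [0.59, 0.63, 0.67, 0.69, 0.71],
    "Site 4": [0.59, 0.63, 0.67, 0.69, 0.71],
}

# Reference mix - an arbitrary assumption used purely to hold visit type constant.
reference_mix = [0.20, 0.20, 0.20, 0.20, 0.20]

for site, sats in site_satisfaction.items():
    standardized = sum(w * s for w, s in zip(reference_mix, sats))
    print(f"{site} (standardized): {standardized:.0%}")  # every site prints 66%
```

The catch is exactly the one described above: doing this for real competitors requires the full per-segment data from every site in the set, not just their published top-line number.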

In the real world, where large enterprises have hundreds or thousands of marketing programs running and constant Website changes underway, isolating a comparable population will be challenging even with full access to the underlying samples.

Do any of the enterprise scorecards that show a comparative satisfaction benchmark do that work? I think not.

Do you have identical visit types to your competitors? Identical demographic questions? Identical qualifying strategies? Identical survey branding? Identical visitor type categorizations? A controlled single sample?

A single, independent sample and a single, carefully crafted survey instrument are the minimum requirements for the job. Even then, you’re going to find it very challenging to answer the basic question you probably (should have) started with: given a typical consumer of some type, does your site experience for a specific task yield higher or lower satisfaction than the competition?

If you’re using a sample based on traditional site intercept methods, a sample driven by your site marketing efforts, and you are relying on a single top-line Satisfaction metric to compare site performance, you might as well – like ancient Greeks before battle – be shaking bones on the sand to predict the future.

Small differences in the visitor population, its sourcing, or your sample can easily create the impression that you are doing better or worse than your competitive set – an impression that has absolutely no basis in reality.

To evaluate your business based on such numbers is madness.

It’s just another great example of a metric that looks oh so interesting but serves to hide, not reveal, the truth.

Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.
