After I posted my blog on measuring and benchmarking overall Site Satisfaction, Marshall Sponder sent me this comment/question:
Another great post! Question – how does NetPromoter scores figure into the points? They are, after all, Survey based.
It’s a really interesting question, because I believe there are both similarities and differences relevant to my discussion. A quick recap – in my last post I argued that overall Site Satisfaction suffers from the same issues as almost any other site-wide metric. Site-wide metrics – be they Conversion Rate or Revenue or Site Satisfaction – all confuse multiple factors together in a way that makes them almost useless and uninterpretable. This is contrary, of course, to the broad industry view of KPIs, but it’s a topic I’ve canvassed thoroughly in previous posts and I have yet to hear a convincing argument to the contrary. In addition to this problem, common to nearly any site-wide variable, Survey data – when collected by traditional site-intercept means – also suffers from a sampling problem. Because your site population varies with your marketing efforts, you’re mostly measuring shifts in the underlying population you’re attracting when you measure (or compare or trend) site-wide Satisfaction scores.
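To make that sampling problem concrete, here’s a minimal sketch (Python, with entirely made-up numbers – none of this is real data) of how a site-wide Satisfaction average can move when nothing about any segment’s experience has changed, simply because a campaign shifts the traffic mix:

```python
# Hypothetical illustration of the sampling problem: per-segment
# satisfaction is identical in both periods, but a campaign shifts
# the traffic mix, so the site-wide average moves anyway.

# Mean satisfaction (on a 1-10 scale) by visit type -- made-up numbers.
satisfaction = {"support": 5.5, "pre_purchase": 8.0, "brand": 7.0}

# Share of survey respondents by visit type in each period.
mix_before = {"support": 0.50, "pre_purchase": 0.25, "brand": 0.25}
mix_after  = {"support": 0.20, "pre_purchase": 0.55, "brand": 0.25}  # post-campaign mix

def site_wide(mix):
    """The site-wide score is just the mix-weighted segment average."""
    return sum(share * satisfaction[seg] for seg, share in mix.items())

print(f"Before campaign: {site_wide(mix_before):.2f}")  # 6.50
print(f"After campaign:  {site_wide(mix_after):.2f}")   # 7.25 -- no experience improved
```

The segment scores never moved; only the weights did. That jump from 6.50 to 7.25 is exactly the shift in the underlying population that a trended site-wide score picks up.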
So what about NetPromoter?
On the whole, NetPromoter scores will suffer from pretty much the same problems. When you measure NetPromoter scores using site-intercept surveys, you’re likely measuring changes in your sample population, not changes in your customers’ actual likelihood to recommend. So a trend or benchmark of NetPromoter scores is no better, in this respect, than Site Satisfaction.
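For anyone who hasn’t worked with it, the NetPromoter calculation itself is simple: the percentage of promoters (9s and 10s on the 0–10 “how likely are you to recommend” scale) minus the percentage of detractors (0 through 6). Here’s a quick sketch, again with made-up responses, of why two intercept samples drawn from different visit mixes can produce very different scores off the same customer base:

```python
def nps(scores):
    """Standard Net Promoter calculation: % promoters (9-10)
    minus % detractors (0-6) on a 0-10 likelihood-to-recommend scale."""
    promoters  = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Made-up responses: same underlying customer base, but the intercept
# happened to catch a different visit mix in each sample.
sample_a = [10, 9, 9, 8, 7, 3]  # skews toward pre-purchase visits
sample_b = [10, 9, 4, 3, 2, 2]  # skews toward support visits

print(f"{nps(sample_a):+.0f}")  # +33
print(f"{nps(sample_b):+.0f}")  # -33
```

Nothing about the customers changed between those two samples – only who happened to get surveyed.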
However, there are a few differences. As I thought about Marshall’s question, I realized that in many respects my criticism of overall Site Satisfaction mirrors my criticism of Total Mention Counts in Social Media. In Total Mention Counts, you’re adding up fundamentally different things into a meaningless whole (mentions in the NY Times + Twitter Customer Support Mentions doesn’t equal an interesting Total Mentions). It’s similar with Site Satisfaction. Adding Site Satisfaction for Customer Support visits to Site Satisfaction for Pre-Purchase Visits to Site Satisfaction for Brand Visits doesn’t really add up to a meaningful number. The meaningful numbers are all at or beneath the Visit Type level.
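If you want numbers you can actually act on, compute the metric at or beneath the Visit Type level instead of blending across it. A rough sketch of what I mean, assuming (hypothetically) that each survey response gets tagged with its visit type at collection time:

```python
from collections import defaultdict

# Hypothetical survey responses tagged by visit type.
responses = [
    ("support", 4), ("support", 6), ("support", 5),
    ("pre_purchase", 9), ("pre_purchase", 8),
    ("brand", 7), ("brand", 7),
]

by_type = defaultdict(list)
for visit_type, score in responses:
    by_type[visit_type].append(score)

# The per-visit-type means are the interpretable numbers...
for visit_type, scores in sorted(by_type.items()):
    print(f"{visit_type}: {sum(scores) / len(scores):.1f}")

# ...while the blended site-wide mean mostly reflects the traffic mix.
all_scores = [score for _, score in responses]
print(f"site-wide: {sum(all_scores) / len(all_scores):.1f}")
```

Compare the support number to other support visits and the pre-purchase number to other pre-purchase visits, and treat the blended figure with suspicion.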
NetPromoter scores, on the other hand, ARE coherent across both Visit Types and an entire population. Willingness to recommend is independent (to some extent) of whether you are buying, or getting support, or finding out about the brand. You’d still probably want to understand the impact of visitor type and visit type on NetPromoter score, but it’s not totally unreasonable to think about NetPromoter as an attribute of an entire population.
This also tells us something about what NetPromoter isn’t. It isn’t, for example, a good way to measure success by visit type. Site Satisfaction is actually much better for that.
Indeed, as I re-read my posts, I don’t want to leave the impression that I dislike Site Satisfaction as a metric or that I am opposed to online intercept surveys and their use. Not at all. Online surveys are incredibly valuable, as is the Site Satisfaction question. You just have to understand and work effectively within the limitations imposed by the sampling method. In fact, I think Site Satisfaction is a better metric than NetPromoter if your intent is to measure visit-level site experiences. It gets at something much more specific and real. What Site Satisfaction isn’t is a metric independent of those visit types in a way that would make it a plausible candidate for aggregation or site-wide benchmarking.