Crikey! What the UK Election Fiasco Tells us about Enterprise Voice of Customer

I won’t pretend to be an expert on UK politics, and even less so on UK polling. But in the wake of the pollsters’ disastrous performance predicting the outcome of the UK general election, I think it’s worth reflecting on the lessons to be learned. If you’re not familiar with the broad storyline, it goes something like this: in the days leading up to the election, the polls showed a toss-up between the incumbent Conservatives and the Labour Party, with expectations of a divided Parliament and much confusion. It didn’t go down that way. The Conservatives won an outright majority of seats in a fairly decisive victory.

Now, the polling wasn’t as far off as that simple story may imply. There is a powerful disconnect between seats and raw voting percentages (as witnessed strikingly in the Scottish elections). You can win much less than 50% of the vote and still win far more than 50% of the seats, meaning that poll numbers aren’t necessarily reflective of seat wins. But there’s little doubt that the pollsters had the election quite wrong.

After the election, Nate Silver (who I happen to think is pretty frigging great at this stuff) and team weighed in with a series of blogs which discussed the errors in their model. There’s a ton of interesting stuff in this discussion, but of particular interest to me was the following quote:

“Polls, in the U.K. and in other places around the world, appear to be getting worse as it becomes more challenging to contact a representative sample of voters. That means forecasters need to be accounting for a greater margin of error.”

[There’s also a supporting and fascinating redux by 538’s Ben Lauderdale which shows how a huge part of the error in the prediction was driven by a seemingly very reasonable choice about which of two alternative questions around party voting likelihood would better represent actual preference. 

They chose to use (as seems reasonable) the much more specific version of the voting question, but it turned out that the very general version came closer to capturing reality. I’m not sure there’s a clear lesson here (I doubt it’s always true that the general form of the question will work better) EXCEPT that if you have two seemingly similar questions that yield very different results, you’d best beware!]

If you’re a data analyst, I’d expect these discussions of the mechanics of voter modeling to be pretty fascinating. But of far more importance to non-analysts should be the troubling implications of Mr. Silver’s take on polling and its declining accuracy. Because here’s the thing – his comments apply just as surely (maybe even more surely) to opinion research done for commercial purposes. There are real differences between political opinion research and our commercial variants. But both suffer from the growing challenge of getting a good sample.

In fact, when you get right down to it, the biggest difference between polling for political work vs. commercial research is that the political pollsters have a real proof point. When the votes are counted, they know if they were right or wrong.

Your enterprise survey research almost certainly doesn’t have a proof point. If your voice of customer opinion research is fundamentally skewed, how would you even know?

Not only does that tell me that your commercial polling likely has all the same errors as the political polls, it probably means the errors are far worse. Why? When you don’t have a proof point, you’re that much less incented to make sure you get the results right, and you have far less opportunity to correct your mistakes.

I strongly suspect that the work that teams like Silver’s do is more careful and better than the overwhelming majority of the survey work done in the commercial sector. If that’s true, and if they are having a hard time getting it right, think about what that means for enterprise voice of customer research.

I often get significant push-back from enterprise analysts skeptical of online voice of customer. I get that. And what I’m saying may be reasonably taken as grounds for that skepticism. But those same skeptics aren’t taking aim at the increasing challenge of getting accurate VoC results in the offline world. I’m pretty sure that commercial opinion research is significantly less accurate now than it was twenty years ago (just try random digit dialing these days!), and it may well be harder to get a representative sample offline than online. Certainly, I see no grounds in today’s world for assuming the opposite. Getting a good sample is hard and getting harder. Without a good sample, you are in constant danger of drawing the wrong conclusions from the data. And don’t even start on that tired and utterly incorrect idea that you protect yourself from this by “just looking at trends.”
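To see why trend-watching doesn’t rescue a bad sample, here’s a minimal, purely illustrative simulation (the segment names, satisfaction rates, and drift numbers are all made up for the example): two customer segments hold perfectly constant opinions, but the survey sample drifts toward the happier segment over time, and the reported “trend” climbs anyway.

```python
import random

random.seed(42)

# Two hypothetical segments with *constant* true satisfaction.
# Nothing about the real customer base changes in this simulation.
TRUE_SATISFACTION = {"loyalists": 0.80, "occasional_buyers": 0.40}

def run_survey(period: int, n: int = 1000) -> float:
    """Simulate one survey wave whose sample skews more toward loyalists each period."""
    # Response bias drifts: loyalists become ever more over-represented in the sample.
    loyalist_share_in_sample = min(0.30 + 0.05 * period, 0.90)
    satisfied = 0
    for _ in range(n):
        segment = "loyalists" if random.random() < loyalist_share_in_sample else "occasional_buyers"
        if random.random() < TRUE_SATISFACTION[segment]:
            satisfied += 1
    return satisfied / n

for quarter in range(8):
    print(f"quarter {quarter}: measured satisfaction = {run_survey(quarter):.1%}")
# The printed "trend" rises steadily even though no customer's opinion changed --
# the movement is entirely an artifact of the drifting sample composition.
```

The same logic applies to any drifting response bias, online or offline: if you can’t see how your sample composition is changing, the trend line is telling you about your sample, not your customers.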

So what’s the solution?

First, I think people need to reevaluate their biases around offline vs. online surveys. Traditional attitudes around online surveys and their biases revolve around a couple of issues that I think are largely historical. Back in the days when intercept surveys first became popular, it was understood that online samples were unrepresentative of the broad population. True then, but nowadays online populations in the U.S. are probably quite a bit more representative out-of-the-box than the samples easily obtained through most traditional techniques (canvassing, mail, phone and mall intercepts all have huge biases these days). Of course, putting a survey on your own website automatically introduces a significant selection bias. But it’s now routine and easy to pop surveys onto 3rd party platforms that eliminate that bias entirely. Many social media platforms, of course, do have significant biases in their user population and likely in your fan base. But given the incredible reach of platforms like Facebook, there’s no reason why you can’t build out excellent samples based on the top social networks. In all these cases, what you retain in the online world is the ability to collect large numbers of respondents very rapidly and at far less cost than with traditional techniques. I’d be the last person in the world to argue that online voice of customer isn’t challenging. But it’s a bit frustrating to see the offline world get a free pass on the same or worse set of sampling problems.

Second, people need to find a proof point for their voice of customer data. If you’re going to pay attention to it and let it influence your decision-making, you need to find ways to test its accuracy and predictive power. This isn’t just important for your peace of mind. It’s important because without those proof points you have no way to improve your VoC. Go read Lauderdale’s description of the likelihood-to-vote questions they used and tell me which you would have chosen! Not only will establishing proof points give you a clear path to improving your voice of customer research, I venture to suggest that it might also push you to improve the actionability of that research. If your opinion research is too fuzzy to yield predictive models, it probably isn’t very interesting to begin with!
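To make the proof-point idea concrete, here’s one minimal sketch of the kind of check I have in mind. The data, field names, and the repurchase question are all hypothetical; the point is simply to compare what respondents said they would do against something you can actually observe later, such as repurchase in your transaction data.

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    said_would_repurchase: bool   # survey answer (stated intent)
    actually_repurchased: bool    # observed later in transaction data

def proof_point_report(respondents: list[Respondent]) -> None:
    """Compare stated intent against observed behavior as a simple proof point."""
    said_yes = [r for r in respondents if r.said_would_repurchase]
    said_no = [r for r in respondents if not r.said_would_repurchase]
    yes_rate = sum(r.actually_repurchased for r in said_yes) / max(len(said_yes), 1)
    no_rate = sum(r.actually_repurchased for r in said_no) / max(len(said_no), 1)
    print(f"Said 'yes' and actually repurchased: {yes_rate:.0%}")
    print(f"Said 'no' but actually repurchased:  {no_rate:.0%}")
    # If these two rates are close together, the survey question has little
    # predictive power and probably isn't worth building decisions on.

# Toy usage with made-up answers:
sample = [Respondent(True, True), Respondent(True, False),
          Respondent(False, False), Respondent(False, True),
          Respondent(True, True), Respondent(False, False)]
proof_point_report(sample)
```

The specific outcome you validate against will vary (renewal, churn, conversion, support contacts), but any survey question that claims to predict behavior should periodically be scored against the behavior it claims to predict.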

Unlike Silver and his mates, we in the enterprise space rarely discuss the accuracy of our brand tracking and product surveys. It isn’t because those surveys are better. It’s because nobody knows if they are any good at all, and without being forced to, nobody is anxious to put them to the test.

When it comes to enterprise VoC, perhaps it’s time to call an election.

Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.
