U.S. pollsters got quite a surprise in the early morning hours of November 9, 2016.
That’s when it became apparent that their sophisticated voter research had completely failed to predict the outcome of the U.S. Presidential election. Longtime Republican political strategist Mike Murphy went so far as to assert that “data died” that night.
Yes, the 2016 U.S. Presidential election was a highly visible casualty for data-driven research, but far from the only one.
In 1985, Coca-Cola announced the rollout of “New Coke,” an updated formulation of the venerable soft drink, designed to appeal to changing consumer tastes.
In launching the new formula, the company cited research indicating that taste was the primary driver behind the brand’s market share slide. The firm also pointed to blind taste tests which indicated that a majority of consumers favored New Coke over its predecessor (and competitor Pepsi).
As it turns out, the research pointed Coca-Cola in the wrong direction. Three months after rolling out the revised formulation, the company acknowledged widespread public discontent with the new product and returned the original Coke to store shelves. New Coke was ultimately killed off in 2002.
What went wrong? One thing that Coca-Cola failed to account for was the emotional dimension of consumer buying behavior. Even if people said they preferred New Coke in taste tests, many had an emotional attachment to the original formula that – outside of the research bubble – superseded their rational judgment on taste.
This is why an overreliance on traditional research methods (i.e., asking customers what they want or like) can lead a company astray. Surveys and questionnaires do a poor job of accounting for the emotional considerations that drive customer behavior.
As behavioral science has clearly demonstrated, it’s those emotional considerations that often exert the strongest influence on individual decision-making. (Or, as renowned psychologist Daniel Kahneman has described it, “the emotional tail wags the rational dog.”)
Post-mortems on the 2016 election polling have also pointed to the emotional “blind spot” of traditional research methods. Evans Witt, President of the National Council on Public Polls, highlighted this issue to NPR, noting that “polls do a poor job with emotion/enthusiasm/commitment” – and that this may have been an important behavioral influence on a highly polarized electorate.
There’s another reason, though, why traditional question-based customer research can mislead, and it comes down to this simple truth: There’s a big difference between what customers say and what customers do.
Wal-Mart found this out the hard way in 2009 when it launched a store redesign effort dubbed “Project Impact.”
The company had conducted customer surveys which indicated that shoppers didn’t like Wal-Mart’s cluttered, dimly lit stores. They wanted cleaner, more streamlined layouts.
Project Impact sought to deliver on this apparent customer preference by de-cluttering the store – removing endcaps, widening aisles, and improving navigability.
Even the store’s famed “Action Alley,” the main corridor separating departments, wasn’t immune to the changes. Traditionally dotted with pallets piled high with fast-selling items, Action Alley was cleared out by Project Impact, opening up sight lines across the entire store.
It all sounded like a good idea… until same-store sales started to plummet. The reason? In order to streamline the store layout, Wal-Mart had to eliminate, by some estimates, 15% of its store inventory. When customers could no longer find their favorite brands at Wal-Mart, they shifted their shopping to competing stores that offered a wider product selection.
In addition, it turns out that Action Alley – while perhaps contributing to store clutter – also triggered a lot of impulse buys among Wal-Mart shoppers. When Action Alley disappeared, so too did a lot of sales.
Since its founding by Sam Walton in the 1960s, Wal-Mart’s strategy had always centered around offering low prices and a wide selection (“Stack ‘em high, watch ‘em fly,” as Sam liked to say).
Sam apparently knew his customers better than the company’s modern-day researchers, because it turns out people shop at Wal-Mart for – you guessed it – value and selection. Shoppers might have said they wanted a clutter-free store, but in reality, the clutter was part of the appeal for them, feeding their hunt for great deals and impulse purchases.
What could Wal-Mart have done differently? Instead of just asking customers what they wanted, it should have observed them in action, navigating the store and making purchases. It should have spoken to shoppers one-on-one, to better understand what shaped their purchase behavior once they set foot inside a Wal-Mart.
It’s precisely this type of context and nuance that traditional customer research methods miss out on – because what customers say they want is sometimes quite different from what they actually value.
Indeed, that which the customer values the most may also be the thing that’s hardest for them to articulate. Hence the mismatch between what people say and what people do.
Traditional customer research has merit, but its precision is often oversold. To steer your business in the right direction, don’t just look at the data, look at your customers.
Immerse yourself in their experience and observe them in their natural habitat – because that’s where you’ll find the priceless insights about how to better serve them.