I dislike politics; probably not least because I worked in political direct mail for a half-dozen years after graduating from college. Some people take to it and some don’t. I was a don’t. But whatever your political opinions, and however sanguine or pessimistic your view of American politics, there’s no doubt that Presidential elections represent fascinating case studies in persuasion, media-buying, PR, and, these days, analysis.
The recent election was particularly interesting because the two camps (and numerous media outlets) were unusually varied in their predictions of the outcome leading up to Election Day. What’s more, those predictions appeared to have a significant impact on the candidates’ strategies in the closing weeks of the race. That, at least, indicates that the opposing camps actually believed their own projections, which isn’t always the case. And given that a preponderance of media attention in an election is focused on the “horse-race,” not the issues, it’s probably no surprise that the accuracy of various forecasts was hotly debated right up to election night.
That night turned out to be an impressive vindication of the analysts on Obama’s team, and in the time since the election there’s been quite a bit of attention given to the methods they used. I particularly want to reference a very fine piece in the MIT Technology Review that provides a fascinating look at the two campaigns and their approaches to analytics.
There’s a great deal of interest in the MIT article, but the part I particularly want to focus on is the role of survey research and its tight integration with behavioral methods to create a continuous process of persuasive experimentation. In my kick-off post for 2013, I suggested that one of the most important ways that enterprises could “make measurement matter” was to significantly re-tool their online opinion research programs.
Virtually all Semphonic clients do at least some level of online opinion research. Unfortunately, that level is by no means high. Far and away the most common situation for our clients is a fixed online survey instrument reaching several thousand respondents each month. That instrument is usually long, heavily focused on site experience and site elements, and designed to provide enterprise-wide reporting on site-wide customer satisfaction. Each one of these characteristics is problematic.
Let me contrast this approach (small sample sizes, static questions, and a horse-race mentality) with how the Obama campaign used opinion research. The campaign’s use of opinion research was innovative in two respects.
First, they conducted massive amounts of opinion research, far more than is commonly thought necessary to achieve statistical significance. They did this because they were integrating the opinion research with behavioral data to create micro-targeting models and very fine-grained turnout predictions.
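To make that idea concrete, here is a minimal sketch (in Python, using pandas and scikit-learn) of what model-based aggregation can look like: survey respondents supply labeled training data, behavioral attributes supply the features, and the fitted model scores every individual in the file. All of the field names and figures below are hypothetical; the campaign’s actual models were, of course, far richer.

```python
# Minimal sketch: train on survey respondents, then score every individual in
# the file using behavioral attributes. All field names and values are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Survey respondents: behavioral attributes plus the attitude they reported
survey = pd.DataFrame({
    "age":         [23, 67, 45, 31, 58, 72, 39, 26],
    "past_votes":  [1, 4, 2, 1, 3, 4, 2, 0],   # prior elections voted in
    "site_visits": [5, 0, 2, 7, 1, 0, 3, 9],   # any behavioral signal you hold
    "supports":    [1, 0, 1, 1, 0, 0, 1, 1],   # stated support from the survey
})

features = ["age", "past_votes", "site_visits"]
model = LogisticRegression().fit(survey[features], survey["supports"])

# The full file: everyone we hold behavioral data on, surveyed or not
full_file = pd.DataFrame({
    "age":         [29, 61, 44, 35],
    "past_votes":  [0, 4, 2, 1],
    "site_visits": [6, 0, 1, 4],
})

# Individual-level scores, then an aggregate projection for the whole file
full_file["support_score"] = model.predict_proba(full_file[features])[:, 1]
print(full_file)
print("Projected support:", round(full_file["support_score"].mean(), 3))
```

The point isn’t the particular model; it’s that projections are built up from scored individuals rather than read off a small sample that’s assumed to represent the whole.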
Here’s the description from the MIT article:
“As the 2010 midterms approached, Wagner built statistical models for selected Senate races and 74 congressional districts. Starting in June, he began predicting the elections’ outcomes, forecasting the margins of victory with what turned out to be improbable accuracy. But he hadn’t gotten there with traditional polls. He had counted votes one by one….His congressional predictions were off by an average of only 2.5 percent. “That was a proof point for a lot of people who don’t understand the math behind it but understand the value of what that math produces,” says Mitch Stewart, Organizing for America’s director. “Once that first special [election] happened, his word was the gold standard at the DNC.”
The significance of Wagner’s achievement went far beyond his ability to declare winners months before Election Day. His approach amounted to a decisive break with 20th-century tools for tracking public opinion, which revolved around quarantining small samples that could be treated as representative of the whole. Wagner had emerged from a cadre of analysts who thought of voters as individuals and worked to aggregate projections about their opinions and behavior until they revealed a composite picture of everyone. His techniques marked the fulfillment of a new way of thinking, a decade in the making, in which voters were no longer trapped in old political geographies or tethered to traditional demographic categories, such as age or gender, depending on which attributes pollsters asked about or how consumer marketers classified them for commercial purposes. Instead, the electorate could be seen as a collection of individual citizens who could each be measured and assessed on their own terms…
The scope of the analytic research enabled it to pick up movements too small for traditional polls to perceive. As Simas reviewed Wagner’s analytic tables in mid-October, he was alarmed to see that what had been a Romney lead of one to two points in Green Bay, Wisconsin, had grown into an advantage of between six and nine. Green Bay was the only media market in the state to experience such a shift, and there was no obvious explanation. But it was hard to discount. Whereas a standard 800-person statewide poll might have reached 100 respondents in the Green Bay area, analytics was placing 5,000 calls in Wisconsin in each five-day cycle—and benefiting from tens of thousands of other field contacts—to produce microtargeting scores. Analytics was talking to as many people in the Green Bay media market as traditional pollsters were talking to across Wisconsin every week. “We could have the confidence level to say, ‘This isn’t noise,’” says Simas. So the campaign’s media buyers aired an ad attacking Romney on outsourcing and beseeched Messina to send former president Bill Clinton and Obama himself to rallies there. (In the end, Romney took the county 50.3 to 48.5 percent.)”
This should be sobering news if you’ve listened to your online survey vendor about how many completed respondents you need for a viable opinion research program. I’ve argued for years that traditional views about sample size are based on outmoded techniques in which survey data ISN’T married to behavioral data. Marrying online survey data to behavioral data fundamentally changes and opens up the research program, largely invalidating traditional thinking about sample size.
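The arithmetic behind the Green Bay example is worth making explicit. Using the standard normal approximation for the margin of error of a proportion (roughly 1.96 times the square root of p(1−p)/n), a six-point shift is invisible at 100 respondents but unmistakable at several thousand. A quick sketch:

```python
# Back-of-the-envelope margin of error (95% confidence) for a proportion,
# using the normal approximation: z * sqrt(p * (1 - p) / n).
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the ~95% confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 800, 5000):
    print(f"n = {n:>5}: margin of error = +/- {margin_of_error(0.5, n) * 100:.1f} points")

# Roughly: n=100 -> +/- 9.8 points, n=800 -> +/- 3.5 points, n=5000 -> +/- 1.4 points.
# A six-point move in one media market is noise at n=100 and a clear signal at n=5000.
```

None of this is exotic statistics; what was unusual was fielding enough contacts, tied to individual records, to make market-level movements detectable.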
Even more important, however, is the USE that the Obama campaign found for this remarkably detailed, real-time survey information. Instead of just focusing on the “horse-race” (who’s ahead), they focused on how individuals were responding to key messages. They used this information to craft and field-test persuasive approaches:
“One way the campaign sought to identify the ripest targets was through a series of what the Analyst Institute called “experiment-informed programs,” or EIPs, designed to measure how effective different types of messages were at moving public opinion.
The traditional way of doing this had been to audition themes and language in focus groups and then test the winning material in polls to see which categories of voters responded positively to each approach. Any insights were distorted by the artificial settings and by the tiny samples of demographic subgroups in traditional polls. “You’re making significant resource decisions based on 160 people?” asks Mitch Stewart, director of the Democratic campaign group Organizing for America. “Isn’t that nuts? And people have been doing that for decades!”
An experimental program would use those steps to develop a range of prospective messages that could be subjected to empirical testing in the real world. Experimenters would randomly assign voters to receive varied sequences of direct mail—four pieces on the same policy theme, each making a slightly different case for Obama—and then use ongoing survey calls to isolate the attributes of those whose opinions changed as a result.
In March, the campaign used this technique to test various ways of promoting the administration’s health-care policies. One series of mailers described Obama’s regulatory reforms; another advised voters that they were now entitled to free regular check-ups and ought to schedule one. The experiment revealed how much voter response differed by age, especially among women. Older women thought more highly of the policies when they received reminders about preventive care; younger women liked them more when they were told about contraceptive coverage and new rules that prohibited insurance companies from charging women more.”
Not one in a hundred of today’s enterprises is doing anything remotely as useful with its opinion research. Using these techniques, the Obama campaign was able to adapt individual voter communication strategies in near real-time. It’s hard not to see the applicability of this to business. Online surveys provide an extraordinarily low-cost way of recreating this type of micro opinion-research in the commercial sector.
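To suggest what the commercial analogue might look like, here is a heavily simplified sketch of an experiment-informed program: customers are randomly assigned to message variants, a follow-up survey measures whether opinion moved, and results are broken out by segment to see which message moves which group. Everything here (segment names, message names, response rates) is hypothetical and simulated, not drawn from the campaign’s data.

```python
# Toy sketch of an experiment-informed program: random assignment to message
# variants, a follow-up opinion measure, and response rates by segment.
# All names and response rates here are hypothetical and simulated.
import random
from collections import defaultdict

random.seed(42)
MESSAGES = ["reliability", "price_transparency", "loyalty_benefits"]
SEGMENTS = ["new_customers", "long_tenure"]

# 1. Randomly assign each customer to a message variant
customers = [{"id": i, "segment": random.choice(SEGMENTS)} for i in range(6000)]
for c in customers:
    c["message"] = random.choice(MESSAGES)

# 2. Follow-up survey: did the customer's opinion improve? (simulated here;
#    in practice this comes from survey contacts after the message goes out)
def simulated_response(c):
    base = 0.30
    if c["segment"] == "long_tenure" and c["message"] == "loyalty_benefits":
        base = 0.45
    if c["segment"] == "new_customers" and c["message"] == "price_transparency":
        base = 0.44
    return random.random() < base

for c in customers:
    c["improved"] = simulated_response(c)

# 3. Response rate by segment and message: which message moves which group?
counts = defaultdict(lambda: [0, 0])  # (segment, message) -> [improved, total]
for c in customers:
    key = (c["segment"], c["message"])
    counts[key][0] += c["improved"]
    counts[key][1] += 1

for (segment, message), (improved, total) in sorted(counts.items()):
    print(f"{segment:15s} {message:20s} {improved / total:.1%}")
```

In a real program the follow-up measure would come from the kind of short, targeted online survey discussed above, and the read-out would feed directly back into which message each segment receives next.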
It’s hard to believe that an ephemeral activity like a Presidential campaign should be able to do analytics better than an established large enterprise. But for all their challenges, campaigns are a unique crucible: the intense, win-or-lose environment rewards (and allows for) aggressive innovation. That analytics has become fundamental in political campaigns, and that the techniques used have often advanced beyond those common in the private sector, isn’t, perhaps, quite so surprising.
There are important lessons to be learned here, and I’m confident that the learning won’t be limited to Presidential campaigns and political parties. Enterprises that fail to adapt these techniques will find themselves out-guessed, out-communicated, and out-strategized by their competitors.
In my next few posts, I’m going to lay out a vision for creating a Customer Intelligence System in the enterprise based not just on these learnings but on a broad vision for how customer attitudes can be collected, standardized, integrated and used. The fundamental tenets of this vision are:
- Online Survey Research is the most powerful, flexible and cost-effective means of understanding customer attitudes, decision-making and drivers of choice. To be effective, this research must be flexible, continually re-configured, business-specific, and focused on specific customer issues and decisions.
- Customer attitudes can be tracked with a variety of mechanisms, from online survey research to social media measurement to call-center to opinion-cards to offline research studies.
- The standardization and integration of all these sources is every bit as necessary and valuable as the standardization and integration of transactional information in a warehouse (a minimal sketch of what that standardization might look like follows this list).
- The integration of behavioral data and attitudes data is fundamental to the appropriate use of this data and changes the shape of the research program including necessary sample sizes and types of questions that should be asked.
- Dashboarding and reporting around customer attitudes is the next frontier for Enterprise Intelligence and presents unique opportunities to dramatically improve the enterprise-wide understanding of customers and drivers of choice.
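On the standardization point above, here is a minimal sketch, assuming hypothetical source formats and a -1 to 1 attitude scale, of what normalizing attitude signals from different channels into one common record might look like before they land in a warehouse:

```python
# Minimal sketch: standardizing attitude signals from different sources into
# one common record format before loading them to a warehouse. The source
# layouts, field names, and -1..1 scoring scale are all hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AttitudeRecord:
    customer_id: str
    source: str        # "survey", "call_center", "social", ...
    topic: str         # what the attitude is about
    score: float       # normalized to a -1..1 scale
    observed_on: date

def from_survey(row: dict) -> AttitudeRecord:
    # Hypothetical survey feed: ratings arrive on a 1-10 scale; rescale to -1..1
    return AttitudeRecord(row["respondent_id"], "survey", row["question_topic"],
                          (row["rating"] - 5.5) / 4.5, row["completed_on"])

def from_call_center(row: dict) -> AttitudeRecord:
    # Hypothetical call-center feed: sentiment arrives as a label; map to the same scale
    mapping = {"negative": -1.0, "neutral": 0.0, "positive": 1.0}
    return AttitudeRecord(row["account_id"], "call_center", row["issue"],
                          mapping[row["sentiment"]], row["call_date"])

records = [
    from_survey({"respondent_id": "C123", "question_topic": "pricing",
                 "rating": 8, "completed_on": date(2013, 1, 10)}),
    from_call_center({"account_id": "C123", "issue": "pricing",
                      "sentiment": "negative", "call_date": date(2013, 1, 14)}),
]
for record in records:
    print(record)
```

The specific schema matters far less than the principle: attitudes from every channel end up queryable against the same customer, topic and time dimensions as transactional data.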