Data planning and qualitative research – mind the gap

I once attended a research debrief to hear the results of a survey into the communication effects of a direct mail campaign. The survey asked whether the target group had received the direct mail piece and what they thought of it. The results were not good. According to the research, hardly any of the respondents could recall seeing the DM pack, and even fewer claimed to have responded. There was disappointment: it was a big mailing with a strong offer, so surely someone must have seen it and been motivated to respond. But all was not lost. In reality, away from the survey results, the campaign had in fact been very successful. I knew it was in the process of beating all its response, conversion and sign-up targets. From a hard data point of view, this campaign was on track to become one of the most successful DM campaigns the client had ever run.

So why was the recall in the research so low and the actual response so high? I can think of three explanations:

First, we were targeting a large group of the population. It was possible that, even though the hard data results were good, the responses were coming from portions of the population that simply weren’t represented in the survey sample. If we had a 25% response rate, that was a record-breaker from a DM planning point of view, but it still meant that the vast majority of the target – 75% – hadn’t responded. Those who had engaged with the mailing were far more likely to recall it than those who had not. So if our sample happened to be made up of 85% or 90% non-responders, the measured recall would come out much lower than the response we actually experienced, as the sketch below illustrates.
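To see how much the sample mix alone can move the number, here is a minimal sketch. The recall rates and sample size are assumptions for illustration only, not figures from the campaign.

```python
# Hypothetical illustration: how sample composition alone can depress measured recall.

def measured_recall(sample_size, responder_share, recall_responders, recall_non_responders):
    """Blend recall rates according to how many campaign responders land in the sample."""
    responders = sample_size * responder_share
    non_responders = sample_size - responders
    recalled = responders * recall_responders + non_responders * recall_non_responders
    return recalled / sample_size

# Assume responders almost always recall the pack (90%) and non-responders rarely do (10%).
# Compare a sample that mirrors the 25% response rate with one that is 90% non-responders.
print(measured_recall(400, 0.25, 0.90, 0.10))  # 0.30 – recall looks modest even here
print(measured_recall(400, 0.10, 0.90, 0.10))  # 0.18 – a skewed sample makes it look far worse
```

Even with a record-breaking 25% response, the survey would report recall of around 30% at best, and a sample skewed towards non-responders drags it lower still.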

The second explanation is more intriguing. Could it be that even though 1 in 4 of the target had responded, those who did respond failed to make the connection between what they had actually done and what the research was asking them? In this scenario the sample was accurate and did include our 1-in-4 responders, but those people simply forgot, when asked in the research, that they had responded. Had they failed to connect the research question to the campaign and to their own behaviour?

The third explanation is that some respondents deliberately disconnected their actual behaviour from the answers they gave in the research. In other words, they did respond, but they didn’t want to say so. They were using the research as a communication channel to make a point along the lines of: ‘I’m not going to tell you exactly what I did. What I am going to tell you is that I didn’t like being perceived to be in your target audience, or to be the sort of person who would buy the sort of product you were offering.’

Whatever the explanation, this taught me an important lesson: market research and behavioural data can say very different things. What people say they did, or think they did, can be very different from what they actually did. If market research tells you something, take it as an indicator, not a fact. If it’s something big, do more digging around the research before you act on it. But if hard data tells you something, whether it’s good or bad, whether you like it or not, you can be sure it reflects changes in actual behaviour, the ultimate measure of marketing success or failure.

Republished with author's permission from original post.

Simon Foster
I am currently Head of Analytics for EMEA at m/SIX, an agency co-owned by The&Partnership and WPP. I work with advertisers to help them increase their marketing effectiveness across digital and traditional media channels using advanced evidence-based techniques.
