Holding Customer Research Firms Accountable For Misleading Research

It’s been a bad week in a bad year for customer research and the companies that produce information about customer and consumer behavior. While there’s a torrent of research findings coming out of firms like Forrester, The Temkin Group, TOA, Zendesk, and others, much of the research is badly designed, or reported in ways that mislead the reading public and purchasers of reports based on this research.

Point: Zendesk Infographics

In a blog post dated July 5, customer service company Zendesk posted a rather good-looking infographic about the importance of customer service. What's striking is that nowhere on the page is there an explanation of where the many numbers mentioned come from, or any disclaimer about the limitations and accuracy of those numbers.

Typical of customer research coming from firms that sell services in the customer service space, all of the "data" reported show how important customer service is.

But even if we don't know how the numbers were collected, are they accurate? Are they based on properly collected data that is properly interpreted? In this case at least some of the numbers are misleading, which forces one to question the credibility of both the information and Zendesk.

The infographic states that 85% (of customers) "would pay up to 25% more to ensure a superior customer service experience". There are other studies, using survey methods, that have come up with different numbers around the same theme.

But here’s the catch. In social science research we know that responses on surveys are very POOR predictors of what people do. We also know that the phrasing of survey items, even small alterations, can completely alter how people respond.

As you will see, Zendesk appears to make the same mistake that almost all customer research firms make: assuming that survey responses tell us not only what customers actually DO, but how they will behave.

In fact, survey research can only tell us what people SAY they will do, not what they do. That's assuming the customers understand the survey items the way the writers do, AND that customers actually know well enough how they make decisions. (See How We Decide by Jonah Lehrer.) Often that assumption simply doesn't hold.

If you'd prefer to think in terms of whether this infographic is consistent with what customers DO, consider this: each day millions of customers pump their own gas to save a few dollars, while millions more go to low-cost, almost despised companies like Walmart, no-service warehouse outlets like Costco, no-frills grocery outlets, and so on. Clearly customers are NOT that willing to pay more for customer service or the customer experience, or at least not according to daily behavior.

As a further example, Zendesk says: "95% of customers have taken action as a result of a bad experience", with 79% indicating they told others about their experience. Intuitively sensible, but wait. WHAT actions did they take? As for the 79% who told others, one must remember that telling someone, and having one's words heeded and acted upon, are two different things. Having a voice is not the same as being able to affect behavior, which is what counts. After all, if that 79% told their toddlers about the bad experience, it's hardly something companies should act upon.

Zendesk is not unusual here. It's the norm. But who cares? At this writing, that Zendesk post has been shared close to 200 times on social media platforms, and embedded in other websites, thus relaying what is almost certainly faulty and misleading information about customer behavior.

Point: TOA Technologies

In a press release appearing on their own site, dated June 14, 2011, TOA Technologies makes some sweeping statements about customer behavior and customer service. Here are a few quotes:

"…today released the results of its study on customer behavior and the use of Twitter in customer service. The survey found that more than 1 million people per week view Tweets related to customer service experiences with a variety of service providers and that more than 80% of those Tweets reflect a critical or negative customer experience."

They go on to talk about the implications of this "finding", which contradicts several other earlier studies looking at the exact same issue. Those previous studies indicate that over half of brand mentions are informational, not judgmental, and that of the judgmental tweets, praise is more common than criticism. In research one always looks at what comes before for context.

Be that as it may, where did their numbers come from? After all, their broad statements refer to "customers", meaning all customers on Twitter. Does their data support that?

No. Later in that very press release the following text appears:

"The statistical sampling of over 2,000 Tweets was collected during the period of February 25 to May 2, 2011 and focused on terms that included “the Cable Guy” and “installation appointments,” among other terms. TOA’s study found that during the selected time period, 82% of the surveyed Tweets contained negative (or somewhat negative) sentiments about customers’ cable appointment experience…"

They sampled only 2,000 tweets, AND they sampled only tweets on a narrow topic: the "cable guy" and installation appointments. The sample is biased, because it looks only at one small segment, using keywords almost always used in conjunction with criticism. So, while their results "might" apply to a small subset of Twitter users, they certainly DO NOT accurately represent "customers" in general. Yet that's not what they say in their press release, OR the actual report.
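
To make the bias concrete, here is a toy simulation. The numbers in it are invented for illustration (they are NOT TOA's data), but they show how sampling only tweets that contain complaint-flavored keywords can make a mostly non-negative population look overwhelmingly negative:

```python
# Toy simulation with invented numbers (NOT TOA's data).
# Point: sampling only tweets that contain complaint-associated keywords
# inflates the apparent share of negative tweets.
import random

random.seed(42)

POPULATION = 100_000
TRUE_NEGATIVE_RATE = 0.20        # assume 20% of all service-related tweets are negative
KEYWORD_RATE_IF_NEGATIVE = 0.50  # negative tweets often mention "cable guy", "appointment"
KEYWORD_RATE_IF_OTHER = 0.05     # neutral or positive tweets rarely do

tweets = []
for _ in range(POPULATION):
    is_negative = random.random() < TRUE_NEGATIVE_RATE
    keyword_rate = KEYWORD_RATE_IF_NEGATIVE if is_negative else KEYWORD_RATE_IF_OTHER
    has_keyword = random.random() < keyword_rate
    tweets.append((is_negative, has_keyword))

# "Sample" only the keyword-matching tweets, the way the press release describes.
keyword_sample = [is_negative for is_negative, has_keyword in tweets if has_keyword]

print(f"Negative rate in the whole population: {TRUE_NEGATIVE_RATE:.0%}")
print(f"Negative rate in the keyword-filtered sample: "
      f"{sum(keyword_sample) / len(keyword_sample):.0%}")
```

With those made-up rates, the filtered sample comes out at roughly 70% negative even though the population is only 20% negative. Different assumptions give different numbers, but the direction of the distortion is the same, which is why the choice of keywords matters so much.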

There are other issues in this research, which we'll leave out for brevity's sake. In a nutshell, this study tells us NOTHING, and the claims about the report are not only wrong, but so wrong on the surface that most people should recognize that it is fundamentally worthless. But they don't. I don't know how many times it has been shared in total, but I've seen links to this study tweeted and retweeted at least fifty times just among the people I track on Twitter. People who work in customer service, and who should know better.

Point: The Temkin Group

There's a fair amount of very interesting material coming from the Temkin Group, headed by former Forrester employee Bruce Temkin. In a post abstracting some findings from their for-purchase report ($195.00) entitled "The Current State Of Customer Experience," Temkin summarized research he describes as based on 140+ large North American companies.

It makes sense that we are not presented with the full methodology used, since presumably that would be contained in the report if one purchases it. The post sounds credible and professional, and reflects the generally high quality of presentation and research one appears to find on the Temkin Group site.

But the TITLE. Does this research actually reflect the STATE of customer experience, as stated in the title? From the abstracted findings, it appears not. It represents something quite different: the PERCEPTIONS of the people and companies involved as respondents. Not that these perceptions are trivial. Perceptions are important, but they are NOT behavior, and they are not the reality.

If you want to measure what companies are doing, you have to do so much more directly, by looking at what they do and not what they say they do. A hard task, but the ONLY way the findings have value.

To illustrate the point, there is a fair amount of research that looks at customer perceptions of customer service versus the CEO's (or other executives') perceptions of the customer service provided by the company. Guess what. CEOs rate their companies as providing much better customer service than their customers do. In fact, the gaps are often huge. BOTH sets of respondents offer PERCEPTIONS, not reality. Again, that's not to say perceptions are unimportant. They are important. It's just that they are NOT indicative of behavior.

Others Both Large and Small

American Express conducted a study, and appears to have confused survey results with customer behavior. In fact, the majority of research reports on customer service and/or social media make a similar confusion. Or their interpretation of basic survey data is faulty, with claims made that are not derivable from the actual data. On top of that, the actual details of the research (the how, why, who, context, previous research, caveats, and so on) are often not easy to find. The headlines can be found, are found, and are disseminated, often widely.

But often the headlines are wrong.

Conclusion and Some Questions

One has to wonder where the problems lie in the customer research industry. Is it because the people doing the research are incompetent in research logic and design? Is there a bias at work, because most research companies provide other services based on a particular slant on customer service or social media? Is it sloppy copywriting?

There’s no way to tell, and I make no comment about the particular companies above, except for what appear to me to be basic flaws in how the research is reported. In addition to the "why" questions above, here are a few more.

What are the implications and costs for businesses if they take action based on "research" reports that provide incorrect conclusions?

What level of accountability should these companies have? (Journal research tends to hold researchers accountable in various ways, while research companies in customer service don't appear to be accountable to research experts applying research standards.)

What are the other consequences of poor conclusions being circulated hundreds or thousands of times online by people who either don’t read the source documents, or lack the ability to critically assess the research?

Robert Bacal
Robert began his career as an educator and trainer at the age of twenty (which is over 30 years ago!), as a teaching assistant at Concordia University. Since then he has trained teachers for the college and high school level, taught at several universities, and trained thousands of employees and managers in customer service, conflict management, and performance appraisal and performance management skills.

43 COMMENTS

  1. Robert, you rightly point out that research shouldn’t be taken at face value. However, while I’m sure that some studies are faulty (no such thing as perfect research), in my experience it’s more of an issue of marketing and communications.

    For example, for years we heard that CRM projects were failing at rates of 50% or more. Media reports liked that headline, I suppose. But when we (Dick Lee, David Mangen and myself) conducted our own study, we found that in fact around 65% of projects were successful, based on ROI or perception.

    So part of the problem is that even with well-designed studies, the media (and marketers) are looking for a sound bite. As a result, much of the qualification and nuance of the study is lost. Not misleading research, but incomplete reporting.

    Marketing is another culprit. Whatever point you’re trying to make, you can find research (or some portion of a study) to support that point. Again, much is lost as the marketing message is blasted in the market. Still not misleading research, but the message may not represent the complete story.

    Customer service/experience vendors and consultants say experience is what drives success. And for some companies, it seems to. Yet common sense says that many people still buy products based on their capabilities and the price. If price didn't matter, why is Groupon so popular?

    My research has found “experience” is about 40% weighted as a loyalty driver, but product (40%) and price (20%) are the balance. So experience is not the only thing that matters, but you’d never know it based on the marketing of the CeX industry.

    I like customer-centricity as a way of doing business, and have research to support that it "works." But is it the only way to succeed? No. Just ask the customers of Ryan Air if the airline is customer-centric and they'll say no. But Ryan Air's low-cost, bordering-on-customer-abusive model generates profits, and customers seem to live with the poor experience.

    As to who should be held accountable, I’d say business managers who believe the headlines and don’t ask some of the questions you posed on the survey sampling, design, etc. Just reviewing the actual questions asked can shed some light on whether the survey was designed to prove a point, or for objective research. I’ve been surprised at how difficult it is to get a marketer to release this information, which makes me suspicious that the reported results are misleading or at least incomplete.

    In the end, there’s no substitute for an executive to have good judgement and maybe even a bit of gut instinct as to how to make a business succeed. Research, however designed or reported, is not a substitute for common sense.

  2. Robert –

    The points you make about the ZenDesk blog (I commented on this July 5th blog myself) have some merit, but in my view you're cherry-picking, missing the forest for the trees, and incorrectly branding the firms involved in generating the data ZenDesk used. Saying that "It's the norm", i.e. reporting data you consider to have errors, borders on being inflammatory. The piece was principally meant to suggest that, from the ZenDesk perspective, customer experience is more important than advertising. Here's how I responded:

    "1. As dynamic and powerful an influence on future behavior as word-of-mouth and brand impression, resulting from customer experience, can be, advertising can't, and shouldn't, be shoved aside in the communication mix. Research by Keller Fay, and others, repeatedly demonstrates that it's the effective combination of peer-to-peer communication, advertising, and promotion that will drive business results.

    2. As a metric, satisfaction has some overall value; however, customer advocacy (brand impression, informal communication activity, and continued purchase intent) correlates much more closely and consistently to actual business results (see my CustomerThink article, which has had over 6,500 views in the past year: http://www.customerthink.com/article/marketing_case_customer_advocacy_me…. My new book, The Customer Advocate and The Customer Saboteur, explains this in more detail. On the Zengage page which presents the reasoning behind customer satisfaction measurement, the CEO of Wildfire Interactive even identifies customer advocacy as an end goal.

    As a passing note, doubtless some of the stats quoted in your blog come from work I did while an SVP of Stakeholder Relationship Consulting for Harris Interactive, and also work partnered on with RightNow Technologies." These studies focused on downstream communication and specific behavior as a result of customer experience. Though your reference to talking about bad experiences to toddlers is a cute way to make your case, I'd suggest that it misrepresents what is actually being measured.

    I'd also point out that social science research and customer experience research, though they overlap in some areas, have different objectives and design parameters. In customer research, such as our advocacy framework studies, we're focused on closely correlating responses to actual business outcomes. We've proven the relationship of probable behavior to actual behavior through ongoing panels. So, I'd take issue with your statements about survey research based on my professional experience in this field.

  3. Bob, for the most part, I look at what the research companies are writing about their own research. I've given up on expecting the media to report research with even a modest level of critical reading and attention.

    So, yes there are problems in the media, and there are problems with people taking results at face value, but my point is this: The research companies themselves are doing faulty research, and reporting it in misleading ways, and that this is a fairly CONSISTENT thing.

    It is the research companies themselves that are at the root of the problem. They are expected to know what they are doing, and if they publish junk, how in the world can managers or the media ever get it right?

    The irony: these companies doing customer service and social media research have not responded to most of my attempts to discuss their research errors for specific posts on their sites.

    Zendesk? Nope. They completely ignored my tweet to them about the material mentioned here. Temkin? No response to attempts to contact him on Twitter and LinkedIn. I finally got a first response when I sent a pointed email to him.

    TOA I didn’t bother with. The stuff they put in their press release was so ridiculous, I figure no amount of anything could help, even if they cared.

    What about Forrester? Errors there too, along with attempts to get a response. Nothing.

    How long do you think CNN would last if on every newscast there were 2 items that were completely false, and stated as fact? Yes, readers need to think, but they should also be able to trust the sources. For the most part the research sources in customer service are shamefully bad, and likewise for social media.

  4. I disagree on almost every point you make, except for the title. First of all, Zendesk is responsible for what it produces, and my issue is with them regarding the infographic they produced. If they are "reporting" wrong information, it's THEIR responsibility, and it's in their interests not to do that.

    I doubt I will ever take Zendesk seriously or as a source of legitimate knowledge. We don't need fancy graphics when the content is wrong.

    As for cherry picking, no. I’ve seen the same problems over and over again, simply by checking tweets and stories linking to “research reports”. I only included three examples, because including the full set of junk findings I’ve found would fill a good fifty pages.

    As for the link between survey results and actual consumer behavior, you would have to demonstrate that link EACH time in order to produce valid conclusions. You never ASSUME the link in research.

    The research on relationships between survey results and behavior is so clear, and I bet that almost nobody in the “research company” business has ANY understanding of the basic social psychology involved in survey response, and I don’t see any indication you have read any of it.

    I don’t know you, but I’d suggest, from your response, that you are part of a huge problem. Undertrained? Lacking in basic theory and practice about the tools you use? I don’t know if that applies to you, but perhaps it’s endemic to the industry. I’m trying to find answers and find amazing resistance from inside the industry — from people such as yourself, and I expect it’s because they don’t know any better.

    If one’s understanding of survey research and psychology is not informed by significant background in research via formal education in psychology, statistics, sampling, and item design, one is simply incompetent.

    You can’t learn this stuff “on the job”. It’s way too complex.

  5. Robert –

    Your tone and entrenched negative point of view leave me unwilling to even continue a dialogue. Undertrained, part of the problem, lacking in basic theory about tools and understanding of the social psychology involved in survey response, and no indication that I've read any of it? Really? Try these credentials on for size:

    – Professional background which spans over 35 years in stakeholder and brand research and consulting, at senior executive levels on both the client and vendor side
    – Author of five books on customer behavior, several of them endorsed by Professor Philip Kotler and Professor Leonard Berry
    – Author of over 150 customer-related white papers, articles, and blogs
    – Active customer behavior presenter and strategic customer life cycle workshop developer/facilitator at conferences in the U.S. and over 30 foreign countries.

    You may have found some discrepancies and inaccuracies, and even misstatements, in customer research articles or other content. They are, from my broad experience as a consultant and author, the small exception that proves the rule of largely unbiased, factual, and actionable reporting.

    As far as the link between survey response and actual customer behavior, I wouldn’t claim it unless I’d actually proven it. And, I’ve done that in multiple b2b and b2c industries. The fact that you won’t even consider the possibility of this linkage to business outcomes is consistent with the rest of your blog and response.

    Michael W. Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe (www.marketprobe.com)

    – Author of Customer Retention (1995), The Customer Loyalty Pyramid (1997), One Customer, Divisible (2005), The Customer Advocate and The Customer Saboteur (2011)
    – Co-author of Customer WinBack (2001)
    – Contributing author of Customer.Community (2002)

  6. Robert, you advocate holding research firms accountable, and say that “for the most part, I look at what the research companies are writing about their own research.”

    Yet your examples don't really support your point. AMEX, TOA Technologies and Zendesk are not customer research firms.

    As for Temkin’s research report on the State of the Customer Experience, you draw your conclusions from reading an abstract, not the actual report. Is that really fair?

    I think you make a fair point that customer perceptions are not the same as behavior. Sometimes that distinction is lost in the marketing lingo or headlines.

    That said, you slam customer research firms for “misleading” research without providing compelling evidence to back up that contention. You’re certainly entitled to your opinion, but it appears your post is itself an example of what you decry.

  7. Bob, I chose three examples from at least 100 “research” studies reported by the companies themselves that are faulty, and the list includes pretty much ALL of the companies doing customer related research.

    The examples in the article are NOT exceptions.

    As for reading from the abstracts, you bet, because that’s what the overwhelming majority of people read, then spread around as if the data “says” something it does not. The abstracts and press releases are the very things that buyers use to purchase the full reports, often at a high price point.

    If the press releases and abstracts written BY the companies are presenting conclusions different from the report, that’s misleading. If they are presenting faulty conclusions that ARE in the report, THAT is misleading.

    As for compelling evidence, I’m not sure what would be acceptable to you. So why not tell me. And I’ll see if I can meet your standard.

    I have many more examples I can access, but do you really want to read 50 more examples of faulty research and/or faulty reporting?

    Better yet, do something I challenge everyone to do, including myself. Go ye forth and attempt to DISPROVE your assumptions. YOU look, if you have the expertise in research, and I guarantee you, your conclusions will be pretty close to mine.

    I have no interest in slamming anyone, although I have to admit incompetence bothers me and begs for exposure in this social media world. I do however have serious concerns that companies will ACT upon false research interpretations by the originators, invest money in things they believe have been documented to work, and will fail.

    I’m interested in educating the “consumers” of this research, and I expect that’s not what some companies want.

  8. Michael, please go back and read what I wrote, and try to be a little more literal in interpreting what I said, OK? I asked questions, since I'm not sure where you "fit".

    I'd LOVE to spend days documenting the overwhelming quantity of poor research along with explanations, but I bet that even if I started posting URLs of research, very few people could identify what's wrong with the studies.

    Come to think of it, I'll do that in this comment section. If you are up to the challenge, and you are as able as you say, let's see if you can identify flaws, or potential flaws, in what I post.

    Nobody ever takes me up on these challenges, so I hope you’ll be one of the first.

    One more thing. I have no problem with you having proved the link between survey responses and behavior. I’ll take you at your word that YOU produce proper research and you’ve validated your instruments properly (you have, right?)

    Very few do. And each instrument MUST be properly validated, for each population, or sample, unless it’s used in a standardized way, with proper norms, known distributions, etc. In other words your surveys may be valid and predict behavior. Because you did things right? That IS the exception. And your “linkages” can’t be generalized to any other situations.

    Sorry, that’s just basic research stuff an undergraduate in the social sciences learns.

  9. Ok. Here's the challenge. I'll post URLs to reports on research/press releases and you tell us whether there are flaws in the reports.

    This time: two different URLs, one to a study that is almost flawless (can you spot what's missing?), and the other quite flawed, and in essence, capable of misleading.

    1) http://www.loyalty360.org/industry_news/exceptional_customer_service_most_important_aspect_of_customer_loyalty_acco/

    This brief summary is from Affinion Loyalty Group and VIPdesk, and relates to loyalty and affluent customers. Are the conclusions logically valid? Can you spot the rather glaring flaws?

    2) The second is from Sysomos and since I read a lot of customer service AND social media research, I’m including it. It’s almost perfect. Why is it almost perfect? Is there something missing?

    http://www.sysomos.com/insidetwitter/engagement/

  10. Robert, I think it would be more productive to just write an article that helps consumers understand what kind of statements to watch out for.

    In the business world, everyone is doing surveys and reporting the results. Sometimes the surveys are not well designed. Other times the samples are limited. This is not formal research, but rather “factoids” generated from surveys.

    What bothers me is your sweeping generalization that all consumer research is misleading. As you say, you have 100 examples that includes “pretty much ALL of the companies doing customer related research.”

    So, your point is that not one firm in the industry is doing research that is not misleading? Seriously?

    There are many shades of gray out there, and even the most impeccably designed academic research has limitations. If you’re trying to educate consumers, then I’m not sure that attacking the entire research community will help your cause.

    Maybe you could share an example of just one study that has been done correctly in your opinion, and explain why. Perhaps some of your research?

  11. Robert:

    First of all, I agree with your high-level premise: people should hold market research companies accountable for good research. I've been running research organizations for more than 12 years, and I see a lot of variability out there.

    But I think your argument about our research doesn’t hold up to the standards you espouse. Your comment about the title of the Temkin Group report from June 2010, “The Current State Of Customer Experience,” makes sense. You made that same point on my blog last December and I responded in agreement; acknowledging that it should have said “Customer Experience Management” instead of “Customer Experience.”

    Does that make the research or the findings misleading? I don’t think so. Anyone who read the report would clearly see the methodology. The methodology is even clearly stated on the blog post that introduces the report. Where is the “misleading?!?” Is it a bad title? Probably. Does that make it misleading research? No way. I’ve never heard from anyone else who thought that research was misleading.

    Also, you might note that I’ve published an update to that report called “The State Of Customer Experience Management, 2011” (http://experiencematters.wordpress.com/2011/05/17/new-report-the-state-of-customer-experience-management-2011/)

    And, if you want to find out about the actual “State Of Customer Experience” then check out our Temkin Ratings website (http://www.temkinratings.com/) where we evaluate companies based on feedback from consumers.

    We all need to be held accountable for what we research and what we write. I hope that you apply that same standard to what you write in the future.

  12. Robert,

    Bob Thompson wrote to me asking me to respond to the debate.

    First, since you asked for comments on the surveys posted for the two URLs, I thought I should respond as you are using this as a means to determine competence. The answers may or may not be what you were looking for, but they are most definitely valid (and would have been pointed out by me and others were these studies to seek publication in a peer reviewed scientific journal).

    http://www.loyalty360.org/industry_news/exceptional_customer_service_most_important_aspect_of_customer_loyalty_acco/
    The survey suffers from selection bias. Specifically, the release states that "the survey was conducted during a webinar presented by Affinion Loyalty Group and VIPdesk, and produced by Loyalty 360." Clearly, this is not a representative sample; therefore the results cannot be expected to reflect the population of interest.

    http://www.sysomos.com/insidetwitter/engagement/
    The main issue is that all tweets are treated the same. This manifests problems in two ways (a small numerical sketch follows the two points):
    1) Mass tweets by celebrities, news personalities, corporations, etc. are treated as identical to tweets among members of a close knit social network. Replies would be much more likely in close-knit groups than to mass tweets.
    2) The content of the tweet is likely to affect the reaction to the tweet. No attempt was made to group tweets by subject matter.
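
    To make the first point concrete, here is a small numerical sketch. The counts are invented (they are not Sysomos data); the only purpose is to show that a single pooled reply rate can describe neither kind of account:

    ```python
    # Invented counts, not Sysomos data: a pooled reply rate hides the fact that
    # broadcast-style tweets and close-knit-network tweets behave very differently.
    groups = {
        # group name: (tweets observed, tweets that drew at least one reply)
        "broadcast accounts (celebrities, brands, news)": (60_000, 3_000),
        "close-knit personal networks": (40_000, 22_000),
    }

    for name, (n_tweets, n_replied) in groups.items():
        print(f"{name}: {n_replied / n_tweets:.0%} of tweets get a reply")

    total_tweets = sum(n for n, _ in groups.values())
    total_replied = sum(r for _, r in groups.values())
    print(f"Pooled, undifferentiated rate: {total_replied / total_tweets:.0%}")
    ```

    With these assumed counts the groups sit at 5% and 55%, while the pooled figure is 25%, a number that describes neither group. That is the danger of treating all tweets as interchangeable.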

    Now, to the matter at hand. Without question, there is a lot of bad research being published (just like there was a lot of bad finance being practiced by investment companies which did wonders for the global economy). This has been pointed out by me and others in the industry. In fact, I co-authored a book Loyalty Myths to expose many of the fallacies associated with bad research in the loyalty space.

    Even the best research in the most prestigious scientific journals represents an oversimplified view of reality. It is impossible to take all externalities into account. And right now, researchers at Duke medical school have been forced to recall four papers on cancer research (which affected the treatment of dying patients) because of very bad research.

    The key is our need to replicate our findings, and to be willing to accept that much of what we believe to be true will ultimately be proven wrong.

    Best regards…
    Timothy Keiningham, Ph.D.
    Global Chief Strategy Officer, Ipsos Loyalty

    ———————————————–
    Ordinarily, I would not include my bio, but in this case, I thought I should include it:

    Timothy Keiningham is one of the world's most highly acclaimed loyalty experts. Tim is global chief strategy officer at Ipsos Loyalty, one of the world's largest market research firms.

    A prolific writer, Tim has authored and edited eight books. His most recent book, Why Loyalty Matters, provides compelling insight into how our loyalties, large and small, lay the foundation for our happiness, and determine the kind of world we live in. The book offers a comprehensive guide to understanding what loyalty is, what it isn't and how to unlock its power.

    His prior book, Loyalty Myths, was ranked as the Number 4 best business book of 2006 by The Globe and Mail newspaper (Toronto, Canada), one of the 30 best business books of 2006 by Soundview Executive Book Summaries, and was a 2007 finalist for the Berry-AMA Book Prize for Best Book in Marketing.

    Tim's research on the importance of loyalty has received over a dozen prestigious scientific awards, including:

    • INFORMS Society for Marketing Science, top 20 most influential articles of the past 25 years.
    • Marketing Science Institute / H. Paul Root Award from the Journal of Marketing for the article judged to represent the most significant contribution to the advancement of the practice of marketing (twice).
    • Citations of Excellence "Top 50" Award (top 50 management papers of approximately 20,000 papers reviewed that year) from Emerald Management Reviews.
    • Service Excellence Award (best paper) from the Journal of Service Research.
    • Outstanding Paper Award (best paper) from the journal Managing Service Quality two years in a row (2007 and 2008).

  13. This is not a continuation of dialogue. Instead, it is a reaction to the responses of my professional colleagues. It is also a reaction to your responses, so far, to them (and to me).

    Acknowledging that weaknesses may sometimes exist in the methods or reporting of market or academic research, at this point it's more interesting to observe how you choose to communicate and support your point of view. Bruce and Bob have appropriately asked that you throttle back on the individual and collective attack mode, and perhaps give due credit that, in the main, researchers are fully accountable for both the quality of what they design and the defensibility of what they publish.

    The positions you’ve taken, and the responses you’ve received, have succeeded in drawing many CustomerThink readers to this blog. As a means of giving some useful air to the important issues of research design and reporting, this is certainly a good thing. Maybe that was your intent. But, from my perspective at least, the attraction is more akin to the way a traffic fender-bender or someone doing card tricks on the street pulls in gapers. People are curious and interested to see what’s going on – without getting directly involved.

    You've made assertions that much of the consumer/customer research out in the ether is badly designed, misleading, or both, offered examples that you believe to be the norm rather than the exception, and made multiple challenges to have readers submit counter-arguments (because, it is assumed, you don't believe they can). Rather than encourage folks to agree with your positions, though, the original statements and responses you've offered are very likely to have the opposite effect.

    Michael Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe

  14. Unfortunately, all the heat is obfuscating good comments from multiple perspectives. So I would like to very calmly suggest the research community should be doing a much better job policing itself – and landing on some of the “bad actors” in the community.

    For example, prior to the study Bob mentioned initially that he, David Mangen and I conducted measuring CRM success rates, we conducted a study measuring post-purchase satisfaction with many brands of CRM software. At the time, Siebel Systems was loudly proclaiming 99% customer satisfaction. Aberdeen had conducted a “study” for Siebel backing them up, and an “independent” research firm in California was totally onboard. I’m on vacation right now and can’t access my records, but I’m virtually certain Siebel’s customer satisfaction rating was below 60 points (different than “%”) – by any research standards a disaster – and below many systems Siebel routinely poo-poohed.

    How could our numbers be so different than the two others'? Simple: David, Bob and I set out to find the truth, and were uncompromising in doing so. Reporter Lee Gomes of the WSJ called me to ask about the Aberdeen research. My input was that I didn't believe anything Aberdeen published (and still don't), because subjects of their research were paying for the "study." So Lee called Aberdeen, and some indiscreet VP of marketing admitted under questioning that Siebel had paid the freight. Lee turned his confession into a feature article in the business section. Aberdeen didn't try to defend itself. It just fired the VP.

    As for the other “research” company’s numbers, when Siebel’s in-house researchers called us to discuss our numbers, they admitted they were part owners of the company (or it may have been full owners). Nuff said.

    The marketplace is full of this type of misrepresentation disguised as "research," which compromises everyone's work from a perception standpoint. Talk about a "broad brush." And there's no antidote except legit researchers taking on the poseurs bluntly and publicly. Sitting by in silence condemns all researchers to be held in disrepute in many quarters.

    And BTW, this type of research appears to be self-serving, but it's not. Perhaps if Siebel had looked at itself honestly back then it might have survived.

  15. Frankly, I find some of this debate humorous. {sarcasm on}

    – A press release designed to announce and publicize a publication engages in puffery designed to increase sales. Astounding!! Who would have thought that they would ever do that!! The cads — why I am appalled.

    – People — including CEOs and supposedly responsible professionals within their firms — consuming the press release and/or abstract and assuming that all relevant material is necessarily presented in that snippet — say, has anyone paid any attention to just how freaking lazy some people are these days, the high levels of functional illiteracy that is endemic in this society, and that many organizations are too cheap to pay for high quality research.

    – Research firms not knowing what they’re doing — I’m shocked, simply shocked. Why certainly the recent BA graduates hired to design and implement the research so that the profit margins demanded by the founders can be maintained are of course the best and the brightest and can design, implement, and analyze the data better than any of those old, highly educated and expensive geezers who may in fact have some experience with understanding the nuances of research and the implications of approaching a project from one POV vs. another POV.

    {sarcasm off}

    Sorry, but I suspect that there is more than enough blame to pass around on all parties to this process.

    – Without a doubt we have incompetent and/or misleading researchers who need to be called on the carpet.

    – Firms contracting for research (or should I say the responsible professionals within the firms contracting for research) should have the intellectual capacity to understand that demanding commodity prices on research will yield a commodity product with marginal value and usefulness. Yet they buy on price, or steal by relying on PR snippets, and then wonder why their research leads them astray.

    Over the years I have had to deal with:

    – Purchasers of research who were willing to promise me a project — if I was willing to guarantee what the results would say. (And I mean literally tell me that to my face. They didn’t even care if I collected any data, just that I’d say that I had and put my reputation behind their fabrication!)

    – I have had other firms pull out of projects when I attempted to create questionnaires that might elicit responses that they didn’t want to hear.

    – I have had clients try to re-write my reports/conclusions while continuing to attribute the conclusions to me.

    – I have had clients present to me reports prepared by research firms where the report author has used the PowerPoint default graphics options to create graphs that lie with statistics — and the research firm was dumb enough that they fell for the lie/did not catch it in the report preparation stage. (And the client swallowed it hook, line and sinker until presented to them with comparable axes and in a fashion designed to expose the lie.)

    So should the research industry do a better job of policing themselves? Yes indeed, they should. It is a matter of ethics. But there is also the matter of being an ethical (and intelligent) purchaser and consumer of research, and anybody who believes that there are no consequences associated with their decisions and that all research is unbiased and without problems — is a fool.

  16. I wade into this debate (melee?) somewhat reluctantly, unsure if I'll be met with personal epithet, professional indignation or simple character assassination. To the extent that it matters, like some of my esteemed colleagues whose comments precede mine, I'm a Ph.D. with a long list of publications and speeches, have taught B-School, and have 30 or so years in market research.

    Fundamentally, Robert, I agree with your contention that there is far too much crappy research out there. I'm not sure if the caliber of work in customer service is any better or worse than research in other arenas, but there is no shortage of poorly designed research and/or ill-conceived conclusions supposedly based on such research. While perhaps not as vituperative as you, I have been stridently critical of VOC researchers for failing to be on top of the "Who, What, When, How and Why” of their design and the mismanagement that stems from mismeasurement (http://www.customerthink.com/blog/the_seven_deadly_sins_of_voc_research) and have taken one of the leading figures in the field to task for shortcomings in his research (http://www.customerthink.com/blog/david_vs_goliath_i_by_howard_lax and http://www.customerthink.com/blog/david_vs_goliath_round_ii_challenging_fred_reichheld_on_the_economics_of_loyalty_again_by_howar). (For the record, both Keiningham and Lowenstein have written extensively about such research failures as well.)

    Is the problem endemic to market research? A client-side colleague recently remarked that the proliferation of DIY tools was great for him, but horrible for the industry: now anyone with a computer can "do market research” – or blog, for that matter. The barriers to entry to get into this business are about as steep as becoming a Tarot card reader. And, some might say, our predictions of future outcomes have been as reliable.

    That said, is all market research poorly done? An emphatic no is my response. Should buyers, users and readers kick the tires on the work presented to them? Of course. Should the industry produce consistently better work? While that would be nice, the problem is how? "The industry” is a democratized marketplace. There are no licensing requirements. While there are associations with standards, they are not governing bodies. In some respects, market research is just another service being sold and delivered to customers.

    There is solid, high-caliber, responsible and professional market research being done by many firms and practitioners. Most of it is found in proprietary work done for specific clients, not that such work is without its flaws as well. The best work typically is not, however, necessarily found in the sound-bites or in syndicated work you cite, which often is more for promotional purposes and PR to get clients to buy consulting and other services. Much of the formal work done for public release, by contrast, is quite good, in part because it is explicitly designed for public scrutiny.

    As for the marketing of the work, well, it's marketing, hype and all. You had the audacity to publish a book entitled Perfect Phrases For Customer Service; are those phrases really "perfect"? But it probably sold better than if you said the phrases were "pretty good." Not to endorse what might be misleading labeling or hyperbolic marketing claims, but this is more an indictment of the practice of marketing than the practice of market research.

  17. I don't want to sound testy here, and I know you are expressing your opinion, but if you don't mind, I will decide, along with my readers and customers, what is "constructive".

    But, in any event I HAVE written material on the topic you suggest including a kindle mini-guide on the things one should look out for in social media research. It’s called “Giving The Business To Social Media Research”.

    I've posted other articles on the importance of managers and decision-makers learning basic research interpretation skills.

    That said, there are a lot more consumers of research than there are originators of research, and as I will mention in my reply to Bruce later, where is the accountability of companies that produce social media and customer service related research?

    What are the responsibilities, considering these companies are looked up to by decision makers as being competent, even expert, and presumably the research authors HAVE the skills to do this properly.

    We have laws against false advertising and similar related things.

    Posting research that is badly done, or that provides misinterpreted findings, is, in my opinion, a breach of trust, AND eventually actionable in civil court.

    Ok. Not a lawyer here, but if I bought one of these “reports” for a grand or so, implemented the suggestions, only to find out the research is badly done, and the conclusions presented, wrong, I’d be sorely tempted to go after the company, particularly if I experienced detrimental business effects.

    Lawyers, doctors, many other professionals CAN be held accountable for both intentional and unintentional misconduct, and malpractice.

    My point is that research producing companies should act WAY more competently, and transparently, and if they don’t, I’m REALLY comfortable with government regulation. Then again I’m Canadian.

  18. Thanks, Bruce for the comment and your participation.

    I am truly perplexed by your comment that a headline that proclaims certain results, then cannot back that headline up, is not misleading.

    I spent six years on an editorial board of a peer reviewed journal in education, and I can tell you that if you pulled the “stunt” of having a title that did not reflect the actual findings (given proper interpretation), that you would have been “binned” rapidly, and with great excoriation and amusement. Privately, we’d call you incompetent or worse.

    So, I'm not sure how one can think that saying one thing in the title, and not delivering on it with data and interpretation that stays within the data, is not misleading.

    As I said above, I’m waiting for the civil suits on this. Many professions expect certain standards from those within the sector, and variation means censure, even civil suits for malpractice.

    What I see, and yes, it applies to your “poor” headline, is malpractice.

    To be absolutely clear Bruce, I have no evidence that you or anyone else does this kind of thing intentionally to mislead, but mislead you do.

    Doesn't matter. I see your work, sometimes good, sometimes not so good, tweeted and retweeted as gospel, and while you can't be accountable for the behavior of others, IMHO it is an obligation, a matter of due diligence, to do better, considering you know your audience, and you know that YOU and your staff understand research practice(?) while your readers do not.

    If you know readers lack the skills to interpret raw data and design, and you present them with false conclusions, what exactly WOULD you call it?

    A week or two ago, I was thinking companies such as yours, that rely on credibility and trust, need to hire an external research auditor to review your work BEFORE data collection, AND then again, BEFORE release.

  19. Question: I believe we agree that the headline in question could have been better. I wonder if you might clarify, even in very general terms, how a "poor" headline can be written and not caught before publication?

    You’ve been in the commercial research business a long time, so any insights you might offer into how bad research gets into the wild from major firms would be really instructive.

    Often things look very different from the inside and the outside.

  20. You make the statement that I’m testing competence by asking for input on the two cases. Where did I say that?

    Sharing my gut feeling here, why should I even bother to read the rest of your post? (I will, like I said, it’s a gut feeling).

    I believe I called it a challenge.

    So, thanks for what appears to be whatever your attempt to establish competence was, but well, reading skills are good too (well, it could also be that MY writing skills have resulted in misinterpretation). One never knows. But gawd, I’m thinking that’s the second person who somehow believes that by stating a bunch of awards and whatever, that that will somehow demonstrate competence.

    People demonstrate competence when talking about, and behaving with respect to their area of “competence”.

    Bad is bad. I've had the honor of telling some semi-illustrious scholars with Ph.D.s (albeit in polite ways) that their work was illogically designed and improperly interpreted and thus NOT worthy of publication. Oddly enough, I was in that position prior to getting my Masters.

    I gauge competence on what I SEE, not awards, not publications, not any of that. Having lived on the inside of academic research, one learns that’s all one has.

  21. If you or anyone else wants to comment about how I write, my tone or anything of that nature go to it. I won’t be participating or responding because it obscures the issue. I have to wonder about that.

    As for your comment:
    “Bruce and Bob have appropriately asked that you throttle back on the individual and collective attack mode, and perhaps give due that, in the main, researchers are fully accountable for both the quality of what they design and the defensibility of what they publish.”

    No, they are clearly NOT accountable, and over the months coming, when I see more terrible stuff coming from companies doing customer service research I’ll post it here.

    So, could you kindly explain how a customer base that is simply not educated about even basic logic errors, like the difference between correlation and causation, or the difference between words and behavior, is in any position to do much BUT trust the conclusions offered? Let alone any research that actually uses statistical techniques, even basic ones like standard deviations or within-group variability?

    THEY aren’t going to assess the quality of the research.

    Who are companies producing research as a commercial endeavor accountable to? By what mechanism?

    Here’s the situation. Two parties: reader/customer and expert/producer. Reader looks for expertise he lacks and looks to the expert producer in a trust based relationship.

    But what if the expert isn’t so expert?

    In medicine and many other fields, it’s a similar relationship, BUT there ARE methods to hold doctors, lawyers, accountants, accountable.

    And please, don’t give me the “market” will hold you accountable. That presumes an educated, informed market place and contradicts the obvious expert-customer relationships.

    I seem to be about the only one trying to hold companies accountable for their research conclusions and methods.

    Bueller? Bueller?

  22. Sitting here staring blankly at the screen trying to find something to say. I’m afraid all I got is “thank you”.

    You touch on something quite fascinating and it links up with the two examples I posted in the “challenge”.

    (hey, I found something to say)

    The Sysomos study "flaw", if I'm being picky, is that it doesn't appear that they looked at, or tried to explain, why their data was significantly different from data from other research on the topic. The ballpark figure is that previous research indicated about 92% of tweets never received any response, but the Sysomos number was something like 62%.

    To be credible they need to have mentioned that, and made some suggestions as to why. They didn’t appear to do that. Did they READ previous research?

    I hope so. But I shouldn’t have to “wonder”.

    One more comment I’ve made before. I don’t believe I’ve seen more than one study out of a hundred actually find something that showed that the “buzzy” customer service thing was worthless, or was a failure.

  23. You should hear me on a bad day. I could hardly sniff out an iota of sarcasm from you. And humor? I take issues seriously, myself not so much.

    Great post, thanks for your input on this. Oh, and yes, I agree about educated consumers and their responsibilities, but there are some psychological factors operating here having to do with how numbers are perceived and how they provide a high credibility level.

    Heck. When I go to a doctor, yes, I try to educate myself about whatever relevant issue is happening, but I’m NEVER going to be as expert as the doctor, one hopes, and often, I’ve left knowing that I didn’t even ask all the right questions.

    So, IMHO the onus is on the experts/those who know more to ensure they do not betray the trust afforded them, and off of which they make big money. But I'm happy if the courts do it. That accountability thing.

  24. Your reply is hilarious in its overt dishonesty as to the intent of your “challenge”, especially given your complaint about misleading headlines and research in general.

    You state "You make the statement that I’m testing competence by asking for input on the two cases. Where did I say that?” You then go on to attempt to denigrate me by saying, "So, thanks for what appears to be whatever your attempt to establish competence was, but well, reading skills are good too.”

    Forgive me, as I must have misread your statement to Michael Lowenstein, which set up the "challenge.” Specifically, you goad him with "If you are up to the challenge, and you are as able as you say, let’s see if you can identify flaws, potential flaws in what I post.” It is difficult to imagine that this is not a test of competence, particularly with the "if…you are as able as you say” comment.

    I have no problem with your basic argument. In fact, I cannot imagine anything that I have said being a challenge to your basic premise. Strangely, agreeing with you doesn't seem to be good enough.

    Finally, with regard to listing credentials, you are of course correct that in and of themselves, they do not demonstrate expertise. It is one of the reasons that I insist upon putting my ideas through the peer-review process by publishing them in the scientific literature. While that does not necessarily mean that what ultimately gets published is flawless, it does mean that they are critically examined by objective, anonymous reviewers who are selected based upon their expertise in the subject matter. The awards for these papers merely demonstrate that the scientific community believed them to have contributed significantly to the field.

  25. Robert,

    A little late to the discussion but here I am. I was asked by Bob to contribute to this discussion. I can feel your frustration with research that is "reported in ways that mislead the reading public…” I, too, question the quality of some of the research reports I see. I find that the best way to challenge published research is to conduct your own and share the results with the larger community. Also, while I won't address all the questions posed throughout this discussion, I would like to make some general comments about measurement issues regarding psychological constructs that I think might help the CustomerThink community members.

    I have been conducting research and consulting in the CEM/CRM field (whatever you want to call it) for 20+ years. As part of my formal education (I have a PhD in industrial-organizational psychology), I received extensive training in research and quantitative methods and, specifically, psychological measurement. This field of psychological measurement is called psychometrics; psychometrics is concerned with the measurement of such constructs as knowledge, abilities, attitudes and traits. There are scientific standards against which tests and surveys are measured (please see Standards for Educational and Psychological Testing: http://www.apa.org/science/programs/testing/standards.aspx). Customer surveys typically measure attitudes (e.g., satisfaction, loyalty). As such, psychometric standards can be applied to customer surveys. Here is a sample report that summarizes a validation study of a company's customer relationship survey (http://www.slideshare.net/bobehayes/validation-of-customer-survey). It includes some information about various forms of reliability and validity of the customer survey and provides evidence that the survey measures what we think it's measuring. The bottom line: a good customer survey provides information that is reliable, valid and useful.
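
    To give a concrete, purely illustrative sense of what one of these psychometric checks looks like, here is a small sketch using synthetic data (it is not drawn from the validation report linked above). It computes Cronbach's alpha, the usual internal-consistency statistic behind statements like "these three loyalty questions can be averaged into one reliable metric":

    ```python
    # Illustrative sketch with synthetic data (not from the report linked above):
    # Cronbach's alpha for three loyalty items answered by the same respondents.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend 500 respondents answer three 0-10 items driven by one latent attitude.
    latent = rng.normal(7.0, 2.0, size=500)

    def item(noise_sd: float) -> np.ndarray:
        return np.clip(latent + rng.normal(0.0, noise_sd, size=latent.size), 0, 10)

    satisfaction = item(1.0)  # overall satisfaction
    recommend = item(1.2)     # likelihood to recommend
    repurchase = item(1.1)    # likelihood to continue buying
    scores = np.column_stack([satisfaction, recommend, repurchase])

    def cronbach_alpha(x: np.ndarray) -> float:
        """x: respondents in rows, items in columns."""
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    print(f"Cronbach's alpha for the three items: {cronbach_alpha(scores):.2f}")
    # A high alpha (a common rule of thumb is above 0.7) is the usual justification
    # for averaging such items into a single customer loyalty score.
    ```

    That is the flavor of evidence I have in mind when I say a survey should be shown to be reliable and valid before its items are combined or its scores interpreted.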
    Applying psychometrics to customer feedback data can reveal very useful things:

  26. Create reliable summary metrics: Psychometrics helps you identify which questions are measuring the same construct; the creation of reliable metrics rests on how well the questions are statistically correlated with each other. Psychometrics shows us which questions we can average together to get a reliable metric. In the CEM space, we know that three common loyalty questions (overall satisfaction, likelihood to recommend, and likelihood to continue buying) can be combined (averaged together) to get a single, reliable measure of customer loyalty (and yes, there is plenty of evidence that people do what they say). (A brief sketch of this kind of analysis follows this list.)
  27. Modify surveys: Psychometrics identifies which survey questions can be removed without loss of information, and helps you group questions together in ways that make the survey more meaningful to the respondent.
  28. Test theories: Does the likelihood-to-recommend question (NPS) measure the same thing as overall satisfaction? It turns out there is plenty of evidence that these measures assess the same construct (see True Test of Loyalty: http://businessoverbroadway.com/wp-content/uploads/2011/01/QP_June_2008_True_Test_Of_Loyalty.pdf).
  29. Evaluate newly introduced concepts in the field. This is one of my biggest pet peeves of the CEM space. Customer engagement, for example, is a term that consultants use to describe the health of the customer relationship. I have looked into the measurement of customer engagement via surveys and found that these measures of customer engagement literally contain the exact same questions as advocacy loyalty (e.g., recommend, buy again, overall satisfaction; please see Gallup's Customer Engagement Overview Brochure for an example: http://www.gallup.com/consulting/121901/customer-engagement-overview-brochure.aspx). Gallup says nothing about the psychometric quality of the Customer Engagement instrument. What is the factor structure of the questions? How do they calculate scores? They need to convince me that what they are measuring is important and different from what is already being measured by current instruments (an example of discriminant validity), and they haven't. Here is another example. I looked at Bruce Temkin's site and found that he has ranked companies in two different ways: 1) a "forgiveness" ranking (how likely are you to forgive the vendor for a mistake?); and 2) a "customer loyalty" ranking (based on overall satisfaction, recommend, continue to buy). I am wondering how useful this new forgiveness metric is. My guess is that it measures the same thing as the measure of customer loyalty. Using his two ranking tables, I correlated the two rankings (forgiveness and loyalty) of the top 17 retail companies and found that the correlation between them is .70. That is, companies with high customer loyalty also have high forgiveness ratings; companies with low loyalty have low forgiveness ratings. A correlation of .70 is extremely high given that I am only using the top 17 companies (restriction in range attenuates the correlation). My guess is that the correlation would be much higher (well above .90) if all companies were used in my analysis (there are a reported 143 companies in the study). So, the forgiveness rating may not tell us anything above and beyond what a customer loyalty metric tells us when it comes to ranking companies. A useful analysis would be to conduct psychometric analysis on the specific responses to understand how respondents view these questions. Bruce, if you have quantitative evidence that the "forgiveness" question measures something different than "customer loyalty," I would be interested in looking at it.
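
    Editor's sketch: the two analyses mentioned in items 26 and 29 can be illustrated with a few lines of Python (numpy and scipy assumed available). The survey responses and rankings below are invented purely for illustration; this is not Bob Hayes's actual data or analysis.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical 0-10 responses from 8 customers to three loyalty items:
        # overall satisfaction, likelihood to recommend, likelihood to continue buying.
        items = np.array([
            [9, 8, 9],
            [7, 7, 6],
            [3, 4, 2],
            [10, 9, 10],
            [5, 6, 5],
            [8, 7, 8],
            [2, 3, 3],
            [6, 6, 7],
        ], dtype=float)

        def cronbach_alpha(item_scores):
            """Internal-consistency reliability of a summed/averaged composite metric."""
            k = item_scores.shape[1]
            item_vars = item_scores.var(axis=0, ddof=1)
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        print("Cronbach's alpha for the 3-item composite:", round(cronbach_alpha(items), 2))
        print("Composite loyalty score per customer:", items.mean(axis=1))

        # Correlating two rankings of the same companies (e.g., a 'forgiveness' ranking
        # vs. a loyalty ranking) for 17 hypothetical retailers.
        loyalty_rank = np.arange(1, 18)
        forgiveness_rank = np.array([2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15, 17])
        rho, p = spearmanr(loyalty_rank, forgiveness_rank)
        print("Rank correlation:", round(rho, 2), "p =", round(p, 4))
        # A high correlation suggests the two rankings carry largely redundant information;
        # restricting the sample to only the top companies attenuates the correlation.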

    Psychometric standards are a regular part of the development of selection tests for employment decisions. Why don't more companies apply those same psychometric standards to their customer surveys? I think it's because there is perceived low risk to the company for poor measurement of customer attitudes. Selection tests can be challenged in court and a poorly developed test could lead to great financial cost to the company in the form of employment discrimination cases.

    Bob E. Hayes, PhD
    Business Over Broadway
    [email protected]

  30. At the beginning of your blog, there’s a statement of belief: “Much of the research is badly designed or reported in ways that mislead”. There’s further criticism of customer research’s lack of accountability and its reporting of incorrect conclusions. Though the professional market research community can, and does, acknowledge that there are challenges in the design, analysis, and reporting of its findings, to clients and/or for public consumption, your views on consistently poor customer research data quality and reporting clearly put you on a different planet.

    On your customer data quality planet, you also support, and even recommend, using employees as researchers. Your customer service content on the Articles911/Work911 portal states “Every Employee A Researcher”, further explaining: “Who knows customers best? Employees, and that is one reason why every staff member can contribute to gathering data/information about your customers” and “…employees are in ideal situations to collect and make sense of data about customer service in your company. The data they collect may not tell the whole story, but it will tell a good bit of the story that executives and managers often miss.”

    There are at least three challenges to this line of thinking, especially as it relates to customer service research data quality: 1) Employees are not trained customer interviewers or data gatherers, 2) Employees are not trained research data analysts and reporters, and 3) Employees are often biased in favor of the company, and so any insights they collect, or results they analyze, can lack the objectivity offered by third-party researchers.

    There is a fourth issue. Employees are often out of perceptual alignment with customers on what constitutes good and poor service performance, or on how important individual service elements are or aren’t. Targeted ‘mirroring’ studies, such as http://www.slideshare.net/lowen42/mirroring-customers, often reveal significant perceptual gaps between customers and employees. This can color whatever contributions employees are making to management decision-making.

    While I readily accept that there is genuine cultural cohesion, training and engagement, and customer-centricity value in having employees participate in gathering customer service insight, there are also significant potential data quality issues in having the product of their ‘research’ and analysis of customer insight contribute too heavily to management decision-making.

    Based on your other responses, doubtless you will remain on your planet and disagree.

    Michael Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe

  31. I’m glad I was able to provide some levity. I posted the challenge, and tweeted it to the #custserv tag on Twitter, as an exercise to get users there to think more thoroughly about customer service research, so people could test themselves much as I would have them do in a seminar exercise.

    As for my comment about reading, it wasn’t meant personally but as a comment on the general problem of people not reading things online, but skimming, a finding that has been replicated in a number of different studies. In fact the skimming phenomenon, which we “all” do, is part of this very problem, since people reading “research” don’t really read it, per se.

    If we know that, and all of us in customer service research should understand this online issue, then it ups the ante on responsibility in what we write about research, and on accountability. It’s a known issue, and the general rule is that once you know of something and don’t take it into account, it’s negligence under the law.

  32. The issue as to whether a lot of research in a lot of fields is poor is a good one, but not particularly germane. It reminds me of conversations I’ve had with my wife, where, on occasion, if I point out that she’s made a mess in the kitchen, she points out that I made a mess in the bathroom. Which is usually true. But it doesn’t change her mess at all.

    We’re talking here about “pop” research carried out by companies that often have a vested interest in getting a particular result, where there is no peer review, no accountability to follow research standards and ethics.

    Perhaps market research is poor, or perhaps not. I don’t know, because I don’t read that much of it. I do read a lot of customer service and social media research, and it’s pretty terrible, and it’s used to buttress claims about what businesses should be doing.

    I note that I don’t recall, so far, anyone addressing the questions I posed at the end of the article, which is unfortunate.

    As for audacity, I’m not sure whether that’s a dig or not, but two points. 1) I don’t control the titles of most of my books; McGraw-Hill does, for the most part. 2) Neither the title nor the contents of the book claim to be based on specific research. If, for example, the title was “Perfect Research On Customer Service,” and I then included a bunch of terrible studies, you’d have a point.

    Personally, I don’t like the titles, which, in the pursuit of sales, are created by the publishers; in this case, it’s a branding/franchise issue. I wrote a book on small business in this series, and the title is so unindicative of the value of the book that it’s the only McGraw-Hill book I’ve done that hasn’t sold.

    Sigh..

  33. Good stuff, Bob. On the nose with psychometrics. The problem is that the cost of doing proper research, validating, testing for reliability, and all the techniques involved is huge and time-consuming. And as you know, it requires some very advanced skills in areas like discriminant analysis, factor analysis, multiple regression, etc.

    It can be done. Is it commonly done? I doubt it; in the hundreds of studies I’ve looked at, I see no mention of it.

    Also, as you know, creating survey items is tough, and requires iterative statistical analysis to validate each item. A small change in wording can give completely different results. For example:

    “Have you ever walked out of a store because…” is different from “How many times in the last month have you walked out of a store because of…”.

    A lot of times items might seem to get at an issue, but do not. Then there is the issue of validating against behavior: you can’t assume survey results are linked to what people actually do. Take TV research. Why did ratings companies move to using black boxes rather than viewer logs alone? Because of the errors in reporting from consumers (and, of course, the availability of new technology).

    For example, take the oft-repeated finding that people will pay up to x% more for better service. Do they actually pay more for service? What segments of the market? Rich, poor, urban? What sectors? Groceries, high-end hotels and restaurants? And how do we know what they DO?

    Are there methodologies being used to look at behavior, rather than self-reports?

    Also, we know that customers/people have very poor access to their internal workings involved in making decisions (See How We Decide). Often, when asked WHY they did something (assuming they did it), they make a best guess at reconstructing what they think they might have been thinking.

  34. You leave something out about that article, and I’m not sure how your comments are germane anyway. What I did NOT say was to take employee input and feedback as “the truth”.

    I did not say to take that raw input, publish it under a misleading headline and pretend it’s a universal truth about service, or quality improvement. Neither did I say publish and generalize way beyond the data.

    I’ve done a lot of consulting, and one thing I’ve found (and there is good research support for this) is that managers/C-Suites are often out of touch with important aspects of their operations, and that it’s often the case that if you want to find out what’s working and not working, you hit the “shop” floor.

    I’ve used a process called “dehassling the workplace,” which involves having line staff identify barriers to doing their jobs (e.g., delivering quality service), then identifying what can be changed and what cannot, and integrating their “data” with business needs, the big picture, etc.

    It doesn’t work if you accept the input as complete or even completely accurate, but it’s an amazing process for improving quality, streamlining, AND employee engagement.

    THAT, sir, is how you do it: integrate multiple sources of input.

  35. As a professor, I would like to ask an even more basic question: What is the importance of service to the consumer? I like pumping my own gas because I can finish the transaction much more quickly. I also prefer shopping at a discount grocery store. I don’t need to think about selection because I’m offered only one product in each category and am virtually certain that I’m getting the best price from my experience with prior comparisons. The checkout line is the one key service point; my favorite discount grocer, Aldi’s, excels in this area with the fastest checkout clerks that I’ve ever encountered. I can get in and out in much less time than at a full service grocery store.

    Much of the time the services that this research is talking about may be ones that I don’t want or don’t need. The one service that I do care about is talking to a real live service agent who speaks American English and has the authority to solve my problem.

  36. Thanks to all for the discussion.

    It seems there’s general agreement that:

    1. Misleading research is in fact being produced, and sometimes it’s due to commissioned “studies” out to make a point.

    2. Even the best research has limitations. These should be disclosed, of course. But where?

    3. It’s difficult to spot problems with research or conclusions, especially since the full report is often not available for review.

    4. Everyone spins headlines and marketing copy to some extent. This shouldn’t be a cop-out for misleading messages, but hopefully we can agree that “Tastes great. Less filling” shouldn’t be taken as a reason to buy a particular brand of beer.

    On a personal note, over the years I’ve come to know quite a few research firms that do a quality job — in my opinion at least. (I invited a number of them to weigh in here.) I came to that perception after interviewing, reading reports, articles etc. I find some research firms really focus on a sound methodology and have deep expertise built often over 2-3 decades. But could I prove that every firm I trust has done perfect research? No.

    On the other hand, I don’t pay much attention to some analyst firms that seem to pump out research reports every month that (surprise!) prove their sponsor’s product is the key to industry leadership. I would hope that business managers would be smart enough to do the same, but I’m skeptical.

    So, how is a business manager expected to know whether the quality of the research / conclusions is any good?

  37. Bob asked me to weigh in on this lively conversation. As I think everybody in this debate agrees, there is no such thing as a flawless research study. Every study, even those in the academic arena, has to make compromises in design. I think we also all agree that a good researcher explains the methodology employed and describes any limitations and compromises they had to make.

    From my perspective, customer research is meant to help guide manager decision making, not make the decisions for the manager. And in my humble opinion we should be giving the people who buy and/or read this research more credit for knowing when and how to use the findings presented to them.

    That said, reading this dialogue it does seem that as a community of professionals there may be some benefit in getting together to put our best thinking forward on how to educate and inform the recipients of research information on what questions to consider when research findings are presented.

    Here's my stab at that list…

    (1) How big is the survey sample?
    (2) How representative is the sample of the population the study references? How was the sample sourced? What is the quality of the sample source? Is this a convenience sample or a true random sample?
    (3) Are the data weighted in any way? If so, how?
    (4) How old are the data? When was the survey fielded?
    (5) How were the data collected? What mode was used: online, telephone, paper, other?
    (6) What was the response rate?
    (7) Were respondents incentivized to participate?
    (8) Are differences referenced in the study statistically meaningful/significant? At what confidence level? (A small worked example follows this list.)
    (9) How well crafted are the survey questions that were asked? Are they phrased in a simple, clear, unambiguous way? Do they avoid double-barreled or biased phrasing?
    (10) If a construct (e.g., Customer Satisfaction, Customer Engagement, NPS, etc) is referenced in the work, how was this construct validated?
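
    Editor's sketch: to make a couple of these checklist items concrete, here is a small, hypothetical worked example (Python, standard library only) of point 1 (what a sample size implies about precision) and point 8 (whether a reported difference is significant at a stated confidence level). The percentages and sample sizes are invented for illustration.

        import math
        from statistics import NormalDist

        def margin_of_error(p, n, confidence=0.95):
            """Approximate margin of error for a proportion from a simple random sample."""
            z = NormalDist().inv_cdf(0.5 + confidence / 2)   # about 1.96 for 95% confidence
            return z * math.sqrt(p * (1 - p) / n)

        def two_proportion_z(p1, n1, p2, n2):
            """Two-sided z-test for a difference between two independent proportions."""
            pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
            se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            p_value = 2 * (1 - NormalDist().cdf(abs(z)))
            return z, p_value

        # Point 1: a reported "85% would pay more" based on n = 200 vs. n = 2,000.
        print(round(margin_of_error(0.85, 200), 3))    # roughly +/- 0.049
        print(round(margin_of_error(0.85, 2000), 3))   # roughly +/- 0.016

        # Point 8: is 62% vs. 58% a meaningful difference with 400 respondents per group?
        z, p = two_proportion_z(0.62, 400, 0.58, 400)
        print(round(z, 2), round(p, 3))                 # p is well above 0.05 here, so not significant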

    I am sure there are many other questions we’d advise our clients, readers, students and colleagues to consider when considering research findings in whatever form they find them. What else should we add to the list?

  38. Kate, this is a great list. Thanks for starting this off…

    I think another important question is: Who funded the research? Was it done independently or “commissioned” by a vendor with a vested interest in a certain result?

    Unfortunately, as David Mangen pointed out, it’s not that unusual in business, politics or wherever money is involved, for research to be influenced by the people paying for it. Executives should take conclusions from such studies “with a grain of salt” until they get answers to the other questions Kate proposed.

    Any other suggestions?

  39. There has been some excellent dialogue on key, and core, elements of customer research quality; and now discussions of real-world actionability are beginning to emerge. Bob Hayes points out that some of the newer measures, such as engagement and forgiveness, provide some useful guidance; however, they don’t necessarily correlate with actual behavior at any better, or more consistent, rate than satisfaction or NPS. Kate, rightly as well, identifies the need for validation and accountability, i.e., how well the constructs represent actual behavior.

    Once all the objective design i’s have been dotted, and t’s crossed, Kate’s point #10 – validation – is where the accountability rubber meets the road. This brings me to customer advocacy measurement, a concept and research framework which has been presented to CustomerThink readers for some time (see articles and blogs on this subject, below). Built principally on a foundation of brand favorability and evidence of, and volume of, informal, voluntary positive and negative communication (online and offline) on behalf of a brand, service or product, customer advocacy’s ability to correlate with actual customer behavior has been rigorously tested and proven through international panel research, and in multiple B2B and B2C business sectors. As noted, this has been reported several times in the CustomerThink portal:

    http://www.customerthink.com/article/marketing_case_customer_advocacy_measurement

    http://www.customerthink.com/article/customer_advocacy_and_the_branded_experience

    http://www.customerthink.com/article/corporate_reputation_and_advocacy_linkage

    In sum, the customer advocacy research framework has consistently shown that it is a contemporary, robust, and actionable method for understanding downstream customer behavior likelihood based on product or service experience. It doesn’t necessarily seek to replace any other legacy constructs already in active use – satisfaction, loyalty, engagement, recommendation, etc. – but, rather, extends their value and accountability.

    Michael Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe (www.marketprobe.com)

  40. Michael, you’ve written about advocacy for a few years. I like the concept and trust that you’ve developed a model that links to business results.

    But I can’t independently validate your model with academic or other independent research. So, how would a business manager validate that your model “works” and can be trusted?

    Part of the problem is that loyalty research is complicated. NPS has become popular in part because of its simplicity. Your approach may be more robust, but it’s obscure as to exactly how you come up with an advocacy score. You say the model works, but has this been validated in an open process elsewhere?

    This is the frustration I wrote about in Find the “Ultimate” Loyalty Metric to Grow Your Business nearly 4 years ago:

    The NPS camp (Bain, Reichheld and Satmetrix) has a simple metric with proprietary data they say proves it works. Academics and loyalty researchers often make their data public, at least for peer review, but keep their loyalty models locked up in their own black boxes.

    I respect the work you’re doing, but I’m also trying in this discussion to develop some specific steps that business managers can take to make good decisions about research. How would you advise a business leader to evaluate your claims about your advocacy model?

  41. Bob,

    I too trust Michael, and know him to be a man of integrity. So my comment is a general one, not one about his model specifically.

    The general problem with validation of models is that even most relatively sophisticated managers (in terms of their knowledge of research methodology) lack the capability to adequately assess the quality of models presented to them. Instead, they are forced to rely on trust and some intuitive sense that the model makes sense. The problem, as Dr. Keith Baggerly (one of the researchers who uncovered the recent problems with Duke’s cancer research) noted, is, “Our intuition is pretty darn poor.”

    The easiest way for managers to assess quality is to insist that the models used have undergone peer review in a scientific journal. That will never be the norm, however, as most models used would not be seen as contributing significantly enough to the marketing literature to warrant publication in a serious scientific journal even if they worked as promised.

    For those models not subjected to peer review, managers should insist upon two things: 1) that they be provided the data used in their own research, and 2) that all models presented to them be clearly explained. Any significant models presented to them should then be replicated (as closely as possible) by someone without connections to the research firm. As most research firms will not give full details regarding their proprietary models, the independent analysts should at least be able to assess whether or not the data appear to warrant the claims being attributed to these models. If the claims don’t seem supportable by an independent analysis of the data, then managers should treat the models as suspect.

  42. Bob –

    You make extremely important points. Not surprisingly, I have responses, and am glad to explain.

    First, the advocacy research framework is based principally on brand favorability and informal positive and negative communication. A review of many of the loyalty research approaches and models confirms what you describe: they are almost universally complicated, black-box techniques. Conceptually and practically, customer advocacy is straightforward and almost as simple as NPS. That said, the actual calculation of customer advocacy levels is proprietary to Market Probe.

    Next, stakeholder researchers are business people, too, and we wouldn’t ask a product, brand, or marketing manager to take downstream business outcomes on faith. The validation approach we frequently recommend is classic pre/post research, which has real client benefits over one-time, snapshot studies. Here are the basics:

    1. Pre, or Baseline: The research framework applied first provides the segmentation of customer advocacy levels (Advocate, Allegiant, Ambivalent, Alienated). The accompanying analysis generates a baseline for the prioritized advocacy behavioral impact of the rational (tangible, functional) and relationship (communication, service) performance elements. In other words, the multivariate approach applied tells us and the client exactly which performance elements can drive higher advocacy levels, and which, if not corrected, will drive higher alienation (negative communication and potential churn) levels. Both, obviously, are important for any business, from both monetary and cultural perspectives.

    Companies can then take action on the specific functional and relationship elements which have been identified, through analysis, as most impacting positive (advocacy) and negative (alienation) customer behavior. This might be staff training, communication approaches, support processes, product or service enhancement, etc., depending on what has come out of the research. The amount of time needed for these initiatives to take effect, of course, varies somewhat by industry.

    2. Post: Once the initiatives have been seeded/applied, a second round of validating research, augmented by actual marketplace performance results, i.e. lift level as seen in new or increased purchases, upsell/cross sell, new customer acquisition, etc., will determine a) if, and how much, advocacy level has increased and alienation level has declined and b) which performance elements can now be targeted for further increased advocacy and decreased alienation.
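
    Editor's sketch: a purely illustrative example (Python with scipy assumed available, not Market Probe's proprietary method) of the kind of pre/post comparison described above: checking whether the distribution across the four advocacy segments shifted between the baseline wave and the follow-up wave. The respondent counts are invented.

        from scipy.stats import chi2_contingency

        segments = ["Advocate", "Allegiant", "Ambivalent", "Alienated"]

        # Hypothetical respondent counts per segment in each wave.
        pre_counts  = [180, 260, 340, 220]   # baseline study
        post_counts = [240, 270, 310, 180]   # after the improvement initiatives

        for name, pre, post in zip(segments, pre_counts, post_counts):
            pre_share = pre / sum(pre_counts)
            post_share = post / sum(post_counts)
            print(f"{name:10s} {pre_share:6.1%} -> {post_share:6.1%}")

        # Did the overall segment distribution change more than chance alone would explain?
        chi2, p_value, dof, _ = chi2_contingency([pre_counts, post_counts])
        print("chi-square =", round(chi2, 1), " p =", round(p_value, 4))
        # A small p-value says the shift is unlikely to be sampling noise; it does not,
        # by itself, attribute the shift to the initiatives rather than other factors.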

    Further details are available in my new book, The Customer Advocate and The Customer Saboteur: http://asq.org/quality-press/display-item/index.html?item=H1410&xvl=76103196

    Happy to provide additional explanation, as needed.

    Michael Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe (www.marketprobe.com)

  43. It seems the problem is that loyalty researchers don’t want to disclose their models, and rightly so because it’s their competitive edge. But they don’t routinely disclose their data, either.

    That leaves business people with “trust me” as the answer? I think we can agree based on the discussion here that the market research industry is not above reproach for putting out bad research. Yet there is no real impetus to deal with this.

    Now, I don’t expect Colonel Sanders to reveal the recipe of its chicken seasoning, or Coke to disclose how they make their fizzy drinks either. And software vendors certainly don’t disclose their proprietary software code.

    But the difference is that there are 3rd parties like Consumer Reports, JD Powers, ACSI, Yelp, or any number of review/opinion sites that give consumers useful insight to make an informed decision when they buy goods and services.

    Do we need something like this for market research firms?

    Getting back to Robert Bacal’s initial point, is there any real effort on the part of market researchers to hold themselves accountable? Doesn’t appear to be. If not, then I guess the only answer is for a 3rd party to help. (Hey, maybe that’s a job for CustomerThink?!)

  44. Bob –

    Your points are well taken, but I’d offer some further explanation. As you and Tim point out (and as researchers like Jim Barnes and I have also reported), there are a number of customer loyalty research models around. Most have proprietary elements. Some models are simple, and some are complex. Some are newer and more reflective of current levers of customer brand, product, and service decision-making, and some have core concepts that go back twenty, thirty, and even forty years. Some models do a better job of correlating key insights and calculations with actual business outcomes, and some are less effective at it.

    In addition to their being proprietary in nature, the data these models generate for clients are largely confidential. So are the conclusions drawn from model segmentation and analysis. As bound by CASRO and ESOMAR rules, when custom studies are conducted for an organization, project findings cannot be released without the expressed authorization of the client. Clients, as well, typically see no benefit to them in releasing marketplace or other performance results of initiatives taken as a result of research conducted for them. Even when clients are co-presenting with their research partners at conferences, my experience is that these results are often carefully cloaked.

    Market Probe, like many other research companies, is a custom research and consulting organization. As such, we don’t publish research results for individual clients. In instances where we are conducting industry-based studies, and looking at performance of individual companies, we can, and do, release our findings, such as the global benchmark results we recently published based on over 100 customer advocacy studies: http://www.marketprobe.com/newsarticles/global-advocacy-benchmarks-062011-copyright-MarketProbe-2011.pdf

    Michael Lowenstein, Ph.D., CMC
    Executive Vice President
    Market Probe (www.marketprobe.com)

  45. Michael,

    I fully understand the desire to have confidentiality and/or intellectual property as a competitive edge in the marketplace, but for almost all of these models it is a false hope — if somebody *really* wants to find out. They may not be able to reproduce it exactly, but the odds are that they can come close. Why do I state this with such a high degree of confidence? Because frustrated clients of other companies have come to me and asked me to try and reverse engineer models so that I could tell them how the “black box” works. When asked, I am typically able to get results that correlate in the 0.95 to 0.98 range.
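
    Editor's sketch of what that kind of reverse engineering can look like in practice (hypothetical data and a hypothetical stand-in for the "black box"; not any specific vendor's model): fit a simple linear approximation to the vendor's scores using the same survey inputs, then check how closely the approximation tracks the original.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical survey inputs for 500 respondents (e.g., satisfaction, value, service ratings).
        X = rng.uniform(1, 10, size=(500, 3))

        # Stand-in for the vendor's undisclosed scoring model: a weighted, slightly
        # non-linear combination of the inputs plus noise.
        black_box_score = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 1.1 + 0.2 * X[:, 2] + rng.normal(0, 0.3, 500)

        # Approximate it with ordinary least squares on the observable inputs.
        design = np.column_stack([X, np.ones(len(X))])      # add an intercept column
        coef, *_ = np.linalg.lstsq(design, black_box_score, rcond=None)
        approx = design @ coef

        r = np.corrcoef(black_box_score, approx)[0, 1]
        print("Correlation between black-box scores and the simple approximation:", round(r, 3))
        # With data like these, a plain linear fit typically lands well above 0.9,
        # which is the spirit of the 0.95 to 0.98 range mentioned above.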

    Now, this presumes that there are personnel at the client with the intellectual moxie to comprehend the explanation of the model that they will get from me, and my experience is that client-side expertise has decreased a bit over the years as budgets have shrunk.

    However, regardless of the skill set of the client, the fact that the models can typically be reverse engineered without too much difficulty makes me ask: Why bother? Is not the attempt to enforce IP rights in fact somewhat indicative of customer hostility? For that reason, I no longer bother; I’d rather earn my clients’ respect with transparency, and trust that this will do a better job of getting repeat business than my futile attempts to wrap up my algorithms in a nice black box.

  46. David, Bob:

    Let’s assume that companies can and even should protect their proprietary models and processes. Let’s put aside the reverse engineering argument, which I think is thought provoking.

    My thinking on this is that the proprietary models and the evidence to support that the models “work” are quite different things. It’s easiest to use an example.

    I develop a customer behavior model and a data-gathering tool (let’s say some sort of survey) that I claim can predict a wide range of real-world customer behavior, both for customer demographic segments AND for individual customers.

    For example, if I can predict that a certain demographic will do something (return to a store) given that the store owner does x, y, and z, I have something special and valuable.

    I want to protect my model, data gathering tool, and the methods I used to generate the model and tool that make it so powerful, since I can make a fortune with it.

    In other words, I want to make it hard for competitors to do the same thing (OK, maybe a pipedream).

    So, I’m NOT going to disclose the inner workings, the “how” of what my company is doing.

    But it’s also in my business interests to prove how well my “black box” works to do what I say it does. I WANT that information out there, and I want the methods used to “validate” my black box to be as sound as possible. I want to use validation methods that are transparent and commonly used by everybody in the field, because when companies believe me, and there is copious open evidence on validation, I’m going to get a lot of work.

    I’m not going to use proprietary “models” to validate what I’m doing, or prove its value. It all has to be open, subject to criticism, because that’s going to make me rich.

    So, if we think of the “black box”, and we think of the validation process as being two distinct things, I don’t see any issue about protecting IP.
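
    Editor's sketch of that separation (invented data and an invented score; not any vendor's actual process): treat the model purely as a black box that emits a score, and validate it with open, standard checks against observed behavior.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000

        # Black-box output: each customer's predicted probability of returning to the store.
        predicted = rng.uniform(0, 1, n)

        # Observed behavior (did the customer actually return?), simulated here so that it
        # loosely tracks the predictions; in practice it would come from transaction records.
        returned = rng.uniform(0, 1, n) < (0.2 + 0.6 * predicted)

        # Open validation: compare observed return rates for high- vs. low-scored customers.
        high = predicted >= 0.5
        print("Return rate, high-scored customers:", round(returned[high].mean(), 2))
        print("Return rate, low-scored customers: ", round(returned[~high].mean(), 2))

        # And the correlation between the score and the observed outcome.
        print("Correlation(score, outcome):", round(np.corrcoef(predicted, returned)[0, 1], 2))
        # None of this requires disclosing how the score is computed: only the scores,
        # the outcomes, and a validation method anyone can reproduce.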

    Or, am I, as an industry outsider, missing something?

  47. There are tons of reasons why “crowdsourcing” is problematic in general, and some big reasons why we don’t want to use it within this context, but I’d like to leave that to a separate thread/post.

    So, what are the alternatives for a) increasing the credibility of companies that are doing great customer service research, and b) protecting research “consumers” from the poor stuff?

    If I was in the industry, AND I was following proper research procedures, etc., here are the things I would be interested in.

    I WOULD absolutely hire a third-party “research auditor” who would be available to answer questions from potential customers online, and who would approve research both in the design phase and in the final report phase. I’d trumpet that loudly on my sales sites, and in all communication. I’d differentiate my company on the basis of being SO committed to quality research that I’m paying an arm’s-length auditor.

    I would work towards creating a “customer/consumer research” code that provides both ethical and technical guidance, PLUS can be adhered to voluntarily by organizations that would like to earn a designation for doing so.

    Obviously this would work best if sponsored by an industry-wide professional association (I presume some exist?).

    My feeling is that accountability for companies doing research is a potential boost for business.

    Self-regulation is key here. While I doubt that the U.S. government would actually try to regulate companies doing consumer research, I can imagine lawsuits tied to consumer protection laws. Some of the “research” I see is so bad, and so misrepresented, that negligence or false advertising claims should be relatively easy to prove.

    I do wonder if eventually, other countries might regulate the industry, being far less “worried” about such things.
