Pop quiz time!
Suppose a company measures its customer satisfaction using a survey. In May, 80% of the customers in the survey said they were “Very Satisfied.” In June, 90% said they were “Very Satisfied.” The margin of error for each month’s survey is 5 percentage points. Which of the following statements is true:
- If the current trend continues, in August 110% of customers will be Very Satisfied.
- Something changed from May to June to improve customer satisfaction.
- More customers were Very Satisfied in June than in May.
Answer: We can’t say with certainty that any of the statements is true.
The first statement can’t be true, of course, since outside of sports metaphors you don’t ever get more than 100% of anything. And the second statement seems like it might be true, but we don’t have enough information to know whether the survey is being manipulated.
But what about the third statement?
Since the survey score changed by more than the margin of error, it would seem that the third statement should be true. But that’s not what the margin of error is telling you.
As it’s conventionally defined for survey research, the margin of error means that if you repeated the exact same survey a whole bunch of times but with a different random sample each time, there’s an approximately 95% chance that the difference between the results of the original survey and the average of all the other surveys would be less than the margin of error.
That’s a fairly wordy description, but what it boils down to is that the margin of error is an estimate of how wrong the survey might be solely because you used a random sample.
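The post doesn’t spell out how the number is computed, but for a simple random sample the conventional 95% margin of error for a proportion is 1.96 standard errors. Here’s a quick sketch; the sample size of 250 is my own illustrative number, chosen because it happens to yield roughly a 5-point margin:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Conventional 95% margin of error for a proportion
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A survey of ~250 respondents with an 80% "Very Satisfied" rate
# gives roughly a 5-point margin of error:
print(round(100 * margin_of_error(0.80, 250), 1))  # -> 5.0
```

Note that the margin shrinks only with the square root of the sample size, which is why surveys rarely chase tiny margins by adding respondents.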
But you need to keep in mind two important things about the margin of error: First, it’s only an estimate. There is a probability (about 5%) that the survey is wrong by more than the margin of error.
Second, the margin of error only accounts for the error caused by random sampling. The survey can be wrong for other reasons, such as a biased sample, poorly designed questions, active survey manipulation, and many, many others.
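The first caveat, that about 5% of the time the sample estimate misses by more than the margin of error, can be seen in a quick simulation. This is a sketch under assumed numbers (a true “Very Satisfied” rate of 80%, a sample of 250, a 5-point margin), not anything from the surveys above:

```python
import random

def fraction_outside_moe(true_p=0.80, n=250, moe=0.05, trials=10_000, seed=1):
    """Simulate many repeated surveys and count how often the sample
    estimate misses the true rate by more than the margin of error."""
    random.seed(seed)
    misses = 0
    for _ in range(trials):
        # one simulated survey of n respondents
        sample_p = sum(random.random() < true_p for _ in range(n)) / n
        if abs(sample_p - true_p) > moe:
            misses += 1
    return misses / trials

# Typically prints a value near 0.05: about 5% of surveys miss by more
# than the margin of error purely because of random sampling.
print(fraction_outside_moe())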
Margin of Error Mistakes
I see two very common mistakes when people try to interpret the Margin of Error in a survey.
First, many people forget that the Margin of Error is only an estimate and doesn’t represent some magical threshold beyond which the survey is accurate and precise. I’ve had clients ask me to calculate the Margin of Error to two decimal places, as though it really mattered whether it was 4.97 points or 5.02 points.

I’ve actually stopped talking in terms of whether something is more or less than the margin of error. Instead I use phrases like “probably noise” for changes much smaller than the margin of error, “suggestive” for changes close to it, and “probably real” for changes bigger than the margin of error that I have no other reason to disbelieve. This intentionally vague terminology is a lot more faithful to what the data is saying than the usual binary statements about whether something is statistically significant or not.
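Those three verbal buckets could be sketched as a tiny helper. The cutoffs below are my own illustrative guesses, not a standard rule:

```python
def describe_change(change_pts, moe_pts):
    """Map a survey-score change to the vague-on-purpose labels above.
    The 0.5x lower cutoff is an illustrative assumption, not a standard."""
    if abs(change_pts) < 0.5 * moe_pts:
        return "probably noise"
    elif abs(change_pts) <= moe_pts:
        return "suggestive"
    else:
        return "probably real"

# The May-to-June jump: 10 points against a 5-point margin of error.
print(describe_change(10, 5))  # -> probably real (absent reasons to disbelieve it)
```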
Second, many people forget that there are lots of things that can change survey scores other than what the survey was intended to measure, and the Margin of Error doesn’t provide any insight into what else might be going on. Intentional survey manipulation is the one we always worry about (for good reason: it’s common and sometimes hard to detect), but many other things can push survey scores one way or another.
It’s important to keep in mind what the Margin of Error does and does not tell you. Do not assume that just because you have a small margin of error, the survey is automatically giving accurate results.