Big Data is being touted as the next big thing for businesses. The benefits of Big Data are apparent in many areas, from search results and recommendation engines to customer experience management. By analyzing massive amounts of quickly expanding, diverse data, businesses can gain the insights they need to beat their competitors. A major roadblock to discovering these insights, however, is the lack of people with the skills to analyze the data. For example, in a late 2010 study, researchers from MIT Sloan Management Review and IBM asked 3,000 executives, managers, and analysts how they obtain value from their massive amounts of data. These respondents said that the number one obstacle to the adoption of analytics in their organizations was a lack of understanding of how to use analytics to improve the business. Similarly, McKinsey & Company estimated that the US faces a huge shortage of people who have the skills to understand and make decisions based on the analysis of Big Data. There are simply not enough people with Big Data analysis skills; this unmet need for data scientists undercuts the value businesses can extract from Big Data.
To combat the shortage of data scientists, software vendors have touted their solutions as a way of creating an army of data scientists, ready to analyze data and interpret results. For the remainder of this post, I would like to talk about why data scientists need to understand and appreciate the idea of sampling error. I will begin by discussing the difference between samples and populations.
Samples and Populations
Statistically speaking, a population is a “set of entities concerning which statistical inferences are to be drawn.” These inferences are typically based on examining a subset of the population: a sample of observations drawn from that population. For example, if you are interested in understanding the satisfaction of your entire customer base, you measure the satisfaction of only a random sample drawn from that population of customers.
Researchers and scientists rarely, if ever, work with entire populations when studying their phenomenon of interest. Instead, they conduct studies using samples to make generalizations about the population. Medical researchers, for example, rely on a sample of patients to study the effect of a treatment on a disease. Polling organizations make nationwide predictions about election outcomes based on the responses of only about 1,000 respondents. Business professionals develop and implement company-wide improvement programs based on the survey results of only a sample of their entire customer base.
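To make the distinction concrete, here is a minimal Python sketch using simulated data (a real customer base is never fully observed, so the population here is hypothetical: 50,000 made-up satisfaction ratings). It treats that simulated set as the population and surveys a random sample of 300 of its members:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: satisfaction ratings (0-10) for 50,000 customers.
# In practice you never observe the full population; it is simulated here
# only so we can compare the sample estimate against the "true" value.
population = rng.normal(loc=7.2, scale=1.5, size=50_000).clip(0, 10)

# Draw a random sample of 300 customers, as a survey might.
sample = rng.choice(population, size=300, replace=False)

print(f"Population mean (normally unknown): {population.mean():.2f}")
print(f"Sample mean (what the survey gives you): {sample.mean():.2f}")
```

The sample mean will be close to, but almost never exactly equal to, the population mean; that gap is the subject of the next section.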
Sampling Error and the Need for Inferential Statistics
Researchers draw conclusions about the population based on their sample-based findings. Sampling error reflects the difference between the sample and the population from which it was drawn (for a fuller discussion of sampling error, see Measuring Customer Satisfaction and Loyalty). Because a sample is only a subset of the population, any estimate it provides necessarily contains some error; when you use samples to estimate population values, you need to account for that error.
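A quick simulation shows how large sampling error can be, again assuming the hypothetical population of satisfaction ratings from the sketch above. Drawing many independent samples at several sample sizes reveals how much the sample means scatter around the true population mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical population of satisfaction ratings (same setup as above).
population = rng.normal(loc=7.2, scale=1.5, size=50_000).clip(0, 10)

# Draw 1,000 independent samples at each size; the scatter of the
# sample means around the population mean IS sampling error.
for n in (5, 30, 300):
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(1_000)]
    print(f"n={n:>3}: sample means span {min(means):.2f} to {max(means):.2f} "
          f"(standard error ~ {np.std(means):.2f})")
```

Note how samples of five produce wildly varying estimates even though nothing about the population has changed; that point matters in the tampering example below.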
Inferential statistics is a set of procedures applied to a sample of data in order to draw conclusions about the population. Because your sample-based findings will likely differ from what you would find using the entire population, applying statistical rigor to your sample helps you determine whether what you see in the sample is what you would see in the population. The generalizations you make about the population, then, need to be tempered using inferential statistics (e.g., regression analysis, analysis of variance).
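As a sketch of what such a procedure buys you (with hypothetical data again), the example below compares satisfaction scores from two quarters that are, by construction, drawn from the same unchanged population. The sample means will differ, and a two-sample t-test asks whether that difference is larger than sampling error alone would produce:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical satisfaction scores from two quarters, drawn from the
# same underlying population (i.e., no real change occurred).
q1 = rng.normal(loc=7.2, scale=1.5, size=200)
q2 = rng.normal(loc=7.2, scale=1.5, size=200)

# The sample means will almost certainly differ...
print(f"Q1 mean: {q1.mean():.2f}, Q2 mean: {q2.mean():.2f}")

# ...but the t-test asks whether the gap exceeds what sampling error
# alone would produce. A large p-value means the observed gap is
# consistent with noise.
t_stat, p_value = stats.ttest_ind(q1, q2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```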
When Decision-Making Goes Wrong: Tampering with Your Business Processes
Used in quality improvement circles, the term “tampering” refers to adjusting a business process on the basis of results that are expected simply due to random error. Putting data interpretation in the hands of people who do not appreciate the notion of sampling error can result in tampering with business processes. As a real example of tampering, an employee once showed me a customer satisfaction trend line covering several quarters and was implementing operational changes because the trend line showed a drop in customer satisfaction in the current quarter. When pressed, the employee told me that each data point on the graph was based on a sample size of five (5!). Applying inferential statistics to the data, I found that the differences across the quarters reflected not real changes but sampling error. Because of the small sample size, the observed differences across time were essentially noise, and making any changes to the business processes based on these results was simply not warranted.
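To see how easily this can happen, here is a hypothetical reconstruction of that scenario (the figures are simulated, not the original data): four quarterly samples of five respondents each, all drawn from the same stable population, analyzed with a one-way ANOVA:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical reconstruction: four quarters of satisfaction scores,
# five respondents each, all drawn from the SAME stable population.
quarters = [rng.normal(loc=7.2, scale=1.5, size=5) for _ in range(4)]

for i, q in enumerate(quarters, start=1):
    print(f"Q{i} mean (n=5): {q.mean():.2f}")

# The quarterly means bounce around, so a plotted trend line may appear
# to rise or fall; a one-way ANOVA typically finds no evidence of real
# differences at this sample size.
f_stat, p_value = stats.f_oneway(*quarters)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

Run this with different seeds and the quarterly means will swing by a point or more while the ANOVA keeps reporting a large p-value; a trend line drawn through such means traces pure noise.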
Big Data Tools Will Not Create Data Scientists
There has been much talk about how new Big Data software solutions will create an army of data scientists who can help companies uncover insights in their data. I view these software solutions primarily as a way to help people visualize their data. Visualization is important, but to be of value in helping you make the right decisions for your business, you need to know whether the findings you observe in your data are meaningful and real. You can only accomplish this seemingly magical feat of data interpretation by applying inferential statistics to your data.
Simply observing differences between groups is not enough. You need to determine whether the observed differences you see in customer satisfaction reflect what is happening in the population. Inferential statistics lets you apply mathematical rigor to your data that goes well beyond merely looking at the data and applying the inter-ocular test (aka eyeballing the data). To be of value, data scientists in the area of customer experience management need a solid foundation in inferential statistics and an understanding of sampling error. Only then can data scientists help their company distinguish signal (e.g., real differences) from noise (e.g., random error).