If you’ve been alive and reading anything published in the last year, you’ve probably seen two terms – Big Data and The Cloud (a great name for an indie electronic synth-pop band, btw). I have to admit that every time I hear the term “Big Data” I picture a server room with network storage devices containing petabytes of data, and sometimes it looks like that secure computer room from Mission Impossible. As an aside, whenever I hear “The Cloud” I picture a brilliant blue sky with a single white fluffy cloud… containing network storage devices (wireless, of course) filled with petabytes of data.
Here’s the thing: Big Data isn’t always big in volume. According to concepts originally posited back in 2001 and a definition in a recent Gartner report, “Big Data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization.” The “and/or” is critical for me. In my role as a customer insight analyst focused on bringing the customer experience into business strategy, I rarely deal with data larger than a few hundred megabytes. The data aren’t of particularly high velocity, either.
Where I see the primary intersection between Big Data and customer experience analysis is in the variety of data we use. We routinely combine data from 4, 5, or even 20 different sources, representing scores of different data categories, to understand the impact of customer experiences and how to effectively improve them. The second part of the Gartner definition, about the data requiring new forms of processing, is also true in my experience. This combination of multiple data sources and streams has required that we think about and analyze data differently than we did 5 years ago.
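To make the “variety” point concrete, here is a minimal sketch of what combining several customer-level sources can look like in practice. The data, field names, and sources here (survey scores, open support tickets, purchase totals) are entirely made up for illustration; the real point is that each source covers a different, partially overlapping set of customers, so the join has to tolerate gaps.

```python
# Hypothetical customer-level sources, each keyed by customer ID.
surveys = {"c1": 9, "c2": 4}              # survey satisfaction scores
tickets = {"c1": 0, "c2": 3, "c3": 1}     # open support tickets
spend   = {"c1": 120.0, "c3": 45.5}       # purchase totals

def merge_sources(*sources, names):
    """Full outer join of dicts keyed by customer ID.

    Customers missing from a source get None for that field,
    so downstream analysis can see (and handle) the gaps.
    """
    all_ids = set().union(*sources)  # union of all customer IDs
    return {
        cid: {name: src.get(cid) for name, src in zip(names, sources)}
        for cid in sorted(all_ids)
    }

combined = merge_sources(surveys, tickets, spend,
                         names=["survey", "tickets", "spend"])
# combined["c2"] has a survey score and tickets, but no spend record.
```

In a real pipeline the same idea scales up with a dataframe library and outer joins, but the shape of the problem is the same: one row per customer, one column per source, and explicit handling of the customers each source doesn’t know about.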
So, before you think Big Data only affects those who have terabytes or petabytes of information at their fingertips, think again. Big Data affects most of us in the business analytics world, and we have to adapt to it in new and inventive ways.