A recent TED Talk by Shyam Sankar (also the subject of this recent blog post by Leslie Pagel) nicely argues that man-machine cooperation is the real story of technological development. In a 2010 article, chess grandmaster Garry Kasparov, who famously played IBM’s Deep Blue in the mid-1990s, relays a wonderful anecdote illustrating the power of this symbiosis between humans and computers. In 2005, a freestyle chess tournament was held in which humans and computers could work together or play separately. By that point, a chess program running on a standard laptop could routinely beat many grandmasters. However, this freestyle tournament produced two interesting and relevant results:
- a strong human player with a weak laptop soundly defeated even the best stand-alone chess computer, and
- the overall winner of the tournament was not a grandmaster with a powerful computer but a pair of amateur chess players using three relatively weak laptops.
According to Kasparov’s analysis, the winning team’s edge was a superior interface between their humans and computers, one that effectively counteracted the superior chess knowledge and computational power of their opponents.
From this story both Kasparov and Sankar conclude that the decisive factor in the analytic capability of any human-computer combination is the friction between the human and the computer. By designing a better interface that reduces that friction, you increase the analytic capability derived from the same human and the same computer at an ever-increasing (convex) rate.
While I completely agree that designing friction out of the interface is a decisive factor, I think there is one other critical element both Kasparov and Sankar overlook: the rules within which you conduct the analysis. Take the chess example again. The rules of freestyle chess require players to make moves much faster than in regular chess, so the ability to have a computer crunch the data and hand the human player a short list of candidate moves is a critical advantage. Given more time between moves, the experience, knowledge, and creativity of a chess grandmaster begin to override the speed, computational power, and efficiency of a computer chess program.
All these factors apply directly to customer predictive analytics and customer-focused decision-making. First, useful predictive analytics requires both a skilled data scientist and a powerful, but usable, analytic workbench. However, it is not uncommon for an analyst to spend only 10%–40% of their project time actually running analyses and extracting usable insights; the rest is lost to data wrangling and tooling overhead. We still need to work on reducing the friction between the analyst and the computer if we are to realize the full benefit of predictive analytics.
Data scientists also need to fully understand the context they are working in and the written and unwritten rules that apply to it. For instance, a predictive analytics project within a strategic accounts group, with a small set of accounts and deeply embedded account managers, is not going to produce as much return as a similar exercise in an outbound call center, where fairly inexperienced sales reps are each responsible for hundreds of accounts.
The future of business will require more reliance on powerful computer-based analytics, but the best uses of those computers will come by carefully designing the human interface into the system and understanding which contexts will yield the greatest returns for our efforts.