It’s hard to pick up a newspaper these days without reading about companies cutting costs. Part of this story is that companies are shifting their spending toward a new flavor of business intelligence technology that predicts the buying behavior of each customer or prospect – predictive analytics.
Predictively modeling customer response provides something completely different from standard business reporting and sales forecasting: actionable predictions for each customer. These per-customer predictions are key to allocating marketing and sales resources. By predicting which customers will respond to which offers, you can target each customer more effectively.
As your company prepares to deploy a predictive model, there are best practices that avert the risk that the model won’t perform up to par. Here are three guidelines to minimize that risk.
1. Don’t evaluate the predictive model over the same data you used to create it.
When evaluating a predictive model, never test it over the same data you used to produce it, known as the training data. The data used for evaluation must be held-aside data, called test data, which provides an unbiased, realistic view of how good the model truly is. If the model isn’t doing well on that data, revisit model generation, change the data, or change the modeling method until you get a better model.
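To make the split concrete, here is a minimal sketch in Python using scikit-learn. The dataset is synthetic, standing in for your customer features and observed responses; in practice you would load your own data.

```python
# Minimal sketch: never score a model on its own training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for customer features X and responses y.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Hold aside 30% as test data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The training score is optimistically biased; only the test score gives
# a realistic view of how good the model truly is.
print("train AUC:", roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]))
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The gap between the training score and the test score is the overfitting you would never see if you evaluated over the training data alone.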
2. Only deploy your predictive model incrementally.
Once you have a predictive model that looks good and is ready for deployment, start by deploying it in a “small dose”. Keep the current, existing method of decision-making in place, and simultaneously – perhaps 5% of the time – employ the predictive model. This way, the old and the new stand in contrast, so you can see whether the value of the model is indeed proven, for example, whether profits or response rates have increased.
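As an illustration of what this “small dose” routing might look like in code, here is a hedged sketch; `current_method_offer` and `model_based_offer` are hypothetical placeholders for the existing decision rule and the model-driven decision, not a prescribed API.

```python
import random

MODEL_SHARE = 0.05  # employ the predictive model for roughly 5% of decisions


def current_method_offer(customer):
    """Placeholder for the existing, business-as-usual decision rule."""
    return "standard_offer"


def model_based_offer(customer):
    """Placeholder for a decision driven by the predictive model's score."""
    return "targeted_offer"


def choose_offer(customer):
    """Route a small random share of customers to the model, and tag each
    decision so old and new can later be compared side by side."""
    if random.random() < MODEL_SHARE:
        return model_based_offer(customer), "model"
    return current_method_offer(customer), "baseline"


offer, arm = choose_offer({"id": 123})
print(offer, arm)
```

Logging which arm made each decision is what later lets you compare profits or response rates between the old method and the model.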
3. Always maintain and test against a control set.
Finally, in a similar vein to (2) above, keep this kind of A/B testing in place moving forward, pitting “use the model” against “don’t use the model”. Ideally, you keep that going indefinitely, so that you always have a small control set for which things continue the old way, or, in any case, for which decisions are automated in a way that does not require a predictive model. This serves as a baseline against which the performance of the predictive model is constantly monitored. That way, you are alerted when the predictive model’s performance is degrading, at which point it’s time to produce an updated model over more up-to-date data.
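One minimal way to monitor the model arm against the control arm might look like the following sketch. The log record format, the toy outcome data, and the `MIN_ACCEPTABLE_LIFT` threshold are all illustrative assumptions, not a prescribed setup.

```python
MIN_ACCEPTABLE_LIFT = 0.01  # hypothetical threshold, in response-rate points


def model_lift(outcomes):
    """Response-rate lift of the model arm over the control arm.

    `outcomes` is a list of {"arm": "model"|"baseline", "responded": bool}
    records (a hypothetical log format).
    """
    rates = {}
    for arm in ("model", "baseline"):
        group = [o["responded"] for o in outcomes if o["arm"] == arm]
        rates[arm] = sum(group) / len(group)
    return rates["model"] - rates["baseline"]


# Toy data standing in for one period's logged decisions and responses:
# a 6% response rate in the model arm vs. 3% in the control arm.
recent_outcomes = (
    [{"arm": "model", "responded": r} for r in [True] * 6 + [False] * 94]
    + [{"arm": "baseline", "responded": r} for r in [True] * 3 + [False] * 97]
)

lift = model_lift(recent_outcomes)
if lift < MIN_ACCEPTABLE_LIFT:
    print("Alert: model lift has degraded; retrain on fresh data.")
else:
    print(f"Model lift over control: {lift:.3f}")
```

When the lift drifts toward zero over successive periods, that is the signal to produce an updated model over more up-to-date data.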
In sum, by following these best practices, your company can benefit from the accurate targeting of predictive analytics while minimizing risk.
For further predictive analytics reading, case studies, training options and other resources, see the Predictive Analytics Guide.
Great advice as usual, Eric. All I would add is that understanding the decision you hope to influence, and that decision’s impact on your objectives and KPIs, is essential if you are to judge the effectiveness of a model.
Models are not good or bad based just on technical issues; the business value of the model must be real too.
JT
James Taylor
JT on EDM blog
My ebizQ blog
Author of Smart (Enough) Systems
Sound advice, Eric.
One challenge we currently face is the recession. There is considerable evidence that customer behaviours have already changed. Yesterday’s high-growth customers may be today’s reduced-growth customers and tomorrow’s defectors. And if the American Economics Association is right and the recession only bottoms out in two years, customer behaviour will likely change quite a bit more in the near future.
The implications for modelling are obvious. If you developed models using customer data sets more than a few months old, i.e. before the recession started to change customer behaviour, then they may well be obsolete already. Even if the models are only a few months old, they should still be retested against a more recent customer data set to make sure they are still relevant.
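For instance, a quick relevance check might rescore the existing model on a slice of post-onset customer data and compare against the performance measured at build time. A rough sketch, where `model`, `recent_X`, and `recent_y` are illustrative stand-ins for a previously trained scikit-learn-style model and recent data:

```python
from sklearn.metrics import roc_auc_score


def still_relevant(model, recent_X, recent_y, original_auc, tolerance=0.05):
    """Rescore an existing model on customers observed after behaviour
    changed; a marked drop from the AUC measured at build time suggests
    the model is obsolete and should be rebuilt on fresh data."""
    recent_auc = roc_auc_score(recent_y, model.predict_proba(recent_X)[:, 1])
    return recent_auc >= original_auc - tolerance
```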
I am hoping that the recession triggers a renewed interest in understanding customers’ needs better too. Statistical models are always limited by the data they don’t include. When the models only explain 20-30% of the variability in the data, you know they are only part of a much bigger picture. A better knowledge of customers’ needs, even if not directly factored into the models, would help CRMers create significantly better offers.
Graham Hill
Customer-driven Innovator
Follow me on Twitter