Climbing the Data Analytics Ladder for Customer Success



The practice of Customer Success is taking off across the SaaS industry and beyond. It’s become an executive priority. It’s baked into the culture at winning companies. It’s even a job title you are seeing everywhere.

Customer Success teams are now more data-savvy than ever. Talk to a Customer Success Director, and she will tell you that having a timely and integrated view of her customers’ usage, support, relationship, and subscription data is a requirement for doing her job. She’ll also tell you she wants her team to act faster and more decisively on early warning signs of customer churn. And she may let on that her team struggles with information overload, that uncovering customer health signals can feel like searching for a needle in a haystack.

So what are these meaningful customer health insights? How does our Customer Success Director discover them, and why should she put trust in them? That’s where the world of data analytics comes into play, and why it’s so important to have data smarts (or access to them) on the Customer Success team.

In this post and the next, I’ll explore what kinds of data analytics Customer Success teams are putting into practice today and where the biggest challenges lie, and I’ll introduce the 5-Step Data Analytics Ladder for Customer Success.

Data Science — Hype or Hope?

Anyone paying attention to technology hears important-sounding terms like data science, data mining, big data, or predictive analytics thrown around all the time. The general concept is that you want not only to describe and visualize your customer data, but also to extract useful knowledge from it. And for folks in Customer Success roles, the holy grail is being able to accurately predict which of your customers are likely to churn and to identify the most effective actions you can take to keep them.

Sound too good to be true?

Well, often it is. The hype around data science techniques suggests that, with the right smarts or software, you could just point a firehose of customer data at a black-box predictive model, and it will magically spit out a list of customers who are going to cancel. Not surprisingly, these kinds of claims raise some healthy skepticism among Customer Success practitioners.

But let’s not knock down that straw man, and instead let’s look at how data analytics is being used in practice today by Customer Success teams, and how they are taking steps up the analytics ladder towards the ultimate goal of accurate churn prediction and effective mitigation.

How Are Customer Success Teams Applying Analytics Today?

All Customer Success teams are performing some kind of data analysis today, whether by doing it themselves in spreadsheets, collaborating with analysts at their company, or using specialized software applications. The level of analytical depth and sophistication varies, often based simply on their number of customers, overall company maturity, and access to customer data sources.

We can break these activities into what I call the 5-Step Data Analytics Ladder for Customer Success.

Step 1: Summarization

This is the most basic way to understand your customers based on the data you already have on hand, by quantitatively or visually describing their main characteristics.

It can be as simple as running a report that shows all your active Accounts along with their Tier and Last Login Date, or creating a dashboard that shows each Customer Success Manager’s amount of Monthly Recurring Revenue (MRR) up for renewal.

Data summarization is a good starting point for any deeper analysis of your customers, and it enables spot checking that can be helpful in uncovering errors or operational issues.

But if you stop here, you are not likely to get many epiphanies that supercharge your Customer Success team’s productivity and results. You are missing rules for how to group customers so you can treat them differently based on their situation, and failing to recognize connections between customer activities and outcomes (like churn or renewal).

Step 2: Segmentation

Segments are subsets of customers that share one or more meaningful characteristics, such as their value to your company or their usage behaviors. They serve as a basis for Customer Success programs that treat different groups of customers distinctly, or as triggers for risk-based alerts to CSMs. A segment is not a permanent list of customers — customers can jump between segments when their characteristics change.

Customer segmentation becomes more powerful when you can combine multiple kinds of customer data, such as application usage, subscription information, customer support, demographics, etc. If you use a Customer Success Management app to create and save a list of “High Risk Enterprise Customers” that (1) are paying at least $2,500 MRR, (2) have completed implementation, but (3) have used your application less than four times in the last month, you are performing customer segmentation.
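Under the hood, a segment rule like this is just a boolean filter over customer attributes. Here is a minimal sketch of the “High Risk Enterprise Customers” rule above; the field names and customer records are invented for illustration:

```python
# Hypothetical customer records -- field names and data are made up for this sketch.
customers = [
    {"name": "Acme",    "mrr": 3000, "implemented": True,  "logins_last_month": 2},
    {"name": "Globex",  "mrr": 5000, "implemented": True,  "logins_last_month": 12},
    {"name": "Initech", "mrr": 1200, "implemented": False, "logins_last_month": 1},
]

def is_high_risk_enterprise(c):
    """Apply the three segment rules: (1) MRR >= $2,500, (2) implementation
    complete, (3) fewer than four logins in the last month."""
    return c["mrr"] >= 2500 and c["implemented"] and c["logins_last_month"] < 4

high_risk = [c["name"] for c in customers if is_high_risk_enterprise(c)]
print(high_risk)  # ['Acme']
```

Because the rule is re-evaluated against current data, customers move in and out of the segment as their attributes change, which is exactly the dynamic-membership behavior described above.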

In a formal sense, segments would be defined as mutually exclusive and collectively exhaustive — that is, every customer is a member of one and only one segment. But in practice, Customer Success teams often define overlapping groups of customers for complementary programs and outreach. For instance, a customer could be in an “onboarding” group that gets a series of predefined communications, and also a “low engagement risk” group that gets an offer of special 1:1 training.

Another flavor of segmentation that Customer Success teams utilize is a cohort analysis, for instance to see if customers who started after your big new product release or revamped onboarding program are performing better than their peers who started beforehand.
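A cohort comparison like that boils down to splitting customers by start date and comparing an outcome metric across the two groups. A toy sketch, with an assumed release date and invented retention outcomes:

```python
from datetime import date

# Assumed launch date of the "big new product release" -- illustrative only.
RELEASE_DATE = date(2014, 6, 1)

# Invented customers with start dates and 90-day retention outcomes.
customers = [
    {"start": date(2014, 4, 10), "retained_90d": False},
    {"start": date(2014, 5, 2),  "retained_90d": True},
    {"start": date(2014, 6, 15), "retained_90d": True},
    {"start": date(2014, 7, 1),  "retained_90d": True},
]

def retention_rate(cohort):
    """Share of a cohort still retained after 90 days."""
    return sum(c["retained_90d"] for c in cohort) / len(cohort)

before = [c for c in customers if c["start"] < RELEASE_DATE]
after = [c for c in customers if c["start"] >= RELEASE_DATE]
print(retention_rate(before), retention_rate(after))  # 0.5 1.0
```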

One limitation of this kind of analysis is that customer segmentation rules are often arbitrary in practice. For the “High Risk Enterprise Customers” example above, why is the segment defined as “less than 4” logins last month, and not 3 or 10? Where is the meaningful distinction in logins, and for that matter, why use logins at all instead of some other usage metric?

Most Customer Success teams rely on their own judgment or their collective wisdom to define these segments. A perfectly fine starting point – nobody knows your customers better than you. But as we move up the ladder, we’ll see some techniques for uncovering the “right” segment criteria in your customer data.

Step 3: Descriptive Insights

This is where things start to get interesting. Descriptive insights expose useful customer knowledge that has been compiled from your underlying raw customer data. (Think Klout score.) These insights don’t predict anything per se, but they offer real meaning that’s understandable at a glance. They do, however, tend to carry a higher degree of difficulty in terms of data collection, analysis, and calculation.

A simple example is a visual badge that shows if a customer has “Good, Normal, or Bad” performance for a particular metric (e.g. Seat License Utilization) relative to its segment or relative to its own history.
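One plausible rule for such a badge (an assumption on my part, not a standard) is to compare a customer’s metric against its segment peers using mean plus or minus one standard deviation:

```python
import statistics

def badge(value, peer_values, band=1.0):
    """Classify a metric as Good/Normal/Bad relative to segment peers.
    Thresholds are mean +/- `band` standard deviations -- an illustrative rule,
    not the only way to draw the lines."""
    mean = statistics.mean(peer_values)
    spread = statistics.pstdev(peer_values)
    if value > mean + band * spread:
        return "Good"
    if value < mean - band * spread:
        return "Bad"
    return "Normal"

# Invented Seat License Utilization figures for one segment:
seat_utilization = [0.55, 0.60, 0.62, 0.58, 0.20, 0.95]
print(badge(0.95, seat_utilization))  # Good
print(badge(0.20, seat_utilization))  # Bad
```

The same function works for the “relative to its own history” variant: pass the customer’s past values as `peer_values`.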

Another example is Frontleaf’s Usage Intensity metric, which answers the question of which companies and people are your heaviest product users. Frontleaf takes all your application usage events and uses them to score customers on a scale of 1-10, with special provisions to smooth outliers and gradually weigh more recent usage more heavily than past usage. This is a useful insight because it’s immediately understandable, it’s thoughtfully constructed, and it hides all the messy details.
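Frontleaf’s actual formula isn’t public, so here is only a toy sketch of the general idea: clip outlier weeks, decay the weight of older activity, and map the result onto a 1–10 scale. Every parameter below is an assumption:

```python
def usage_intensity(weekly_events, half_life=4.0, cap=100):
    """Toy recency-weighted usage score on a 1-10 scale (NOT Frontleaf's
    actual formula). `weekly_events` is ordered oldest to newest; weeks lose
    half their weight every `half_life` weeks, and `cap` clips outlier weeks."""
    n = len(weekly_events)
    weights = [0.5 ** ((n - 1 - i) / half_life) for i in range(n)]
    weighted = sum(w * min(e, cap) for w, e in zip(weights, weekly_events))
    max_possible = sum(w * cap for w in weights)
    return round(1 + 9 * weighted / max_possible, 1)

# Recent heavy usage should score higher than the same usage long ago:
recent_heavy = usage_intensity([0, 0, 5, 80])
past_heavy = usage_intensity([80, 5, 0, 0])
print(recent_heavy, past_heavy)
```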

A last example is a Customer Health Index that uses any number of configurable data inputs to produce a single Red/Yellow/Green classification. There can be tremendous flexibility in setting up how the Index is calculated — which is good and bad. Good because the Customer Success team can fully reflect their deep understanding of how customer health is represented within the data. Bad because if there are any biases or misunderstandings on the team, they will be reflected in the Health Index. In other words — there isn’t any “learning” from the customer data.
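A common way to configure such an Index is a weighted score with hand-set thresholds. The weights, components, and cutoffs below are assumptions chosen by the team, which is exactly the point: nothing here is learned from the data.

```python
# Hypothetical team-chosen weights over 0-100 component scores (must sum to 1.0).
WEIGHTS = {"usage": 0.4, "support": 0.3, "relationship": 0.3}

def health_index(scores):
    """Combine component scores into a single Red/Yellow/Green classification.
    Thresholds (70 and 40) are hand-picked by the team, not learned."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total >= 70:
        return "Green"
    if total >= 40:
        return "Yellow"
    return "Red"

print(health_index({"usage": 80, "support": 90, "relationship": 75}))  # Green
print(health_index({"usage": 20, "support": 30, "relationship": 10}))  # Red
```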

There are other kinds of descriptive analyses that are in use by Customer Success teams. Regression analysis can be used to help figure out which customer metrics should go into a rules-based formula, by indicating which attributes are most strongly and uniquely correlated with past customer outcomes. Survival analysis models the amount of time it takes customers in different segments to reach certain stages, such as “inactive” or “cancelled”.
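As a toy illustration of that regression idea, a simple Pearson correlation against past outcomes can hint at which metric deserves a place in a rules-based formula. The data below is invented, and real analyses would control for confounders rather than rely on raw correlations:

```python
# Invented monthly metrics for past customers, paired with churn outcomes (1 = churned).
logins  = [20, 15, 18, 25, 2, 1, 3, 0]
tickets = [1, 0, 2, 1, 5, 4, 6, 3]
churned = [0, 0, 0, 0, 1, 1, 1, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand for portability."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Logins correlate negatively with churn; tickets correlate positively.
print(round(pearson(logins, churned), 2), round(pearson(tickets, churned), 2))
```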

One last note: For any of these insights to be… well, insightful for Customer Success Managers, they will almost always reflect findings in time series data, such as how customers use your product or derive value from it over time (which is part of the reason they have a higher degree of difficulty). In customer data analytics, it’s at least as important to understand how a customer’s behavior or results are changing over time as it is to know where they stand at the present moment.

Step 4: Predictive Models

There is a lot of great material out there about predictive analytics. And most of the time it’s applied to things that are not relevant to Customer Success teams, such as fraud detection or credit default.

When I talk to Customer Success practitioners, some ask for something that will “tell me something about my customers that I don’t already know.” A Customer Health Index is fine and good, but it’s just a handy and systematic way to reflect our own opinions, which may not be 100% accurate.

Predictive models offer the opportunity to break from the mold of rules-based insights — and instead to produce novel learnings from your data about which customers are likely to churn and why. You don’t need a predictive churn model to be perfectly accurate for it to be useful and provide a compelling ROI. It just needs to be better than what you are currently doing to identify and act on signs of customer risk.

Of course, deploying a predictive model comes fraught with the danger of getting things flat out wrong.

There is one fundamental requirement of predictive analytics for Customer Success — that you are able to connect the outcomes you aim to predict (e.g. churn) with the activities you think may be predictive of them (e.g. product usage, customer relationship signals, etc.), all in a single customer data set with a master customer ID. There are a bunch of interesting things you can learn by analyzing customer activities without customer outcomes; however, predicting churn is not one of them.

Now, there are all sorts of machine learning techniques and statistical models out there, but the basic mechanics are the same whether you hire a data scientist, use a software application, or some combination:

1. Select the customer outcome you want to predict, such as a cancellation, renewal, or upsell.
2. Gather your customer data inputs (called model “features”) that you think may have explanatory power for the selected outcome — things like usage, support experience, customer milestones, relationships, surveys, etc. This is where your customer understanding and intuition matter most.
3. Do the dirty work. Preparing these data inputs, or model features, often takes massive data engineering effort to collect, match up, massage, aggregate, transform, and even impute missing values in data from various sources. It may require tracking customer activities and creating customer data where none has existed before. This is definitely where the heavy lifting occurs and where predictive analysis efforts most often run into a brick wall. A rule of thumb I’ve heard from data scientists is that 80% of the work of developing a churn model is the data preparation.
4. Choose which statistical model to use. Okay, don’t worry, you don’t have to literally choose a model all by yourself. There are several in vogue with data scientists, with fancy names like Random Forests, Neural Networks, Markov Chains, and Bayesian methods. They vary in complexity and in when each is most useful. But the bottom line is that they are just different techniques to solve the same problem, and someone or something with technical knowledge — either your data analyst or your software application — will pick one (or will try a few), assuming the data is good. For analysts, it’s relatively easy to find the right statistical model to process a good data set, but it’s usually impossible to compensate for a bad data set with a more sophisticated statistical model.
5. Crunch the numbers and check out the results.
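To make the mechanics concrete, here is a minimal end-to-end sketch using a plain gradient-descent logistic regression (a deliberately simple stand-in for the fancier models above) and a made-up training set of two features. A real effort would involve far more data and far more preparation:

```python
import math

# Step 1-3 compressed: invented, pre-prepared features per customer --
# (logins_last_month, support_tickets) -> churned? (1 = yes)
train = [
    ((20, 1), 0), ((15, 0), 0), ((18, 2), 0), ((25, 1), 0),
    ((2, 5), 1),  ((1, 4), 1),  ((3, 6), 1),  ((0, 3), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, lr=0.05, epochs=300):
    """Step 4: train a two-feature logistic regression by stochastic
    gradient descent. A toy choice of model, not a recommendation."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            err = sigmoid(w1 * x1 + w2 * x2 + b) - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

w1, w2, b = fit(train)

def churn_probability(logins, tickets):
    """Step 5: score a customer with the trained model."""
    return sigmoid(w1 * logins + w2 * tickets + b)

# A low-usage, high-ticket customer should score as high risk:
print(round(churn_probability(1, 5), 2))
```

Even this toy version shows the shape of the output: a probability per customer, which can be thresholded into the yes/no, score, or high/medium/low forms discussed next.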

So what are the results? The model will deliver findings for each customer in the form of a flat-out yes/no churn prediction, a probability of churn, a numerical risk score, or a risk classification such as high/medium/low.

Will such an exercise actually provide you any clear findings? If not, does that mean customer churn at your company is essentially random? If so, will you trust the results enough to start acting on them, even when they contradict your intuition? And if it really is a fairly repeatable process, with such potential benefits, why aren’t more Customer Success teams building statistical churn models?

We’ll explore these questions in Part 2 of this series.

Step 5: Prescriptive Action Plans

The final step on our data analytics ladder is to use machine learning to indicate which types of customer-facing outreach, programs, or playbooks are most effective at mitigating churn risk for a particular customer.

Suppose your churn model classifies a customer as high-risk. A prescriptive action plan would then recommend a specific set of customer-facing actions, based on (1) the underlying reasons the customer is at risk, along with (2) the history of which actions have been most effective at mitigating churn risk for similar customers in the past.

Of course, many companies have defined Customer Success playbooks and have set up business rules to apply each of them in specific situations, such as when a customer is flagged for signs of poor onboarding or low engagement.

What makes a prescriptive action plan different is that it employs a model similar to a recommendations engine in order to match at-risk customers to playbooks, and automatically evolves its recommendations based on measuring what happened to similar customers that received that playbook in the past (e.g. did their churn risk return from elevated to normal?).
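In its simplest form, that recommendations-engine matching could look like the sketch below: pick the playbook with the best historical success rate among customers flagged for the same risk reason. The history records, reason labels, and playbook names are all invented, and a production system would need far richer similarity matching:

```python
from collections import defaultdict

# Invented history: (risk_reason, playbook_applied, risk_returned_to_normal?)
history = [
    ("low_engagement",  "1:1 training",          True),
    ("low_engagement",  "1:1 training",          True),
    ("low_engagement",  "email drip",            False),
    ("poor_onboarding", "implementation review", True),
    ("poor_onboarding", "email drip",            False),
]

def recommend_playbook(risk_reason):
    """Recommend the playbook with the best historical success rate for
    customers flagged for the same risk reason -- a toy recommender that
    'evolves' simply by appending new outcomes to `history`."""
    stats = defaultdict(lambda: [0, 0])  # playbook -> [successes, attempts]
    for reason, playbook, succeeded in history:
        if reason == risk_reason:
            stats[playbook][0] += int(succeeded)
            stats[playbook][1] += 1
    if not stats:
        return None
    return max(stats, key=lambda p: stats[p][0] / stats[p][1])

print(recommend_playbook("low_engagement"))  # 1:1 training
```

Note how this only works because every row in `history` names a standardized playbook and a measured outcome, which is precisely the tracking requirement discussed below.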

However, there is a major requirement before attempting something like this. You need to standardize and track all of your team’s customer-facing activities and programs. Without a lot of specificity and uniformity around customer playbooks, it just won’t work. This may feel uncomfortable or cumbersome for many Customer Success teams, or necessitate new kinds of intelligent applications like RelateIQ for automatically tracking communications. In any case, it’s pushing the envelope.

Looking across the SaaS industry, I have seen almost no examples of prescriptive action plans that are fully implemented and running on autopilot. They represent the top of the data analytics ladder, something for all of us to shoot for as our Customer Success practices mature.

Wrapping Up & Coming Next

Customer Success teams are more data-savvy than ever, and are utilizing data analytics to get an edge in their efforts at customer retention. I’ve categorized the wide variety of data analysis practiced by Customer Success teams today into a 5-Step Data Analytics Ladder for Customer Success:

1. Summarization
2. Segmentation
3. Descriptive Insights
4. Predictive Models
5. Prescriptive Action Plans

Customer Success teams are scrambling their way up this ladder, with the help of the friendly data analysts who work down the hallway as well as rapidly evolving, specialized software solutions. In the second part of this series, I’ll describe five challenges Customer Success teams face as they ascend the data analytics ladder, and I’ll give you my take on how to overcome them.

Tom Krackeler
Tom is an all-around SaaS guy, and the Co-Founder & CEO at Frontleaf, which provides Customer Success software.
