3 Practical Ways Your Customer Support Team Can and Should Listen to Customers


I was recently thinking about certain practices we talk about in the world of customer experience. I can’t help but think that we’re sometimes a bit careless about how we speak of them, as if we can start doing them with the flip of a switch. Listening to the voice of the customer (VOC) is one such practice. It sounds so simple.

Can I send a survey once a year to a percentage of my customers and claim to have a VOC program? What if I randomly call one customer each week to ask what they like or dislike about our company? Or what if I spend an hour each month shadowing an agent in my contact center? Yes, these activities can all be part of a VOC program, but the sample size is so small that it’s far from comprehensive.

For me, VOC is a daily act of listening to customers through a variety of means, analyzing data and trends, sharing and collaborating with the rest of our organization, and making consistent meaningful improvements to our products and services as a result.

The reality is that doing this requires putting systems and processes in place to ensure that we’re hearing from as many customers as we possibly can. People throughout the company must be trained and empowered to understand and carry out their role in helping us listen.

In this article, I’ll share three practical ways our customer support team actively listens to and engages with customers, along with some of the positive impact this has had on our customer experience.

Listen for (dis)satisfaction

It’s no revolutionary concept to survey customers to determine if they are satisfied with their experience. Or perhaps you prefer a different flavor like gauging their loyalty (Net Promoter Score) or effort to solve an issue (Customer Effort Score). We prefer customer satisfaction (CSAT) because it’s readily available in our helpdesk platform (Zendesk) and makes it easy to act on customer responses.

And when it comes to survey responses, how you respond is far and away more critical than the score you receive. Here’s how we handle satisfaction survey responses:

  • Read every comment — positive or negative
    We handle the positive and negative comments on CSAT surveys a bit differently, but all of them get read. Positive comments are posted to a Slack channel via an integration with our ticketing system (a sketch of this flow follows the list). This allows our entire company to read and celebrate our wins. It also helps us keep an eye on the occasional positive comment that features a customer suggestion.
  • Respond to the negative ones
    Tickets that receive a negative CSAT rating are instantly reopened and assigned to a manager for follow-up. From there, we take a couple of actions. First, we select a reason for the dissatisfaction in a custom field we’ve created. This allows us to track the reasons customers are dissatisfied and work to improve them. Second, we read, review with our team as needed, and respond to the customer to try to make it right. Do we turn every customer around? Definitely not! But I can’t tell you how many customers are floored by the fact that we take the time to respond at all.
  • Include social media and review sites in your efforts
    As an extension of our CSAT survey, we find that the REALLY happy and REALLY unhappy customers take to social media (like Facebook, Twitter, and Instagram) and review sites (like TrustPilot and SiteJabber). Oftentimes, this comes shortly after they submit a ticket, before we’ve even had a chance to assist them. Regardless of how you characterize this cry for help, it effectively moves the customer to the front of the line, and we can ill afford to ignore them. Keep a cool head and prioritize these customers. By doing so, it’s entirely possible to turn a potential enemy into a friend of your brand.
  • Listen for sentiment
    Our ticketing system has a feature where it can detect when a customer might be upset and give us an opportunity to create workflows to address those tickets. In full transparency, upgrading to that feature costs more money. It’s on my wish list and a smart way to more proactively identify customer issues. For the voice channel, tools like speech analytics can have a similar effect.
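
To make this concrete, here is a minimal sketch of both flows: celebrating positive comments in Slack and reopening negative tickets for a manager. In production we use a native trigger/integration rather than a script, so treat this as illustrative only. It assumes Zendesk’s Satisfaction Ratings API and a Slack incoming webhook; the subdomain, credentials, webhook URL, and manager ID are all hypothetical placeholders.

```python
"""Minimal sketch of our CSAT follow-up flows (not our production setup).
Assumes Zendesk's Satisfaction Ratings API and a Slack incoming webhook;
all URLs, credentials, and IDs below are hypothetical placeholders."""
import requests

ZENDESK = "https://yoursubdomain.zendesk.com/api/v2"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")               # hypothetical
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
MANAGER_ID = 123456789                                             # hypothetical

def process_recent_ratings():
    # Pull ratings that arrived with a comment; "score" is a real filter
    # on this endpoint (values include good, bad, received_with_comment).
    resp = requests.get(f"{ZENDESK}/satisfaction_ratings.json",
                        params={"score": "received_with_comment"}, auth=AUTH)
    resp.raise_for_status()
    for rating in resp.json()["satisfaction_ratings"]:
        if rating["score"] == "good":
            # Post positive comments to a company-wide Slack channel.
            requests.post(SLACK_WEBHOOK, json={
                "text": f':tada: Ticket #{rating["ticket_id"]}: '
                        f'"{rating["comment"]}"'})
        elif rating["score"] == "bad":
            # Reopen negative tickets and assign them to a manager for
            # follow-up; the dissatisfaction-reason custom field is then
            # set by hand during review.
            requests.put(f"{ZENDESK}/tickets/{rating['ticket_id']}.json",
                         json={"ticket": {"status": "open",
                                          "assignee_id": MANAGER_ID}},
                         auth=AUTH)
```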

The Result

Putting all of this together: by understanding our top drivers of dissatisfaction, we have been able to improve the way we handle certain ticket types. For example, one of our most common support issues is helping our customers port their telephone numbers to other carriers. For the longest time, our team would respond to customers and solve the ticket, only for customers to rate it negatively because the number hadn’t finished porting. Now, our team keeps those requests open until porting completes, addressing any hiccups along the way. Negative CSAT tickets attributed to this driver have dropped dramatically, pushing CSAT scores that had dipped into the 80s back into the low to mid 90s.

Listen for feature requests

How often do your customers contact support asking for services or features that you don’t provide? Are your agents able to come up with creative solutions that preserve the relationship with the customer, or are they simply telling customers, “Sorry, but we can’t do that” and sending them on their way? Talk about missed opportunities.

Customers and potential customers are telling us what it would take to earn their business or expand it, and we need to know what those requests are and how often they come in. Here are three key ways my team focuses their listening in this area.

  • Tag feature requests
    I define a feature request as functionality that isn’t currently available in our system. Any time our team receives one of these, they are trained to do three things. First, they work to find a workaround that will solve the customer’s problem. Second, they apply a tag called “featurerequest” to the support ticket so we can track the frequency of each request. Third, they thank the customer for taking the time to make the request.
  • Tag bugs
    We had to make a distinction between bugs and feature requests with our team — and this is an important one. A bug occurs any time our system fails to do what it’s designed or intended to do. Our team follows a similar process to feature requests but, depending on the impact of the bug, we are much more likely to escalate this to our engineering team to address with higher priority.
  • Tag upselling attempts
    In other cases, our customers ask for functionality that is available but for an additional fee. Our support team is knowledgeable on our entire suite of products and is trained to both inform the customer of the additional service that we offer and to apply an “upsell” tag to the ticket.

You’ll notice that we use a system of tagging our tickets to track feature requests, bugs, and upsell opportunities. Next comes the “listening”: on a regular cadence, we review the problems customers are presenting and how successfully we’re solving them. This also gives us an opportunity to circle back with customers when features are added and bugs are fixed.
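
If your helpdesk exposes a search API, the review itself can be as simple as counting tickets per tag over a time window. Here is a minimal sketch against Zendesk’s Search API; the subdomain, credentials, and date are placeholders, and the “bug” tag name is an assumption (the article names the other two).

```python
"""Minimal sketch of our periodic tag review, assuming Zendesk's Search API.
Subdomain and credentials are hypothetical placeholders."""
import requests

ZENDESK = "https://yoursubdomain.zendesk.com/api/v2"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")  # hypothetical

def count_tagged_tickets(tag, since="2024-01-01"):
    # Zendesk search supports tag and created-date filters in the query string.
    query = f"type:ticket tags:{tag} created>{since}"
    resp = requests.get(f"{ZENDESK}/search.json",
                        params={"query": query}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["count"]

# "bug" is an assumed tag name; "featurerequest" and "upsell" are ours.
for tag in ("featurerequest", "bug", "upsell"):
    print(f"{tag}: {count_tagged_tickets(tag)} tickets")
```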

The Result

In a recent review of our upselling tickets, we found that there were five key reasons we were upselling additional services to customers and, because of our tagging, we were able to quantify how often we received these requests and gauge the success of these upselling attempts. We shared this with our marketing team, and the decision was made to lower the price of one of our services to boost the adoption rate of a certain product and feature. The ability to bring insight and data to conversations with the other departments in our organization is a big win.

Listen for opportunities to improve self-service

The last major focus of our attention as a support team is on customer self-service. We happen to use AnswerBot from Zendesk, which I characterize as a “light chatbot” that surfaces knowledge base articles to customers based on questions they ask. Not wanting to create too much friction, we make it relatively easy for customers to contact support.

Though we use Zendesk, these recommendations for listening should apply to a wide range of artificial intelligence and self-service applications including your knowledge base and chatbot.

  • Read rejected help articles
    For many support tickets, our customers are given knowledge article suggestions prior to their ticket being filed. We’re able to see which articles customers deem helpful or unhelpful. When an article is unhelpful, we have the opportunity to edit existing articles or add new ones, then test AnswerBot to see if better, more helpful information surfaces the next time a customer asks a similar question (a sketch of this review follows the list). As an extension of this, be sure to pay attention to article comments in your knowledge base — especially when customers say that information is incorrect, outdated, or unhelpful.
  • Read aggravating chatbot interactions
    As I mentioned earlier, our chatbot only leverages information available in the knowledge base — though we may consider additional automation in the future. In cases where there’s a handoff from the bot to customer support, we can read the customer’s interaction with the bot to gauge its effectiveness. We ask questions like:
    • Is the bot surfacing correct answers?
    • Do our customers have the patience to interact with the bot, or is it aggravating them? If customers type I NEED TO SPEAK WITH A HUMAN over and over again, that’s a pretty good clue that something needs to change.
    • Is our bot boosting sales or turning customers away?
    • What issues are impossible for customers to self-solve because we haven’t empowered them or created a tool to let them do it themselves?
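
As promised above, here is a minimal sketch of that article review: flagging knowledge base articles customers are voting down. It assumes Zendesk’s Help Center API, where each article carries a running vote_sum and vote_count; the subdomain and credentials are hypothetical placeholders.

```python
"""Minimal sketch for surfacing knowledge base articles customers vote down,
assuming Zendesk's Help Center API. Subdomain/credentials are placeholders."""
import requests

ZENDESK = "https://yoursubdomain.zendesk.com/api/v2"
AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")  # hypothetical

url = f"{ZENDESK}/help_center/articles.json"
while url:
    resp = requests.get(url, auth=AUTH)
    resp.raise_for_status()
    data = resp.json()
    for article in data["articles"]:
        # A negative vote_sum means more "unhelpful" than "helpful" votes:
        # a candidate for editing or replacement before retesting the bot.
        if article["vote_sum"] < 0:
            print(f"Review: {article['title']} ({article['html_url']}) "
                  f"sum={article['vote_sum']} of {article['vote_count']} votes")
    url = data["next_page"]  # Zendesk paginates; next_page is null at the end
```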
The Result

The point here is to continuously review all customer attempts to self-solve their issues. It’s the only means by which we’ll ever improve and offer relevant, helpful solutions. Currently, our self-service rate is around 5% of tickets, which, frankly, is lower than I’d like, but perhaps I’ll be able to write a future article about how we significantly improved that rate. For now, the point is that there’s no set-it-and-forget-it with chatbots. If you want to improve your bot and avoid aggravating your customers, you need to monitor it daily.
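
For reference, that 5% figure is just the simple deflection ratio. Here is a minimal sketch, assuming we count successful self-service deflections against submitted tickets; your platform may define its own self-service metric differently.

```python
def self_service_rate(deflected: int, tickets_created: int) -> float:
    """Share of inquiries resolved without a ticket (simple ratio)."""
    return deflected / (deflected + tickets_created)

# Hypothetical month: 50 deflections against 950 new tickets -> 5.0%
print(f"{self_service_rate(50, 950):.1%}")
```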

As I conclude, how many of you thought I would stop at a customer satisfaction survey when it comes to listening to our customers? At the core of what I’ve shared here is that our customers come to us with problems. The only way we can improve how we solve those problems is to listen, consistently and daily, to their feedback about our current solutions. And with that, I’ll keep looking for new ways to listen to our customer interactions and share those insights with the rest of our organization.

Jeremy Watkin
Jeremy Watkin is the Director of Customer Support and CX at NumberBarn. He has more than 20 years of experience as a contact center professional leading highly engaged customer service teams. Jeremy is frequently recognized as a thought leader for his writing and speaking on a variety of topics including quality management, outsourcing, customer experience, contact center technology, and more. When not working, he's spending quality time with his wife Alicia and their three boys, running with his dog, or dreaming of native trout rising for a size 16 elk hair caddis.
