Your Contact Center Monitoring and Coaching May Be Doing More Harm Than Good

Discard the illusion that the “random sample of 10 cases” is either fair or actionable!

When I interview contact center supervisors and managers to explore their biggest frustrations, one of the most prevalent issues is the amount of time and hassle associated with customer service rep (CSR) evaluation. Much of the frustration comes from a serious disconnect between customer service objectives, HR-mandated CSR evaluation, and organizational performance improvement.

This article first describes the operational problems, then the disconnect between standard practice and rational objectives, notes where text and speech analytics may be helpful, and suggests a less painful, more practical approach. While I comment on how customer surveys are part of the context for evaluation, the best practices and mechanics of surveys will be left to another article.

The Problem

The foundation of contact monitoring and evaluation in most companies is the common practice of selecting a “random sample” of ten contacts per CSR every month. This foundation is not a valid sample, for several reasons:

  • A sample of ten calls out of the approximately 1,000 contacts the average CSR handles in a month is NEVER statistically valid! Even at a 70 percent confidence level, the margin of error is over 16 percent (see the worked sketch after this list).
  • Calls are seldom randomly selected – short, perfunctory calls are routinely excluded, and most supervisors or quality analysts also skip long calls (often the most valuable) to save time.
  • Most supervisors primarily act as quality inspectors or sheriffs, assuring compliance and flagging errors, rather than as cheerleaders for CSRs who do things right.
  • Very little is learned from the 70 or sometimes 80 percent of selected contacts that are simple, vanilla, and straightforward. Therefore, roughly 70 percent of all monitoring effort is wasted because you are not learning anything of value for improving the CSR’s performance!
  • On the other hand, one or two difficult calls can result in a bad review – meaning a score significantly lower than the usual target of 8.5 out of ten that many companies use as an acceptable minimum. Further, CSRs become justifiably defensive and feel that their monthly score has been destroyed by one bad call that often was about a subject they were not empowered to remedy. This is especially true if satisfaction surveys are included in the evaluation. If the customer was told “no” due to company policy, their dissatisfaction with the lack of resolution bleeds into the evaluation of the CSR.
  • The factors scored by the supervisor on the monitoring form have often not been rigorously established as true drivers of customer satisfaction. An example is using the customer’s name three times in the conversation, which seldom proves to be a key driver of satisfaction with the contact. Usually, a third of the factors evaluated are important to the company (such as branding the closing) but not to the customer.
  • Chat and email conversations are becoming the norm, especially in B2B environments, and the drivers of satisfaction for chat and email often differ from phone calls.
  • Studies like CCMC’s 2021 National Delight Study suggest that “very satisfied” is often not the appropriate potential top score – it should be “delighted.” [1]
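For readers who want the arithmetic behind the first bullet, here is a minimal sketch of the margin-of-error calculation (standard normal approximation, worst-case proportion of 0.5, population and sample sizes as cited above):

```python
from math import sqrt
from statistics import NormalDist

N = 1000   # contacts a typical CSR handles in a month
n = 10     # contacts in the monthly "random sample"
p = 0.5    # worst-case proportion, which maximizes the margin of error
conf = 0.70

z = NormalDist().inv_cdf(0.5 + conf / 2)      # ~1.04 for 70 percent confidence
fpc = sqrt((N - n) / (N - 1))                 # finite-population correction (nearly 1 here)
moe = z * sqrt(p * (1 - p) / n) * fpc

print(f"Margin of error at {conf:.0%} confidence: +/-{moe:.1%}")   # roughly +/-16%
```

Even at this lax confidence level, a score built on ten calls can swing widely in either direction, which is why month-to-month movements in a CSR’s monitoring score are largely noise.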

Here’s a summary of what I’ve just discussed.

[Table: Prevalent Problems with Monitoring Samples and Analysis]

Causes: Muddled Objectives Begetting Muddled Data Collection

Evaluation objectives are usually over-simplified. They are often articulated as a single customer satisfaction target for the contact center, set at what sounds like a reasonable number, e.g., 85 percent top-two box on a five-point scale or 8.5 on a ten-point scale. Other companies set Net Promoter Score (NPS) targets, such as 65 for consumer packaged goods (70 percent promoters minus 5 percent detractors) or 40 for an auto company contact center (55 percent promoters minus 15 percent detractors).

What is the best indicator of this oversimplified-objective problem? Mixed messages from different data sources. For example, CCMC recently visited a major auto company contact center to review its operations. The head of service quality cheerily announced that the reps were doing very well on monitoring: most received more than 95 percent on the monitoring score. The gloom came when we asked what the customer satisfaction tracking survey score was – 65 percent completely satisfied. In many companies, the satisfaction score from surveys is measured with a question asking whether the customer would recommend the CSR or, worse, the company, and the answer is “no” if the problem was not resolved.

There are two major reasons for the disconnect. First, the supervisor who conducted the call-quality monitoring and evaluation was listening for two sets of items: 1) items important to the customer (such as first-call resolution, clarity of CSR explanations, and empathy), and 2) items important to the company (such as an appropriate greeting, complete logging of the call, adherence to policy, and bridges to cross-selling or up-selling). The disconnect arises when the CSR does very well on the company priorities and less well on the customer priorities. The sum of the two scores may be very respectable (e.g., an 85 percent), while the customer’s key needs remain unfulfilled – thus the score of 65 percent from the customer’s perspective. In addition, the dimensions thought important to customers, such as using the name three times, are often based on folklore or personal opinion.

The second reason for the disconnect between evaluation objectives and how the data is collected is that the call mix evaluated includes issues that the CSR is not able to resolve to the customer’s satisfaction. If, out of the ten selected calls, the CSR happens to receive three calls where the customer has a difficult issue such as repair comebacks (repeated failed attempts to repair the car) or out-of-warranty repairs, there is a high probability that the CSR will have at least 30 percent detractors. Even if the other seven calls go perfectly, that caps the top-box rate at 70 percent; because a few of the remaining customers will be only somewhat satisfied, the realistic maximum satisfaction score is in the 50s or 60s – and therefore a failed evaluation.

In fact, there should be four separate evaluation objectives that require different data and analysis. These are: CSR performance improvement, celebrating great CSR performance, contact center process improvement, and organizational (corporate) improvement. In most companies, only the first objective is addressed. This is the fundamental problem.

  • CSR Evaluation should have the objective of improving CSR consistency and compliance while helping reps develop professionally and enhancing the critical skills needed to handle challenging issues. All of this is usually distilled into the flawed objective of getting an 85 percent rating. However, the 85 percent assumes a random sample of calls including only one challenging issue, and it does not allow for identifying the two skills that a particular CSR most needs to improve. Ideally, there should be two scores: a high score, e.g., 95 percent, on easy contacts and a lower score, e.g., 75 percent, for difficult calls where the representative must negotiate a resolution that might not be totally satisfactory to the customer.

    Finally, the emphasis on CSR evaluation, which is usually the primary and often the only objective of monitoring and evaluation, is misplaced. Our experience is that only 20-30 percent of possible improvement can come from improving CSR skills. The vast majority of customer satisfaction improvement comes from improving contact center processes, response guidance, and empowerment.
  • CSR Positive Motivation entails catching people doing things right and celebrating these victories, ideally office-wide. Research by Paul Zak at Claremont Graduate University found that “lack of appreciation” led to the majority of turnover.[2] Monitoring, whether real-time or after the fact, will identify many positive actions that CSRs take. Zak calls this the OFactor because public recognition (in conjunction with flexibility) leads to much higher oxytocin in the brain, which in turn leads to loving to come to work. He has measured this impact on employees at many companies, including Zappos and The Container Store. An effective approach, used at Blinds.com among others, is to walk over to the employee immediately after hearing or reading the positive interaction and provide verbal feedback, ideally within hearing of their peers. This peer recognition is as important as the supervisor’s recognition.
  • Contact center process improvement is the biggest opportunity for improving service performance, although most contact monitoring does not even address it. The improvements are identified by the supervisor or monitor asking two questions: “Why did this customer have to call?” and “How could the CSR be better equipped to satisfy the customer?” The objective is to identify broken processes and ineffective response rules that do not leave the customer feeling treated fairly. The critical analysis is to ask which problems are receiving good CSR monitoring scores but poor customer ratings. Since the customer rating is most important, the response rule must be modified to improve it.
  • Organizational improvement identifies unhappy or confused customer contacts that could be prevented via more transparent marketing, better products, and customer education. It also includes ensuring that functions such as the supply chain provide current data to the Knowledge Management System (KMS). At one food company, the supply chain function has an internal service agreement requiring that ingredient lists be updated every time a supplier is changed so that the contact center can accurately answer questions about ingredients and allergens. These opportunities should be identified by evaluating the causes of contacts and dissatisfaction and addressed in concert with the corporate Continuous Improvement function.

In summary, answer these questions to ensure the proper focus on CSR skills and contact center operations.

[Table: Proper Focus of Analysis]

Challenging issues in using monitoring and evaluation for CSR evaluation and motivation

The following issues must be addressed in every monitoring and evaluation process. While there is no perfect solution, each issue should be explicitly addressed rather than glossed over.

What contacts should be monitored?

I believe that the random sample concept is spurious, as mentioned above. Even if you accumulate ten calls per month for three months, thirty calls out of 3,000 is still not a rigorous sample. A better approach is to evaluate ten challenging contacts, ideally with an eye toward improving a particular skill, such as defusing anger, negotiation, or troubleshooting (a selection sketch follows below). If HR objects, point out that each CSR gets a random assortment of difficult contacts, and the sample addresses the difficult issues where that particular CSR has challenges. The goal is improvement, and each CSR may have different areas where improvement is needed.
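Here is a minimal sketch of that targeted selection, assuming a contact-log export with illustrative column names (csr_id, call_type, talk_time_sec) rather than any particular vendor’s schema:

```python
import pandas as pd

# Hypothetical contact-log export; the file and column names are illustrative.
contacts = pd.read_csv("contact_log.csv")   # columns: csr_id, call_type, talk_time_sec, recording_id

# Issue types the team currently considers challenging (adjust to your own taxonomy).
DIFFICULT_TYPES = {"repair comeback", "out-of-warranty repair", "billing dispute"}

def coaching_sample(df: pd.DataFrame, csr_id: str, n: int = 10) -> pd.DataFrame:
    """Pick up to n challenging contacts for one CSR instead of a 'random' ten."""
    candidates = df[(df["csr_id"] == csr_id) & (df["call_type"].isin(DIFFICULT_TYPES))]
    # Longest calls first: long, difficult contacts are where coaching pays off.
    return candidates.sort_values("talk_time_sec", ascending=False).head(n)

print(coaching_sample(contacts, csr_id="CSR-042"))
```

The point is not the code but the selection rule: pick contacts by issue type and length, not at random.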

Resolution should be judged by type of issue

The CSR should be scored only on the drivers of satisfaction over which they have control. Score CSRs where they are given flexibility for solutions using flexible solution spaces (FSS) – that is, where there are multiple ways of resolving an issue depending on the circumstances – and do not penalize them for uncontrollable factors such as a car dealer’s follow-through on a repair.

Who does the evaluation?

Supervisors are the usual evaluators, but some organizations use a separate quality team to assure consistency. If the quality team evaluator is not familiar with a particular CSR, they are less able to focus on where that CSR needs developmental coaching, and the score, a function of contact issue mix, is much less credible with the CSR. Supervisors are credible but are very pressed for time and often do not select contacts to support skill development. A better approach, which should be implemented at least some of the time, is to have a peer do the evaluation. This saves supervisor time while a fellow team member listens to calls or reviews emails and provides feedback. Feedback from peers is less stinging than feedback from supervisors. Also, one auto company has CSRs evaluate their own calls and take the evaluation to the supervisor to discuss. We found that the CSRs were often harder on themselves than the supervisor would have been.

What data should be collected?

  • Type of call – this is the most important piece of information and should be noted with granularity. Fewer than 40 percent of companies even note this item in evaluations. Ideally there should be 10-30 call types. Of these, as noted above, 70 percent will be easy and straightforward, with most CSRs getting satisfactory scores. It is the other 30 percent where the need for improvement or excellent performance will be identified.
  • Call handling skills to be measured include: empathy, defusing anger, troubleshooting, clear explanations, negotiation based on FSSs, and provision of a fair resolution. Whether these activities are successful should be confirmed and calibrated by comparison to customer surveys. The survey must differentiate between satisfaction with the CSR and satisfaction with the company.
  • Data should be collected on operational skills such as logging key information, correct issue classification, use of the KMS for troubleshooting, and providing a clear explanation of the suggested resolution.
  • Privacy, authentication and compliance actions must all be recorded even if not used in the evaluation.
  • A simple flag should be set by the CSR as to whether the issue was preventable if the customer had used self-service or search. We find that 30 percent of contacts are preventable. This flag is very valuable to the corporate Continuous Improvement function.
  • Data describing the action taken by the CSR and/or a wrap code is needed.
  • Evaluation data should include attempts to create delight with enthusiasm, cross-selling, or education, when appropriate. Attempts should be noted, not just successes, because it is reasonable to expect that only half will succeed. The critical factor is to encourage CSRs to make the attempt, understanding that the customer often will not be welcoming. The success rate rises as the CSR becomes more adept at “reading” the customer and the situation and making the attempt at the best times.
  • Talk time should be noted but should seldom be a key factor in evaluation. (A sketch of a simple evaluation record capturing these fields follows this list.)
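Taken together, the fields above amount to a simple evaluation record. The sketch below shows one way to capture them; the class and field names are illustrative, not a vendor schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContactEvaluation:
    contact_id: str
    csr_id: str
    call_type: str                                                    # granular issue type (one of 10-30 categories)
    skill_scores: dict[str, int] = field(default_factory=dict)        # e.g. {"empathy": 4, "negotiation": 3}
    operational_scores: dict[str, int] = field(default_factory=dict)  # logging, classification, KMS use
    compliance_ok: bool = True                                        # privacy, authentication, and compliance actions
    preventable: bool = False                                         # could self-service or search have avoided the contact?
    action_taken: Optional[str] = None                                # action description and/or wrap code
    delight_attempted: bool = False
    delight_succeeded: bool = False
    talk_time_sec: Optional[int] = None                               # recorded, but seldom a key evaluation factor
```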

How is monitoring data collected?

The two options are real-time listening and reviewing digital recordings. The benefit of real-time collection is that you can immediately give positive feedback to the CSR, which has tremendous value. The advantage of recording is that you can sort through calls and identify several of the same type so that you can focus on a particular issue or skill. The recording is also useful when escalations or regulatory complaints arise.

How should monitoring data be analyzed?

The three most important aggregate analyses are:

  • analysis by issue (to identify which issues have systematically low scores, indicating that the response rule or process is defective)
  • by issue and by satisfaction survey score (to identify the issues where customer and monitoring scores differ by more than ten or fifteen percent)
  • by CSR for performance appraisal and recognition

Annually, there should be an aggregate analysis of how accurately the dimensions measured predict customer satisfaction. A correlation coefficient of less than 0.7 should trigger a re-examination of what you’re measuring. Additional analyses should look at talk time by issue, whether the issue was preventable via self-service or proactive communication, and whether delight was attempted and, if so, whether it worked. (A minimal analysis sketch follows.)
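Here is a minimal sketch of the three aggregate analyses and the annual correlation check, assuming a merged dataset with one row per evaluated contact and illustrative column names (issue, csr_id, monitor_score, survey_score):

```python
import pandas as pd

# Hypothetical merged export of monitoring evaluations and matched survey results.
df = pd.read_csv("evaluations_with_surveys.csv")   # columns: issue, csr_id, monitor_score, survey_score

# 1. Issues with systematically low monitoring scores (possible broken process or response rule).
by_issue = df.groupby("issue")[["monitor_score", "survey_score"]].mean()
print(by_issue.sort_values("monitor_score").head(10))

# 2. Issues where the monitoring and customer scores diverge by more than 10-15 points.
by_issue["gap"] = by_issue["monitor_score"] - by_issue["survey_score"]
print(by_issue[by_issue["gap"].abs() > 10])

# 3. Per-CSR averages for appraisal and recognition.
print(df.groupby("csr_id")["monitor_score"].mean().sort_values(ascending=False))

# Annual check: do the monitored dimensions actually predict customer satisfaction?
r = df["monitor_score"].corr(df["survey_score"])
if r < 0.7:
    print(f"Correlation is only {r:.2f}; re-examine what the monitoring form measures.")
```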

What is the Role of Speech and Text Analytics?

Speech and text analytics tools are still relatively expensive, but they have two big advantages stemming from their ability to process a large number of calls quickly. This analytical horsepower must still be appropriately guided and interpreted by a human being, especially when going beyond the basic score to develop an assessment of a particular skill and then an action plan for the employee.

  • First, they can easily sort through hundreds of calls or messages to identify a sample of contacts all addressing a particular skill or customer issue. This type of sample can be considered a valid sample, for the purpose of evaluating a particular skill or approach to handling a particular issue.
  • Second, they can analyze content for sentiment and highlight the really good or the very dissatisfied contacts, which may merit analysis. Often the tool can also combine talk time from the CTI system with the reason for contact and whether the contact was preventable, to validate CSR coding and/or highlight possible improvement opportunities. Phrases signifying attempts to delight, as well as successful delight, can also be flagged (a toy sketch follows this list).
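The sketch below is a toy stand-in for a commercial speech or text analytics tool, using simple keyword matching on invented transcripts; real products are far more sophisticated, but the workflow of filtering by issue and surfacing sentiment extremes is the same:

```python
# Toy illustration only: the keyword lists and transcripts are invented for the example.
NEGATIVE_PHRASES = {"frustrated", "unacceptable", "cancel", "still broken"}
POSITIVE_PHRASES = {"thank you", "wonderful", "delighted", "really helpful"}

def flag_transcript(text: str, issue_terms: set[str]) -> dict:
    """Flag whether a transcript mentions the target issue and count crude sentiment cues."""
    t = text.lower()
    return {
        "matches_issue": any(term in t for term in issue_terms),
        "negative_hits": sum(t.count(p) for p in NEGATIVE_PHRASES),
        "positive_hits": sum(t.count(p) for p in POSITIVE_PHRASES),
    }

transcripts = {
    "call-17": "The repair is still broken and I am frustrated with the runaround.",
    "call-42": "Thank you, that explanation was really helpful and I am delighted.",
}
for contact_id, text in transcripts.items():
    print(contact_id, flag_transcript(text, issue_terms={"repair", "warranty"}))
```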

How should the evaluation be fed back to the CSR?

The three key issues are the timeframe, the format, and who transmits the evaluation.

  • Timeframe is important because the more time that passes after the contact, the less the CSR remembers about it and about the rationale for doing what they did. On-the-fly feedback right after the contact is best. Also, more frequent feedback is better than a monthly meeting, although a monthly meeting can aggregate data from multiple contacts.
  • The format can be a walk-over verbal attaboy, daily automated feedback from speech analytics or surveys, or a formal meeting. Formal meetings can be a one-way lecture from the supervisor or evaluator, a joint listening session and critique of several calls, or a self-critique of one or more calls by the CSR. As noted, more frequent is better but harder for supervisors to achieve.
  • Who provides the feedback has a great impact on how well the feedback is accepted as valid. Validity is inversely proportional to the distance of the evaluator from the CSR. The least valid will always be a quality team outside the contact center. The supervisor is viewed as valid but “not like me.” A peer is very valid because they are a co-worker facing the same constraints. The most valid is the CSR themselves critiquing their own call. Ideally, a combination of the above should be used.

To recap, consider these practices to improve how monitoring feedback is delivered.

[Table: Best Practices for Monitoring Feedback]

I will not provide an in-depth review of the last two objectives of monitoring, contact center and corporate improvement, except to say that they often provide bigger opportunities for moving the overall service quality needle than focusing on the CSRs. I will address these issues in another article.

Recommendations

In summary: focus on difficult calls where the CSR can demonstrate their expertise and which can be the basis for fair, substantive coaching and goal setting.

  1. Stop saying you are drawing a random sample of ten calls per month. You instantly destroy your credibility. Pick five to ten challenging calls (ideally all of the same type), based on talk time or reason for call. Focus on one or two skills and look for opportunities to celebrate great actions.
  2. Listen to key parts of the call and identify strengths and opportunities for that type of issue.
  3. Supervisors should also ask themselves the process improvement questions, “Could this call have been prevented?” and “How could the CSR have been better equipped to handle this issue?” The answers should be systematically fed into Continuous Improvement.
  4. The evaluation meeting should set one or two goals for CSR skill improvement as well as provide frequent positive feedback, on the fly, several times a week.
  5. Next month, evaluate at least a few of the same type of call to determine improvement and confirm progress or a need for further work.
  6. Tie the monitoring results to the customer surveys of the same type of call – this will confirm whether the supervisor is measuring dimensions that are also important to the customer. Hint: monitor calls where you already have the survey results back. If you are doing an after-the-contact email survey (which we recommend), you will know which contacts have surveys returned within 48 hours. Select your sample from those contacts.
  7. Annually, execute a formal study to assure that the factors being monitored are true drivers of satisfaction for difficult calls (beyond compliance assurance for the lawyers 😊)

Contact me for a more detailed article on all of the above. Get further information on employee satisfaction and survey best practices at https://customercaremc.com/insights.

Notes

[1] John Goodman, “Making Delight Intentional,” Call Center Pipeline, November 2021.

[2] Paul J. Zak, Trust Factor, AMACOM, 2017.

John Goodman

Mr. Goodman is Vice Chairman of Customer Care Measurement and Consulting (CCMC). The universal adages “It costs five times as much to win a new customer as to keep an existing one” and “Twice as many people hear about a bad experience as a good one” are both based on his research. Harper Collins published his book Strategic Customer Service in March 2019. He also published Customer Experience 3.0 with the American Management Association in July 2014. He has assisted over 1,000 companies, non-profits, and government organizations, including 45 of the Fortune 100.
