Why the Quality Assurance and Training Study Results Scare Me



In late May, the QATC (Quality Assurance, Training and Connection organization) published the results of its quarterly survey on critical quality assurance and training topics in call centers, focusing on quality monitoring call calibration practices. Having worked for a third-party call monitoring company for 8 ½ years, I found the survey results to be quite interesting (sometimes scary), but for very different reasons than those highlighted in the QATC report.

1)  Quality Monitoring Calibration requirements – According to the survey, 24% of respondents indicated that calibration participants were not required to review calls prior to the call calibration meeting. In these cases, the meeting is a feel-good, group-think exercise and not a true call calibration session. Yikes! Assuming the Quality Assurance team in the call center does not grade every call by committee, such an exercise is ineffective at gauging the degree of disparity that exists within the current call monitoring process. And since disparity is not being measured, the effectiveness of call calibrations cannot be quantified. Result: a waste of time.

2)  Measurement of Quality Assurance calibration effectiveness – As a “numbers person” I was alarmed … and surprised that nearly 40% of survey respondents did not quantify the impact of their internal quality monitoring calibration process. That aside, another 53% of respondents used standard deviation to measure effectiveness, which is well and good as long as the quality team doesn’t care about QUALITY (only consistency). Standard deviation reflects the “average” dispersion around a central point, the mean. The smaller the standard deviation, the more consistent the measurement. But using this approach (and only this approach) to quantify the impact of call calibration assumes that the mean is correct! I made this mistake (too) many years ago when I became increasingly pleased over a period of months as the call monitoring calibration standard deviation for one of my clients consistently decreased. That is, until I joined one of those call calibration sessions. The quality assurance team had indeed become more consistent … and WRONG: lenient in grading call center agent calls. A truly comprehensive call calibration process should measure BOTH consistency of grading AND accuracy.
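The trap described above is easy to see with numbers. Here is a minimal sketch (the scores are invented for illustration, not from the QATC survey): a QA team whose grades cluster tightly shows a small standard deviation, yet every grade can still sit well above an accurate benchmark score.

```python
import statistics

# Hypothetical benchmark scores for five calls, graded accurately.
standard_scores = [72, 68, 75, 70, 71]

# Hypothetical QA team scores for the same five calls: the graders
# agree closely with one another (consistent) but drift lenient (wrong).
qa_team_scores = [85, 84, 86, 85, 85]

# A shrinking standard deviation only proves the graders agree with
# each other -- it says nothing about whether they are right.
consistency = statistics.stdev(qa_team_scores)
print(f"QA team standard deviation: {consistency:.2f}")  # small: looks "calibrated"

# Accuracy requires a comparison against the benchmark, e.g. mean bias.
bias = statistics.mean(qa_team_scores) - statistics.mean(standard_scores)
print(f"Mean bias vs. benchmark: {bias:+.1f} points")  # large: lenient grading
```

Both views are needed: the standard deviation answers "do we agree?", while the bias against an accurate benchmark answers "are we right?".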

For more information on how to measure both consistency and accuracy, get your complimentary copy of the ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration; the case study in it will shock you.

3)  Number of recorded calls graded – According to the QATC survey, only 1% of survey respondents graded more than 5 calls in their quality monitoring calibration sessions. Another Yikes!

In order to measure not only consistency but accuracy as well, the call calibration process recommended by Customer Relationship Metrics utilizes a person known as ‘The Standard’ to generate the correct call scoring. The scoring of all members of the call center’s Quality Assurance staff is then compared to ‘The Standard’ using a statistic known as Pearson’s correlation. (For more information, get the Call Calibration ebook.) The p-value (the likelihood of getting a more extreme value than the test result when there is no effect in the population) of Pearson’s coefficient is largely contingent upon sample size. Small sample sizes (such as the sample sizes of fewer than 5 calls used by a majority of survey participants) provide less precision in the decision of whether an observed difference between ‘The Standard’ and the QA staff is significant or not.
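To make the sample-size point concrete, here is an illustrative sketch, not Customer Relationship Metrics' actual procedure. The scores are invented, and the significance test shown is the standard t-statistic for a Pearson coefficient, t = r·√((n−2)/(1−r²)): for the same correlation r, more graded calls means a larger t and therefore stronger evidence that the agreement (or disagreement) is real.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for testing r against zero (n - 2 degrees of freedom).
    A larger t means a smaller p-value, i.e. stronger evidence."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical scores from 'The Standard' vs. one QA analyst on the same calls.
standard = [70, 82, 65, 90, 75, 88, 60, 78, 85, 72]
analyst  = [74, 80, 70, 88, 73, 90, 66, 75, 83, 76]

r = pearson_r(standard, analyst)
print(f"Pearson r = {r:.3f}")

# The same r carries far more evidential weight with more calls graded:
for n in (5, 10, 30):
    print(f"n = {n:2d}  ->  t = {t_statistic(r, n):.2f}")
```

With only 5 calls per session, as most survey respondents reported, even a sizeable gap between 'The Standard' and a grader is hard to distinguish from noise.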

I could go on and on; in fact, I did in the ebook. Here is the fact: standard deviation is just fine if “you” want to be average (or don’t really care about getting it right). Keep in mind that your “customers” demand that you deliver higher quality than average.

If you want to view the complete QATC survey results, please follow this link: http://www.qatc.org/newsletters/spring2011/surveyresults.html

Read an Interview with the author of the ebook titled Eliminating the Worst Call Center Practice: Quality Monitoring Calibration

Learn how to get more samples, more insights, for less costs with your Quality Assurance program.

Republished with author's permission from original post.

Carmit DiAndrea
Carmit DiAndrea is the Vice President of Research and Client Services for Customer Relationship Metrics. Prior to joining Metrics, Carmit served as the Vice President of Behavior Analytics at TPG Telemanagement, a leading provider of quality management services for Fortune 500 companies. While at TPG she assisted clients in measuring behaviors, and provided management services to assist in affecting change based on newly created intelligence.


