On our quest to help companies increase the quality of their customer service, Klaus is sponsoring the #quality channel in the Support Driven Slack. Here’s our insight into a popular topic discussed in the community: how many factors should we rate when assessing quality? Which ones? 

Behind every great support team, there is a great conversation review process – aka customer service quality assurance. If you’re aiming to provide top-notch support, you’re probably already doing it, too.

However, if you’re not quite sure how many rating categories to include in your scorecard, don’t worry. It’s a question that has come up a lot in Support Driven’s #quality channel and we’ve got you covered.

Instead of giving the good old but tiresomely vague “it depends”, we dug into what’s happening in the world of conversation reviews. Since we provide a conversation review tool called Klaus, we have a large data set at our fingertips for examining the question: what are people who do quality assurance actually looking for? We pulled anonymized metadata from our tool to see how our users work with rating categories.

Here are the spring/summer 2019 trends in conversation reviews. 

Most teams use fewer than 5 rating categories

When we looked into the most popular number of rating categories to have in scorecards, we had a clear winner.

62% of our teams use 3 rating categories for quality evaluations.

76% of customer service teams do internal conversation reviews using 2-4 rating categories. There are clear benefits to having a small number of general rating categories:

  • It’s easier to make 2-4 mini-decisions than to assess dozens of aspects in each conversation. Short scorecards keep reviewers focused and help them avoid the decision fatigue that can lower the quality of their feedback.
  • Giving a few ratings per ticket keeps the conversation review process fast, so people are more likely to participate in it. The longer and more complex the review gets, the more tedious the task feels.

It’s easy to make conversation reviews a part of your daily routine if you keep assessments down to 2-4 criteria. Rating tickets against a short scorecard can take just a couple of minutes of work, which makes it the perfect solution for peer or manager reviews and for any team with limited time on their hands.
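
If you’re curious how stats like the 62% and 76% above are derived, here’s a minimal sketch of computing the same distribution from an anonymized metadata export. The file and column names are made up for illustration – this is not our actual pipeline or schema:

    import pandas as pd

    # Hypothetical export: one row per team, with the number of rating
    # categories configured in that team's scorecard. The file and column
    # names are illustrative, not Klaus's real schema.
    df = pd.read_csv("scorecards.csv")  # columns: team_id, category_count

    # Share of teams per category count (e.g. 3 -> 0.62)
    distribution = df["category_count"].value_counts(normalize=True).sort_index()
    print(distribution)

    # Share of teams using 2-4 categories (e.g. 0.76)
    share_2_to_4 = df["category_count"].between(2, 4).mean()
    print(f"{share_2_to_4:.0%} of teams use 2-4 rating categories")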

3 rating criteria for support QA

If you’re short on time for QA, here are the top 3 things to assess

Remember when we said that 62% of our teams use 3 rating categories for quality evaluations? This is what we found the majority of support teams focus on when assessing the quality of the responses they’re sending out:

  • The completeness and correctness of the solution
  • Empathy and tone expressed in interactions
  • Accuracy in product knowledge

If you’re looking for a quick way to set up the QA process, start with these three evaluation criteria. You can change them and add new ones as you learn what works best for your team.
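To make that concrete, here’s a hypothetical sketch of a minimal scorecard as a data structure. The category names and the 1-5 scale are illustrative choices, not a Klaus-specific format:

    from dataclasses import dataclass

    # The three starter categories; names and the 1-5 scale are illustrative.
    STARTER_CATEGORIES = [
        "Solution (completeness and correctness)",
        "Tone (empathy expressed in the interaction)",
        "Product knowledge (accuracy)",
    ]

    @dataclass
    class Review:
        ticket_id: str
        ratings: dict  # category name -> score on a 1-5 scale

        def average_score(self) -> float:
            return sum(self.ratings.values()) / len(self.ratings)

    review = Review(
        ticket_id="T-1042",  # hypothetical ticket
        ratings=dict(zip(STARTER_CATEGORIES, [5, 4, 5])),
    )
    print(f"Average score: {review.average_score():.2f}")  # Average score: 4.67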

Sometimes more is more

However, we also found that almost 7% of companies use 10 or more rating categories in their conversation reviews. These are the teams that find it easier to give feedback when scorecards are broken down into specific data points.

The team with the highest number of rating categories evaluates their conversations based on 45 criteria. Some teams set up different categories for specific tickets, based on support channels or content.
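
A per-channel setup can be as simple as a lookup from channel to category list. Here’s a hypothetical sketch – the channels and categories are invented for illustration, not a Klaus configuration format:

    # Hypothetical per-channel scorecards.
    SCORECARDS = {
        "email": ["Solution", "Tone", "Grammar", "Product knowledge"],
        "chat": ["Solution", "Tone", "Response speed", "Greeting used"],
        "phone": ["Solution", "Empathy", "Call control"],
    }

    # Generic fallback for channels without a dedicated scorecard
    DEFAULT_CATEGORIES = ["Solution", "Tone", "Product knowledge"]

    def categories_for(channel: str) -> list:
        return SCORECARDS.get(channel, DEFAULT_CATEGORIES)

    print(categories_for("chat"))  # ['Solution', 'Tone', 'Response speed', 'Greeting used']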

Here’s why you should consider having a long scorecard with multiple rating categories:

  • Capture the nuances in your customer conversations. With precise evaluation points, you can zoom into the details of your support interactions. For example, you can check whether the agents used proper opening lines, or count the times that they mentioned the customer’s name in their responses.
  • Clear-cut criteria make feedback actionable. It’s easier for your agents to understand and act upon the evaluations that point to specific parts of their performance. That’s where your agents can start leveling up their game. 
  • Find input for training and coaching sessions. Keep an eye on how your team performs in all categories over time to find any negative trends you need to address. For example, if you notice that your agents are continually missing their chances to provide additional information that could upsell the product, you might want to revisit this practice in your next team training (see the sketch after this list for one way to spot such trends).
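
Spotting those negative trends is easy to automate once reviews pile up. Here’s a minimal sketch that flags categories whose monthly average dropped, assuming a hypothetical review export (the column names are illustrative, not Klaus’s real schema):

    import pandas as pd

    # Hypothetical export: one row per individual rating.
    reviews = pd.read_csv("reviews.csv", parse_dates=["reviewed_at"])
    # columns: ticket_id, category, score, reviewed_at

    # Average score per category per month
    monthly = (
        reviews
        .groupby([pd.Grouper(key="reviewed_at", freq="M"), "category"])["score"]
        .mean()
        .unstack("category")
    )

    # Flag categories whose average dropped in the latest month
    latest_change = monthly.diff().iloc[-1]
    for category, delta in latest_change[latest_change < 0].items():
        print(f"{category}: down {abs(delta):.2f} points vs. the previous month")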

While most teams prefer to keep conversation reviews short and simple, there can be enormous value in having lots of rating categories. If that’s what you’re after, and you have the resources to make it happen, use a scorecard that includes all the information you need to maintain a high level of quality in your customer service.


So… It depends

Though we promised to refrain from giving the usual “it depends” type of advice, we have to bring it up anyway. That’s the Support Driven way, after all – the answer to most questions in the SD Slack is often “it depends.” The number of rating categories you should use in conversation reviews rests on what you’re looking for in the results and on the resources you have at hand.

Here’s what you should consider when building your scorecard:

  • What are your goals? The more targets you have for conversation reviews, the more criteria you need for evaluation. Make sure your scorecard includes at least one rating criterion for each of your goals.
  • How far are you from your benchmarks? If your agents are already performing at a high level and you want to maintain an even level of quality across your team, a few basic categories can be enough. However, if you’re on a quest to make drastic improvements to the quality of your customer service, you might need 5 or more rating categories to cover the different aspects of the conversations that agents need to improve.
  • Who conducts the reviews? With dedicated QA staff, the length of your scorecard might not be a question of time and efficiency at all. However, if you’re doing manager or peer reviews, you’ve probably got a limited amount of time on your hands. To keep support feedback efficient, you might have to cut down on the assessment criteria.

You’ll benefit from having many rating categories if your team has multiple goals or lots of room for improvement. If you also have enough resources to conduct those thorough conversation reviews, there’s a lot you can win in this game. 

How many categories do you evaluate when assessing quality, and what are those categories? Join the #quality channel and invite your team over, too – we’d love to hear how everyone is approaching this. Read more about customer service quality on the Klaus blog and sign up for the weekly Support Driven newsletter. Meow.

 


Valentina Thörner is the Head of Product at Klaus. Additionally, she is an opinionated writer, pragmatic solution-finder, German expat in Spain, twin mom, barefoot runner, expert in leading teams across geographies and time zones, and author of the remote leadership bible “From a Distance”. Valentina has over a decade of experience leading and working location-independently and has learned a thing or two about wrangling a team, 397 conflicting priorities, and two kids.

 
