How closely and consistently do the interactions you have with customers align with how you think they should be? Training for and maintaining a high quality support experience, especially for high growth teams, means you should be measuring much more than volume and response times. I’ve already discussed on this blog ways to think about a more holistic set of support metrics. One of the things I mentioned, but didn’t dive into, was measuring the quality of your interactions. Support QA is something I’m really excited about.

How can support teams set clear expectations around what a high quality interaction is? How can the qualitative be quantitatively measured? How can you integrate your expectations around quality into the very fiber of your team culture? Support QA is a big topic and could easily fill a book. This will not be a book. In this post, I’m going to try to cover (briefly, but hopefully effectively) how to distill values into scorable metrics and how to figure out tracking, cadence, and feedback loops. I’ll also go into calibration and program roll-out, with integration into hiring and training.

What is a High Quality Support Interaction?

I was at a workshop for customer support leaders recently. One of the things we discussed was support QA. We broke up into groups and were asked to list all the things we valued in our support communications. Not specific communication action items – like following an approved greeting script or other company protocols – but the core values we wanted displayed in our communications. Then each group was asked to pick its top four values. Members of the teams represented many companies spanning many different markets. As much as we all like to believe that what we value is unique to our companies and teams, the top four values were remarkably consistent from group to group. What we converged on at the workshop is also, I dearly hope (because I place a lot of emphasis on these things on my team), what customers value:

  1. Correctness/Accuracy
  2. Completeness
  3. Empathy (top of the list for every team!)
  4. Professionalism and Tone

Welp. That’s great. But how does all this fluffiness get us to a QA score? Let’s start first by defining very clearly what each of these things means in the context of an interaction. For the four values above, I might start with something like this:

  1. Correctness/Accuracy: Did we correctly interpret the core question? Did we give a direct and correct answer to that question?
  2. Completeness: Did we cover all the bases? Did we answer all the questions asked? Knowing the questions an answer is likely to prompt, did we answer those potential follow-up questions, too?
  3. Empathy: Were we being kind humans talking to humans? Did we acknowledge the likely emotions around the question? Or connect with personal details brought up during the course of the conversation?
  4. Professionalism and Tone: If written, was everything spelled correctly? Formatted in a way that was clear and made sense? If spoken, were we confident and clear? Overall, are we signaling that we are intelligent, trustworthy professionals?

Now take this back to your team. If these are to become your core communication values, you’d best get some consensus.

How Does One Score Empathy?

You’ve reached quorum on the values and what they mean for your team. It’s time to figure out how to, as objectively as possible, see if those values are present in your communications. Let’s hammer out a scorecard! I like to keep forms like this super dead simple. Here is a sample email QA form I have successfully used to show just how dead simple you can make this:

[Sample email QA scorecard]

As long as you are clear on what your values mean, your form does not need to be fancy or complicated. Don’t make it fancy and complicated if you can avoid it. If your intention is to closely monitor quality, you want your form to be quick. And easy. And to let you score samples of multiple tickets per agent without the person doing the scoring wanting to die. And you want all of that while still getting actionable information out of it.

The form above scores a sample of 10 email tickets for a single agent. When checking to see if a ticket is “Correct”, run through the questions we set next to “Correctness” in the last section. If we can answer all of them with a yes, that box gets a 1. Otherwise a 0, and a note to the right detailing what was off. No half-points or scales to muddy things. Simple binary. Add up each column to understand where this person is performing beautifully and where they might need more focus. Average the per-ticket totals for an overall score. 4 is amazing! 0 is very not amazing!
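If it helps to see the arithmetic, here is a minimal sketch of that scoring logic. The value names, the `score_sample` function, and the sample data are all illustrative, not from any real QA tool; the point is just the binary per-value scoring, the column totals, and the average ticket score.

```python
# Illustrative sketch of the binary scorecard math described above.
# Each ticket is a dict mapping a value name -> 1 (all of that value's
# checklist questions answered "yes") or 0 (anything was off).

VALUES = ["correctness", "completeness", "empathy", "professionalism"]

def score_sample(tickets):
    """Summarize scorecard rows: per-value column totals and average ticket score."""
    totals = {v: sum(t[v] for t in tickets) for v in VALUES}      # where to focus coaching
    per_ticket = [sum(t[v] for v in VALUES) for t in tickets]     # 0-4 per ticket
    average = sum(per_ticket) / len(per_ticket)                   # 4 is amazing, 0 is not
    return totals, average

# A hypothetical two-ticket sample (a real run would use ~10 tickets):
sample = [
    {"correctness": 1, "completeness": 1, "empathy": 0, "professionalism": 1},
    {"correctness": 1, "completeness": 0, "empathy": 1, "professionalism": 1},
]
totals, average = score_sample(sample)
```

The column totals tell you which value needs attention (here, completeness and empathy each missed once), while the average gives the single overall number for the agent.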

We have a form! But wait, there’s more!

Testing and Calibration

Now that you have a form, how do you know that it’s good and it works? Before this form becomes canon, it must be tested and calibrated. You want to be sure that what you’ve put together adequately captures what’s going on in what you are scoring, that your scoring is consistent, and that the results are useful for you and your team. There are a number of ways to do this. Things I have done, in the order I prefer them:

  1. Test the form on your team applicants’ writing samples: I love this approach. I require writing samples from anyone interested in joining our team. When I’ve tested and rolled out QA during a high volume hiring cycle, giving team members the form as a framework to evaluate writing samples was a great way to test. You can also do multiple scorecards per candidate to see how consistent your scoring ends up being.
  2. Test the form with peer review: This can be a great way to get feedback on gaps in the form and its broader usability, and it gives people a structured way to practice constructive feedback with each other. It can be a high bandwidth approach, depending on the channel you are scoring, and each person will likely score only one or two others. With so many people scoring and little scorecard overlap, checking for score consistency can be hard.
  3. Choose a core team: Identify in advance who your quality scoring folks are and have them run a sample. Make sure your team is aware, and be very clear that this is just for testing. Do a few runs and make sure that your core team is making similar scoring decisions, with as little variation in scores as possible.
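One simple way to put a number on “as little variation as possible” is to have every calibration scorer rate the same tickets and look at the worst-case disagreement. This is a sketch under that assumption; the scorer names and the `max_spread` helper are made up for illustration.

```python
# Illustrative consistency check for calibration runs: several scorers
# each rate the SAME tickets on the 0-4 scale, and we find the largest
# gap between any two scorers on any single ticket.

def max_spread(scores_by_scorer):
    """Return the largest per-ticket gap between any two scorers.

    scores_by_scorer maps scorer name -> list of ticket scores, with
    tickets in the same order for every scorer.
    """
    columns = zip(*scores_by_scorer.values())  # regroup scores ticket by ticket
    return max(max(col) - min(col) for col in columns)

# Hypothetical calibration run: three scorers, four shared tickets.
scores = {
    "alex":  [4, 3, 2, 4],
    "brook": [4, 2, 2, 4],
    "casey": [3, 3, 2, 4],
}
spread = max_spread(scores)  # worst-case disagreement across the sample
```

A spread of 0 or 1 suggests the team is reading the form the same way; anything larger is worth a conversation about what a value’s checklist questions actually mean before the form becomes canon.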

QA Program Roll-Out and Team Integration

You’ve defined values. You’ve figured out how to score them. You’ve tested the form and found it good. Now you need to turn this whole regular quality testing thing into a thing. Hopefully, with lots of team enthusiasm and buy-in. How? I’ve found an incremental approach useful:

  1. Start with new hires: Develop training assets that detail your support communication values and involve existing team members in communication best practice training. QA becomes a valuable resource to the new hire to figure out, with their manager, where to focus effort.
  2. Include more tenured team members: Once QA is an established part of training and onboarding, broader team roll-out becomes a more natural extension of your team’s quality program.

Cadence and Feedback Loops

How often should you score quality? The real question is: how often is it useful? When beginning a quality program, it can be helpful (once you have an idea of your baseline) to set performance goals, score with more frequency, and use the information gained to coach your team to those goals before tapering down. For reference, I generally start with once a week until we’re hitting the mark, then taper off to once a month or so. Unless we’ve just launched something big and new – then back to once a week. As simple as I hope you have made your form, QA still takes time and resources.

Taking the time to put together a thoughtful support QA program is worth the effort. It forces you and your team to think deliberately about, and reach some consensus on, what you value and how those values manifest. It keeps those values top of mind and makes them real.