Why do we need support?
Things break, and it takes time to fix them.
And when things break, customers get angry and frustrated.
And when they need help, your support team is the one to take up the mantle!
Supporting customers is hard.
But it shouldn’t be hard to analyze and measure your support team’s performance.
Of course, the easiest way to check performance is to answer one question: how happy are your customers?
In this piece, I will walk you through the actionable metrics required to analyze customer happiness and quality of customer support.
Additionally, I am giving away a plug-and-play sheet to track the aforementioned metrics.
How to quantify customer happiness?
The customer satisfaction score has been in use for over a decade. To understand how satisfied your customers are, companies send a question like the one mentioned below or a variation of it.
How would you rate your overall satisfaction with our services?
This question is usually sent right after an interaction with the customer support team, after completion of a requested service, or at the end of a survey.
The customer is supposed to respond with one of the following options:
- Very unsatisfied
- Unsatisfied
- Neutral
- Satisfied
- Very satisfied
This data is ultimately used to calculate satisfaction with a simple formula:

CSAT = (Number of satisfied customers / Total CSAT responses) × 100

This gives us a percentage between 0 and 100.
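As a quick sketch, the formula above translates directly into a few lines of Python. The response labels here are assumptions for illustration; substitute whatever your survey tool actually returns.

```python
# Minimal CSAT sketch. "Satisfied" responses are assumed to be the top
# two options on the five-point scale described above.
SATISFIED = {"satisfied", "very satisfied"}

def csat_score(responses):
    """Return CSAT as a percentage of satisfied responses."""
    satisfied = sum(1 for r in responses if r in SATISFIED)
    return round(satisfied / len(responses) * 100, 1)

responses = ["very satisfied", "satisfied", "neutral", "unsatisfied", "satisfied"]
print(csat_score(responses))  # 3 of 5 satisfied -> 60.0
```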
The ideal score varies depending on the product, industry, and duration of the interaction, though for a B2B SaaS company anything above 90% is a pretty good score.
The CSAT score, though simple to measure, should never be used in isolation.
To begin with, CSAT scores are subjective and driven by emotions, which means they are affected by an individual’s (in this case, your customer’s) current emotional well-being.
This subjectivity adds a healthy dose of uncertainty to the score.
The best way to use CSAT is to correlate it with some of the relevant metrics mentioned below:
- Average resolution time
- Average response time
- Total number of Unsatisfied/Satisfied customers
By our very nature, we are more moved by dissatisfaction than by satisfaction.
Effectively, the probability of an unsatisfied customer giving negative feedback will always be higher than the probability of a happy customer giving you positive feedback.
In most cases, you are therefore more likely to receive a very-unsatisfied rating than a very-satisfied one.
This means your measure will always tip toward unsatisfied customers.
It makes sense to keep track of how many customers go away unsatisfied, for which type of issue, and why.
The CUSAT score, like the CSAT score, is calculated on the total responses:

CUSAT = (Number of unsatisfied customers / Total responses) × 100
This helps us understand the total number of unhappy customers out of all the customers who came to your team for support.
And when correlated with issue types and with average resolution and response times, it gives us actionable data points we can use to improve overall satisfaction.
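A minimal sketch of that breakdown, assuming each survey response is a (rating, issue type) pair; both the labels and the issue categories are made up for illustration.

```python
from collections import Counter

UNSATISFIED = {"unsatisfied", "very unsatisfied"}

def cusat_score(tickets):
    """CUSAT = unsatisfied responses / total responses * 100."""
    unsatisfied = sum(1 for rating, _ in tickets if rating in UNSATISFIED)
    return round(unsatisfied / len(tickets) * 100, 1)

def unsatisfied_by_issue(tickets):
    """Count unsatisfied responses per issue type, most common first."""
    return Counter(issue for rating, issue in tickets if rating in UNSATISFIED)

tickets = [("satisfied", "billing"), ("very unsatisfied", "login"),
           ("unsatisfied", "login"), ("satisfied", "billing")]
print(cusat_score(tickets))           # 2 of 4 -> 50.0
print(unsatisfied_by_issue(tickets))  # Counter({'login': 2})
```

Knowing that, say, login issues drive most of the unsatisfied responses tells you where to focus first.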
Average Resolution Time
Average resolution time measures how long it takes to resolve a query, from the moment the customer raises the issue to the moment the ticket is marked closed.
Those who have tracked this metric have always had one major point of concern.
Different problems take different amounts of time to resolve.
Given that, a single blended average for this metric doesn’t make sense, especially when tracking performance.
It becomes necessary to:
- Segregate your queries into named groups
- Approximate the average time needed to resolve each group of queries
Then measure this metric per group.
Correlate it with your CUSAT and CSAT score to gain actionable insights.
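The per-group measurement above can be sketched as follows, assuming each ticket is a (category, hours-to-resolve) pair; the category names and times are dummy values.

```python
from collections import defaultdict

def avg_resolution_by_group(tickets):
    """Average resolution time (hours) per named query group."""
    groups = defaultdict(list)
    for category, hours in tickets:
        groups[category].append(hours)
    return {cat: round(sum(h) / len(h), 2) for cat, h in groups.items()}

tickets = [("password reset", 0.5), ("password reset", 1.5),
           ("data migration", 20.0), ("data migration", 28.0)]
print(avg_resolution_by_group(tickets))
# {'password reset': 1.0, 'data migration': 24.0}
```

A blended average here would be 12.5 hours, which describes neither group; the per-group numbers are what you compare against your approximated targets.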
Average First Response Time
Average first response time is the time the team or a team member takes to send the first response to a customer query.
Correlation is key here: does your CUSAT score increase with average response time, or does it stay stagnant?
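One way to check that relationship is a plain Pearson correlation over matched periods, say, weekly averages. The sketch below uses only the standard library, and the weekly numbers are dummy values.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Dummy weekly averages: first response time (minutes) and CUSAT (%).
response_minutes = [12, 18, 25, 31, 40]
cusat_percent = [4.0, 5.5, 7.0, 8.0, 10.5]

r = pearson(response_minutes, cusat_percent)
print(round(r, 2))  # close to 1.0: CUSAT rises as response time grows
```

A coefficient near +1 suggests slower first responses go hand in hand with more unsatisfied customers; a value near 0 suggests response time isn’t the driver and you should look elsewhere.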
Total Number of Satisfied and Unsatisfied Customers
Satisfied customers consist of a combination of two sets:
- Customers who sent a satisfied response via the survey
- Customers who interacted with support teams and have upgraded or renewed your services
An increase in the number of customers in these sets shows your support team is working. Compare it with CSAT and CUSAT scores for a better view of how your support team is performing.
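Since a customer can appear in both sets, a set union avoids double counting when combining them; the customer IDs below are hypothetical.

```python
# Hypothetical customer IDs for the two sets described above.
survey_satisfied = {"c1", "c2", "c5"}        # sent a satisfied survey response
renewed_after_support = {"c2", "c7"}         # upgraded/renewed after a support interaction

# Union keeps "c2" once even though it appears in both sets.
satisfied_customers = survey_satisfied | renewed_after_support
print(len(satisfied_customers))  # 4 unique satisfied customers
```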
How would you use these metrics?
Usually, you would look at using these numbers and associated comparisons to:
- Improve and optimize your processes and performance for better support and happier customers
- Build reports for your weekly, monthly, quarterly or yearly stand-ups or meetings
Process and Performance Optimization
Optimization can be either team-wide or for individual team members.
Each of these metrics, especially the team-wide ones, will be asked about and used in your weekly, monthly, or yearly reports.
The problem, though, is not the metrics themselves; it’s how you use them.
Your reports should drive action and promote vertical thinking.
Consider correlations: CUSAT vs. churn, average response time vs. CUSAT. These help you drive change: what would happen if you could decrease the average response time?
Would it increase CSAT score or lower CUSAT?
Look at the trends: is average response time decreasing over time, or is it increasing?
Compare this with your correlation to find possible reasons.
Ultimately, metrics can only show you whether something works. It’s up to you to correlate actions to outcomes and build a system that works for you.
A sample spreadsheet with some dummy values for metrics, score, and correlations can be found here. Feel free to download and customize as required.
At Kommunicate, we are envisioning a world-beating customer support solution to empower the new era of customer support. We would love to have you on board for a first-hand experience of Kommunicate. You can sign up here and start delighting your customers right away.