These are edited notes from our AMA about tagging with Michael Rihani (Tilt), Andrew Spittle (Automattic), and Diana Potter (Customer.io) moderated by Morten Jensen.

What value do you personally get out of tagging support tickets, from the perspective of a support team?

Diana: I tag everything so I can then report on it in more detail. I can go in and know that 20% of our support volume pertained to one particular feature, and of that 20%, 15% were bugs and the remaining 85% were questions. And of those questions, 50% could be covered in docs. It takes some setup, but it’s super valuable to have that data and then watch the numbers shift.
Either way, I have some kind of categorization on every ticket.
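
The rollup Diana describes boils down to counting tag combinations. A minimal sketch in Python, where the ticket data and tag names are all hypothetical:

```python
from collections import Counter

# Hypothetical export: each ticket carries a feature tag and a type tag.
tickets = [
    {"feature": "campaigns", "type": "bug"},
    {"feature": "campaigns", "type": "question"},
    {"feature": "api", "type": "question"},
]

total = len(tickets)
for feature, count in Counter(t["feature"] for t in tickets).most_common():
    types = Counter(t["type"] for t in tickets if t["feature"] == feature)
    breakdown = ", ".join(f"{100 * n / count:.0f}% {kind}s" for kind, n in types.items())
    print(f"{feature}: {100 * count / total:.0f}% of volume ({breakdown})")
```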

Andrew: I get value out of being able to look back on my own work and understand where, product-wise, my time went.

Michael: We use custom Zendesk ticket fields to capture the kind of information others are referring to. We use tags for more one-off things, such as a “needs_faq” tag if we need to make an FAQ to answer that ticket’s general question, and tags for time-sensitive things that will only exist for a week or two. I like to think of it as fields for everything, tags for the few exceptional things.
Since we can require agents to fill out custom fields, we gather critical categorization fields on every single ticket, whereas our Zendesk tags are completely optional.
Every ~2 weeks we pull up all tickets that have “needs_faq”, and once we make the FAQ, we remove the tag from the ticket.
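
That biweekly pull is essentially one query against Zendesk’s ticket search. A minimal sketch, where the subdomain and credentials are placeholders:

```python
import requests

SUBDOMAIN = "yourcompany"                        # placeholder
AUTH = ("agent@example.com/token", "API_TOKEN")  # Zendesk API token auth

# Search for tickets still carrying the needs_faq tag.
resp = requests.get(
    f"https://{SUBDOMAIN}.zendesk.com/api/v2/search.json",
    params={"query": "type:ticket tags:needs_faq"},
    auth=AUTH,
)
resp.raise_for_status()
for ticket in resp.json()["results"]:
    print(ticket["id"], ticket["subject"])
```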

What sort of process do you have for reviewing the data?

Diana: I check the reports weekly to tag anything that’s missing a tag and to see what tags are being used, etc.
And once every couple of weeks we do the same as Michael: grab the tickets tagged needs_faq or update_faq, etc., and make those doc changes.

Andrew: Nothing fancy, just manually pulling up a report or downloading the CSV from Olark.
I have a recurring todo item every Friday to look at things. I aim to hit it every other week at the latest.

How many people are on your teams, and does everyone categorize/tag?

Diana: We’re 4 in total on the support team, and 13 in total working on tickets (whole company support, yo), and everyone tags, but not everyone remembers on every ticket. So I review weekly to tag anything missed and remind people.

Andrew: People have to remember to tag. At least, I’m not the only one who, in Slack, sometimes goes “Shit, I haven’t been tagging!” after like 2 hours of live chat.

Michael: We have 3 full-time CS employees on our team, and all 3 of us are required to fill out our categorization ticket fields.

Do you guys have preset tags that everyone knows of, or do your team members come up with tags on the fly?

Diana: I have a master list that needs to be followed, and people can propose new ones, but they can’t add them on the fly. I keep a spreadsheet that anyone can see, with a list of tags and descriptions of when they’re used (http://take.ms/FzyGz). I like to think they all make sense, in that they’re named after the parts of the software.
Because I can’t make it a required field, sometimes it’s easier to just tag them myself once a week. Honestly, right now it’s doable for me to tag the missing ones while I work with my team to be more consistent. Going forward, I think something like custom ticket fields that I can require will become necessary (which isn’t possible in Help Scout).
I like having an eye on things to prune tags and keep the list up to date. Like @andrewspittle mentioned, you can’t ignore things.
For feature requests, I just add a “featurerequest” tag to let us know to log it as a feature request (elsewhere). That way we have a separate picture of how popular certain requests are, we can keep track of who asks for what (because I notify everyone who requested a certain feature when it’s released), and I keep bloat to a minimum because it’s just 1 tag, not a separate one for each request.
Anything temporary is removed (we have tags for bugs and anything we keep pending to investigate or log as a bug), and every couple of months I prune the tags to make sure they’re still valid. Maybe we made changes in the product; maybe it makes sense to split the different API wrappers into separate tags rather than just “api” if we get dozens of PHP questions and very rarely questions about other languages, etc.
I automate a ton with temporary tags. Anything that involves needing a developer, needing investigation, etc., has nag automation that reminds us to make progress and not let tickets linger.
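
Help Scout handles this kind of nagging via its in-app workflows, but the underlying logic is simple. A sketch, where the tag names and threshold are hypothetical:

```python
from datetime import datetime, timedelta, timezone

NAG_TAGS = {"pending-bug", "needs-investigation"}  # hypothetical temporary tags
NAG_AFTER = timedelta(days=3)

def tickets_to_nag(tickets, now=None):
    """Return tickets carrying a temporary tag that haven't moved in NAG_AFTER."""
    now = now or datetime.now(timezone.utc)
    return [
        t for t in tickets
        if NAG_TAGS & set(t["tags"]) and now - t["updated_at"] > NAG_AFTER
    ]

# Feed this from your help desk's API, then post the results to chat or email.
```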

Andrew: We have both. We have a high-level set of pretty broad tags we like to track. There’s overlap, though, in what folks do individually or within one of our various support teams (https://cloudup.com/cmF29JUsdgi). The challenge is bloat and consistency. With something like 12 teams and 100 people, everyone has their own ideas of relevant tagging, and if even 5% of people forget, that’s significant.
I guess that’s how we handle temporary tags. Tags and support interactions can guide and inform product work, but ultimately the outline and issues for product work live elsewhere (GitHub, Trac, etc.). And those bug tickets, for example, would include a link to the particular transcript if we wanted to follow up.
We auto-tag incoming mobile app interactions (primarily around login issues).
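
The specifics of that auto-tagging aren’t covered here, but a keyword-based version might look like this sketch (the rules are entirely hypothetical):

```python
# Hypothetical phrase-to-tag rules for incoming mobile app interactions.
AUTO_TAG_RULES = {
    "can't log in": "mobile-login",
    "reset my password": "mobile-login",
    "app crashes": "mobile-crash",
}

def auto_tags(message: str) -> set[str]:
    """Return tags whose trigger phrase appears in the incoming message."""
    text = message.lower()
    return {tag for phrase, tag in AUTO_TAG_RULES.items() if phrase in text}

assert auto_tags("Help, I can't log in on my phone") == {"mobile-login"}
```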

Michael: Honestly, we rarely use “optional” tags; right now they’re primarily for FAQs or other one-off things we want to categorize. We have 3 additional ticket fields. We require all tickets to be categorized into a “Tilt Function”, which is basically the list of the ~70 things you can do, feature-wise, on our platform. Then our last ticket field is “Action Item”, such as Bug, Product Improvement, etc. You can force a ticket field to be Required, so what we really rely on is required “Ticket Fields”.
We went page by page of our entire service and came up with almost every single thing a customer could email us about.
Furthermore, we went through past tickets to ensure every ticket could be placed into one of our new function fields.
It took 2-3 employees about a half-day meeting to comb through everything and standardize the list. Once we came to the ~70, we tried it out for a week or two and merged a couple, added a few, and deleted a few until we felt comfortable that the list was solid and comprehensive.

Denise: I tag them as “feature request”, but I note the request down separately in a spreadsheet. I have an automation for on-hold tickets that fires if we switch a ticket from on-hold to any other state. And I have an automation for forwarding messages to the marketing and BD teams.

Crystal: I just have to categorize my feature requests for the product team; I pull a report for them each month. So my tags are usually things like “FR_automatic reporting” or “API_pull star rating for groups”.

So what are your top tips for managing bloat and maintaining consistency?

Andrew: A couple of ideas we’re in the middle of currently:
– Regularly reviewing what you’re tagging. In other words, treating tags like a garden: if you ignore them, they’ll fester and rot.
– Not trying to tag every part of every feature or every question when you have a really large product (or a large team).

Are there any best practices that you would advise new support team managers to keep in mind?

Diana: I have 50 tags right now, but it’s deliberate. Once we’re out of a certain software period (we just released a beta app and are running 2 apps in parallel right now), I’ll be pruning my tags down to about 20.

Andrew: I’m starting to think that the best Step 0 to effective tagging in support is to first ask product and development leads what they wish they knew more about in terms of customer issues.

Who outside of support benefits from tagging? Do you have any developer-facing resources built on top of your existing taxonomy? Some sort of “Top Tags” list, something like that?

Diana: So I use all of the information I have in weekly and monthly reporting that goes to the entire company (which admittedly is 13 people). It drives priorities in terms of bug fixing, new features, etc. I’m in a unique position in that it’s a small company and I have a fair bit of say in things. But having data is a big deal. Want to convince the product team that x is a problem? Tell them 30% of customer questions involve x.
My tags predate most everyone else in the company, so I’m super lucky in that all of this data has existed for a long time to be a driving force in product decisions.

Andrew: Honestly, no. Right now I don’t think we’ve really hit on an effective process or flow for taking tags within support issues and translating them into meaningful information for product teams or those outside of support. So who benefits? No one, really, for us.
Our “master” list isn’t very intense, and it’s always in a state of flux, for what it’s worth.

Michael: We use all the categorized ticket data to present trends in company-wide weekly emails and monthly meetings.

Crystal: Tags are huge for us. We’re about 25 people and I still thankfully have a seat at the product table. But I think it’s because I established the importance of showing this data to them early on.

Denise: I don’t have that master list and that’s the missing link.

Who COULD/SHOULD benefit from tagging outside of support?

Diana: Everyone! Just like every part of the company comes back to the customer, right? Well, tags can give you a summary of the customer, at least in terms of what we’re reacting to. Our sales team benefits, our marketing team benefits, the product team, etc. Even if it’s just an easy way to access specific ticket types to dive into questions and talk to specific customers to get more information.

Andrew: I think having well-done tagging can give you a storyline to communicate as your product evolves. Sort of like, why did you make ___ change? Or, why did you ship ___ feature? Well, in part because we knew ___% of customers were asking us about it.
So that touches on dev teams, marketing, etc.

Crystal: Even our content marketing team benefits. I tag content ideas based on questions our customers have about the review management industry and then share them weekly. I think the dev team benefits the most from mine.

Let’s look at the dev teams: what value could I bring to them next month if I started tagging today?

Diana: It’s way more manual than I’d like right now, but basically I look at things like this: http://take.ms/KSHM3. And then I compare them, rather painfully, week by week. Some tags I ignore, like misdirect.
But yeah, I’d see that, say, campaign_issues was 20% higher than the week before and then dive into why. Is it something to call out as a problem, or something temporary that we fixed? Something that a change in the product copy would fix? Etc. I check those numbers daily to keep an eye on things. I suspect as the team grows that’ll become tougher; I’m working out how to scale it better, and grabbing the info from the Help Scout API and building reports myself is part of that.
My tags also give info on where bugs are happening. We have twice-a-month “retros” where we talk about the good and bad in the software, and I use a lot of that tag-driven info in the stuff I bring up.
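
A minimal sketch of that week-over-week comparison, assuming two CSV exports with one row per ticket and a comma-separated “tags” column (the column name is an assumption):

```python
import csv
from collections import Counter

def tag_counts(path):
    """Count tags in a help desk CSV export with a 'tags' column like 'a,b,c'."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts.update(t.strip() for t in row["tags"].split(",") if t.strip())
    return counts

this_week, last_week = tag_counts("week2.csv"), tag_counts("week1.csv")
for tag, count in this_week.most_common():
    prev = last_week.get(tag, 0)
    change = f"{100 * (count - prev) / prev:+.0f}%" if prev else "new"
    print(f"{tag}: {count} ({change} vs last week)")
```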

Andrew: In my experience, dev teams are regularly swamped by bug reports from support. Having effective tagging in place can help them understand where to dive in, and to trust that by tackling ___ feature or bug they’re going to have a measurable, meaningful impact on the lives of support and users.

Michael: We run weekly reports using GoodData and Zendesk to find rising ticket trends and report to the company why specific labels/tags are on the rise and what action items we need to take.
As an example, if there’s a 425% week-over-week increase in Receipt Requests, we can explain what happened and what changes we suggest for fixing anything from the past and preventing such requests in the future.
Sometimes the action requires developers fixing bugs; sometimes it’s product improvements. But most of our reporting helps the dev/product teams.
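
For reference, the week-over-week figure is just the relative change, so a 425% increase means volume more than quintupled. The ticket counts below are hypothetical:

```python
def wow_change(last_week: int, this_week: int) -> float:
    """Week-over-week percentage change."""
    return 100 * (this_week - last_week) / last_week

# e.g. 20 Receipt Request tickets last week, 105 this week:
assert wow_change(20, 105) == 425.0
```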

What sort of time investment is needed to make tagging work, and to create those reports and meet with the other teams?

Diana: Like anything else in life, it’s an investment that pays dividends. It took me probably a day in total to set up my first list of tags (reviewing common questions, the software, etc.), maybe an hour or two a month to review them, plus another hour a week to make sure everything is tagged and to report on it. Worth it.

Michael: (1) Do the initial setup of a “master list” of tags, or at least the important ones you’d want to track over time.
(2) I highly recommend requiring key data points/categorization for all tickets to ensure high accuracy and reliability.
(3) Set up weekly or monthly email reports to the company, and also meet with key team leaders to communicate what the customer ticket data and trends are telling the company. +1 on it all being totally worth it.

Any painful lessons you’ve learned that you would like to share to help spare others?

Diana: 1. Don’t get fixated on your list. Get buy-in. Realize people will try and add new things.
2. Not everyone will care as much as you until you show them why it’s worth it; then they’ll be like “OMG, why did I ignore you????” But you have to get them there first.
3. Don’t have a crazy list of things.
4. Have your list of categories or tags follow the product as much as possible.
There is no such thing as a perfect list. It’s a living thing that will change over time.
If you see it breathing or looking at you strangely, run. But otherwise, just go with the flow.
If you’re on a small team, just jump in feet first and realize you can experiment and fix things. It’s a category; the customer doesn’t see it. Worst case scenario, you just wasted some time and learned something.

Andrew: Yeah, I was going to say “start small” as a lesson learned. +1 to @dpotter’s #4, too.
If you’re at a large company or on a large team, I’d suggest picking one area or development team to start with. Build that out through repetition to find what works and what you can take across the company to other teams.
I think our early misstep with tags was trying to tackle everything that about 150 people across 20 teams worked on.

Michael: A painful lesson: we deleted old custom ticket fields, assuming Zendesk would still have a record of them on old tickets. That was not the case, and those tickets then just had blank records for those fields. Not a huge mistake, as we only lost a little bit of data. And you’ll never get it “perfect”, so we’re always adapting and changing. But it was a good mistake to make early while we were “just doing” and “starting small”, which I also highly recommend.

Is there any other data you would like to be able to pull out of your support tickets that no one has figured out how to do yet?

Diana: I’ve been working on the churn part manually, along with reporting on account activity (what kind of actions did they take in their account?) to find patterns.
My next big project is to start merging some of our support data with our customer data to find patterns. If someone finds a magical way to do that, it would save me so much time. I want to be able to say that customers who have been with us for 1 year, who use these 3 features, and who send us x number of support tickets per month are x% less likely to churn, or whatever crazy stats I can come up with.
I already have comparisons between # new customers and # new support tickets in a given week. That’s been driving some of our onboarding docs and how/when we reach out to people.
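
A minimal sketch of the support-plus-customer merge Diana (and Michael, below) describe, assuming both exports share a customer ID; all file and column names here are hypothetical:

```python
import pandas as pd

tickets = pd.read_csv("tickets.csv")      # columns: customer_id, tag, created_at
customers = pd.read_csv("customers.csv")  # columns: customer_id, signup_date, churned

# Tickets per customer, joined onto the customer record.
per_customer = tickets.groupby("customer_id").size().reset_index(name="ticket_count")
merged = customers.merge(per_customer, on="customer_id", how="left")
merged["ticket_count"] = merged["ticket_count"].fillna(0)

# Compare churn rates for customers who did vs. didn't contact support.
print(merged.groupby(merged["ticket_count"] > 0)["churned"].mean())
```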

Andrew: I don’t know about “no one”, but we haven’t tackled sentiment analysis yet. We’re hoping to start working on both churn and conversion soon. My hope is it can illustrate (to both support and !support) the impact those conversations have.

Michael: Yes, merging support data with our customer data would unlock other patterns, such as: are customers who reach out to the support team more or less valuable than the average customer? Quantifying revenue, retention, and profits from the various customers who have been helped would be great to see.

Ben: One of the things we’ve talked about doing is automating a way to identify cases “for review” by another teammate as a way of getting a 2nd set of eyes on conversations. The goal is to sharpen our support acumen.