This article is from Emily Chapman (@emilychapman in the chatroom) and it originally appeared on the Trello Blog.

Our support team uses user feedback surveys to connect with Trello users on a more human level.

If you’ve written in to Trello’s support team in the last few weeks, you may have noticed a little bit of extra flair. We added a “how did I do?” rating option at the bottom of our emails back to you. We’ve been tracking those replies since we started gathering them, and it’s led to some unexpected (and useful!) insights for our support team.

Sometimes, a frustrated user might walk away from the case. And, frankly, can you blame them? If the only contact they have is with an agent who is perhaps misunderstanding them, it’s easy to see why they might think their issue is going in one of our ears and promptly out the other. We wanted to give our users another outlet to let us know about a case that may require extra attention.

So far, it’s been a success: we’ve heard valuable feedback from users who’ve had frustrating experiences, and, in many of those cases, have been able to reach back out and help resolve the user’s pain point. In a few cases, we’ve even changed team policies based on user feedback.

Essentially, these one-on-one pieces of feedback are the opposite of big data—it’s tiny data, and it helps us provide better support.

How does tiny data work?

The support team uses Help Scout to manage incoming and outgoing tickets. Help Scout has a built-in feature to add rating text at the bottom of each email, with the option to customize the copy of the text itself. (We took advantage of that to replace “okay” with ¯\_(ツ)_/¯.) Users pick one of three options (great, okay, or not good) by clicking on a link in the email that they receive.

Those ratings are piped automatically into Help Scout’s reports system. We can see reports by time period and by agent, and quickly get an overall view of how things are going.
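The aggregation behind those reports is simple: each reply carries one of three labels, and the report is the share of each label over a time window. Here's a minimal sketch of that idea in Python (the data and function names are hypothetical, not Help Scout's actual API):

```python
from collections import Counter

def rating_breakdown(ratings):
    """Return each rating label's percentage share of all replies."""
    counts = Counter(ratings)
    total = sum(counts.values())
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical sample mirroring the split described below
sample = ["great"] * 91 + ["okay"] * 4 + ["not good"] * 5
print(rating_breakdown(sample))
# → {'great': 91.0, 'okay': 4.0, 'not good': 5.0}
```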

We analyze the results of this data, but instead of resting on our laurels over the 91% rated Great, we dig into the 5% rated Not Good. This method isn’t designed to harp on the negative. Rather, it’s a way to examine each and every pain point on a granular level.

We do this because at Trello, the individual matters. We want to listen and, more importantly, learn from our users just as much as we want to solve their problem. This drilled-down focus is at the heart of tiny data, and we think it’s more valuable than any trend chart or data dump out there.

So, what does it mean?

Sometimes, the feedback means the specific agent who handled the case misunderstood something about how the app works, and needs to be brought up to speed—and perhaps that same info needs to be documented so that there’s no confusion in the future.

Or, it may mean that we need to change one of our saved replies, or even how the support team handles entire types of cases. When a user wrote in with some very valid negative feedback about the tone of our support inbox’s autoresponse, we took time to evaluate the feedback, and realized she was right.

We thought that our messaging gave an accurate idea of our scope—clarifying that there are millions of Trello users and only four of us—but instead it read as callous. To add insult to injury, a line that was intended to give a sense that Trello often changes (good! dynamic!) came across as, “figure it out—we won’t help you” (mean! bad!).

Without that user feedback, we never would have known those changes needed to be made, though they definitely did. We’ve since rewritten the autoresponse to be more in line with what we want to convey to our users.

No room for one-uppance

Right off the bat, our team was very clear about one thing: the satisfaction ratings aren’t a way of grading our support team. Our team is small (four people total), so we still rely on reading each other’s support tickets and discussing them to help evaluate how well everyone is doing.

Satisfaction ratings aren’t intended as a support team Thunderdome—they’re more of a support team traffic report, pointing us to tickets that may need a little more attention. Often, the feedback left by users in a rating is enough to help us address the specific problems of the ticket in question.

Thankfully, most of the feedback we’ve received has been positive. And in those cases, we want to share the kind words! That’s why we use a built-in Slack/Help Scout integration to pass our feedback from Help Scout into a private Slack channel. From there, a team lead parses out the positive feedback, and posts it into the main Trello support channel.
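If you wanted to wire up something similar yourself, the plumbing can be as simple as a Slack incoming webhook. Here’s a hedged sketch—the message format, function names, and webhook URL are assumptions for illustration, not our actual setup:

```python
import json
from urllib import request

def build_slack_payload(rating, comment, agent):
    """Format a positive rating as a Slack incoming-webhook payload."""
    return {"text": f':tada: {agent} got a "{rating}" rating: {comment}'}

def post_to_slack(webhook_url, payload):
    """POST the JSON payload to a Slack incoming webhook."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

# Build (but don't send) a sample message
payload = build_slack_payload("great", "Fast and friendly!", "Emily")
print(payload["text"])
# → :tada: Emily got a "great" rating: Fast and friendly!
```

In practice you’d call `post_to_slack` with your channel’s webhook URL; the off-the-shelf Help Scout integration handles all of this for you.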

The end result? Support agents get to see their own successes, as well as their team members’. And, any other company folks who happen to be hanging out in the support channel see the work that support is doing, as well. Tiny data helps celebrate little victories, which keeps our support team motivated.

What have we learned?

Our focus on these individual pieces of feedback, rather than overarching quantitative metrics, helps us get to the core of who we are as a support team. Trello is focused on a delightful user experience. Delight doesn’t happen to groups of users—it happens to individuals. So, focusing on individual feedback is the right move for us.

We’ve learned just how much our users value feeling heard, and feeling like an actual person is responding to their email. No one wants the email equivalent of being on a voice-activated phone menu with the cable company.

Going forward, we plan to use those ratings to highlight our blind spots, to keep tracking our progress, and to reinforce our commitment to a delightful experience for individual users.

So, if you write in to our support team and have a few seconds, please feel free to rate the reply—we’d love to hear your thoughts!