🤖 AI in Support

The promise of AI is greater than ever with the release of easy-to-use, conversational generative AI tools like ChatGPT, Claude, and Llama 2.
In this article, we'll review some of the innovations being built with generative AI. We'll break them down based on the KPI they're aimed at improving.

Improving speed

Large language models (LLMs) can quickly process prompts and generate outputs based on the massive amounts of data they've been trained on. They can also be tuned to specific information (for instance, your knowledge base) so that they give more accurate responses for your application.
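As a concrete illustration, here is a minimal sketch of one common way this "tuning to your knowledge base" is done in practice: retrieve the most relevant help center article and ask the model to answer from it (retrieval-augmented generation). The article contents, the toy keyword-overlap retrieval, and the use of the OpenAI Python SDK with the gpt-4o-mini model are illustrative assumptions, not a specific vendor's implementation.
```python
# Minimal sketch: ground an LLM answer on your own help center content.
# Assumes the `openai` Python SDK is installed and OPENAI_API_KEY is set.
# HELP_CENTER and the keyword-overlap retrieval are illustrative stand-ins
# for a real vector search over your articles.
from openai import OpenAI

HELP_CENTER = {
    "Resetting your password": "Go to Settings > Security and click 'Reset password'.",
    "Exporting invoices": "Invoices can be exported as CSV from Billing > History.",
}

def retrieve_article(question: str) -> str:
    """Pick the article whose words overlap most with the question (toy retrieval)."""
    def overlap(text: str) -> int:
        return len(set(question.lower().split()) & set(text.lower().split()))
    title = max(HELP_CENTER, key=lambda t: overlap(t + " " + HELP_CENTER[t]))
    return f"{title}\n{HELP_CENTER[title]}"

def answer(question: str) -> str:
    context = retrieve_article(question)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer support questions using only the article below. "
                        "If the article does not cover the question, say you will "
                        "create a ticket for the team.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How do I reset my password?"))
```
If retrieval can't find anything relevant, the system prompt tells the model to fall back to creating a ticket, which mirrors the hand-off behavior described for conversational chatbots below.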
Each of the following features speeds up your ticket response times, either by removing the need for agent interaction entirely (e.g., a conversational chatbot) or by removing steps from the response process (e.g., auto routing, suggested responses).
The most common features are:
  • conversational chatbot: This is a chatbot that lives in your chat widget and responds to your customers' questions using natural language (i.e., they can ask questions however they like, and the chatbot will interpret them and respond based on what is written in your help center). If an adequate answer isn't available in your help center, the chatbot should create a ticket for your team.
    • things to look out for:
      • make sure that the chatbot you’re considering using handles hand-offs to your team gracefully for questions that can’t be resolved by the bot
      • some tools allow you to train the bot to use a branded tone; this can give a much better user experience for your customers since it removes some of the rigidity and lack of authenticity that customers frequently complain about with chatbots
  • auto routing: When new tickets come in, you can have them sent to the appropriate person on your team. You can usually hardcode these rules based on the content of the ticket, who the ticket came from, the channel, etc. Auto routing, however, is built to handle the creation of these rules in a dynamic, on-the-fly manner so that you don't have to continually update routing rules (see the routing sketch after this list).
  • suggested articles: Automatically generate articles for your help center based on the questions you receive from your customers and how you answer them in your ticketing system. Not only can this save time in keeping your help center content up to date, it also gives your conversational chatbot the information it needs to answer more questions.
  • suggested responses: When you receive a ticket from a customer, suggested responses are proposed to you based on various sources (typically the help center, but these can also come from other solved tickets and other available customer data sources).
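To make the auto routing idea above more concrete, here is a minimal sketch of rule-free routing: instead of maintaining hardcoded rules, an LLM is asked to classify each incoming ticket into one of your team's queues. The team names and the use of the OpenAI SDK are assumptions for illustration; most ticketing tools expose this as a built-in feature rather than something you wire up yourself.
```python
# Minimal auto-routing sketch: let an LLM pick the best team for a new ticket
# instead of maintaining hardcoded rules. The team names are illustrative.
from openai import OpenAI

TEAMS = ["billing", "technical", "onboarding", "general"]

def route_ticket(ticket_text: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You route support tickets. Reply with exactly one of: "
                        + ", ".join(TEAMS)},
            {"role": "user", "content": ticket_text},
        ],
    )
    choice = resp.choices[0].message.content.strip().lower()
    # Fall back to a general queue if the model answers outside the allowed set.
    return choice if choice in TEAMS else "general"

print(route_ticket("I was charged twice for my subscription last month."))
```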
 

Improving quality

The following features enabled by LLMs aim to improve the quality of your customer support:
  • translation on the fly: LLMs have extremely good translation capabilities. When implemented, translation on the fly lets you receive questions from customers in their native language, detect their intent, respond, and have your reply translated back for them automatically (a minimal sketch of this flow follows this list).
  • product insights: While less common, some of the most powerful AI tools available today surface product insights on specific areas where your customers are struggling (e.g., pages that are confusing, buggy, etc.). When implemented, these can prevent large swaths of customer questions entirely.
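Here is the translation-on-the-fly flow mentioned above as a minimal sketch: detect the customer's language, translate the question into English for the agent, then translate the agent's reply back. The prompt wording and the use of the OpenAI SDK are illustrative assumptions; dedicated translation APIs or built-in helpdesk features would work just as well.
```python
# Sketch of a translate-on-the-fly flow: detect the customer's language,
# show the agent an English version, then translate the agent's reply back.
from openai import OpenAI

client = OpenAI()

def llm(instruction: str, text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content.strip()

def handle(customer_message: str, agent_reply_en: str) -> tuple[str, str]:
    lang = llm("Name the language of this message in one word.", customer_message)
    question_en = llm("Translate this customer message to English.", customer_message)
    reply_localized = llm(f"Translate this support reply into {lang}.", agent_reply_en)
    return question_en, reply_localized

q_en, reply = handle("¿Dónde puedo descargar mi factura?",
                     "You can download invoices from Billing > History.")
print(q_en)
print(reply)
```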
 

Applications

The future of support will leverage AI heavily to give better responses to customers faster than ever before. Here are a few things you can do to help your team stay ahead of the curve:
  • train your team: You should teach your team how these tools work and what to expect from them. No AI tool available today is a panacea, but they can meaningfully reduce the load on your team so they can spend more time in other areas (e.g., proactive support). The best way to train your team is to use a battery of test questions to demo how different AI features respond, which helps them build intuition.
  • update your processes: In your earliest stages, some (if not all) of these AI features may not yield much impact, since you typically have too little data for them to operate effectively (e.g., if you don't have a help center, a chatbot trained on your help center won't do much; if you've only got two agents, routing is usually not very helpful; etc.). It is, however, important to be aware of these tools so that you know when to implement them. Throughout the remainder of this guide, we'll introduce each of these tools at the stage where it makes the most sense to use it.
 
One of the most common problems that comes from using AI tools is a sense of disappointment when they don't behave as expected. Before running any of the above tools on autopilot, it's important that you build some intuition for how they work.
 
The best way to build intuition for how these tools work is to try them out. If you have categorized tickets already, create a list of 1-2 sample tickets from each category (recording the ticket number, initial question, response, assignee, priority, etc.). Enter the initial question from each ticket and see what sort of responses you get from each tool (e.g., how the ticket is assigned by auto routing, what responses are proposed by suggested responses, etc.); a minimal test-harness sketch appears after the list below. If the AI responds poorly, there are a few things you can do to debug it and help it improve:
  • fine-tuning: Some tools allow you to fine-tune their AI model on your own data. If this is an option, you should do it, with the caveat that you need enough data for it to be meaningful (typically around 10k tickets and/or 20-30 help center articles, which you may not have at this stage).
  • feedback: Some tools use reinforcement learning from human feedback (RLHF) to improve their model weights (similar to fine-tuning, but done over time with agent feedback rather than all at once).
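Here is a minimal sketch of the test battery described above: run one or two sample tickets per category through the features you're evaluating and write the results to a CSV your team can review together. The `suggest_response` and `route_ticket` functions are placeholders, not a real tool's API; swap in whatever calls your vendor provides.
```python
# Minimal test-battery sketch: run sample tickets per category through the
# AI features you're evaluating and record the results for team review.
# `suggest_response` and `route_ticket` are placeholders, not a real API.
import csv

SAMPLE_TICKETS = [
    {"category": "billing", "question": "I was charged twice this month."},
    {"category": "technical", "question": "The export button does nothing."},
]

def suggest_response(question: str) -> str:
    return "placeholder: replace with your tool's suggested-response call"

def route_ticket(question: str) -> str:
    return "placeholder: replace with your tool's auto-routing call"

with open("ai_test_battery.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["category", "question", "suggested_response", "routed_to"]
    )
    writer.writeheader()
    for ticket in SAMPLE_TICKETS:
        writer.writerow({
            "category": ticket["category"],
            "question": ticket["question"],
            "suggested_response": suggest_response(ticket["question"]),
            "routed_to": route_ticket(ticket["question"]),
        })

print("Wrote ai_test_battery.csv; review the rows with your team.")
```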
 

Warnings

When you’re using LLMs, there are a few common problems that you may run into:
  • inaccurate responses (hallucinations): LLMs don't inherently have any measure of truthfulness built in; they use statistical prediction to essentially guess what they should say given the text they've ingested. As a result, they can confidently write text that is realistic (per their training) but completely fabricated and untrue. For more complex questions, it's typically best to use suggested responses reviewed by an agent rather than an autopilot chatbot.
  • oversimplification: for similar reasons to the above, nuances often aren’t caught or understood by LLMs and they can provide overly simplistic responses to questions (you can think of this as a sort of washing out of answers by averaging over lots of different potential responses).
  • over-reliance on tools: While LLMs can lighten your support load in a substantive way, it's important that your team doesn't forget how to handle tickets effectively. They should stay up to date on the topics your customers are asking about and the responses your chatbot is giving, so that when there are incorrect responses, someone can jump in to correct the bot.
  • lack of transparency: LLMs are extremely complex, and even their creators often can't explain why they give a particular response. This lack of transparency can make it difficult to understand why they are or aren't working, to build intuition for how they will respond to questions, and to anticipate where they may fail.
 
As noted in the prior section, it's important that you use your battery of test questions to gain some intuition for how these problems might show up on your tickets specifically.
 

Notes

For the interested reader, here’s a quick guide to some of the common jargon in the AI world right now:
  • large language model (LLM): a model trained on massive amounts of text with a very large number of parameters; GPT-4, for instance, is estimated (though not officially confirmed) to use roughly 1.8 trillion parameters
  • generative AI: models trained to generate outputs with a random component to them that can make them feel creative or novel
  • conversational AI / bots: models that take natural-language inputs and yield natural-language outputs, similar to a chat conversation
  • prompts: the natural-language input you give to an LLM (can be a question, a directive, or really anything you want to write)
  • ChatGPT, Claude, Llama 2: some of the most popular and performant LLMs right now, created by OpenAI, Anthropic, and Meta, respectively

Written by Jon O’Bryan, CEO, Atlas Inc