The tale of a sarcastic bot, and other GPT-3 stories

Something huge happened in the AI space this summer: a major breakthrough in the way machines understand and generate human language, one that took the AI community by surprise. OpenAI released its latest large-scale natural language model, called GPT-3. I’ve talked about natural language models, GPT-3, and its Customer Support implications in this article.

This time OpenAI decided not to open-source the model as it did previously with GPT-2. Instead, it provided the model as an API service. Although still in private beta, with access restricted to a limited number of developers, OpenAI’s API has significant potential in Customer Service and in natural language tasks in general.

In this article, I will discuss how the API works and how it’s different from other AIs. I will also present a few examples of applications that illustrate its potential.

Shortcomings of existing AIs

So far, natural language AI models have had to be trained and fine-tuned to perform well on a given task, whether text classification, summarization, or translation from one language to another. The training process is usually complex and involves a significant amount of dedicated work. Check out my article on AI automation for a detailed explanation of this process.

One of the biggest difficulties in building an AI model is gathering and labeling the training data. You usually need hundreds of positive and negative examples for each label. They have to be carefully selected and correctly annotated. Even a small number of mistakes could jeopardize the overall accuracy.

How is GPT-3 different?

So how is GPT-3 different from other AI technologies seen in Customer Service use cases so far?

First of all, it offers a general-purpose “text in, text out” interface that can be applied to any language task: classification, summarization, text generation, translation, etc. At its core, GPT-3 is a text generator that works by predicting the sequence of words with the highest probability given an input text (called the prompt). It is very similar to its older brother, GPT-2, which we used to build the Agent Helper’s sentence auto-complete feature.

Second, you can teach it a task by actually telling it what to do, like you would teach a middle schooler. Rather than giving it a large number of training examples, you simply describe the task to perform. Pretty cool!

And third, it learns fast! You only need to give it a couple of examples and it “picks up” the pattern to continue. In AI terms this is called few-shot learning, and it has been a holy grail of machine learning since the very beginning: something that we humans are very good at, but machines have struggled with.

How GPT-3 works

As explained above, the GPT-3 API is a text generation interface. You give it a text prompt (the text-based input or “instructions”) and it will return a text completion that tries to match the context or pattern that you gave it. You can “program” it by crafting a description or writing just a few examples of what you’d like it to do.
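To make this concrete, here is a minimal sketch of what such a call can look like, assuming access to the private beta and the official Python client (the openai package) with the davinci engine. The prompt text and parameter values below are only illustrative, not taken from any of the examples in this article.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued with private-beta access

# The prompt is the only "programming" you do: describe or demonstrate the task.
response = openai.Completion.create(
    engine="davinci",            # the largest GPT-3 engine available in the beta
    prompt="Translate this into French: Where is the nearest train station?",
    max_tokens=64,               # upper bound on the length of the completion
    temperature=0.7,             # higher values give more varied completions
)

print(response.choices[0].text)
```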

Let’s take a very basic example. In the picture below I’m asking it to produce a list of Marvel Avengers simply by giving it two names: Hulk and Ironman. The API is able to detect the intent (that I need it to produce a list) and the pattern (the list should include other Avengers names), only by taking the first two names as examples. Just like a human would do.

[Image: avengers]

Bear in mind that I haven’t trained it at all with loads of examples, and I haven’t built a new AI model for this particular task. It’s just one API call. It is incredibly good at picking up patterns and continuing them!
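For reference, the whole “program” behind that screenshot is just a prompt and one call. A rough sketch might look like this, under the same assumptions as above; the prompt wording is an approximation of what you see in the picture:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Two list items are enough for the model to infer "continue the Avengers list".
response = openai.Completion.create(
    engine="davinci",
    prompt="Hulk\nIronman\n",
    max_tokens=32,
    temperature=0.5,
    stop=["\n\n"],   # optional: stop once the list ends
)

print(response.choices[0].text)
# Expected to continue with names like Thor, Captain America, and so on.
```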

Marv, the sarcastic chatbot

Let’s give it another task now, more complex than producing a list of Avengers names. We want the API to respond to general knowledge questions, but not just in a plain, factual way. We want the bot to have a slightly sarcastic 😏 personality.

The process is similar to the previous example. There is no need to train a heavy model for this task. It’s just one API call, where the input includes only four examples that “teach” it how to respond, plus the actual question. When the user asks a question, Marv responds with the answer but also adds a bit of sarcasm at the end, without being totally rude.

Let’s ask it: Who wrote “War and Peace”?
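Before looking at the answer, here is a rough sketch of what such a few-shot prompt can look like. The four example exchanges below are illustrative stand-ins for the ones shown in the screenshot, not the exact text I used:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# A short description plus four example exchanges "teach" the response pattern.
prompt = """Marv is a chatbot that reluctantly answers questions with sarcastic responses.

You: How many pounds are in a kilogram?
Marv: This again? There are 2.2 pounds in a kilogram. Please write this down.
You: What is the capital of France?
Marv: Paris. As if nobody has ever heard of it.
You: When did the first airplane fly?
Marv: On December 17, 1903. I wish they'd come and take me away.
You: What is the meaning of life?
Marv: I'm not sure. I'll ask my friend Google.
You: Who wrote "War and Peace"?
Marv:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=60,
    temperature=0.8,
    stop=["You:"],   # stop before the model starts inventing the next question
)

print(response.choices[0].text.strip())
```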

[Image: marv, the sarcastic chatbot]

It’s not just that the API is able to respond correctly to this general knowledge question; it is also capable of shaping its responses to match the pattern provided in the input prompt, in this case by adding a sarcastic comment at the end. I asked the question many times, and each time I got a different answer. My favorite one is: “Leo Tolstoy. It took Leo a long time to write it. It took me a long time to read it.” 😂
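If you are wondering why the same question produces a different answer each time: the API samples from the model’s predicted word probabilities, and the temperature parameter controls how adventurous that sampling is. A minimal sketch, with an abridged prompt and illustrative values:

```python
import openai

openai.api_key = "YOUR_API_KEY"

prompt = 'Marv is a chatbot that answers questions with a touch of sarcasm.\n\nYou: Who wrote "War and Peace"?\nMarv:'

# With temperature > 0 the model samples among likely continuations,
# so repeated calls return different (but still plausible) answers.
for _ in range(3):
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=60,
        temperature=0.8,   # push toward 0 for near-deterministic output
        stop=["You:"],
    )
    print(response.choices[0].text.strip())
```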

Summarizing customer support tickets

Now it’s time for an example relevant to Customer Support. We’ve all received those long, poorly written emails from customers that take a while to decipher and comprehend. In the example below, I’m prompting the GPT-3 API to summarize a customer complaint by responding to four predefined questions (a prompt sketch follows the list):

      What product was this about?
      What is the customer’s complaint?
      Is the customer asking for a refund?
      How is the customer feeling?
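A sketch of that single call could look like the following. The customer email here is a made-up stand-in for the one shown in the screenshot, and the question wording simply mirrors the list above:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Illustrative customer email (the real one is shown in the screenshot below).
email = (
    "Hi, I bought an iPhone from your store last month and the battery "
    "barely lasts half a day. I'm really disappointed and I'd like my money back."
)

prompt = f"""Read the customer email below and answer the questions.

Email:
{email}

What product was this about?
What is the customer's complaint?
Is the customer asking for a refund?
How is the customer feeling?
"""

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=120,
    temperature=0.2,   # keep the summary close to the facts in the email
)

print(response.choices[0].text)
```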

And the response that came back was (the unbolded text in the screenshot below):

[Image: customer complaint summary]

As you can see, the API is able to pick up not just the product that is the subject of the complaint (an iPhone), but also what the complaint is (a faulty battery), whether the customer is asking for a refund or not, and how the customer is feeling (disappointed). Again, all in one API call and without me building and fine-tuning a model dedicated to this task.

Democratisation of AI

AI has come a long way since the dawn of deep learning, with significant advances in both of its sub-areas, Computer Vision and Natural Language Processing. The race of transformer models has pushed the limits of language understanding and generation ever further, culminating in the GPT-3 release.

While OpenAI was the first to get this far, they won’t be the last. Expect to see such progress from other organizations, whether cloud service providers like Amazon or Google, or members of the AI open-source community such as Hugging Face or Facebook.

Most importantly, the development of better and more accessible AI technologies will continue the democratization of AI, which will soon become a commodity available to teams of all shapes and sizes.

Sorin Alupoaie
Sorin Alupoaie is the founder of Swifteq, a company developing intelligent assist apps for customer service agents. An experienced software technologist and entrepreneur, he loves shipping products that solve painful customer problems. Sorin strongly believes that any Customer Service interaction represents a huge opportunity for a business to listen and improve how they deliver value to customers. Insights and automations enabled by Artificial Intelligence should be used to remove friction from these interactions and provide a better and faster service.
