The Challenge of AI Voice Assistants in Customer Service

In May, Google’s I/O 2018 conference showed off the latest in Google’s offerings to developers around the globe. While Google demonstrated plenty of new tech at the conference, it was the keynote demonstration of its latest “Duplex” technology that lit up the internet.

Duplex uses Google Assistant to call companies on a user’s behalf to perform simple, structured tasks, such as booking a haircut or scheduling a restaurant reservation. While voice synthesis isn’t exactly new, it was the humanlike inflections and natural conversational flow in these calls that many found to be jaw-dropping (or, alternately, terrifying).

If you haven’t yet seen Google’s demo, click through to watch it now, and prepare for your mind to be blown. (Skip to 43 seconds to get straight into the demo.)

Although this technology isn’t yet consumer-grade, Google says it will start to test Duplex within Google Assistant as early as this summer. How then should our customer service operations handle this upcoming customer-side automation in voice calls?

Identity Verification & Trust

Part of the reason why Duplex has caused so many ripples is because it gives a glimpse into a rather dystopian future – one where humans can’t tell whether they’re talking to an AI (causing many to wonder if the Turing test has been passed by this new tech).

Before now, voice assistants haven’t been capable of holding natural-sounding conversations. But the calls demoed by Google, complete with inflections such as “Mm-hmm” or “Ah, gotcha”, sounded so lifelike that it’s clear the human operators on the other end had no idea they were speaking to an AI.

That in itself has caused outrage – with commentators pointing out the ethical problems that arise when service workers and call center staff are unsuspectingly experimented on by Google’s human-sounding AIs. Google responded to the outcry by asserting that working versions of Duplex will have the ability to identify themselves built in.

But whether AIs self-identify or not, the cat is already out of the bag. For anyone wondering whether their identity verification processes will need to change as a result of this technology, the answer is undoubtedly yes. Exactly how those processes will need to change depends on whether AIs are required to self-identify – whether by Google itself, governments or any other regulatory body.

If AIs are required to identify themselves as such and state that they’re acting on behalf of a human, should agents respond to their wishes as if they were that human? I can easily envisage scenarios where AIs eventually make payments, change data or perform any other process that affects the customer or the company – only for the customer to respond that the AI’s actions were a mistake and not authorized by them. How then can we determine the human intent behind the actions of an AI?

If AIs are not required to self-identify, issues emerge around trust and standards. As it stands, technology like Duplex is only effective in a limited range of scenarios, making it easy to test whether a caller is a robot by asking a question that sits outside the AI’s programming (for example, “Who is the president of the United States?”).

Having agents ask these types of questions to try to weed out the “robots” from the humans is reasonably straightforward. But how will those questions evolve as AIs get smarter? Will they constitute a new, more intrusive layer of data protection processes that we have to subject unsuspecting customers to? What happens then when we speak to human customers who cannot answer these questions – through health issues, a lack of shared cultural understanding, or anything else? Could we be dooming them to be treated like little more than unfeeling robots?

Emotion & Empathy

Speaking of feelings, Duplex brings big questions as to what will constitute effective customer service in the future. Our current, human-focused model of optimal customer experience runs on the premise that if you focus on solving problems quickly, accurately and in a friendly manner, you’re likely to achieve good customer outcomes.

But AIs don’t feel. All the niceties and small talk in the world don’t matter to them. Considering that humans and AIs have different needs and priorities during issue resolution, we could see two distinct sets of standards emerge.

The first relates to service standards for humans – and as beings who have thought and felt in much the same way for thousands of years, I can’t see these undergoing any huge revolution in the future.

But a second set of service standards relates to how we can provide optimal service to AIs. I can see these standards focusing on clear language, accurate clarification of intent, and reduced emotionality in speech that could confuse an AI – quite the opposite of the emotion-centered training we’ve been giving front-line agents for decades.

Taking Humans Out of Interactions

Thinking about the role of our front-line customer service agents in the potential applications of this technology, we must consider the messages that Google is implicitly sending about the service employees customers speak to every day to get things done.

PC Magazine sums this up deftly: the implicit message embedded within Duplex is that there’s no need for customers to ‘suffer’ through speaking to service employees to get things done. In one of Duplex’s demonstrations, the woman taking the call has a thick accent that is a little difficult to understand. The AI handles this with little awkwardness, signaling that even in service situations that are tricky for customers, a machine can step in and remove all of the ‘bother’.

I still believe that human interaction and emotion are what humanize our brands, making them friendly and accessible. And putting myself in the shoes of my agents, there’s something that stings about the implicit message within AI-driven voice calls – that other people see talking to them as a waste of time.

But I do believe that the best kind of customer service is invisible – mediated through a range of easy self-service and digital options that prevent customers from needing to make inconvenient phone calls. Maybe, then, we need to focus less on the perceived value of individual interactions and think instead about the downsides of the phone as a communication channel that have turned Duplex into a customer need.

Phone Calls as Inconvenience

The development of Duplex points to an issue innate in customer service operations: while phone calls are often the best way for a customer to accomplish a goal, they aren’t always convenient. The rise of live chat, self-service and social messaging channels is a direct result. These channels let customers connect with organizations in ways that don’t monopolize their time and attention, force them to take time out of their day, or stop them from multitasking while they resolve issues.

The demand for Duplex (and its positive reception by many) shows that while many organizations see cost or effort barriers to providing service over non-voice channels, for some customers that clearly isn’t good enough. Given that organizations such as Deloitte predict the volume of voice interactions with businesses will fall from 64% of all channel communications in 2017 to 47% in 2019, organizations need to consider better ways to connect with their customers than relying on voice-centric service models.
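The Deloitte projection cited above implies a steeper shift than the 17-point drop suggests at first glance. A quick sketch of the arithmetic:

```python
# Deloitte projection cited above: voice falls from 64% of all
# channel communications in 2017 to 47% in 2019.
share_2017 = 0.64
share_2019 = 0.47

absolute_drop = share_2017 - share_2019       # 17 percentage points
relative_drop = absolute_drop / share_2017    # roughly a 26.6% relative decline

print(f"Absolute drop: {absolute_drop:.0%} points")
print(f"Relative decline in voice share: {relative_drop:.1%}")
```

In other words, more than a quarter of voice’s share of customer contact is projected to move elsewhere in just two years.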

Automation promises to hold the key to dismantling these cost and effort barriers to multichannel service, as we’re now seeing in chatbot uptake by firms big and small all over the world. While we’ve been exploring Duplex as a tool for customers to take advantage of automation in their own lives, let’s look at the impacts when the tables are turned and organizations can use tools like Duplex to evolve and improve their service offerings in a multichannel climate.

What if Duplex Could Help Organizations?

Given the current pace of technological advancement (in the spirit of Moore’s law), and given that Google is a profit-driven company, it’s feasible to expect that Google will be looking for other ways to apply this technology, helping it to profit from the investment and secure its future development.

Because of this, I predict that it won’t be long at all until AIs like Duplex are pitched as a replacement for customer service agents on voice channels.

We can already see the evolution from human-led to AI-led service in other channels. Chatbot services now handle a substantial share of everyday organizational queries over live chat. Considering that studies show it’s realistic to aim to deflect between 40% and 80% of common customer service inquiries to chatbots, the same deflection principles could help technology like Duplex drive the same change for voice.
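As a back-of-the-envelope illustration of that 40–80% deflection range (the monthly inquiry volume below is hypothetical, not from any study cited here):

```python
def residual_agent_volume(monthly_inquiries: int, deflection_rate: float) -> float:
    """Estimate inquiries still reaching human agents after a share
    of common queries is deflected to an automated channel."""
    return monthly_inquiries * (1 - deflection_rate)

# Hypothetical contact center receiving 10,000 inquiries a month,
# at the low (40%) and high (80%) ends of the cited deflection range:
for rate in (0.40, 0.80):
    remaining = residual_agent_volume(10_000, rate)
    print(f"Deflect {rate:.0%}: about {remaining:,.0f} inquiries still reach agents")
```

Even at the conservative end of the range, deflection of this kind would reshape agent workloads, which is why the deflected queries need to be the routine, structured ones.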

For voice as a channel, the closest thing we have to this right now is the dreaded IVR. The difference between IVR and AIs, however, is in the promise of service that truly helps, rather than hinders. While IVR is almost universally viewed as an unwelcome hurdle to jump on the way to service from a human agent, chatbots are proving that for certain service scenarios, AI can be as efficient as humans – if not more so, due to their speed, constant availability and scalability.

Projecting the development of this technology for voice interactions within the contact center, we’re faced with some questions. What types of voice queries are ripe for automation, and how can we channel these to AIs in a way that doesn’t add more options to a traditional IVR? What happens when customers can’t tell whether a voice agent is human or an AI? Whether that AI self-identifies or not, how does that reflect on our companies? Could we even be ushered into an age of universal mistrust in customer service where our human agents are treated badly by customers, as if they were robots, because our customers just can’t tell the difference?

Perhaps exploring automation within live chat can throw some light on these questions. I’ve seen many organizations who are meeting these issues head-on within chat – and many are digging deep into customer needs and preferences to harness this technology in ways that are both comfortable for their customers, and effective for their businesses.

A Values-Centered Approach to Automation in Customer Service

Now is the time to reflect on how our businesses will handle customer-side automation coming this year, and how more organizations can handle automation-related issues generally as technology develops.

We can take the lead from design ethicists such as Joe Edelman to consider how best to work with this technology in a way that doesn’t result in negative outcomes for our businesses, our agents or our customers.

Edelman proposes a values-centered approach to the design of social spaces online, and by applying the same philosophy, we can consider how an AI voice assistant detracts from or complements the values of the customers and other stakeholders interacting with it. Whether it’s us or the customer who’s automating, great service design will come from considering not only what each party aims to achieve but also how their service preferences are denied or accommodated.

When we can consider the values of our customers and our employees, and how those interface with the needs of our businesses, we can start to use this technology in ways that are helpful and useful to them, morally sound, and which deliver the time and resource benefits that both businesses and customers want.

Is your organization preparing for Google Duplex? I’d love to hear more – drop me a note in the comments below.

5 COMMENTS

  1. What an intriguing post…and very well written. My mother told me that when Rich’s department store in downtown Atlanta went from a friendly elevator operator to an automated, push-a-button elevator, people were certain customers would stop coming to the store. We know how that turned out. Some people struggle today with even the concept of self-driving vehicles. But there will be a time–in the near term–when AI assistants will be as common as seat belts. Your post, however, reminds us of the importance of being thoughtful regarding where technology leads us. If an AI can handle my routine calls (like making a hair appointment), do I trust the next iteration with babysitting my grandchild? Would I trust an AI assistant at the pharmacy taking routine calls from physicians calling in a patient’s medication? The issue of trust abounds all around; the definition of what it means and matters to be human is not far away.

  2. There’s an old song that says “the trouble with the world today is coffee in a cardboard cup”, meaning that the emotional interaction with a restaurant service person providing trusted, human connection through hand-delivering fresh-brewed coffee in a ceramic mug was gone forever. As noted, humans, increasingly, are giving way to technology in many aspects of service delivery. This will only continue. It’s all labeled as labor-saving societal progress, and, like Duplex, it’s often creepy – actually, creeping meatballism ala Jean Shepherd – but inevitable.

  3. Hi Kaye: I like how you’ve shed light on some of the ethical challenges of AI. While Google’s achievement represents a significant milestone in making information technology more user-friendly, it was nonetheless amusing to hear the audience gasps evident in the Google video. The use cases were remarkably easy, and even the one the presenter deemed challenging was straightforward in terms of adhering to the “happy path” of a transaction. Even in making a restaurant reservation, there are so many “curve balls” that a customer can throw: “I want a table in a quiet section of the restaurant.” “Can [entree name] be prepared vegan?” “Can you accommodate a high chair or wheelchair?” I’ll stop here, because there is a long list of common requests that aren’t routine. And Google’s Duplex, breathtaking as it might seem, is still a long way from handling much of what customers may want to know or ask.

    As I’ve shared in many comments on this website, I am rarely confident that advances in information technology will always be used for benign purposes. And I see great potential for nefarious uses for Google’s Duplex technology. Telephone scams that prey on the vulnerable (e.g. elderly, cognitively-impaired, those in financial distress) are already widespread and growing. They create billions of dollars in unrecoverable losses among people who can least afford to lose. Imagine how Duplex, unleashed without regulations requiring disclosure, and tepid penalties for misuse, could create havoc. And with Duplex, as with any new technology proposed for widespread use, we must ask and answer, “what could go wrong?”

    In general, I think society is too quick to embrace new technology, and to laud it as the Next Great Thing. That opens the door to fully-automated vehicles riding on our highways (Tesla), or blood assay equipment being deployed in the field (Theranos) before they are adequately tested, let alone understood. The result, as we have learned the hard way, can be fatal. And while I recognize that every innovation has a development curve, I think our adulation of shiny AI gizmos should be tempered with circumspection about whether they are wanted, or capable of delivering on the promises the developers have assured us we can achieve.

    “Have your AI-Assistant call my AI-Assistant, and we’ll do lunch!” Now that would be truly amazing! I think you ask a great question: “What happens when customers can’t tell whether a voice agent is human or an AI?” Unless we have laws requiring disclosure, and the backbone to enforce them, I can guarantee one thing: people are going to get hurt.

    I also like your comment, “Whether it’s us or the customer who’s automating, great service design will come from a consideration of not only what each party aims to achieve but also how their service preferences are denied or accommodated.” That reflects an empathetic view. But whenever algorithms are driving the conversation, and when we can’t explain how those algorithms learn, we run the risk of injecting dangerous bias into the transaction.

  4. Thank you Chip, Michael, and Andrew for commenting – you all have really thoughtful ideas around this new technology. I agree that just because the technology is there, doesn’t mean that we should roll it out without any real consideration of how it affects agents, customers and other stakeholders in our businesses, what precedents are set, or what impacts there will be not just now but in the future.

    As someone who was an agent back in the bad old days of high-volume, low-quality contact center working practices, I’m very concerned with contact center ethics. It now feels like we’re finally getting to a place where organizations understand the importance of agent experience and quality-focused practices, so I would hate for technology to unwittingly roll back this good progress! That’s why we need to approach any new technology with careful consideration and assessment – we absolutely have the power to use technology to continue to make contact center interactions fairer and better for everyone involved. We just need to be very intentional in how we do that.
