What CX leaders should know about artificial empathy


Lately, there has been a lot of talk about the role of humans in a world of artificial intelligence. With the stellar rise of generative AI tools like Bard, ChatGPT, Midjourney and DALL-E, the Big Questions are these:

  • What will humans do?
  • Where will we shine?
  • Where will our added value lie, as employees for our organization, but also for our customers?

A lot of people tend to answer these questions with ‘typical’ human virtues and skills like passion, critical thinking, creativity, ethical skills and – perhaps the most popular one – empathy. We really believe that these are the big differentiators, the USPs of humans in an AI world.

But, at least when it comes to empathy, things have been changing, and fast. In a really fascinating recent study, a panel of experts preferred ChatGPT’s responses over physicians’ in almost 79% of the evaluations. On average, OpenAI’s solution also scored 21% higher than physicians for the quality of responses, and it was perceived as 41% more empathetic.

Let me repeat that: an AI was perceived as 41% more empathetic than a human. A big reason ChatGPT won out in the study is that the bot’s responses to the questions were longer and more personable than the physicians’ brief, time-saving answers.

So have we passed a certain threshold in AI where humans are slowly becoming obsolete? This issue is a lot more complicated than a yes or no answer, of course.

It’s complicated

First of all, the health sector is pretty notorious for its lack of empathy. Stephen Trzeciak and Anthony Mazzarelli, two physician-scientists, have even been talking about a compassion crisis in healthcare. In fact, 71% of respondents to a 2019 study said they experienced a lack of compassion when speaking with a medical professional, and 73% said they always or often feel rushed by their doctor. And that was even before the pandemic and the ensuing mental health crisis. So if there is one domain in which it is easy for an AI to “win” on empathy and compassion, it’s probably this one.

But that certainly does not mean we should underestimate this evolution. We’ve been seeing it everywhere, actually, not just in healthcare. I remember a post from an employee of Endurance, who was a little dismayed by some of the responses to a very vulnerable LinkedIn post about someone tragically losing a loved one, responses she described as “tone-deaf and lacking in empathy”. As an experiment, she asked ChatGPT to respond, and it did a pretty good job compared to the humans:

“I am deeply sorry for your loss and I know how difficult it can be to cope with the death of a loved one. It’s important to remember that it’s okay to feel whatever emotions come up, even though it may be difficult. Please know that I’m here for you and I’m here to listen if you need someone to talk to.”

Now I know that all of you are probably thinking “yes, but a bot can’t really feel empathy, it only mimics it, so that doesn’t count”. Well, I don’t want to be cynical here, but doctors who do show sympathy are paid to do so, so that’s perhaps also not ‘real’ in the strictest sense. It’s part of an exchange. What does matter, to use Maya Angelou’s words, is “how you make people feel”. And even if it’s a bot making you feel better, we’ll probably get used to that, just like we stopped talking about how online interactions weren’t the same as real ones. There’s a shift happening right now, where bots are becoming better at mimicking humans, and that’s an extremely fascinating domain for those of us working in CX.

A flawed system

But perhaps the most interesting thing here is not the fast-evolving technology, but how it exposes gaps in our ‘old’ systems. Crises are not just difficult periods. They tend to uncover what is wrong with the system. The war between Ukraine and Russia triggered an energy crisis, for instance. But that crisis was only possible because Europe was overdependent on one nation for its gas supply. It’s always better to diversify. Because when you put all your eggs in one basket, and it’s taken away… Well, we know what happened there.

In exactly the same way, the doctor-versus-AI challenge is (at the moment) not just about a technology getting so powerful that it is better than humans. Rather, it exposes a fragile and poorly constructed system in which physicians are overworked, burnout is rampant, and 56% of physicians said they just “don’t have time for compassion” (in 2012!). Based on a survey conducted by the American Psychological Association, 45 percent of psychologists reported feeling burned out in 2022. Nearly half also reported that they were not able to meet the demand for treatment from their patients.

That is why I believe this is a crucial moment. We have two choices: we keep the flawed systems and mitigate their gaps with technology, or we use the technology to make the system better for employees/doctors as well as customers/patients. We can’t just leave the empathy to the bots of this world while, for instance, doctors stay in the background with very little patient interaction. We must have the bots help the doctors so that they have more time, and more compassion, for the severest cases that really need human empathy and support.

There is a big difference between the two, and truly understanding that is essential.

A human transformation

What worries me, though, is that the study concluded that doctors could use AI assistants like ChatGPT to “unlock untapped productivity” by freeing up time for more “complex tasks”. “Well, Steven, isn’t that exactly what you just said above?”, you might think. I don’t think so. Because this clearly shows that, here too – exactly as is currently the case in the health sector – the focus is on productivity (to the organization’s benefit, not that of the employees or customers) instead of on patient care, compassion and empathy.

To end on a positive note: it’s great that we are realizing this now. Let’s not put an AI icing on an old, mouldy, flawed organizational cake. Let’s make a better, tastier cake and add a superb frosting. It’s the same as always, really: everything inside our company – from how we treat our employees and suppliers, to how we communicate, set KPIs, organize processes, etc. – has an impact on our customers. We cannot fix that with technology. We must first fix those flawed processes, KPIs and communications, and then make them better with technology.

Every digital transformation calls for a human transformation first.

In fact, the digital part will be the easy one. You can find companies that will get you all the AI you need. You can train your people to work with it. But the human transformation – where we really focus on skills and organizational development – will be the real challenge, in my opinion. So I would advise you to go for a parallel track: when you invest in AI, always invest the same amount of energy and time in fixing your human system, so that you will be able to differentiate yourself as a human company in a world of automation.

Republished with author's permission from original post.

Steven Van Belleghem
Steven Van Belleghem is an inspirator at B-Conversational. He is a coach who gives strategic advice to help companies better understand the world of conversations, social media and digital marketing. In 2010, he published his first book, The Conversation Manager, which became a management literature bestseller and was awarded the Marketing Literature Prize. In 2012, The Conversation Company was published. Steven is also a part-time marketing professor at the Vlerick Management School. He is a former managing partner of the innovative research agency InSites Consulting.
