How to Prevent This Catastrophic Error So Many Are Making With AI

Organizations are making a common mistake with AI. From a strategic standpoint, they are losing opportunities to enhance their Customer Experiences with this impressive and impactful technology because, well,…they are building their AI systems wrong. Changing the strategy could create a significant competitive advantage.

For example, a large telecom company designed an AI system to identify customer churn. It worked in that they could tell which customers were going to churn. The issue was the AI didn’t pinpoint why the customers were leaving.

Watch Colin talking about this on YouTube:

Here’s the thing: AI models are outstanding at predicting customer behavior. However, the trade-off is that they do so without making the connections in the data about why. Moreover, there’s no way to tease out what those connections mean. It all happens beneath the surface.

Those who have read The Hitchhiker’s Guide to the Galaxy might agree that it’s a bit like the supercomputer Deep Thought telling us the answer to the question of life, the universe, and everything is 42. That’s an answer; it’s probably even correct. But, unfortunately, we don’t get it; we cannot understand the context.

Therefore, the mistake organizations are making with AI is in how they set it up. The result of that setup is the equivalent of the answer 42. What we need is the context that explains why the answer is 42.

Customer Science Can Help

You might recall that I have talked about how Customer Experience is retreating as a wave of change and becoming part of business as usual. Customer Experience is becoming part of every business strategy, like the four Ps, Continuous Improvement, or Customer Relationship Management systems. This recession of Customer Experience as a new thing makes way for the next new thing: Customer Science.

You might also recall the three pillars of Customer Science: data, AI, and the behavioral sciences. So, AI is an integral part of the next big wave of change. How you design your AI is critical to the insight you will get from it. Two ways of developing AI can help here: machine learning and deep learning.

There is a difference between machine learning and deep learning. Consider the following:

Machine learning uses algorithms. Someone writes the code, and the AI then processes the data using that code. The issue here is captured by a common phrase you might have heard before: “Garbage in, garbage out.” In other words, you will get incomplete answers if you feed incomplete data into the algorithm.

By contrast, deep learning builds upon neural networks, where the AI effectively discovers patterns itself. Unlike machine learning, where you have to tell it what everything is, deep learning works out what things are on its own. This process requires more data than machine learning does.

For example, if you were working in machine learning, you would have to tell the AI that a tomato is red, round, and has a green stem. However, if you were using deep learning, it would determine over time that tomatoes are red, round, and sometimes have a green stem.
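
To make the contrast concrete, here is a minimal sketch in Python using scikit-learn (my choice of library here, not something prescribed by either approach). The hand-coded tomato features, the toy “image” data, and the labels are all invented for illustration; the point is only that in the first path a person decides what the features are, while in the second the network has to find its own.

```python
# Toy contrast between the two approaches, using scikit-learn.
# All features, data, and labels here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# --- Machine learning: a person decides the features up front ---
# Each row is [redness, roundness, has_green_stem], hand-coded by a human.
X_handcrafted = np.array([
    [0.9, 0.8, 1],  # tomato
    [0.8, 0.9, 1],  # tomato
    [0.2, 0.9, 0],  # not a tomato (say, a golf ball)
    [0.9, 0.1, 0],  # not a tomato (say, a strip of red pepper)
])
y = np.array([1, 1, 0, 0])
ml_model = LogisticRegression().fit(X_handcrafted, y)

# --- Deep learning: the network gets raw input and finds its own features ---
# Here the "raw input" stands in for flattened 8x8 images; nobody tells the
# network what redness or roundness is, so it needs many more examples.
X_raw = rng.random((200, 64))
y_raw = (X_raw[:, :32].mean(axis=1) > 0.5).astype(int)
dl_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                         random_state=0).fit(X_raw, y_raw)

print(ml_model.predict([[0.85, 0.9, 1]]))  # classify a new hand-coded example
print(dl_model.score(X_raw, y_raw))        # how well the network fit the raw data
```

Both paths produce an answer; neither tells you, on its own, why that answer is what it is.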

However, one must understand that neither approach is right or wrong. They are different ways to get to an answer. If you feed all the information into the system, as with machine learning, you will get an answer; and if you give the system a bare-bones structure and let the AI figure things out, as with deep learning, you will get an answer, too.

The question is, how much context will you get? Will you know why that is the answer?

Why Flat Earthers Shouldn’t Write AI

I have mentioned Flat Earthers before, as you might remember. If you have time, you should watch this explanation from the ABC News YouTube channel:

If you didn’t have time to watch the video, my quick summary is that Flat Earthers think the North Pole is at the center of a flat disk, then the “world” is around that, and then the outer edge is Antarctica to keep it all contained, like an icy pizza crust. Moreover, the whole thing operates as a floating snow globe in space.

(For those who don’t know me well, you should know that I think this is barking mad.)

I mention the Flat Earth theory because, in another podcast, we talk about how our biases influence AI. As the ones writing the code, we bring our biases and preferences into it, which affects what we get out of the machine.

Imagine that the person feeding all the information into the AI is a Flat Earther. How might that worldview influence what comes out of the machine? I cannot tell you for sure what that insight might be, but whatever it is, it will fit within the context of a flat Earth. I, for one, wouldn’t trust that insight.

The same thing is happening with AI regarding Customer Experience. The idea that Customer Experience is only about rational things is as out there as the idea that the Earth is flat. Customer Experiences are about emotional things. Therefore, leaving out the emotions in Customer Science data is akin to throwing out all the pictures of our beautiful blue orb of a planet taken from space by the technology we sent there to see it.

In other words, without the emotional part of the experience included in the data input, the AI output about Customer Experience will skew toward the rational parts of the experience.

I, for one, wouldn’t trust that insight, either.

Behavioral Science Can Help Here

The opportunity here is for organizations to prevent this mistake with AI by including emotions in the data. Furthermore, the organizations that embrace the idea that we need this emotional side in the data will get answers from AI that provide a significant competitive advantage.

For example, I went into ChatGPT and asked it to tell me what data I needed to create a practical AI system to improve Customer Experience. It said to me that I would need the following data pools:

  • Customer demographic data
  • Customer purchase history
  • Customer interactions
  • Customer feedback
  • Website and app usage data
  • Social media data
  • Support ticket data
  • Customer satisfaction metrics

There is no mention of customer emotions or behavioral science here.

Now let me take a step back. I am not saying that ChatGPT is the answer to life, the universe, and everything. Besides, we already know the answer to that is 42. However, I am saying that emotional data would enhance the output of ChatGPT regarding Customer Experience. That’s because the better the data you have to feed into AI, the better the answers you get out of it.

Behavioral science can help here. First, it might help us identify better data sources to include, especially if we’re doing some of the higher-order modeling. Behavioral science could direct the structure of the models used to write the code. It might also work as a parallel track to help determine the “why” of the AI’s “what.”
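
As one illustration of what a better data source could look like, here is a minimal sketch, assuming a pandas setup. The column names, the keyword lists, and the crude plus-one/minus-one scoring rule are hypothetical stand-ins for a proper emotion measure; the point is simply that the emotional signal has to be put into the training data before any model can use it.

```python
# Sketch of adding an emotional signal to otherwise "rational" data pools.
# Column names, keywords, and scoring are illustrative stand-ins only.
import pandas as pd

# The kind of data ChatGPT listed: demographics, purchases, support tickets.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "tenure_months": [24, 3, 60],
    "support_tickets": [0, 5, 1],
})

# Raw feedback text, which often sits unused in a survey or CRM export.
feedback = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "comment": [
        "The agent was friendly and I felt valued.",
        "I am frustrated and feel ignored every time I call.",
        "Fine I suppose, nothing special.",
    ],
})

# A deliberately crude emotion score: +1 per positive emotion word, -1 per
# negative one. A real project would measure emotion far more carefully,
# but even this gives the model a signal it otherwise never sees.
POSITIVE = {"valued", "friendly", "delighted", "happy"}
NEGATIVE = {"frustrated", "ignored", "angry", "disappointed"}

def emotion_score(text: str) -> int:
    words = set(text.lower().replace(".", "").replace(",", "").split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

feedback["emotion_score"] = feedback["comment"].apply(emotion_score)

# Merge the emotional signal into the rest of the data pools so that
# whatever model is trained next can actually "see" how customers feel.
training_data = customers.merge(
    feedback[["customer_id", "emotion_score"]], on="customer_id"
)
print(training_data)
```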

For example, if the AI model kicks out results that say, “Okay, so, these people right here are going to churn,” behavioral science can help us go back through those customers’ behavior to develop some hypotheses about why. Then, we could test them.
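
A minimal sketch of that workflow might look like the following. The table, the features, and the churn flags are all made up; the predictions stand in for whatever model the organization already runs, and the output is only a starting point for hypotheses, not the answer itself.

```python
# Sketch: use the model's "what" (predicted churners) as the starting point
# for a behavioral-science "why". All numbers here are randomly generated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Pretend this table combines the usual data pools with an emotion score
# and a churn prediction from whatever model the organization already runs.
customers = pd.DataFrame({
    "emotion_score": rng.normal(0, 1, 500),
    "support_tickets": rng.poisson(2, 500),
    "predicted_churn": rng.integers(0, 2, 500).astype(bool),
})

# Step 1: compare predicted churners with everyone else on each feature.
comparison = customers.groupby("predicted_churn")[
    ["emotion_score", "support_tickets"]
].mean()
print(comparison)

# Step 2: turn the biggest gaps into testable hypotheses, for example
# "customers who feel ignored after repeated support contacts churn more",
# and then design an experiment or a survey to test that hypothesis.
```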

It’s Not Too Late

No one designing AI for Customer Experience is asking me for help. Convinced that they need only analytical data, they are happy to have an answer without understanding why it is the answer. The answer is 42, and that’s all they need to know.

AI systems will identify patterns of behavior that are caused by customer emotions. However, unless you’ve told the AI system about feelings, it won’t allude to them in the insights it provides.

In the next wave of decision-making over the coming years, AI will be predictive about experiences, and it will often get those predictions right.

But I wonder: if we gave it the proper tools to understand why the predictions are what they are and how emotions play a role, would we be able to get the experience right, too?

Colin has conducted numerous educational workshops to inspire and motivate your team. He prides himself on making this fun, humorous, and practical. Speak to Colin and find out more. Click here!

Republished with author's permission from original post.

Colin Shaw
Colin is an original pioneer of Customer Experience. LinkedIn has recognized Colin as one of the ‘World's Top 150 Business Influencers’. Colin is an official LinkedIn "Top Voice", with over 280,000 followers and 80,000 subscribed to his newsletter ‘Why Customers Buy’. Colin's consulting company, Beyond Philosophy, was recognized by the Financial Times as ‘one of the leading consultancies’. Colin is the co-host of the highly successful Intuitive Customer podcast, which is rated in the top 2% of podcasts.

1 COMMENT

  1. Good article, Colin! But it is not clear to me HOW you would fix this issue. I agree that adding customer emotion is critically important, but are you suggesting that adding sentiment analysis to the data sources above is sufficient, or do you have additional suggestions?
