Yes, 2019 is only a few days old. But it won’t take long for five trends in AI applications to make headlines in CRM circles. Look for these developments as the year progresses:
- Icy AI applications will get a dose of empathy
- Winning CX AI applications will effectively combine the wisdom of human and machine crowds
- Consumers will push for more security and control of their data
- Consumers cry out even louder for AI transparency, and against machine bias
- More businesses will ditch their legacy segment-based marketing systems for AI-based systems
Icy AI applications get a dose of empathy
Can humans teach AI empathy? Some think so. At MIT, for example, a project called Deep Empathy [i] is underway, where the objective isn’t so much to humanize AI as to teach it to recognize what sparks people’s emotions. Armed with that knowledge, a system could, for instance, display altered pictures of a person’s local surroundings, simulating the impact of a disaster there. What’s the intended outcome? More donations to a far-off relief cause.
Today’s AI is generally level-headed, some might say annoyingly so. It’s a natural human reaction to get frustrated with something that’s too mechanical, and the trouble is, these interactions tend to deteriorate. Yet there’s no real reason why systems can’t be trained to be more emotional and sympathetic, and that starts with them being able to sense emotional states. Armed with that awareness, they could tailor their responses to positively shift the mood of the other party. But today, they’re generally oblivious to emotional state, and simply respond coldly.
That’s already changing, however. Some have begun to train their AI tools to be more perceptive, warm, friendly, and even sympathetic.
For example, Humana’s CX AI system detects conversational cues to guide call-center workers through difficult customer calls. It recognizes that pitch inflections in a customer’s voice, or instances of cross-talk, are causes for concern. The system then grades each call experience on a scale from 1 to 10.
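Cue-based grading like this can be illustrated with a toy scorer. This is a hypothetical sketch only (Humana’s actual model isn’t public): it assumes pre-extracted counts of pitch inflections and cross-talk events and maps them to a 1–10 grade.

```python
# A minimal sketch of cue-based call grading. The features
# (pitch inflections, cross-talk events) are assumed to be
# pre-extracted by upstream audio analysis.

def grade_call(pitch_inflections: int, crosstalk_events: int) -> int:
    """Grade a call from 1 (difficult) to 10 (smooth).

    More pitch inflections and more cross-talk pull the score
    down; cross-talk is weighted as the stronger warning sign.
    """
    score = 10 - pitch_inflections - 2 * crosstalk_events
    return max(1, min(10, score))

# Example: a call with 3 sharp pitch inflections and 1 cross-talk event
print(grade_call(3, 1))  # -> 5
```

A real system would derive these cues from live speech analytics rather than hand counts, but the clamp-to-scale pattern is the same.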
Next, we’ll see this same scenario become totally automated, with no human in the loop. It’s amusing to witness someone raising their voice at a device like Alexa, except it’s also telling (highlighting an opportunity for a better human-machine relationship). Why shouldn’t she recognize that you’re frustrated with the interaction, learn from that, and work to improve it? 2019 will mark the year that fully automated systems become more aware and understanding. What’s the payback? Improved loyalty. Believe it, because in 2017 Forrester’s US CX Index [ii] again showed that emotion was the primary factor in driving loyalty in nearly every industry.
Another effort aimed at injecting empathetic intelligence into AI’s inner workings, and spreading the love, comes from Affectiva. They’re marketing an Emotion SDK and claim its machine has already learned from analyzing nearly 7 million faces across 87 countries. And now that trained intelligence is available to be embedded in any application. Developers embed the SDK in an application, such as a mobile app, and instantly enable it to recognize facial expressions in real time.
Look for escalating efforts in 2019 to get today’s cold AI to recognize emotions and moods, begin to understand them, and react in ways that make it appear AI applications are warming up and care.
Winning AI applications effectively combine wisdom of human and machine crowds
In 1953 (again at MIT), Marvin Minsky, one of AI’s fathers, proclaimed: “We’re going to make machines intelligent!” Standing beside him was Doug Engelbart, who replied: “You’re going to do all that for the machines? What are you going to do for the people?” And by 1968, Engelbart would “deal lightning from both hands” in his ground-breaking Mother of All Demos, showing how personal computing could serve as the ultimate augmentation to human intelligence.
Fast forward to today and for most recipe-based, tactical analytical tasks, machines either dominate now or will soon. Yes, we’re increasingly ceding roles to our machine friends. Yet we aren’t doomed. AI creatures are still no match for our multi-faceted brains. We can strategize, brainstorm, and hypothesize across a variety of unrelated topics. We excel at surveying complex structures, concepts, and systems, and drawing on analogies to posit adjustments or replacements, and we’ll be better at it with AI by our side.
In 2019, AI’s support role in strategic decision making will expand further, as evidenced by solutions like Palantir’s Gotham [iii], which supports human-driven analysis. As stated on their website, “Gotham brings intelligence, people, and data together to empower one another. As users collaborate and build off one another’s work, they create and grow a body of shared intelligence for their organization.”
We’re locked in an ironic race – make AI more human and humans more machine-like. But cultivating machine-like humans to efficiently deliver service will prove a disastrous CX strategy. First off, when we drive workers to be productive, they become less empathetic. Second, people are inherently inefficient at repetitive tasks for long stretches, under harsh conditions, whereas machines aren’t. Still for now, expect short-sighted strategies (and thus plenty of cases) where humans are equipped with devices to ratchet up their productivity.
For example, at Amazon warehouses, operational workers are wired up with GPS, time tracked, and held to ever-increasing productivity standards. And the moment there’s clear payback for a robot replacing a human, Amazon executes (no pun intended). Humans are destined to lose the mechanization battle. In spite of that there is good news: For the time being, human agents are delivering empathetic service that machines can’t.
Further, each of us also has advanced reasoning and planning facilities that machines don’t yet possess, and crowds of us can collaborate in ways that lead to better ideas and innovations. And every CX solution needs enhancements. What teams of humans bring are a vast array of specialized skills to assess a system, propose designs, iterate, and eventually streamline it. People (and teams) are inherently curious and innovative (willing to test, experiment, fail, try something different, and learn), have associative and metaphorical powers, and can imagine change.
Similarly, there isn’t one AI. There are many specialized AIs. So, designers must weave together disparate AIs into solutions that optimize machine-human interactions. To illustrate, an AI tasked with detecting a person’s emotions can pass that information to another AI instrument whose job is to score and assess that person’s overall emotional well-being. In turn, using intelligent routing, it can decide to engage the appropriate human expert. Systems like this are already being used in call centers, where voice data is fed to speech analytics, combined with data like call length and customer value, and escalated to humans only when necessary.
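That chained, specialized-AI pattern can be sketched in a few lines. The scoring functions below are stand-ins I’ve invented for illustration; a production system would call real emotion-detection and speech-analytics services in their place.

```python
# A sketch of weaving specialized AIs together: emotion detection
# feeds a well-being scorer, which feeds intelligent routing.
# All three functions are illustrative stand-ins.

def detect_emotion(utterance: str) -> float:
    """Stand-in emotion detector: returns frustration in [0, 1]."""
    angry_words = {"unacceptable", "ridiculous", "cancel"}
    words = (w.strip(".,!?") for w in utterance.lower().split())
    return min(1.0, sum(w in angry_words for w in words) / 3)

def wellbeing_score(frustration: float, call_minutes: float,
                    customer_value: float) -> float:
    """Second AI: blend signals into a single escalation score."""
    return (0.6 * frustration
            + 0.2 * min(call_minutes / 30, 1.0)
            + 0.2 * customer_value)

def route(utterance: str, call_minutes: float, customer_value: float) -> str:
    """Intelligent routing: engage a human expert only when needed."""
    score = wellbeing_score(detect_emotion(utterance),
                            call_minutes, customer_value)
    return "human_expert" if score > 0.5 else "automated_agent"

print(route("This is unacceptable, I want to cancel!", 12.0, 0.9))
# -> human_expert
```

The design point is the hand-off: each AI does one narrow job well, and the router decides when the human belongs in the loop.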
Companies that adopt agile, modern organizational models that effectively team up what humans do well with what machines have mastered will excel over their peers. Cognitive workloads will be accomplished by human-machine systems where AI augments people in their work and people assist machines. Smart approaches will train machines to perform routine tasks, and seamlessly escalate complex situations and edge cases to humans.
The most successful workers and organizations of the future will be those that can best leverage the collective intelligence of many AIs and humans to accomplish their daily tasks.
Consumers cry out for more security and control of their data, while commercial personalization carries on
In 2018, sadly, data breaches officially reached ho-hum status, with each new large-scale event barely making news or keeping our attention for even a day. Remember Starwood, Under Armor, Facebook, and Panera Bread (to name just a few)? No? You’re not alone.
Yet no matter how quickly we became accustomed to writing each one off, consumer and regulatory forces, already in full swing, collectively rallied for stronger rules and penalties. For instance, during 2018 we witnessed the roll-out of GDPR and, in its wake, the enactment of new consumer data protection legislation in California. Many other states passed laws to strengthen data breach notification rules.
Meanwhile, firms exposed in 2018 like Cambridge Analytica (which played a role in targeting voters in the Brexit and Trump campaigns) continued to plug away behind the scenes, joining their solutions with those of others like Unruly and Lotame. Together, they’ve created AI applications with targeting systems built on consumer data. Aimed at manipulating consumers’ emotions, they’ve built individual emotional profiles en masse and use them to fuel programmatic advertising platforms that in turn serve personalized messages for political and marketing campaigns. How effective are these? Unruly claims its systems are “about twice as effective as rational advertising.” Soak that in.
All of this leads us to 2019, where data breaches will again abound and seem commonplace. Personalization practitioners will further step up their games, pushing creepy boundaries, and counterbalancing consumer forces will press forward with renewed vigor. Expect the first big lawsuits to surface where data protection legislation like GDPR and ePrivacy, with the right to be forgotten and the right to full decision explanations, get road-tested in courts.
Consumers demand transparency and explainable AI, revolting against biased algorithms
There’s a clause in GDPR that speaks directly to algorithmic decision making, and people’s right to fully understand these decisions. It states that firms must provide:
“…meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”
Yes, it’s legalese, and as such leaves room for interpretation on both sides. Regardless, there’s at least one very dirty little secret behind the scenes. Most companies haven’t a clue where to begin to untangle some of their logic, let alone provide it as meaningful information so consumers can assess the consequences. In fact, many workers today use AI as an unassailable scapegoat, as if the company and the employees who work beside it aren’t accountable for its actions. Recently, I asked a company how its model decided the ranking of people shown on a TV monitor in a waiting area. The answer I got was, “Oh, the system determines that. We don’t really know how it does it.”
In 2019, that answer becomes much less acceptable, as scrappy consumers demand answers to tough questions.
Further, when that logic is buried in layers of complex models, like neural networks, it may be virtually impossible to provide meaningful information. In this case, firms may be forced to pull these models out of production use.
Also, this year consumers and businesses will begin to better understand the importance of data as the teachers of these models. Any model is only as objective as the data it learns from. If a model is fed biased data, even if inadvertently and indirectly, the model’s actions will reflect those biases. And those training it will be held responsible.
Here are some cases of algorithms making sensitive CX decisions where bias may pose a risk:
- Loan approval
- Use of discretionary budgets on fees
- Collection actions
- Fraud / loss prevention detection
- CX worker hiring decisions
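In decisions like these, learned bias often surfaces as a gap in outcomes across groups. Here’s a toy audit sketch on made-up data (real audits use formal fairness metrics such as demographic parity or equalized odds):

```python
# A minimal bias-audit sketch on hypothetical loan-approval
# decisions: compare approval rates across two groups.

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

def approval_rate(records: list[dict], group: str) -> float:
    """Fraction of approved decisions within one group."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

# Demographic-parity gap: a large gap flags the model for review.
gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
print(round(gap, 2))  # -> 0.33
```

Monitoring a metric like this over time is one concrete way to put "responsible humans at the helm" of an automated decisioning system.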
Sensible companies will take a staged approach to how they train, scenario test, monitor, re-train, and adjust their models, ensuring responsible humans are always at the helm. That way, they can still benefit from automated processing while simultaneously managing the chances of adverse side effects.
Businesses ditch legacy segment-based campaign management, and demand RTIM solutions
Real-Time Interaction Management (RTIM) is defined by Forrester as:
“…the phenomenon of delivering contextually relevant marketing to users across devices.”
For marketers and CX professionals, RTIM is ground-breaking. It’s always-on, real-time, considers context, and renders instant, individual decisions in the moment, consistently across channels. For years, marketers have relied on bucketing customers into segments and batch-and-blast treatments. With RTIM, however, they no longer need campaigns, segments, and cells; instead, they can use a system that decides based on an individual’s profile and optimally arbitrates the next-best-action for every interaction.
At the core of RTIM are machine-learning models that recalculate in real-time and render propensity scores for a consumer’s likelihood to churn, affinity for a certain product, or probability of response to a given offer. RTIM’s decision engine does the math to rank options and serves them in milliseconds, even when brands have hundreds of millions of customers interacting every day.
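The arbitration step amounts to ranking candidate actions by expected value. The propensity and value numbers below are purely illustrative; in a live RTIM engine they would come from models recalculated per interaction.

```python
# A sketch of next-best-action arbitration: rank each candidate
# action by expected value (propensity to accept x business value)
# and serve the winner.

def next_best_action(propensities: dict[str, float],
                     values: dict[str, float]) -> str:
    """Return the action with the highest expected value."""
    return max(propensities, key=lambda a: propensities[a] * values[a])

# Illustrative scores for one customer at one moment in time.
propensities = {"retention_offer": 0.30, "upsell_premium": 0.10, "service_msg": 0.80}
values = {"retention_offer": 120.0, "upsell_premium": 300.0, "service_msg": 10.0}

print(next_best_action(propensities, values))  # -> retention_offer
```

Note the customer most likely accepts the service message, yet the retention offer wins: arbitration weighs likelihood against value rather than chasing raw propensity.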
Prior to 2019, only a handful of large brands had deployed RTIM systems that proved able to scale to many channels and millions of interactions per day. One such example is the Commonwealth Bank of Australia (CBA), which in 2018 scaled to 18 channels and 20 million interactions per day. In 2019, with plenty of proof cases like CBA as tailwinds, and more firms catching on to the competitive advantages of this technique, expect a new round of big brands to adopt an RTIM engine and reorganize their people, processes, and data around a 1:1 approach.
It’s 2019 – one year from 2020. Thirty years ago, 2020 was envisioned as a far-off future with everything flying and ruled by robotic AI. Marketing and customer service in 1990 were delivered almost exclusively in person, by mail, or over the phone. Now, they’re predominantly online, with much delivered over mobile devices.
And although 2019 won’t be the year CX workers sprout wings or become cyborgs, it will mark the end of a dramatic 30 years of incredible transformation and serve as the milepost for the start of the next 30 years of CX innovations, empathetic AI, and counterbalancing consumerism.
[i] MIT Media Lab, Deep Empathy, https://deepempathy.mit.edu/, 2017
[ii] Forrester, The US Customer Experience Index, 2017