
Customer experience, opaque AI and the risk of unintended consequences

Adrian Swinscoe | Sep 4, 2017


Rob Walker keynote at Pegaworld 2017

In recent days we have seen an escalation in the war of words between Elon Musk and Mark Zuckerberg surrounding the dangers of artificial intelligence (AI).

Musk worries that, if left unregulated, AI will grow and grow in influence and could ultimately pose an existential threat to humanity. As a result, he is advocating that governments start regulating the technology.

Zuckerberg, on the other hand, disagrees on the need for more regulation and is more sanguine about the prospects of AI.

Now, this is a macro level argument about the prospects and nature of AI. And, it is one that is set to rumble on and on.



But, there is a more micro level challenge facing firms that are using AI technology right now.

This is particularly relevant for organizations that are using AI to enhance their customer experience: making it more personalized, making their service more proactive, or using algorithms and data sets to predict the most likely outcome of a particular situation, the next best offer or the next best action for a customer.

The challenge was articulated by Dr. Rob Walker, Vice President, Decision Management and Analytics at Pegasystems, during a keynote speech at Pegaworld, which took place in early June in Las Vegas.

In his keynote, Rob explained that there are two types of AI. The first is Transparent AI, a system built around a machine learning algorithm that can explain how it works and can be audited.

The second is Opaque AI, a system, again built around a machine learning algorithm, that is more ‘black box’ in nature and one that cannot intrinsically explain itself and cannot be audited.

[Note: You can watch Rob’s keynote here and here is a link to a follow-up discussion that I conducted with him for my podcast.]

Now, Opaque AI systems tend to be more powerful than Transparent AI systems, given that requiring a system to ‘explain’ itself and to be auditable can act as a ‘brake’ or restraint on its effectiveness and analytical ‘horse-power’. And, given their power, they are likely to prove increasingly popular amongst organizations searching for tools and technology to help them differentiate themselves and deliver better business and customer outcomes.

But, Opaque AI comes with its own set of risks. It may be more powerful than a transparent system but, because of its nature, we are also limited in understanding what sort of attitudes it will develop and what outputs it might generate.

Remember Microsoft’s racist AI chatbot? Or, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did? We don’t want a repeat of those incidents, right?

As such, companies need to make conscious choices about what type of AI technology they want to use (Transparent or Opaque), how they use it and when to use it.

Dr Nicola Millard, Head of Customer Insight & Futures within BT’s Global Services Innovation Team, brings this choice to life in an upcoming whitepaper (Botman vs. Superagent) when she writes:

“The debate extends to robo-advice in the financial services industry, and medical diagnosis. If a person is given a personalized recommendation based on the output of a machine learning algorithm, how is that advice regulated if the learning algorithm can’t show us how it came to that particular conclusion?”

In the face of such challenges, Rob Walker suggests that firms are likely to choose to deploy Transparent AI systems in areas that are subject to regulation, compliance and risk management issues.

However, in other areas they are likely to adopt Opaque AI.

But, given the ‘black box’ nature of Opaque AI, these systems will need to be accompanied by testing regimes that establish the attitudes and biases the system is developing. In addition, their outputs will also need to be subject to ‘ethical’ and quality sign-off mechanisms to make sure that they comply with existing laws, regulations, brand policies, customer promises, company procedures and so on.
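To make the idea of such a testing regime concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration (the `opaque_model`, the `behavioral_audit` function and the data are invented for the example, not anything from Pegasystems or BT): because a black-box system’s internals cannot be inspected, the audit treats it purely as a mapping from inputs to outcomes and compares approval rates across a sensitive attribute that the model never sees directly.

```python
def behavioral_audit(model, applicants, group_key, threshold=0.1):
    """Audit a black-box model by its behavior alone: tally approval
    rates per group and flag any disparity above the threshold."""
    counts = {}
    for applicant in applicants:
        group = applicant[group_key]
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + model(applicant), total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    disparity = max(rates.values()) - min(rates.values())
    return rates, disparity, disparity > threshold

# Stand-in for an opaque model: 'gender' is withheld from its inputs,
# but a correlated proxy feature ('hours') still drives its decisions.
def opaque_model(applicant):
    return 1 if applicant["hours"] >= 35 else 0

applicants = (
    [{"gender": "F", "hours": 30} for _ in range(50)]
    + [{"gender": "M", "hours": 40} for _ in range(50)]
)

rates, disparity, flagged = behavioral_audit(opaque_model, applicants, "gender")
print(rates, disparity, flagged)  # the proxy reproduces the bias: flagged is True
```

The point of the sketch is that withholding a sensitive field from the training data is no guarantee the system won’t develop the corresponding bias, so the audit has to run on the system’s actual behavior rather than on its ‘logic’.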

Now, these mechanisms do not need to be markedly different from the established governance and quality policies and procedures that normally exist, but they will need to be updated to take account of the impact and risks of using this type of AI and then built into existing operations.

Nicola Millard describes the situation very well when she says:

“As with any system, AI is very much a case of “garbage in, garbage out”. If we want AI that produces “good” answers, we need to feed it “good” data. We need to be responsible parents, teach it, supervise it, give it a healthy data diet, and work alongside it, rather than leaving it to its own devices”.

However, right now, it is not clear that these types of practices are fully developed or widespread.

But, they should be.

To not have them in place risks opening up your organization, your customer experience and your customers to some potentially damaging and unintended consequences.

This post was originally published on Forbes.com.


Republished with author's permission from original post.




Categories: Blog, Customer Analytics, Customer Experience, Think Tank


6 Responses to Customer experience, opaque AI and the risk of unintended consequences

  1. Andrew Rudin September 5, 2017 at 7:26 am (240 comments) #

    Hi Adrian: I’m glad to see you putting the spotlight on ethical use of AI. I’m confused by a couple of your statements, and I hope you can clarify them.

    1) In distinguishing opaque AI from transparent AI, you describe it as “a system, again built around a machine learning algorithm, that is more ‘black box’ in nature and one that cannot intrinsically explain itself and cannot be audited.” I understand that the possibility exists of an AI system that cannot intrinsically explain itself. But what makes an AI system resistant to audits or unable to be audited? If, in fact, this could ever occur, I am surprised that any company would deploy such an “un-auditable” system, given the enormous ethical, legal, and other risks they would absorb.

    2) If “opaque AI” exists as you describe it, I am unclear why, by dint of its opacity, opaque AI would “tend to be more powerful” than ‘transparent AI’ systems, or have more “analytical ‘horse-power.’” I have not found that the ability to audit (or the lack of it) correlates with the potency of a system, algorithm, or process.

    Note: when I think of “opaque AI” in the CX context, I consider it in terms of what’s visible to consumers, rather than its mechanisms also being opaque to the companies that deploy it. To me, that just seems odd.

    If you could give some more detail, that would be great.

  2. Ed Davis September 5, 2017 at 4:30 pm (1 comment) #

    Hi Adrian;

    Boy… uncharted waters where opaque AI dares to tread. I think the world is demanding more transparency (who do you trust?). Customers are driven not so much by pricing as by trusting that the service lives up to the promise.

    Opaque AI can be all-powerful, but beware of the Kraken.

  3. Rob Walker September 6, 2017 at 2:49 am (2 comments) #

    @Andrew

    Re 1) I think the point about opaque AI is that it *should* be audited. In my experience this task is taken too lightly. Many assume that if the data fed to an AI is ‘innocent’, the AI won’t be able to develop any undesirable biases. For instance, there might be an assumption that an AI couldn’t develop misogynist traits, say, if ‘gender’ was withheld from the data it’s fed. This is not at all the case. And if the AI is non-transparent, it’s also near impossible, or even impossible, to verify that by inspecting the AI’s ‘logic’. It can only be done by auditing the AI’s actual behavior. We should, in short, audit AI like one would audit a human decision maker and not like a conventional IT system.

    Re 2) Opaque AI is not some future thing. Many algorithms currently in use, in commerce as well as other areas (like playing Go), are inherently opaque. They work, but can’t explain how. When these types of AI are used in a Cx context, they may, for instance, develop behavior that goes against brand policy (or regulations/legislation). They may reject certain customers for a loan without an explanation why, or use a marketing budget to advertise (with good results) in unsavory places. They will, in other words, make decisions that affect customers and in many areas it’s probably important to understand exactly how those decisions were made.

    Lastly, to insist on transparency is to insist on a subset of knowledge representations that can be used to capture intelligence. This limits the potential power of AI. If, for instance, in Cx, it can only model customer behavior in simple formats like a decision tree and not, say, in a multi-layered neural network, this is curbing the AI’s power. Mathematically, everything can be expressed in simple formats like decision trees but those trees would become so large and complex as to be completely opaque themselves.

  4. Andrew Rudin September 6, 2017 at 2:30 pm (240 comments) #

    Hi Rob: thanks for your clarification. I understand the difference between cannot be audited and should be audited. The article uses the former expression, but according to your explanation, it should be the latter. Assuming that’s the case, I agree. As a risk management practitioner, implementing a system, process, or procedure that cannot be audited would be malpractice. And a heinous error where customer, employee, supplier, or investor safety and security are involved. Disclosure: I am not a legal professional, but my work involves advising companies on their risks for legal liability. An “opaque” AI system as Adrian has described it, would be a non-starter, in my view. But the issue might be moot: I have yet to find a technology-based system that is “un-auditable,” or one that “cannot be audited.”

    Personally, I have never harbored any illusions that AI data doesn’t skew results, or perpetuate biases. If that were the case, our “smart” HR systems would continue to select male applicants for traditionally male-dominated fields because in the past, the vast majority of those meeting a “successful” profile were . . . men. Same for women-dominated jobs, too. Anyone who believes algorithms – or their results or interpretations – are fair is naive.

    Finally, I understand your point that customer decision processes can be described as opaque because many activities occur on the neural or synaptic level, and the chemistry and mechanisms are poorly understood or not yet known. I am not aware that the same issue exists with AI. Even with ones considered “large” or “complex”, they are not “completely opaque” to those who created them or use them – and probably aren’t opaque at all. Is there a single AI system in use whose activity or action cannot be distilled to defined lines of code or instructions expressed in 1’s and 0’s?

  5. Rob Walker September 7, 2017 at 1:29 am (2 comments) #

    Hi Andrew: expressing AI logic in 1’s and 0’s is not the same as understanding it. I don’t believe the AlphaGo designers understand exactly (or perhaps even mostly) how their AI plays Go, why it made certain moves, or how it acquired the insight to play that particular move. Many believe that our own, analog, neural synapses can ultimately be expressed digitally but that doesn’t mean we understand how they ‘think’. If you follow Adrian’s link to the keynote I think you’ll see AI examples (notably the ‘Bob Ross’ video) where it seems hard to argue that humans can follow all that’s going on.

    At Pegasystems, we sometimes use evolutionary algorithms (using evolutionary principles to evolve models or designs) and we can audit the performance and behavior of the (predictive) models it produces. They work. Often beautifully. Do we understand how they work? Not always. That’s why we put controls in the software itself to make sure any company using our AI can insist on transparent algorithms where necessary and opaque where safe/acceptable.

  6. Adrian Swinscoe September 7, 2017 at 9:24 am (37 comments) #

    @Andrew Thanks for your comment and questions.
    @Rob Thanks for pitching into the conversation and addressing Andrew’s questions 🙂
