Opaque and transparent AI and the ethical implications for customer experience – Interview with Rob Walker



Rob Walker's keynote at Pegaworld

Today’s interview is with Dr. Rob Walker, Vice President, Decision Management and Analytics at Pegasystems, a leading provider of software for customer engagement and operational excellence. I had a chance to sit down and chat with Rob when I met up with him at Pegaworld in June. We talk about ethics, artificial intelligence (AI), impact and the balance of the human touch and technology in customer experience. This is the last of three interviews that I conducted at Pegaworld so do check out the others with Don Schuerman, CTO and Vice President of Product Marketing at Pegasystems here and with Mattijs ten Brink, Chairman & CEO of Transavia here.

This interview follows on from my recent interview – Towards zero percent agent turnover in the contact centre – Interview with Tom Goodmanson – and is number 224 in the series of interviews with authors and business leaders that are doing great things, providing valuable insights, helping businesses innovate and delivering great service and experience to both their customers and their employees.

Highlights from my conversation with Rob:

  • In some cases the mix of human touch and technology in customer experience is out of balance and there is a lack of the human touch.
  • But the emergence of some new technology, principally fueled by advances in AI, is making interactions with technology more human-like.
  • For many people (customers and companies alike), that may be enough.
  • At the same time, some of the new technology is also getting smart enough that it is starting to recognise when it is best to hand over to a human and then doing that seamlessly.
  • We are seeing an emergence of a collaboration between humans and technology.
  • Often we are seeing technology driving strategy rather than the other way around. This manifests itself in cases where companies add ‘sexy’ new technologies/channels but don’t have the budget to do it well, so they end up adding a channel that is disconnected from the rest of their operations and customer experience.
  • Pega’s Customer Decision Hub seeks to address this by providing one central ‘brain’ that all channels connect to.
  • Even if a channel is well developed and implemented well, if it isn’t connected then it will still break the customer experience.
  • When you centralise intelligence, the channel becomes mere presentation, which makes it much easier to control and to manage the hand-overs that take place between channels.
  • Pega’s Customer Decision Hub is, therefore, both a data view (a single unified view of the customer) and an action view.
  • Engagement is the whole currency of a customer relationship.
  • Many companies are looking at the volume of engagement rather than the quality of engagement. That’s a mistake.
  • By separating their inbound from their outbound channels, many companies are missing significant opportunities to be more proactive in their outreach. This, in turn, will help them improve engagement.
  • Having a unified customer strategy and tying inbound and outbound channels together will help build and deliver a much more engaging proposition.
  • However, to achieve this, companies need to step back and design their own framework around what sort of relationship they want to have with their customers and how the technology will help execute and support that.
  • That is a much more mature way of developing engagement rather than just talking about things like ‘share of wallet’, which is very transactional, siloed and not relationship based.
  • Pega recently conducted a global study called: What Consumers Really Think About AI.
  • At Pegaworld and during his keynote, Rob drew a distinction between opaque and transparent AI. (Btw you can watch his keynote here).
  • Opaque AI is one that cannot intrinsically explain itself. That does not mean it is not effective but it is high-risk because you don’t know how it works or what sort of attitudes it has developed. Therefore, you have to test for that.
  • Meanwhile, transparent AI is not as high-risk as the system can explain itself and is auditable.
  • Therefore, when considering uses of AI involving risk management or similar areas, you would probably always insist on the use of transparent AI. In other areas, however, the use of opaque AI may be fine.
  • But let us not forget that humans themselves tend to be pretty opaque when it comes to their decision-making.
  • Opaque AI tends to be more powerful than transparent AI. The fact that a transparent AI system has to be able to explain itself is a constraint on its effectiveness and analytical ‘horse-power’, as it has fewer degrees of freedom in which to evolve.
  • The power of these technologies will require companies to make choices about how and when to use them as well as having ‘ethical sign off’ mechanisms.
  • You cannot control AI just by controlling the data. Your AI will be able to make all sorts of inferences from the data that it is given. Therefore, you have to test for the attitudes and biases that it is developing based on its outputs.
  • The ethical sign off mechanism that is required is no different from other established governance policies, but it needs to be updated to take into account the impact and risk of using AI.
  • The risk is that companies using opaque AI will not know whether its outputs adhere to established policies and guidelines unless they explicitly test for it.
  • ‘With great power comes great responsibility’.
  • But, we also need to understand that at some point we will need to learn to trust AI even if we do not understand how it came to a decision.
  • For example, when an opaque AI comes up with a diagnosis for a medical decision that is more likely to be right than the diagnosis from a human physician, then what do you do?
  • The challenge going forward will be whether companies are willing to accept more ‘errors’, or performance below what is possible, because they insist on transparency and the use of transparent AI.
  • If companies choose that course there is a risk that they will be consciously choosing to become dumber.
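Rob's point that you cannot control an AI just by controlling its data — you have to test the attitudes it develops by examining its outputs — can be sketched in code. The snippet below is a minimal, hypothetical illustration (the "model", its scoring rule, and the cohorts are all invented for the example, not anything from Pega): it treats a scorer as a black box and audits it by comparing approval rates across two synthetic cohorts that differ only in one attribute.

```python
# Minimal sketch of output-level auditing of an opaque model.
# The model is a hypothetical stand-in for any black-box scorer
# whose internals we cannot inspect.
import random

random.seed(0)

def opaque_model(applicant):
    # Black-box decision system: we only observe its outputs.
    # (Here it secretly penalises postcode "B", simulating a
    # bias the system has "learned".)
    score = applicant["income"] / 1000.0
    if applicant["postcode"] == "B":
        score -= 20
    return score > 40  # approve / decline

def approval_rate(applicants):
    decisions = [opaque_model(a) for a in applicants]
    return sum(decisions) / len(decisions)

# Two synthetic test cohorts with identical income distributions,
# differing only in a sensitive-style attribute (postcode).
group_a = [{"income": random.gauss(50000, 8000), "postcode": "A"}
           for _ in range(1000)]
group_b = [{"income": random.gauss(50000, 8000), "postcode": "B"}
           for _ in range(1000)]

rate_a = approval_rate(group_a)
rate_b = approval_rate(group_b)
gap = abs(rate_a - rate_b)
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print("bias flagged" if gap > 0.05 else "no significant gap")
```

Because the cohorts are otherwise identical, a large gap in approval rates flags a bias that inspecting the training data alone would never reveal — which is exactly why output-level testing matters for opaque systems.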

About Rob (taken and adapted from a bio on Pega’s site)

Dr. Rob Walker is Vice President, Decision Management at Pegasystems. Prior to the acquisition of Chordiant by Pegasystems in April 2010, he was responsible for managing the strategic direction and development of Chordiant’s industry-leading predictive decisioning technologies. As a member of the office of the CTO, Rob also provided support to Chordiant’s global sales activities and partnership initiatives. He joined Chordiant when the company he co-founded in 2000, KiQ Ltd., was acquired by Chordiant in December of 2004. KiQ was a specialist provider of decision management and predictive analytics software. Prior to joining Chordiant, Rob spent eight years in positions of increasing responsibility with leading global information technology consulting firm, Capgemini. From 1999-2000, he served as Program Director for Capgemini, during which time he was responsible for the creation, development and evangelization of technological innovations in the area of Information Capitalization (business intelligence and predictive analytics). Rob holds a Ph.D. in Computer Science from Free University in the Netherlands.

Check out Pegasystems, their global study on What Consumers Really Think About AI, say Hi to them on Twitter @pega and connect with Rob on LinkedIn here.

Republished with author's permission from original post.

Adrian Swinscoe
Adrian Swinscoe brings over 25 years’ experience of helping companies large and small develop and implement customer-focused, sustainable growth strategies.

