Machines Won’t Take Over CX…But A Few AI Titans Might

Lately, if you’re like me and enjoy following the AI CX narrative (even if just for grins & giggles), you’re inevitably sucked into philosophical wormholes that always seem to pop you out at the same place – a world where machines rule all.

Strangely, though, we rarely encounter future scenarios that follow a path we’re already on, where machines are but tools used to assist us. If we project this scene forward, some interesting questions to ask are, “What does that world look like, and who are its haves and have-nots? Are AI titan firms forming?”

[Image: Technology titans. Source: http://cila.com.do/2016/?folio=technology-titans-special-committee]

AI, for all its hype and promise, is still very much in its infancy. Far from being able to get up, put on its clothes, and take your job, AI today is less a super scary robot and more like a smart washing machine (and yes, those exist). It can help us conserve resources and do specialized tasks more efficiently, like getting clothes clean using fewer resources, but it can’t do the higher-order thinking we take for granted, like abstract judgment and reasoning. However, that super smart washing machine (and all its other specialized variants) has an owner, and together they can wield tremendous influence. And antitrust laws (put in place over 100 years ago to prevent corporate behemoths from controlling entire markets) may be full of loopholes in the digital age.

Invoking a singularity argument, where machines alone rule, provides a convenient escape from a more complex debate about a future where human and machine forces collide and combine. In that scenario, a select set of firms use walled-garden data to feed their AI and, as such, seize unprecedented levels of control, influence, and power.

Here’s an example. We’re already seeing a massive consolidation of power and influence into AI titans like Google, Facebook, Apple, Microsoft, and Amazon (controlled by surprisingly few individuals); not pure machines, but formidable entities nonetheless, fueled by AI and directed by small pools of mighty people already circling their wagons around a plethora of data.

In the short run, we (the consumers) seem to benefit, getting innovative little features and conveniences such as travel guidance and digital yellow pages, but unbeknownst to most, we sacrifice gobs of data, and hence privacy, to get them. Each time we travel with GPS on, our whereabouts are tracked and stored. Each time we search, we leave preference footprints. Meanwhile, the behemoths rack up the data, building behavior and preference repositories on each of us.

So what’s the rub?

First, it’s our data. Thus, it would be nice to be able to view it, and if it’s wrong, correct it. The European Union recently passed the General Data Protection Regulation (GDPR), which goes into effect in May 2018. Its intent is to give consumers more rights and transparency around their digital data. Consumers outside the EU could use similar privacy protections.

Second, our choices are already being limited to some extent without our being aware of it. For example, when you search in digital maps, comparison-shop online, or ask a voice pod for restaurant recommendations, the top options returned may not be calculated objectively. Ranking algorithms already place higher emphasis on businesses that pay more to play, and search conglomerates like Google rank their own interests (including businesses they have a stake in) higher.
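To make the mechanics concrete, here’s a minimal sketch of how “pay to play” ranking might tilt results. The field names, weights, and listings are invented assumptions for illustration – this is not any search engine’s actual algorithm:

```python
# Hypothetical sketch of a "pay to play" ranking function.
# All names, weights, and listings are illustrative assumptions,
# not any search engine's actual algorithm.

from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float      # 0..1, how well it matches the query
    ad_spend: float       # dollars paid to the platform
    platform_owned: bool  # platform has a stake in this business

def rank_score(listing: Listing) -> float:
    score = listing.relevance
    score += 0.1 * (listing.ad_spend / 1000)  # paying more lifts the rank
    if listing.platform_owned:
        score += 0.5                          # house interests get a boost
    return score

listings = [
    Listing("Independent Bistro", relevance=0.9, ad_spend=0, platform_owned=False),
    Listing("Chain Restaurant",   relevance=0.7, ad_spend=5000, platform_owned=False),
    Listing("Platform's Partner", relevance=0.6, ad_spend=0, platform_owned=True),
]

for l in sorted(listings, key=rank_score, reverse=True):
    print(f"{l.name}: {rank_score(l):.2f}")
```

Note that the most relevant listing finishes last: modest advertising spend and a house stake outweigh pure relevance.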

Each time we purchase something, we’re casting a vote. When we go through a buying cycle, we create implied demand, and when we purchase, we reinforce that the supply is meeting the demand we created. When this cycle is cornered, choice becomes an illusion. To illustrate, on June 27, 2017, the EU slapped Google with a record-breaking $2.7 billion fine, charging the AI titan with doctoring search results to give an “illegal advantage” to its own interests while harming its rivals.

Third, firms can and will use your data for their benefit, and not necessarily yours. Prior to the digital age, people stereotyped others by their physical choices, such as their house, car, job, shopping habits, and clothes. Although those choices still factor in today, we also project digital personas: where we surf, what we share and like on Facebook and Instagram, what devices and channels we use, how we interact online, and so forth. When these behaviors are crunched and codified, they become rich fuel for algorithms that can manipulate, discriminate, or even do harm, without the algorithms’ owners giving any thought to side effects or aftereffects. Show a preference for fast cars and thrill-seeking vacations, and not only will you receive more of those offers, but you might also receive higher insurance premiums. Share enough medical history, and an insurer’s algorithm may score you at high risk for a chronic disease, even when there’s no medical diagnosis and no certainty you’ll ever develop that condition. That might make it very hard to get medical coverage.
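As a thought experiment, here’s a hedged sketch of how behavioral signals could be crunched into a premium adjustment. The feature names, weights, and base premium are all invented for illustration; no insurer’s actual model is implied:

```python
# Illustrative sketch: behavioral signals turned into a risk score
# that adjusts an insurance premium. Feature names, weights, and the
# base premium are invented assumptions, not any insurer's real model.

BASE_PREMIUM = 100.0

# Invented weights: how much each behavioral signal raises perceived risk.
RISK_WEIGHTS = {
    "fast_car_searches": 0.02,      # per search
    "thrill_vacation_likes": 0.03,  # per like/share
    "medical_history_posts": 0.05,  # per shared post
}

def risk_multiplier(profile: dict) -> float:
    """Turn raw behavior counts into a premium multiplier."""
    return 1.0 + sum(RISK_WEIGHTS[k] * profile.get(k, 0) for k in RISK_WEIGHTS)

profile = {"fast_car_searches": 25, "thrill_vacation_likes": 10}
premium = BASE_PREMIUM * risk_multiplier(profile)
print(f"Quoted premium: ${premium:.2f}")  # $180.00 -- behavior, not a diagnosis, sets the price
```

The point isn’t the math – it’s that behavior counts, not diagnoses, set the price.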

Admittedly, not all of the use cases lead to undesirable outcomes. In late 2016, American Banker ran an article on next-gen biometrics detailing how banks use consumers’ digital behavior signatures to detect fraud and protect consumers from its effects. And although consumers initially do benefit from such a service, what’s interesting (and concerning) is the nature of the behavior data fed to the fraud detection algorithm: the angle at which the operator typically holds the smartphone, pressure levels on the touch screen, and the cadence of keystrokes.
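Here’s a rough sketch of the kind of comparison such a system might make, based only on the signals described above (hold angle, touch pressure, keystroke cadence). The distance metric and threshold are my assumptions, not the banks’ actual method:

```python
# Rough sketch of behavioral-biometric fraud detection: compare a
# session's behavior signature against the account holder's stored
# profile. The metric and threshold are assumptions for illustration.

import math

# Profile learned from the legitimate user's past sessions (invented values).
stored_profile = {"hold_angle_deg": 35.0, "touch_pressure": 0.62, "keystroke_ms": 180.0}

def anomaly_score(session: dict, profile: dict) -> float:
    """Normalized Euclidean distance between a session and the profile."""
    return math.sqrt(sum(
        ((session[k] - profile[k]) / profile[k]) ** 2 for k in profile
    ))

session = {"hold_angle_deg": 80.0, "touch_pressure": 0.30, "keystroke_ms": 95.0}

THRESHOLD = 0.8  # assumed cutoff; a real system would tune this
if anomaly_score(session, stored_profile) > THRESHOLD:
    print("Flag session: behavior doesn't match the account holder")
```

The same three signals, once stored, could just as easily feed a very different prediction – which is exactly the concern raised next.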

Unquestionably, the bank’s primary goal is predicting whether an impostor is behind the device in question. Nonetheless, what’s stopping this same bank from using that data to predict a consumer’s likely mental state, such as the likelihood of inebriation, legal or otherwise? Moreover, whether that prediction is ultimately accurate is irrelevant to the immediate recommended action and its subsequent consequences. We have little protection from the effects of algorithmic false positives, and today, except for credit scores, few brands have any accountability for model scoring accuracy.

Here’s a scenario. An algorithm decides, based on your smartphone behavior, that you’ve been drinking, flags you as too drunk to drive, and disables your car, forcing you to find another way home. That’s one thing, but think about this – that same data might also be available to prospective employers, who use it to forecast your job performance, scoring you lower than other candidates based on its dubious substance-use prediction.

Who owns and manages your digital behavior data? Are they subject to use restrictions? The answer is that (although the data describes your profile and your behavior) you don’t own it, and your rights are limited. And although some of the more inconsequential data is scattered about (such as name, address, date of birth, and so on), the deeper behavioral insights are amassed, stored, and crunched by the AI titans, with seemingly no limits, no full transparency, and little insight into where it’s shipped and who else might eventually use it. They suggest we simply trust them.

“Those who fail to learn from history are doomed to repeat it”

History is always an amazing teacher. In the 19th century, railroads consolidated into monopolies that controlled the fate of other expanding industries, such as iron, steel, and oil. They dominated the distribution infrastructure – just as today’s AI titans, in many respects, control the lifeblood of modern companies: their prospect and customer traffic. And those expanding industries (iron, steel, oil) were no different; they, in turn, controlled the fate of the industries that depended on their materials.

Soon after their start, Google’s founders adopted a mantra: “Don’t be evil.” In October 2015, under the new parent company Alphabet, that changed to “Do the right thing.” Although the revised phrase still rings with the implication of justice, it raises the question of who benefits from that justice, and whether a disguised internal trust is forming.

Everyone knows that business, by its very nature, is profit driven. There’s nothing wrong with that, yet history teaches us that we need checks and balances to promote a level playing field for competitors, potential entrants, and consumers.

In his 1998 book “The Meaning of It All,” the famous physicist Richard Feynman tells a story of entering a Buddhist temple and encountering a man giving sage advice: “To every man is given the key to the gates of heaven. The same key opens the gates of hell.” Unpacked and applied to AI today:

1. The term “every man” can imply an individual, an organization made of people, or humankind as a whole.
2. Science, technology, data, and artificial intelligence are but tools. As history shows, humans use them for good and evil purposes.
3. AI’s impact on the future isn’t pre-determined. Each of us can play a role in shaping how it turns out.

Let’s ensure we live in a world where many (not a select few) benefit from AI’s capacity to improve lives, and where those responsible for its development, evolution, and application are held to fair and ethical standards.

Can AI be the rising tide that lifts all boats?

The power and potential of artificial intelligence technologies are clear, yet our ability to control them and deploy them sustainably is not. Who should regulate and control them (and their fuel – our data) is an evolving and ongoing debate.

Used responsibly and applied democratically, AI stands to benefit us all. Paradoxically, while it renders some of our old jobs obsolete, it also pushes us to retrain for a new world where it and we play new and more rewarding roles – where living standards rise and mortality rates fall.

What’s our guarantee we’re marching toward that future?

Honestly, there are no guarantees – our world is devoid of certainty. However, we can influence likely outcomes by advocating for practical checks and balances. Call me a dreamer, but I envision a world where our privacy is valued and respected. Where we better understand the value of our data and get a reasonable exchange in return when we share it. Where we appreciate what happens when we release it and can hold accountable those who illegally mangle or pawn it; and a world where we have assurance that when we share data, others uphold their end of the agreement, and we have recourse if they don’t.

If you would like to continue contemplating some of the top ethical implications of AI’s evolving story, click on this link:

Top 10 ethical issues in Artificial Intelligence

Here’s my favorite quote from it:

“If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.”

Vince Jeffs
Vince is a Senior Director for Product Strategy in Marketing & CX at Pegasystems. He has spent over 30 years as a product manager, consultant, and analyst in Marketing Technology. He works with enterprises to bring value to their customers by effectively using technology to improve customer experience. He has also been with IBM, Unica, SAS, and various Marketing agencies, and is a certified Professional Direct Marketer. Vince has spoken, blogged, and written on hundreds of Marketing technology topics. He is a proud alumnus of Georgetown University and the University of Maryland.

2 COMMENTS

  1. Hi Vince: thanks for this well-written and thoughtful article. Ethical issues in AI – and in business in general – are generally not well covered… until there’s a newsworthy debacle. Back in 2016, I wrote about a related topic in an article titled The Dark Side of Online Lead Generation, which examined how data is used in consumer exploitation (see http://contrarydomino.com/2016/05/the-dark-side-of-online-lead-generation/).

    It concerns me that sales and marketing technology forums overwhelmingly laud new capabilities for capturing and storing intimate customer data and, through analytics, developing detailed profiles of customer behavior. When Carnival Cruises introduced their breathlessly-hyped Ocean Medallion “personalization” system, I was one of the few people who was appalled, considering the intimate personal details the company could glean. Mostly, people just seemed jazzed that the device allows customers to receive their alcoholic beverage of choice without ever having to go through the indignity of actually talking to a crew member to ask for what they want.

    It seems silly when people write that “consumers have more information power than ever.” Nothing could be further from the truth. Somewhere in the executive suites at Amazon, Google, Apple, and Facebook, there are executives who are delighted that this misinformation is so commonly promoted.

    I am not as sanguine as you are that there is a future where customer privacy will be valued and respected – assuming it’s companies that are doing the valuing and respecting. There are three reasons:

    1. There’s too much money to be made through exploitation. (In the late 1800s, would railroad companies have voluntarily rerouted the tracks or curtailed construction because of possible environmental damage?)

    2. The growing population of digital natives does not seem concerned. Without public will, what would catalyze efforts to protect consumers?

    3. The current political environment favors less government “red tape” and regulation. Invariably, it’s spun as stifling business. It’s hard to get elected when endorsing such a position. Not everyone can be Elizabeth Warren or Al Franken.

    The result? More inequality. Those who can afford to pay for privacy, or who are savvy enough to ensure they can get it, will have distinctly better futures.

  2. Thanks for taking the time to comment, Andrew! I’ll check out your piece – thanks for pointing me to it.

    Given the trajectory, I couldn’t agree with you more – that’s why I wrote this article. I figured it’s something I could do to (if nothing else) build some further awareness, especially within a community at the center of this area.

    I’m not actually all that sanguine on this topic if you asked me to place a bet – hence why I said “Call me a dreamer…” I was just displaying some hope.

    Yet as we know, hope is not a strategy.
