AI and Machine Learning in 2017 – What to Expect

2016 has been the year of artificial intelligence and machine learning. With the year almost at an end, let me join the gang of pundits who venture into prediction land and pronounce what we see in our crystal balls. So here are my five predictions, plus two bonus ones.

AI Goes Mainstream in Consumer Environments

Alexa paved the way, the Google Assistant is on its heels, Microsoft's Cortana wants to get there, too – and Apple, surprisingly, is a late starter in this arena. Amazon began with a smart strategy by not overselling the capabilities of its underlying AI, as Apple did with Siri, which caused Apple some grief and many people some laughs. More and more helpful Alexa skills are being developed and implemented that improve its usefulness. Google, too, started late but is now in the game, following a different strategy: new functionality is simply made available, in contrast to Amazon, which opts to have users enable 'skills' individually. How users discover what these systems can actually do will be an interesting question.

Facebook's Mark Zuckerberg created a butler for his house, which he calls Jarvis, like Tony Stark's assistant in the Iron Man movies.

Google recently based its translation engine on machine learning and AI, with vastly improved translations as a result. Facebook's translations are based on AI, too – although that one still seems to have a lot to learn.

Not to mention the countless other consumer services Google offers that utilize machine learning and AI in the background.

Two of the main developments to watch here are platforms/interfaces/protocols and, of course, security.

AI-driven, intelligent Business Applications

Microsoft, Salesforce, and Oracle paved the way. SAP recently chimed in after being silent about machine learning and artificial intelligence for (too) long. The bottom line is that these big vendors, and many other, here unnamed ones, have understood that AI is not an end in itself but a means to an end. They are all strengthening their business applications by supporting them with processes based on machine learning algorithms, thus delivering solutions that are more helpful for their users and/or customers – be it intelligent follow-up, relationship intelligence, proactive (or rather prescriptive) maintenance, smart target group segmentation, chatbots automating support, intelligent knowledge bases, or smart product recommendations for site visitors.

Let's just look at service and support, and at sales. Automation with the help of bots can be incredibly helpful here. Although bots tended to offer a very limited and poor customer experience earlier in 2016, their use will be on the rise. The underlying AIs will learn, and implementers of bots will learn to start with simple, meaningful interactions and grow more complex from there. Overall there is massive potential for improved experiences at scale.
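To make the 'start simple' idea concrete, here is a minimal, purely illustrative Python sketch. The intents, keywords, and replies are invented for this example, and a real bot would use proper natural language understanding rather than keyword matching. The point is only that a bot which handles a few well-understood requests and hands everything else to a human already delivers value.

# Hypothetical sketch of a deliberately simple support bot.
# Real implementations would use ML-based intent detection; here the
# lesson is only: cover a few clear intents, escalate the rest.
from typing import Optional

INTENTS = {
    "reset_password": ["password", "reset", "locked out"],
    "order_status": ["order", "shipping", "delivery", "track"],
}

RESPONSES = {
    "reset_password": "You can reset your password on the account page.",
    "order_status": "Please share your order number and I will look it up.",
}

def detect_intent(message: str) -> Optional[str]:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None  # unknown request

def reply(message: str) -> str:
    intent = detect_intent(message)
    if intent is None:
        # Hand off instead of guessing – the main lesson of early 2016 bots.
        return "Let me connect you with a human colleague."
    return RESPONSES[intent]

if __name__ == "__main__":
    print(reply("I am locked out and need a password reset"))
    print(reply("My invoice seems wrong"))

The design choice worth noting is the explicit hand-off: everything the bot does not understand goes to a person, which keeps the experience from degrading as complexity grows.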

AI on the Dark Side of the Force

As we have seen, especially with Microsoft's Tay, training an AI is not a trivial thing. As there is no conscience, there is no ethics – rather a blank slate – so it is easy to lead an AI to the 'Dark Side of the Force'. This may happen via explicit or implicit bias. Tay became malicious because of trolls; other systems simply suffered from something one could call the 'programmers' bias'. Other AIs, like the Tesla Autopilot, are sometimes used beyond their safe limits, which may cause accidents. Remember the first fatality involving a self-driving car, or the incident where a car allegedly ran a red traffic light? TechRepublic recently collected its top 10 AI fails. We will see more of these fails the more we use and rely on AIs, until we manage their training better. It is as simple as this: the more we trust AIs, the higher profile potential incidents will have.

Vendor Lock-in by AI

This has the potential to become an interesting one, though it might not happen in 2017. Every vendor has an incentive to be sticky, i.e. to make sure that its functionality continues to be used. In the context of AI and machine learning this gains another dimension, or two. First, there is the model itself, which may very well be proprietary. Second, there is a good chance that the learning algorithm is proprietary. This combination can make it difficult to switch from one AI to another – something that needs to be considered carefully.

‘Democratization’ (yuk) of AI

I hate this term. But it is the one that is currently used. Earlier, we called this commoditization.

Simpler and less catchy.

But regardless of what we call it – it is happening. My first two points already hinted at it. More and more functionality will be driven by, or at least involve, some measure of machine intelligence. With that, prices will drop. The phase of price-skimming ('creaming') strategies will end. This may happen in 2017 or a bit later, but it will happen.

Soon.

Bonus Ones

These two topics came to mind while I was writing down my thoughts on the main developments. We should have a look at them, too – in 2017 and beyond.

AI and Morals

Which leads me to the topic of ethics. In itself, an AI doesn't have any morals – unless we train it to have them. This will lead to very interesting questions that need to be answered.

Starting with the very simple one: Which morals are right?

There are Asimov's three laws of robotics, but these are incomplete in themselves. It is very easy to construct moral dilemmas. Clearly, a lot of thought is necessary here.

This also holds true for the simpler topic of AI and the law. The question of who is responsible in the case of an accident caused by, or involving, an autonomously driven car may serve as an example.

I expect these questions to be addressed more and more in the coming years.

Criminalization of AI

Where there are good uses, there are also evil uses. As said above, an AI in and of itself does not have a conscience (yet), nor does it have morals – unless we train it to have them.

Now, where there are benevolent trainers, there are also malevolent ones. It is evident that not only businesses but also government agencies are building and using AIs (ever heard of SKYNET?) – and it is a safe bet that criminals are doing so, too. And as criminals tend to have a somewhat stretched sense of morals and ethics, we can expect the same of the AIs they use. So we will likely see a lot of crime conducted with the help of artificially intelligent machines.

Republished with author's permission from original post.

Thomas Wieberneit

Thomas helps organisations of different industries and sizes to unlock their potential through digital transformation initiatives using a Think Big – Act Small approach. He is a long-standing CRM practitioner, covering sales, marketing, service, collaboration, customer engagement and customer experience. Coming from the technology side, Thomas has the ability to translate business needs into technology solutions that add value. In his successful leadership positions and consulting engagements he has initiated, designed and implemented transformational change and delivered mission-critical systems.
