An Ode To Clippy: Why Chatbots Need To Stay In The Early 2000s Until They Mature

While many of us associate Microsoft AI with Tay, the Twitter-bot PR nightmare that turned into a marijuana-smoking, racist Nazi in under 24 hours, the company has actually been a pioneer in intelligent virtual assistants since the early 2000s with “Clippy”. The peppy little animated paperclip that lived at the side of Microsoft Office documents identified the type of work you were doing, offered tips, and let users search FAQs related to the task at hand. In essence, it was the first chatbot.


That said, Clippy was also wildly unpopular. The genesis of Clippy came from research performed at Stanford University, which found that the part of the brain activated in human interactions is also activated in human-computer interactions. Microsoft wanted to add a human-like element to its software in order to take advantage of this neurological phenomenon.

The problem was this: Clippy was not human, and “he” was too stupid. A study on human empathy and animacy for robots found that “participants hesitated three times as long to switch off an agreeable and intelligent robot as compared to a non agreeable and unintelligent robot. The robots’ intelligence had a significant influence on its perceived animacy.” We like our robots smart, and poor little Clippy was just annoying.

The Three Levels Of AI

Microsoft ended up removing Clippy from its products after a few years, but the company was undeterred, as we know from Tay, its wild AI experiment on Twitter. Tay, though ultimately a complete disaster, exhibited deep learning, the most complex of the three levels of AI: heuristics, supervised learning, and deep learning.

Heuristics are essentially decision trees: scripted automation that can only follow branches a developer wrote in advance. Supervised learning is when a machine performs a set of tasks and responds to instructions based on being “taught” through input-output training data; the most common application of this is interactive voice systems. The next step, the holy grail of AI, is deep learning. This is when a machine (or robot) improves itself through its interactions with customers. Tay, for instance, was designed to mimic the speech patterns of a 19-year-old girl, but through her interactions with Twitter trolls she ended up captioning a photo of Hitler with “swag alert” and accusing George W. Bush of orchestrating 9/11. Herein lies the problem with the smartest level of AI: it doesn’t have the ability to distinguish between right and wrong.
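To make the first two levels concrete, here is a minimal sketch in Python. The intents, example phrases, and the scikit-learn classifier are illustrative assumptions on my part, not anyone’s production bot; the point is only the contrast between a hand-written decision tree and a model “taught” from labeled input-output pairs.

```python
# A minimal, hypothetical sketch of the first two levels of AI described
# above. The intents and example phrases are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Level 1 -- heuristics: a hand-written decision tree. The bot can only
# follow branches its developer scripted in advance.
def heuristic_bot(message: str) -> str:
    text = message.lower()
    if "refund" in text:
        return "I can help with refunds. What is your order number?"
    elif "hours" in text:
        return "We're open 9am-5pm, Monday through Friday."
    return "Sorry, I didn't understand. Try asking about refunds or hours."

# Level 2 -- supervised learning: the bot is "taught" from labeled
# input-output pairs and can generalize to phrasings it never saw.
training_phrases = [
    "I want my money back", "please refund my order",
    "when are you open", "what are your business hours",
]
labels = ["refund", "refund", "hours", "hours"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(training_phrases)
classifier = MultinomialNB().fit(features, labels)

print(heuristic_bot("Can I get a refund?"))  # scripted branch
print(classifier.predict(vectorizer.transform(["give me my money back"]))[0])
# -> "refund", even though that exact phrasing was never scripted
```

The heuristic bot breaks the moment a user strays off-script; the trained classifier can at least generalize to phrasings it has never seen, which is exactly the step up that interactive voice systems rely on.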

Chatbots Must Remain Clippy-Like Until They Can Distinguish Between Right And Wrong

The purpose of Clippy was to act as a human-like assistant. The problem is that since AI cannot be human-like yet, people end up reacting poorly to it when it pretends to be. Take Siri, for instance: by personifying “her,” we create the false expectation that she is human-like, when really all we use her for is search, if even that. As an AI tool, she is mostly useless, and people end up frustrated with her when that becomes apparent.

Chatbots do not need to be human-like. They need only to serve very specific functions. As one author outlined, “Imagine I had sat down and found that there was a sticker on the back of the chair in front of me that said, “Want a beer? Download our app!” Sounds great! I’d unlock my phone, go to the App Store, search for the app, put in my password, wait for it to download, create an account, enter my credit card details, figure out where in the app I actually order from, figure out how to input how many beers I want and of what type, enter my seat number, and then finally my beer would be on its way.

“But imagine the stadium one more time, except now instead of spending millions to develop an app, the stadium had spent thousands to develop a simple, text-based bot. I’d sit down and see a similar sticker: “Want a beer? Chat with us!” with a chat code beside it. I’d unlock my phone, open my chat app, and scan the code. Instantly, I’d be chatting with the stadium bot, and it’d ask me how many beers I wanted: “1, 2, 3, or 4.” It’d ask me what type: “Bud, Coors, or Corona.” And then it’d ask me how I wanted to pay: Credit card already on file (**** 0345), or a new card.”

Heuristic and supervised learning chatbots can automate processes like ordering a beer at a baseball game. What they cannot do is act as humans.
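To show how little machinery that flow actually requires, here is a toy version of the stadium beer bot in plain Python. It is a pure decision tree, no learning at all; the menu and card-on-file detail come from the quoted example, while everything else (prompts, seat question, function names) is an illustrative assumption rather than any real stadium’s system.

```python
# A toy, text-based version of the stadium beer bot quoted above: a pure
# decision-tree (heuristic) flow with no learning involved.

def ask(prompt: str, options: list[str]) -> str:
    """Show a fixed set of numbered choices and loop until one is picked."""
    while True:
        print(prompt)
        for i, option in enumerate(options, start=1):
            print(f"  {i}. {option}")
        choice = input("> ").strip()
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            return options[int(choice) - 1]
        print("Sorry, please pick one of the numbered options.")

def beer_bot() -> None:
    qty = ask("How many beers would you like?", ["1", "2", "3", "4"])
    brand = ask("What type?", ["Bud", "Coors", "Corona"])
    payment = ask("How would you like to pay?",
                  ["Card on file (**** 0345)", "New card"])
    seat = input("What's your seat number?\n> ").strip()
    print(f"On its way: {qty}x {brand} to seat {seat}, paid with {payment}.")

if __name__ == "__main__":
    beer_bot()
```

Every branch is scripted in advance, which is exactly why this kind of bot is cheap to build and impossible to troll off-script.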

So Why Are We Still Making Human-Like AI?

At this year’s ITB travel fair in Berlin, a slender, silicone-encased air hostess introduced herself: “I am Chihira Kanae, a Toshiba communication android.” She is one of a recent crop of concierge-like robots that appear eerily human. They essentially just automate tasks, like ordering a drink, checking into a hotel, or scanning a boarding pass. And yet, rather than build a kiosk, we’ve developed fake humans who cost “about the same price as a Lamborghini.”

There are several dangers to this approach. One is the inevitable frustration over this human-like bot being, well, not human. Another is that as they do become more human-like, they may fall prey to another psychological phenomenon, the uncanny valley: humans don’t like it when robots act too much like humans. It’s a Catch-22 for businesses trying to develop and incorporate AI into their products: people don’t like robots that are too stupid, and they also don’t like robots that are eerily similar to humans.

An Ode To Clippy

While we may have mocked Clippy in its heyday, the perky little paperclip is looking pretty genius right now. It was meant to serve as just a glorified search tool, an “assistant” that could serve up material related to one’s immediate task. And while Microsoft might have screwed up in how Clippy presented himself, I think bot developers should take a step back and ask themselves whether the real problem was this: we’ve only now become ready for Clippy.

Elaina Ransford
Elaina Ransford has been writing about and working in CRM and mobile since her college days as an intern at a 4-person startup. Currently, she works closely with the Helpshift CEO to write pieces about chatbots, customer service, mobile, and startup culture.

1 COMMENT

  1. Hi Elaina, interesting article, although I would ask your final question slightly differently: could it have been that Clippy offered advice where none was wanted?

    Knowing and acknowledging that bots (or rather the intelligences behind them; the bot is still just a front end) are not yet where they need to be to become fully accepted, I do think that they can get a name/face, as long as it is clear that the interaction is not with a human.

    The really important point is to get those learning models (be they heuristic, supervised, or deep) to deliver results that are good enough. This can be achieved by strategies like the ones I laid out in http://customerthink.com/putting-the-cart-in-front-of-the-horse-chatbots-in-support/, which Helpshift supports.

    After all, humans are not infallible either 🙂
