Many people have only recently become acquainted with the concept of Artificial Intelligence (AI), but the term was coined 55 years ago this month at the Dartmouth conference. The name AI has never sat well with me, partly because calling what computers do "artificial" seems strange. (I haven't heard Google referred to as "artificial search" to contrast it with the good old-fashioned searching that human beings do when poring over books in the library.)
Is it artificial?
But my bigger problem with the term artificial intelligence is that most definitions unhelpfully explain that it describes how computers perform tasks once thought to require human intelligence. First off, the hard part of defining artificial intelligence is the intelligence part, not the artificial part. Worse, that definition creates a moving target: every time a computer succeeds at something once thought to require human intelligence, we discover that it didn't actually require humans at all. Does that mean it's no longer artificial intelligence?
AI is always learning
To me, what's happening now is different from the traditional automation by which computers, and even mechanical machines, have taken over human labor. What we call AI is different because of the way it learns. A machine does exactly what it was designed to do: an engineer carefully worked out the process and tested the machine on its task until everyone was confident that it performed safely, efficiently, and with high quality.
Traditional software programs are the same: they do what the programmer designed them to do, step by step, making decisions along the way, but only by firing predetermined rules to decide what to do next. And that software was tested against its defined task to make sure it handles all situations properly, runs fast enough, and uses as few resources as possible.
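To make "firing rules" concrete, here is a minimal sketch (the scenario, function name, and rules are invented for illustration): a traditional program that routes a support ticket with fixed if/else rules. It behaves identically on day one and a year later; no amount of traffic changes its decisions.

```python
# A toy rule-based router (invented example): every decision is a
# hand-written rule, so its behavior never changes no matter how
# many tickets it processes.
def route_ticket(subject: str, is_paying_customer: bool) -> str:
    subject = subject.lower()
    if "refund" in subject:
        return "billing"
    if "crash" in subject or "error" in subject:
        return "engineering"
    if is_paying_customer:
        return "priority-support"
    return "general-support"

print(route_ticket("App crash on startup", True))  # -> engineering
```

The only way this program gets "smarter" is for a programmer to edit the rules by hand.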
Once designed, neither the machine nor the traditional software program gets any better at its task. It does it well on day one and just as well a year later–at best. Sometimes it performs a bit worse, because the machine needs maintenance or because the software’s environment changed and it needs to be updated.
Typical automation doesn’t surprise us
Sometimes these machines surprise their designers. A machine might require less-frequent maintenance than anticipated, or a program might continue to work even as conditions change. But they never surprise us by suddenly performing their tasks better, the way a human being does. We've all been startled when a child suddenly performs a task very well that they knew nothing about just a few weeks before.
AI models can suddenly perform their tasks better, surprising even their designers. That's why it's called machine learning. The more often the AI model performs the task, and the more different situations it sees, the better the results become. That's why AI has taken off in the last few years as Big Data has come to the fore: the more data AI has to work with, the faster it learns.
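The "more data, more learning" effect can be shown in miniature. This is a toy sketch, not any real product's system: a simple one-dimensional least-squares fit whose estimate of an underlying relationship gets better as the training set grows.

```python
# Toy illustration: a least-squares fit recovers the true relation
# y = 3x + 1 more accurately as it sees more noisy examples.
import random

random.seed(0)

def noisy_sample(n):
    """Generate n noisy observations of the true relation y = 3x + 1."""
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 3.0 * x + 1.0 + random.gauss(0, 1.0)) for x in xs]

def fit_line(points):
    """Ordinary least squares for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

for n in (10, 100, 1000):
    a, b = fit_line(noisy_sample(n))
    print(f"n={n}: estimated y = {a:.2f}x + {b:.2f}")
```

With more observations, the estimated slope and intercept converge toward the true 3 and 1. Real machine-learning models are vastly more complex, but the dynamic is the same: more data, better results.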
What does this mean for you? The more data you have, the better AI can learn how to perform tasks for you. For example, at SoloSegment, where I'm senior strategist, our clients find that the longer they run our SearchBox software, the better the search results get. That's the power of more data leading to more learning.
The first wave of the internet favored small companies, suddenly able to do what big ones do. The second wave is the revenge of the big companies, because the one with the most data wins. That's what AI rewards.
I don’t think that the folks at the Dartmouth conference would be surprised–they knew the power of AI, but they might be shocked at how long it took. It seems like AI has been coming forever, but it’s really here. Don’t wait too long to put your data to work.