What’s True of AI in 2016
Artificial intelligence has fascinated humans for as long as we have been able to conceive of the notion of machines learning as well as—if not better than—people. Dozens of movies—from 1927’s “Metropolis” to 1968’s “2001: A Space Odyssey” to 2015’s “Star Wars: The Force Awakens”—have molded our ideas about AI. So, what’s true of AI in 2016, and what’s still the stuff of fiction?
Today there are actually many different branches of AI, which is leading to some confusion in the industry and among consumers. There’s the purist, non-directed AI, which models and emulates, from the ground up, how the human brain is built and how it works. These approaches often require vast computational power and face other practical limitations.
Then there’s a set of technologies often referred to as “machine learning,” which is currently seeing a new level of commercial success. In fact, with its ability to process large data sets, perform pattern recognition, classify and recommend, among many other things, machine learning has done nothing less than power the big data revolution.
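To make the “machine learning” label concrete, here is a minimal sketch in Python of supervised classification, the workhorse behind much of that commercial success. The dataset and model are illustrative choices of mine, not anything specific to the products discussed in this piece:

```python
# Minimal supervised-learning sketch: fit a classifier on labeled
# examples, then use it to categorize data it has never seen.
# The iris dataset and logistic regression are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the "pattern recognition" step

print("accuracy on unseen data:", model.score(X_test, y_test))
```

The same fit-then-predict pattern, scaled up to far larger data sets, underlies the classification and recommendation systems driving that revolution.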
So, while there’s no denying that these branches of AI are evolving and being used in real-world, commercial applications, the issue is that some companies are taking “directed” approaches (sometimes referred to as “narrow” AI), or offshoots like machine learning, and presenting them as “broad” AI.
Marketers are great at doing this.
IBM’s Watson, for example, uses natural language processing and machine learning to analyze and gain insight into large volumes of unstructured data.
Consumers probably know Watson best for beating Jeopardy super-champions Ken Jennings and Brad Rutter in 2011. Watson used a combination of statistical and machine learning approaches, along with ontologies or taxonomies (lookup tables or dictionaries of concepts, such that “Paris,” for example, is defined as a “Place”) organized around certain themes.
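The ontology idea is easy to picture in code. The toy sketch below is my illustration, not Watson’s actual implementation: a lookup table that maps entities to concept types, which a question-answering pipeline can consult to discard candidate answers of the wrong type:

```python
# Toy taxonomy: map entities to concept types, as a QA system might
# when a clue calls for a "Place" or a "Person". Illustrative only;
# real ontologies are far larger and richer in structure.
TAXONOMY = {
    "Paris": "Place",
    "Bob Dylan": "Person",
    "Jeopardy": "TV Show",
}

def filter_candidates(candidates, wanted_type):
    """Keep only candidate answers whose concept type matches."""
    return [c for c in candidates if TAXONOMY.get(c) == wanted_type]

print(filter_candidates(["Paris", "Bob Dylan"], "Place"))  # ['Paris']
```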
It’s impressive that computer systems can beat humans at Jeopardy and chess and, more recently, Go, but these systems are the very definition of directed AI: applications that are built and highly tuned for very specific tasks, often requiring years of research devoted to those tasks.
Watson is also being used for applications in fields such as healthcare and finance. But can it, and its AI capabilities, be applied to generalized problems with human-like levels of intelligence? Can it get into the head of an artist such as Bob Dylan?
Not quite. In fact, we are still way off from a machine being able to gain true insight into the complexities of what an artist such as Dylan—or any human, for that matter—is thinking or has thought.
One of the biggest things today’s real-world applications do not have, despite what the popular press would lead you to believe, is true autonomous learning. Watson doesn’t learn mid-game and start reacting and changing its algorithms on its own; tuning those algorithms requires human intervention.
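One way to see the distinction: in most deployed systems, the learned parameters are frozen at inference time, and updating them is a separate, human-supervised step. A hypothetical sketch of that split:

```python
# Hypothetical sketch: a deployed model answers queries with frozen
# weights; changing its behavior requires a separate, human-run update.
class DeployedModel:
    def __init__(self, weights):
        self.weights = list(weights)  # fixed once deployed

    def predict(self, features):
        # Inference reads the weights but never modifies them,
        # no matter how many queries the system answers.
        return sum(w * f for w, f in zip(self.weights, features))

model = DeployedModel([0.4, 0.6])
print(model.predict([1.0, 2.0]))  # 1.6
print(model.weights)              # unchanged: [0.4, 0.6]

# "Learning" happens offline: engineers retrain, validate and only
# then redeploy a new version; the running system never does it alone.
```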
The same is true with cars. We’ve been hearing a lot lately about self-driving cars, but it’s probably a good thing that cars don’t learn to think for themselves. Imagine a car learning, based on other drivers, that it’s OK to make a rolling stop, and then, through experimentation, realizing that the more “rolling” it does, the more gas-efficient it is and the faster the arrival time. What’s missing is the relatively rare, but critical, accident event from the car’s own “training set.”
No, thank you. Even if the press labels self-driving cars as “autonomous and learning,” it’s better that the makers of these vehicles leave the self-learning part out.
With that said, an important measure of AI’s progress is not just the impressive tasks being achieved, but the amount of directed research necessary to achieve those tasks. For example, it reportedly took a core team of 20 researchers, backed by a strong support team, to build Watson’s Jeopardy-beating machine. Likewise, the AlphaGo team of 20 core researchers spent 18 months researching the very complex game of Go before publishing their paper in Nature.
For now, solutions are available that let companies solve difficult problems by leveraging machine learning and natural language processing algorithms to make sense of data across multiple, disparate sources. These solutions return relevant, accurate and actionable information, enabling humans to make informed decisions. Recommendation systems are a classic example of these approaches, helping personalize experiences from choosing your next movie to serving more relevant advertising. Anti-fraud systems have also made large strides at learning from and adapting to the transactions they monitor, with feedback from humans who make the final call.
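As a rough illustration of the recommendation case, here is a minimal collaborative-filtering sketch. The ratings matrix and the nearest-neighbor approach are simplifying assumptions on my part; production systems are considerably more sophisticated:

```python
# Minimal collaborative filtering: recommend the items liked by the
# user whose ratings most resemble yours. The data here is made up.
import numpy as np

# Rows = users, columns = items (0 = not yet rated).
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 5.0, 0.0],
    [1.0, 0.0, 2.0, 4.0],
])

def recommend(user_idx, ratings, top_n=1):
    target = ratings[user_idx]
    # Cosine similarity between the target user and every user.
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(target)
    sims = ratings @ target / np.where(norms == 0, 1.0, norms)
    sims[user_idx] = -1.0  # never match the user with themself
    neighbor = ratings[np.argmax(sims)]
    # Suggest the neighbor's best-rated items the user hasn't seen.
    unseen = np.where((target == 0) & (neighbor > 0))[0]
    return unseen[np.argsort(neighbor[unseen])[::-1]][:top_n]

print(recommend(0, ratings))  # [2]: user 0's closest peer loved item 2
```

Anti-fraud pipelines follow a similar loop, with human reviewers supplying the labels that keep the models honest.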
Will we someday get to a place where machines can think creatively, and perhaps write their own classic folk and rock lyrics? I think so. Are we one, two, five or 10 generations of computational power and algorithms away from having that generic, non-directed learning system that can outthink a human, learning faster and better? It’s very possible, but we’re more than one generation away.