Cognitive Computing Is Not Science Fiction
When I first heard the term “cognitive computing” several years ago, I mistakenly thought it was a science fiction sub-genre, perhaps like steampunk, but less historical and more about a world run by robots. Or maybe it was a sub-sub-genre of cyberpunk, replete with thinking robots. I hoped it was not as apocalyptic as those dark books by Philip K. Dick, but more in line with the benign robots Isaac Asimov envisioned. OK, I’ll pause now while you stop laughing.
Done? Good.
Cognitive computing does not envision a world dominated by robots who know more than humans do. It’s not about the forces of good and evil struggling for dominance, alternative universes where historical events turned out differently than they did in our current universe, or fighting off alien invaders. It is most definitely not science fiction. Instead, cognitive computing is all about machine learning, with some artificial intelligence and natural language processing (NLP) thrown into the mix, and it’s going to have an impact on you, regardless of what business you’re in.
Definitions of Cognitive Computing
If cognitive computing isn’t a science fiction sub-genre, then what is it? As an emerging technology, cognitive computing can still be a somewhat fuzzy term with many different definitions. As I have come to understand it, it’s a system that learns from users’ actions and from new information that can arrive in a variety of ways. I was greatly relieved, after my science fiction faux pas, when I realized that industry experts, people who work with cognitive computing in their daily lives, are willing to confess that a hard-and-fast definition is not readily available.
Daniel Mayer, CEO of Expert System, notes that emerging technologies are very likely to go through “phases of hype and confusion.” Jean-Josef Jeanrond, CMO of Sinequa, admits that the definition is “still in flux.” He references the Cognitive Computing Consortium for an extended definition and condenses it to “capable of extracting relevant information from big and diverse data sets for users in their work context.”
At its website, the Consortium adds that innovation is the rationale for its existence. The convergence of three elements (market needs, available technologies, and an environment of experiment and adventure) fosters that innovation in cognitive computing. Both Mayer and Jeanrond touch on these three elements.
The genesis of cognitive computing is widely thought to be IBM’s Watson. It first caught people’s attention when it won at Jeopardy! a few years ago. As Mayer points out, however, IBM had been working on Watson’s cognitive computing capabilities for years before it beat two human beings at a television game show. But the win caught the attention of the public and made it possible for cognitive computing to be accepted for more practical applications.
Natural Language Processing
Like Watson, NLP technology is hardly a new phenomenon. It has been around for decades, but it is finally gaining traction with real-world, practical applications. Jeanrond and Mayer agree that NLP’s ability to understand language is a critical component of cognitive computing. Mayer sees this as important both because it lets computers read text much as humans do and because it can map concepts via knowledge graphs. For Jeanrond, NLP’s ability to understand content in many languages and to grasp semantics and sentiment in texts adds to our ability to analyze both structured and unstructured content.
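To make the sentiment piece of this concrete, here is a minimal sketch of lexicon-based sentiment scoring in Python. It is a deliberately simplified illustration, not how any vendor mentioned here actually does it; the word lists and scoring rule are invented for the example.

```python
# A toy lexicon-based sentiment scorer. This is a simplified sketch of one
# narrow NLP task; the word lists and scoring rule are invented for this
# example and are not drawn from any product discussed in the article.

POSITIVE = {"helpful", "fast", "great", "reliable", "love"}
NEGATIVE = {"slow", "broken", "confusing", "hate", "unreliable"}

def sentiment_score(text: str) -> float:
    """Return a score between -1 (negative) and 1 (positive)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("The support team was fast and helpful."))                # 1.0
print(sentiment_score("The new release is slow and the menus are confusing."))  # -1.0
```

Real systems rely on trained models, multilingual lexicons, and context-aware representations rather than hand-picked word lists, but the underlying idea of turning free text into a quantity a machine can reason over is the same.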
Not only is cognitive computing not science fiction, it’s not fiction at all; it has real business applications. Jeanrond provides us with several use cases, including finding experts in the pharmaceuticals industry. In that instance, a company might need to quickly put together a team to reposition a drug, and cognitive computing would speed the effort. His other use case involves customer data. Every business accumulates a great deal of customer data, and cognitive computing can bring new insights when this data is integrated into a single model.
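As a rough sketch of what “a single model” of customer data could look like, the example below merges records from two hypothetical sources, a CRM export and a support-ticket log, into one profile per customer. The field names and records are invented for illustration; real integrations involve far messier identity resolution.

```python
from collections import defaultdict

# Hypothetical records from two separate systems, keyed by customer email.
# Everything here (field names, values) is made up for illustration.
crm_records = [
    {"email": "ana@example.com", "plan": "enterprise", "region": "EU"},
]
support_tickets = [
    {"email": "ana@example.com", "ticket": "Login fails after update"},
    {"email": "ana@example.com", "ticket": "Feature request: export to CSV"},
]

# Fold both sources into a single profile per customer.
profiles = defaultdict(lambda: {"tickets": []})
for rec in crm_records:
    profiles[rec["email"]].update(plan=rec["plan"], region=rec["region"])
for t in support_tickets:
    profiles[t["email"]]["tickets"].append(t["ticket"])

print(profiles["ana@example.com"])
# {'tickets': ['Login fails after update', 'Feature request: export to CSV'],
#  'plan': 'enterprise', 'region': 'EU'}
```

Once the pieces live in one structure, the kind of analysis Jeanrond describes, spotting patterns across sales, support, and usage data, becomes possible.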
Mayer adds a cautionary note regarding the “black box nature” of machine learning. Without human guidance and oversight, it can go horribly awry. Mayer references the debacle of Microsoft’s Tay Twitterbot. Instead of learning conversational skills by interacting with humans in the Twitterverse, it perversely learned to parrot racist and hateful speech. It was taken down in short order. I can envision a science fiction story written with this episode in mind. The notion of a machine not performing as its programmers expected is perhaps best represented by the HAL 9000 computer in the movie 2001: A Space Odyssey. “I’m sorry, Dave. I’m afraid I can’t do that” is not what you want to hear from cognitive computing, and not what you want your machine to learn.
Amplification of Existing Data
Artificial intelligence is both a mainstay of science fiction and a potentially powerful force for strengthening knowledge management by amplifying existing data. Mayer points out that, although machine learning is a promising technology, it must adapt to governance and total cost of ownership (TCO) realities and constraints. Jeanrond adds that effective machine learning can be a lengthy process, requiring “more and better data than most enterprises have at their disposal.”
The idea that a machine can be as smart as humans, or smarter, is somewhat scary. But the advantages that can be gleaned from machine learning and cognitive computing are enormous and, with proper oversight and control, they are incredibly powerful tools that can boost productivity, introduce new products, inspire the creation of new markets, and increase profitability. It’s not exactly about thinking robots dominating our world, but it is about a new generation of systems that offer great benefits to humans.