Cognitive applications in real life

Over the past six months, we’ve been ferreting out the story on cognitive computing adoption at a number of industry conferences, both in Europe and in the United States. There’s no question that buzz and excitement are mounting, but are organizations actually investing in this new technology? How are they using it, and what for?

Through our own research, as well as through presentations and conversations at events like O’Reilly AI, IBM’s World of Watson, HP’s Big Data Summit, Dataversity’s SmartData, SAS Institute’s Analytics Experience and the KMWorld conference, we found a lot of interest and some serious proof-of-concept projects underway. Major areas of investment today include professional digital assistants, monitoring and recommendations, and threat detection.

Early cognitive application use

Digital assistants are already in use in healthcare and in customer engagement applications such as call centers, and they are emerging in financial services as personal investment agents or investment adviser assistants. These applications address a specific problem, such as diagnosing a disease and recommending a treatment for an individual patient, or helping a call center agent resolve a particular customer’s issue. They are designed to understand a problem and find a solution. That is a breakthrough in computing: the user and the application work together to pin down what the problem actually is.

These systems are no longer stateless like a traditional search engine. Instead they are conversational, engaging in a more human-like dialog that helps the user define what problem should be solved before the system trots off to find potential answers. Digital assistants deliver ranked recommendations with supporting evidence, within the context and needs of the individual user at a specific point in time.
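
To make the shift from stateless search to a stateful, clarifying dialog concrete, here is a minimal Python sketch. Everything in it is hypothetical: the slot names, the DigitalAssistant class and the knowledge-base interface are invented for illustration and are not any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    answer: str
    evidence: list[str]   # supporting passages shown to the user
    score: float          # ranking score; higher means more relevant

@dataclass
class DialogState:
    # Unlike a stateless search box, the assistant keeps the evolving
    # problem definition across turns.
    facts: dict = field(default_factory=dict)

class DigitalAssistant:
    # Hypothetical slots the assistant wants filled before it answers.
    REQUIRED = ("symptom", "duration", "patient_age")

    def __init__(self, knowledge_base):
        self.kb = knowledge_base   # any retrieval backend with a search() method
        self.state = DialogState()

    def turn(self, user_input: dict):
        """One conversational turn: clarify first, then recommend."""
        self.state.facts.update(user_input)
        missing = [slot for slot in self.REQUIRED if slot not in self.state.facts]
        if missing:
            # Keep the dialog going until the problem is pinned down.
            return "Can you tell me more about: " + ", ".join(missing) + "?"
        # Only then retrieve candidates and return them ranked, with evidence.
        hits = self.kb.search(self.state.facts)
        return sorted(hits, key=lambda r: r.score, reverse=True)
```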

In contrast, recommendation applications monitor news, activities or markets over time. They are the next step beyond simple alerting because they work in a fluid environment in which data, desires and company status may change. They understand business goals, relationships among companies and people, and past records of successes and failures. Mergers-and-acquisitions and investment advisory applications, for example, find opportunities and deliver recommendations for action, along with a confidence score that shows the likelihood of a successful venture. The model for a successful venture may evolve as the system monitors and learns, and as the company and its personnel change.
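
One way to picture a recommendation paired with a confidence score is the hedged sketch below; the feature names, weights and example deal are invented, and a real system would learn and continually re-estimate them rather than hard-code them.

```python
# Hypothetical scoring of an acquisition opportunity. The feature names and
# weights are invented; a real system would learn them from past deals and
# keep re-estimating them as companies, people and markets change.
def venture_confidence(opportunity: dict, weights: dict) -> float:
    """Return a 0-1 confidence that the proposed venture will succeed."""
    score = sum(weights[k] * opportunity.get(k, 0.0) for k in weights)
    return max(0.0, min(1.0, score))

weights = {"strategic_fit": 0.40, "financial_health": 0.35, "culture_match": 0.25}
deal = {"strategic_fit": 0.8, "financial_health": 0.6, "culture_match": 0.5}
print(venture_confidence(deal, weights))   # 0.655: worth recommending, with caveats
```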

Searching for the unknown is yet another use of this type of cognitive application. For instance, pharmaceutical companies are using such applications in their research organizations to merge data on patient outcomes, diseases, clinical trials and molecular structure, and to recommend previously unknown molecules as good candidates for new drug development.

We see threat detection emerging as the first cognitive computing application to become prevalent. Major banks, credit risk companies, government customs organizations and security agencies are all investing in cognitive computing because they cannot keep up with the onslaught of data, particularly text. A European customs agency told us that it is constrained by its budget and can’t hire enough new agents to keep up. It is also hoping that learning systems will help it create dynamic, evolving models to stay ahead of fraudsters and to detect patterns of fraudulent behavior so it can predict whom to watch.
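
As a simplified picture of the kind of learning system such an agency might deploy (not its actual approach), the sketch below uses an off-the-shelf anomaly detector that can be retrained on a rolling window so the model evolves along with fraud patterns; the shipment features are toy examples.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy shipment features: [declared_value, weight_kg, route_risk]. In practice
# the features would be far richer and largely derived from text.
rng = np.random.default_rng(0)
routine = rng.normal(loc=[100.0, 10.0, 0.2], scale=[20.0, 2.0, 0.05], size=(500, 3))
incoming = np.array([[1000.0, 1.0, 0.9], [105.0, 9.5, 0.2]])

# Fit on recent routine traffic; retraining on a rolling window lets the
# model evolve as fraud patterns shift.
model = IsolationForest(contamination=0.01, random_state=0).fit(routine)

# Lower scores are more anomalous; the first shipment would be flagged for review.
print(model.decision_function(incoming))
```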

Each of these uses, not surprisingly, delivers high value to the organization or the individual or both. Most are high risk/high reward and, therefore, are likely to appeal to upper management because they are worth the investment if they pan out. They attack problems that are difficult or intractable. The data may be voluminous and changing. The situation may shift as the user’s goals and circumstances change. They draw upon multiple sources of information in a variety of formats, but there is typically a large unstructured (usually text) component that cannot be understood by standard BI applications. It is not enough to assemble the data in separate silos. Rather, the clues hidden in each source must be assembled into a single picture to understand the data.

Cognitive computing today

The accumulating evidence around market adoption suggests that the past year has seen a flowering of experimentation, new products and new services. Since 2014, for example, the exhibits floor at IBM’s World of Watson has grown from tens of exhibitors to probably hundreds. Analytics companies like SAS are beginning to position themselves for the cognitive computing space, launching platforms that both appeal to their customer bases and provide competition to early-mover IBM.

We are also beginning the important move from hype to reality. Here is what we heard:

  • 99 percent of AI today is human effort. Selecting training sets, building models, training systems, selecting and curating data are mostly human endeavors.
  • Custom development is the norm. Each use is particular to the users, the data, the language, the outputs and inputs. In this, nothing has changed. We have known for decades that knowing who will use an application, for what purpose, on what device and with what data is a requirement for successful software design. When you add to this the need to understand language in all its complexity, it’s no wonder that custom deployments are the rule. Nevertheless, we do see that the time to deploy has diminished considerably—from years to months and even weeks.
  • Bias is an issue. From what we hear, there is no such thing as an unbiased algorithm, model, ontology or training set. Once you create structure or select information you have made choices in what to consider and what to ignore. That is not necessarily a bad thing. But acknowledging inherent bias in a system and warning the users about it are essential.
  • No technology is magic. Combine multiple technologies for best results: rules, simple phrases, heuristics, machine learning, models, analytics (a sketch of such a hybrid pipeline follows this list). The tricky part is understanding which technologies are appropriate for each use. We have known for a long time, for instance, that some categorizers are more adept at narrow domains and others require clear differentiators among the clusters.
  • Moving from the digital to the physical world may entail higher physical risk for humans (self-driving cars vs. video games). We are not well prepared to gauge whether saving thousands of lives is an acceptable trade-off if that means that there will inevitably be deaths due to new self-driving technology. And how does a car choose whom to save? These are ethical issues, not technology questions, that need to be answered.
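
Here is the sketch referenced above: a hypothetical hybrid pipeline in which a few transparent, hand-written rules handle the unambiguous cases and a learned model handles everything else. The rule patterns, categories and classifier interface are all invented for illustration.

```python
import re

# A few hand-written rules catch the unambiguous cases cheaply and transparently;
# the patterns and categories here are invented for illustration.
RULES = [
    (re.compile(r"invoice|payment due", re.I), "billing"),
    (re.compile(r"reset my password", re.I), "account_access"),
]

def classify(text, ml_model):
    """Return (category, method): rules first, then fall back to a learned model."""
    for pattern, category in RULES:
        if pattern.search(text):
            return category, "rule"
    # ml_model stands in for any trained text classifier with a predict() method.
    return ml_model.predict([text])[0], "ml"
```

The specific split matters less than the principle: each technique covers the cases it handles best, and the result is easier to tune and to explain than a single monolithic model.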

Finally, the headline takeaway is that augmented applications, NOT autonomous AI, are the main development thrust. This finding reflects the current state of the longstanding faceoff in the artificial intelligence community between AI (artificial intelligence) and IA (intelligence augmentation). Despite the media hype, the applications and products we have seen are developed with a human component and are aimed at supporting or augmenting human efforts. HAL is still not a threat, except in the movies.
