Mind the gap

Those familiar with travel by passenger rail will recognize how common it is for conductors (or the computerized voices that stand in for them) to warn riders that there will be a gap between the train and the platform when the train pulls into the station. We are reminded to mind that gap or risk injury to feet, legs, luggage or worse.

In the world of cognitive computing, we need more conductors to issue warnings about another kind of gap, one that could bring injury to this emerging industry: the gap created by the current shortfall of talent available to create AI systems. We don’t currently have enough people with the education and experience to staff cognitive computing development teams. Perhaps more serious, we are not always sure at this point exactly what kind of talent we need.

Cognitive computing faces a range of challenges as it moves out of computer science labs and pilot projects and into the mainstream. The media is full of articles on the promises and perils of the technology. We can read about a few real successes in the field and a few spectacular misses, like the scandal surrounding IBM and the MD Anderson Cancer Center in Houston, Texas. We can read about how the technology will take all of our jobs, or about how the impact of machines taking on human tasks will not be a big deal. Amid all the ruckus, much less attention has been paid to the gritty details of what we might call the cognitive sausage factory: how, and by whom, are these new AI systems, for better or worse, going to be built?

Until we reach the point at which machines can design and implement new systems automatically and autonomously, we will depend on humans to do the heavy lifting of design, preparation, curation and implementation for this new generation of systems.

Breakthroughs

Granted, we have already seen some truly remarkable new capabilities come to light: in image recognition and analysis, for example, the reading of X-rays and complex medical scans, the interpretation of reconnaissance photos and the identification of defects in semiconductor manufacturing fabs may well be fully automated in the near future. That breakthrough was in no small part the result of Google’s determination to make a big bet on the newly accessible technology of deep neural networks. The bet involved assembling teams of hundreds of researchers, buying companies whose people were deeply versed in the strengths and weaknesses of the technique and investing in new hardware infrastructure to support analytics across big data loads.

But those seemingly overnight advances in image processing, and similar ones in language tasks such as automatic translation, have not been accompanied by breakthroughs across the wider spectrum of applications. The intense interest in autonomous vehicles of all kinds, most dramatically self-driving cars, has captured the imagination of the public and the investment dollars of both technology and transport giants, yet even in that high-stakes field, the sense of a real breakthrough to practical adoption is missing. How, then, do we explain the roadblock to more general progress across industry?

The simple fact is that the problems AI systems address are gnarly and multifaceted, and they require true innovation. As the definition proposed by the Cognitive Computing Consortium states: “Cognitive computing makes a new class of problems computable.” These are often problems that have never been computerized, the last bastions of manual, often highly skilled labor. The world was rocked 40 years ago when computing came to the accountants’ spreadsheet. The relentless spread of intelligent automation will now affect our professional and consumer lives in ways that will be similarly hard to predict.

Where will the professionals come from who will create this innovation and spread these new applications far and wide? Clearly we don’t have enough such people now, and there are several reasons why the shift to cognitive computing is not going as quickly as many have predicted.

Beginning or end?

The Boston Globe reports that MIT’s introductory machine learning course was among the most popular offerings of 2017, with 700 students signing up per semester and requiring four instructors and 15 teaching assistants. But while machine learning is the primary ingredient in the secret sauce that powers AI, it is “only” an engine, and the complexities of building the whole car around it are often daunting. The current issue of the MIT Technology Review, for example, leads with a feature asking whether we are at the end of the “AI boom” rather than the beginning, on the grounds that current progress comes primarily from 30-year-old technology. Today’s learning systems are heavily dependent on curated data, for instance, and tend toward fragility, unpredictable changes in direction and output, and “black box” operation. There is a need for a new generation of more flexible intelligence.
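That dependence on curated data is easy to demonstrate. The sketch below is our own toy illustration, not something drawn from the Globe or the Review: using the open source scikit-learn library, it trains the same simple classifier twice on a synthetic data set, once with its labels intact and once with a sizable share of one class deliberately mislabeled, and then compares accuracy on held-out data.

# A toy sketch, not drawn from the article: the same learning algorithm,
# trained once on carefully curated labels and once on noisy ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a labeled training corpus
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Curated labels: used as delivered
curated = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Uncurated labels: mislabel 40% of one class to simulate sloppy annotation
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = (noisy == 1) & (rng.random(len(noisy)) < 0.40)
noisy[flip] = 0
uncurated = LogisticRegression(max_iter=1000).fit(X_train, noisy)

print("test accuracy, curated labels:  ",
      accuracy_score(y_test, curated.predict(X_test)))
print("test accuracy, uncurated labels:",
      accuracy_score(y_test, uncurated.predict(X_test)))

Run as written, the curated version typically scores noticeably higher on the held-out test set. The exact numbers are beside the point; what matters is how much of the outcome rides on label quality, which is precisely the kind of unglamorous, labor-intensive work that requires people we do not yet have.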
