
Cognitive Computing: Another look at cognitive tasks

Businesses today face a transition to a new competitive playing field that has the potential to seriously alter both their operations and their opportunities. I’ve talked about this new phase of competition as the “Intelligence Economy.” It’s the next stage in the progression of core drivers we have experienced in our economy over time: agricultural, mercantile, industrial, information and now intelligence.

Our media is full of talking points about intelligent machines, self-actualizing robots, cars that think, machines whose expertise in a defined field allows them to analyze problems more accurately than humans—and no doubt these developments will soon eliminate the need for most carbon-based life forms. These often-heated stories are occasionally accompanied by a technical conversation for the lay audience, typically extolling the progress being made by using natural language, robotics, machine learning and deep learning.

I suggest that for the business, it is time to move beyond the fantasy descriptions of a mentally mechanized future and the technical jousting about whether convolutional neural networks or recurrent neural networks would be the better choice for a specific problem set. It’s time to move beyond debates about the mechanics of training, such as backpropagation and stochastic gradient descent. These data science considerations are important, but in themselves, they do not contribute to a clear-eyed view of what the business needs to do to cope with the next economy and its challenges.

To build a practical framework for understanding what kinds of capabilities will be the key success factors for the intelligence economy, we need first to look hard at what kinds of cognitive tasks or capabilities are going to come into play to enable the innovations we will need as we partner more closely with machines. Can we delegate cognitive processes to silicon colleagues? How will we make judgments about what we need to retain as human responsibilities versus what we can partially or fully automate?

In a previous column, I brought up the problem of defining intelligence (artificial or not) in the first place, and whether it can be nailed down to a single number, like an IQ score. I referenced Howard Gardner’s well-established work on multiple intelligences, or frames of mind. But here I want to take a different approach: not so much looking at definitions of intelligence itself, but rather looking at the cognitive processes that in some way or other contribute to the “composite” characteristics of intelligence.

Core processes

Psychologists have developed a broad consensus on a seven-element list of core cognitive processes. These core seven are:

♦ Attention

♦ Knowledge formation

♦ Memory, long term and working

♦ Evaluation and judgment

♦ Reasoning

♦ Problem solving and decision-making

♦ Communicating through language

As we consider what kinds of AI projects to undertake for the business, it will be important to evaluate whether, which, where and when these various elements belong in a system specification. It will also be important to decide whether the machine or the human will be responsible for executing each specific cognitive process.
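To make that evaluation concrete, here is a purely hypothetical sketch, in Python, of how such assignments might be recorded in a system specification. The process names come from the list above; the responsibility labels, field names, and example rationales are illustrative assumptions, not an established standard.

from dataclasses import dataclass
from enum import Enum

class Responsibility(Enum):
    # Hypothetical labels for who executes a given cognitive process
    HUMAN = "human"
    MACHINE = "machine"
    SHARED = "shared"  # partially automated, human in the loop

@dataclass
class CognitiveTaskSpec:
    process: str                    # one of the seven core processes
    responsibility: Responsibility  # who executes it
    rationale: str                  # why the assignment was made

# Illustrative assignments for a screening workflow
spec = [
    CognitiveTaskSpec("Attention", Responsibility.MACHINE,
                      "Machines do not fatigue on repetitive monitoring."),
    CognitiveTaskSpec("Evaluation and judgment", Responsibility.HUMAN,
                      "Context-heavy calls stay with a person."),
    CognitiveTaskSpec("Problem solving and decision-making", Responsibility.SHARED,
                      "The machine proposes options; the human decides."),
]

for item in spec:
    print(f"{item.process}: {item.responsibility.value} ({item.rationale})")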

Let’s consider what leveraging the framework provided by these components could look like in a cognitive system design, taking them one by one.

Most if not all business processes require the attention of the participants who are involved in executing the process’s tasks. We have to admit that this is an area in which humans have proven themselves to be inherently weak. We need only look at today’s TSA, whose operations are defined by attention, to see how prolonged exposure to an unending chain of similar, unthreatening luggage items evidently makes security gate personnel virtually blind to real explosives and other threats planted to test their vigilance.

Computers, on the other hand, can be great at staying vigilant and never dropping out of high-attention mode, assuming they can be programmed to execute the desired kind of attention for the task. With the recent advances in image recognition and analysis (yes, thanks to deep learning approaches), they may soon take the place of many TSA analysts in looking through images of carry-on luggage. But attention is not the whole story for the TSA gate operative’s role. They must also continually perform broader pattern recognition, making inferences about combinations of cues in travelers’ presentation at the gate. Inferences of that kind, based on highly intuitive information drawn from multiple multimedia data streams, will not be the strong suit of computer systems for a long time to come.
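To see what machine “attention” looks like in practice, here is a minimal, purely illustrative sketch in Python: a pretrained image classifier applied to every image in a stream. The file names, confidence threshold, and escalation rule are assumptions, and an off-the-shelf ImageNet model stands in for one actually trained on X-ray baggage images.

# Illustrative only: a generic pretrained classifier standing in for a model
# trained on X-ray baggage images; labels and threshold are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode; the model never tires or loses focus

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

FLAG_THRESHOLD = 0.8  # hypothetical confidence cutoff for escalation

def screen_image(path: str) -> bool:
    """Return True if the image should be escalated to a human reviewer."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    top_prob, _ = probs.max(dim=1)
    # A real system would map classes to threat categories; here anything
    # the model is unsure about is simply flagged for a person to review.
    return top_prob.item() < FLAG_THRESHOLD

for image_path in ["bag_0001.png", "bag_0002.png"]:  # stand-in for a live feed
    if screen_image(image_path):
        print(f"{image_path}: escalate to human reviewer")

The point is not the particular model but the behavioral contrast: the software applies exactly the same scrutiny to the ten-thousandth bag as to the first, which is precisely where human attention breaks down.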
