COGNITIVE COMPUTING: Is neuromorphic AI the next big thing?

CAN COMPUTER CHIPS LEARN?

Intel says yes, they can. In fall 2017, Intel announced Loihi, a neuromorphic test chip designed to be self-learning. It incorporates feedback from the environment and does not need to be trained in the way current cognitive systems do.

According to Intel Labs, the chip is up to 1,000 times more energy efficient than the general-purpose processors typically used to train neural networks, and it uses fewer resources than existing neural networks. Its digital circuits mimic the way the brain conveys information, sending pulses, or spikes, across synapses and storing information locally at the interconnections.
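
To make the spike-based signaling concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It illustrates the general principle of a neuron that integrates input and fires discrete spikes; it is not Intel’s Loihi design, and the names and parameter values are invented for illustration:

    # Minimal leaky integrate-and-fire neuron. Illustrates spike-based
    # signaling in general, not Intel's Loihi circuitry; parameter
    # values are arbitrary.
    class LIFNeuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.threshold = threshold  # potential at which the neuron fires
            self.leak = leak            # fraction of potential kept each step
            self.potential = 0.0

        def step(self, weighted_input):
            """Integrate one time step of input; return True on a spike."""
            self.potential = self.potential * self.leak + weighted_input
            if self.potential >= self.threshold:
                self.potential = 0.0    # reset after firing
                return True
            return False

    neuron = LIFNeuron()
    print([neuron.step(x) for x in [0.3, 0.4, 0.5, 0.0, 0.2]])
    # [False, False, True, False, False]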

Intended to overcome the limitations of current machine learning systems and to generalize more readily, the chip contains 130,000 artificial neurons and 130 million synapses. In February 2018, Intel announced the establishment of the Intel Neuromorphic Research Community (INRC), a group of academic, government and industry researchers who will work on algorithms, applications and models for neuromorphic systems.

HTM brain-based model

Much of the research on neuromorphic computing is still in the theoretical stage, but the work is quite widespread, largely because of the enormous potential the field offers. Academia, industry and government organizations are all exploring ways of developing and implementing systems based on the human brain. Among them is Numenta, which was founded to understand intelligence in the neocortex of the brain and to build systems based on those principles.

Privately funded, Numenta has spent over a decade developing a framework called Hierarchical Temporal Memory (HTM) using the neocortex as a model. The company holds dozens of patents and opted to make its code open source so others could build on its foundation. To use it in a commercial application or distribute it, developers can make their code open source or purchase a commercial license from Numenta.

Numenta developed HTM for IT, a technology that can detect anomalies in many different types of data, including data from network systems, financial news and geospatial activity. One example of an application built on the technology is Grok, which detects and resolves cloud issues. In one use case, Grok ingests a data stream from cloud storage solutions and tracks a variety of metrics that could indicate impending downtime or other problems. It learns over time, adjusting and adapting its criteria for defining an anomaly as new data is received. If it detects an anomaly, it can respond autonomously to mitigate the event or send an alert so that the issue can be investigated. “Grok can also be used for other purposes, such as natural language processing,” says Roumen Antonov, CTO of Grok, “and it can ingest data streams generated by in-house systems and applications as well as from the cloud.”
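
The general pattern here, flagging values that deviate from a continuously updated notion of “normal,” can be sketched in a few lines of Python. This is a simple running-statistics detector offered for illustration only; Grok’s actual HTM-based model is sequence-aware and far more sophisticated, and the names and thresholds below are invented:

    import math

    # Toy streaming anomaly detector: flags values far from a running
    # mean whose definition of "normal" adapts as new data arrives.
    class StreamingDetector:
        def __init__(self, alpha=0.05, z_threshold=3.0, warmup=5):
            self.alpha = alpha              # how quickly "normal" adapts
            self.z_threshold = z_threshold  # deviations that count as anomalous
            self.warmup = warmup            # samples to see before flagging
            self.count = 0
            self.mean = 0.0
            self.var = 0.0

        def observe(self, value):
            """Return True if value looks anomalous, then update the stats."""
            self.count += 1
            if self.count == 1:
                self.mean = value
                return False
            std = math.sqrt(self.var)
            anomalous = (self.count > self.warmup and std > 0
                         and abs(value - self.mean) / std > self.z_threshold)
            delta = value - self.mean  # update running mean and variance
            self.mean += self.alpha * delta
            self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
            return anomalous

    detector = StreamingDetector()
    for latency_ms in [20, 22, 19, 21, 20, 95, 21]:
        if detector.observe(latency_ms):
            print(f"anomaly: {latency_ms} ms")  # fires on the 95 ms spike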

Numenta is continuing its research to refine its model of how key features of the brain support memory and learning. The neocortex is remarkably homogeneous: at a fundamental level, each part of it carries out the same type of operation, even though different regions process different types of information (vision, language, etc.). So, if a common algorithm can be developed to replicate the functioning of those neurons, it could generalize across tasks in an intelligent computer system in a way that current machine learning cannot.
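
In HTM, this “one algorithm, many inputs” idea rests on first encoding every data type into the same kind of sparse binary representation, so that similar inputs overlap and a single downstream algorithm can consume anything. The toy encoder below is a hypothetical illustration of that design, not NuPIC’s actual encoder code:

    # Toy scalar encoder: maps a number to a sparse binary array in
    # which nearby values share active bits. A simplified sketch of the
    # common-representation idea, not NuPIC's actual encoders.
    def encode_scalar(value, lo, hi, size=64, active_bits=8):
        start = round((value - lo) / (hi - lo) * (size - active_bits))
        return [1 if start <= i < start + active_bits else 0 for i in range(size)]

    def overlap(a, b):
        """Count shared active bits: similar inputs overlap heavily."""
        return sum(x & y for x, y in zip(a, b))

    temp_a = encode_scalar(20.0, 0, 40)  # e.g., a temperature reading
    temp_b = encode_scalar(21.0, 0, 40)  # nearby value: large overlap
    temp_c = encode_scalar(35.0, 0, 40)  # distant value: no overlap
    print(overlap(temp_a, temp_b), overlap(temp_a, temp_c))  # 7 0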

“Our model is biologically constrained,” says Matt Taylor, open source community manager for the Numenta Platform for Intelligent Computing, “meaning that it is consistent with experimental neuroscience. If it does not match experimental evidence, we rethink the model.” On the other hand, Numenta does not feel compelled to model every part of the brain; it focuses on the neocortex as the seat of intelligence.

The neocortex receives two types of input. One is sensory data; the other is information about actions the person is carrying out (e.g., walking, eye movements). Both provide a steady stream of data, much of it in time-based patterns. The neocortex stores that information, and the stored memory allows it to make predictions based on experience, which in turn direct the person’s future actions. Given how many seconds are left on the “Walk” sign, for example, how fast must the person run to get across the street in time?
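
A drastically simplified way to see how stored temporal patterns support prediction is to remember which input tends to follow which, then predict the most frequent successor, as in the Python sketch below. HTM’s sequence memory handles high-order patterns and is far more capable; the names here are invented:

    from collections import Counter, defaultdict

    # Toy first-order sequence memory: learns transitions from a stream
    # and predicts the most common successor of each element.
    class SequenceMemory:
        def __init__(self):
            self.transitions = defaultdict(Counter)
            self.previous = None

        def observe(self, item):
            """Learn from one stream element and predict the next one."""
            if self.previous is not None:
                self.transitions[self.previous][item] += 1
            self.previous = item
            successors = self.transitions[item]
            return successors.most_common(1)[0][0] if successors else None

    memory = SequenceMemory()
    for step in ["walk-sign", "cross", "sidewalk", "walk-sign", "cross"]:
        prediction = memory.observe(step)
    print(prediction)  # "sidewalk": learned from the earlier cross -> sidewalk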

Numenta’s HTM model embodies the idea that this type of awareness is essential. “An intelligent computer system needs to have a way to discover things about the world around it,” Taylor says, “whether through sensors that are providing input or through exploring the internet for knowledge of some kind.” This is one aspect of computer intelligence that many current systems do not incorporate and is one reason they are limited.

Intelligent computers do not operate on a level playing field with the human brain for many reasons. Newborn humans (and other animals) have learning structures that have evolved through many millennia, while computers are starting from scratch. Even the developers of the most sophisticated neuromorphic systems do not claim to model the brain in its entirety or simulate the complex neurotransmitters that modulate brain functions. However, those systems have demonstrated the potential to provide greater transparency and flexibility than current cognitive systems, and they bear watching over the near and long term. 
