Technology to connect people and knowledge
Centralization is the most prominent motif in emerging KM technologies that connect people and knowledge. Several technologies, applications, and architectures exist to consolidate knowledge distributed in various systems and make it universally accessible through a single interface.
Increasingly, contemporary KM interfaces incorporate generative machine learning models. These models apply to nearly every quintessential KM use case, from intelligent search to ontology and taxonomy creation. They also support metadata extraction, classification, content summarization, question answering, and much more.
Nonetheless, as these applications signify, the core concepts of KM itself have not changed. Instead, technologies underpinning knowledge graphs, vector databases, large language models (LLMs), digital agents, intelligent document platforms, and other resources exist to make these concepts—and the workflows supporting them—more accessible than ever before.
The resulting democratization of knowledge extends to employees and customers alike. “‘Democratize’ is exactly the right word in this case, because anyone, without having to learn a complex language or have a computer science degree, can now talk to a knowledge graph,” reflected Jans Aasman, Franz CEO.
The same can be said of almost any other back-end system supporting today’s KM practitioners.
Domain knowledge
Whether accessible through a knowledge graph, automation platform, document repository, or some other tool, the foundation of KM remains specialized domain knowledge for specific industries and use cases. Oftentimes, that domain knowledge is richly described as, or directly consists of, metadata applied to ontologies, subject area models, schema, taxonomies, glossaries, vocabularies, and regular expressions. This metadata effectively provides a map of content that can be distributed throughout any number of systems. It reinforces meaning and enables content to be consolidated and accessed through a central interface.
According to M-Files CEO Antti Nivala, contemporary KM solutions possess domain knowledge to “ensure the metadata structure, schema, or taxonomy is what serves the customer’s industry and use case. It’s not just what are the terms or the glossary of terms, but what are those business objects that are meaningful to establish valuable context.” For example, tax return content or documents that have been classified and parsed according to metadata in this field would include things like the client’s name, the tax year, the applicable taxes, the country of the tax resident, and more.
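The kind of business object Nivala describes can be pictured as a small metadata model. The sketch below, in Python, is purely illustrative: the class and field names are assumptions drawn from the tax-return example in the article, not an actual M-Files schema.

```python
from dataclasses import dataclass, field

# Illustrative domain metadata model for tax-return content.
# Field names mirror the examples quoted in the article (client name,
# tax year, applicable taxes, country of the tax resident); they are
# assumptions, not a vendor schema.
@dataclass
class TaxReturnMetadata:
    client_name: str
    tax_year: int
    country_of_tax_residence: str
    applicable_taxes: list = field(default_factory=list)

# A classified document carries these business objects as context,
# not just a bag of glossary terms.
doc = TaxReturnMetadata(
    client_name="Acme Oy",
    tax_year=2023,
    country_of_tax_residence="FI",
    applicable_taxes=["income", "VAT"],
)
print(doc.tax_year)
```

Because the metadata is structured rather than free text, a central interface can filter, route, or consolidate documents from any source system on these fields.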
Some KM solutions offer such metadata models ready-made for their clients. In other instances, organizations can simply upload an array of domain-specific documentation and converse with it via a digital agent, asking questions, getting summaries, and applying the knowledge gained to business cases. Some vendors even let users dynamically create digital agents with conversational AI capabilities simply by describing their characteristics, including physical appearance, tone of voice, and personality. “We allow people to upload their documents, PDFs, spreadsheets, and other data, which immediately gives agents domain knowledge about their offering,” bitHuman CEO Steve Gu said.
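The upload-then-converse pattern Gu describes can be sketched in a few lines. This is a minimal stand-in, not any vendor's implementation: real products use embeddings and an LLM, whereas plain keyword overlap stands in for both here, and the file names and document texts are invented for illustration.

```python
# Minimal sketch of giving an agent domain knowledge from uploaded
# documents: index the text, then retrieve the most relevant passage
# for a user's question.
def build_index(docs):
    # docs maps a document id to its text; index each document's
    # lowercase word set for cheap overlap scoring
    return {doc_id: set(text.lower().split()) for doc_id, text in docs.items()}

def answer(question, docs, index):
    q_words = set(question.lower().split())
    # pick the document whose vocabulary overlaps the question most
    best = max(index, key=lambda d: len(q_words & index[d]))
    return docs[best]

docs = {
    "pricing.txt": "Our enterprise plan includes unlimited seats and SSO.",
    "support.txt": "Support tickets are answered within one business day.",
}
index = build_index(docs)
print(answer("what does the enterprise plan include", docs, index))
```

Swapping the word-overlap scorer for vector embeddings, and the returned passage for an LLM-generated answer grounded in it, yields the retrieval-augmented agents the article describes.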
Knowledge generation
Generative models such as LLMs can themselves produce the concepts that make up domain knowledge about a subject. Aasman shared a use case in which users asked an LLM to propose criteria for assessing restaurant reviews. The model returned a host of factors, ranging from “food temperature, parking capabilities, complaints about the staff, good dishes, and bad dishes,” Aasman revealed. Those concepts can then drive sentiment analysis, text analytics, and even statistical analytics of restaurant reviews, so restaurateurs can better understand how to increase profit margins. The same approach is applicable to retailers.
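Once an LLM has proposed such criteria, downstream analytics can tag each review with the concepts it touches. The sketch below hard-codes a few of the factors quoted above in place of a live model call; the keyword lists for each criterion are illustrative assumptions.

```python
# Criteria an LLM might return for restaurant reviews (here hard-coded,
# mimicking the factors quoted in the article). The keyword lists
# attached to each criterion are invented for illustration.
CRITERIA = {
    "food temperature": ["cold", "lukewarm", "piping hot"],
    "parking": ["parking", "valet"],
    "staff": ["waiter", "server", "staff", "rude", "friendly"],
}

def tag_review(review):
    # return every criterion whose keywords appear in the review text
    text = review.lower()
    return [c for c, kws in CRITERIA.items() if any(k in text for k in kws)]

print(tag_review("The soup arrived cold and the waiter was rude."))
# tags the review with "food temperature" and "staff"
```

With reviews tagged this way, sentiment scores and statistical summaries can be computed per concept, which is what lets a restaurateur see, say, that parking complaints outnumber food complaints.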