

The future of knowledge management: Talking to documents with generative AI


These concepts provide a pragmatic lens to help the model understand its prompts and the underlying data it queries, and they are an essential component of talking to your documents. Moreover, the knowledge graph framework underpinning these concepts can be implemented in modest KM systems and scalable data warehouses alike. “It’s possible to embed the knowledge graph inside Snowflake, and I’m sure Snowflake, BigQuery, and others will make language models available from inside that environment, so you’ll have a one-stop shop,” Aref reflected. Foundation models can also generate ontologies for organizations. “The more important it is that you have a 100% correct ontology, the more important it is that, when you use LLMs to create ontologies, you have a human in the loop to check the output,” observed Jans Aasman, Franz CEO.

The search question

Regardless of how conversational they are as interfaces, language models, including options such as ChatGPT and LLaMA (Large Language Model Meta AI), are not search engines. They are, however, still adept at search. One of their more cogent applications is question answering over data of enormous scale. Perhaps the most successful way to implement such an application, one that considerably minimizes inaccuracies and gives knowledge managers an optimal basis for a generative AI initiative, is to pair the model with vector-based search. This combination entails “organizing the knowledge in a better way,” commented Abhishek Gupta, Talentica principal data scientist and engineer.
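
This pairing is commonly called retrieval-augmented generation: vector search supplies the most relevant passages, and the language model answers from them rather than from memory alone. As a rough illustration of the generation half of that loop (the retrieval half is sketched below), the following minimal Python snippet assumes the official openai client, an API key in the environment, and an example model name; it is a sketch, not a prescribed implementation:

```python
# A minimal sketch of the generation half of the pairing: the language
# model answers a question using only the paragraphs that vector search
# retrieved. Assumes the official `openai` Python client; the model name
# is an example, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_context(question: str, retrieved_paragraphs: list[str]) -> str:
    """Ask the model to answer strictly from the retrieved passages."""
    context = "\n\n".join(retrieved_paragraphs)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the context does not contain the answer, say so."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to the retrieved context is what "considerably minimizes inaccuracies": the model grounds its answer in the organization's own content instead of improvising.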

Whether for documents, webpages, databases, or images, vectorizing this content is the first step to making it credibly searchable with language models. Gupta articulated a webpage use case in which he “made the whole system domain-specific by taking webpages, dividing them up into different chunks, let’s say according to paragraphs, and storing these paragraphs in a database in the form of embeddings.” Those embeddings are the individual vectors of each paragraph in the webpage. Organizations can then search this content with language models by also creating a numerical vector of each question or prompt, so the system can find the stored vector most closely related to it, and with it, the answer.
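
The sketch below shows that pipeline end to end. TF-IDF vectors from scikit-learn stand in for the embeddings a production system would obtain from an embedding model, so the example runs without an API key; the page text is invented for illustration, but the mechanics, vectorize the chunks, vectorize the question, return the nearest chunk, are the same either way:

```python
# A minimal sketch of the chunk-embed-search pipeline Gupta describes.
# TF-IDF vectors stand in for real embedding-model vectors so this runs
# locally; swap in an embedding model for production use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

page_text = (
    "Our return policy allows refunds within 30 days of purchase.\n\n"
    "Shipping is free on orders over $50.\n\n"
    "Support is available by email around the clock."
)

# Chunk the page into paragraphs, as in the webpage use case.
paragraphs = [p.strip() for p in page_text.split("\n\n") if p.strip()]

# "Embed" every paragraph; a real system would call an embedding model here.
vectorizer = TfidfVectorizer()
paragraph_vectors = vectorizer.fit_transform(paragraphs)

def search(question: str) -> str:
    """Vectorize the question and return the most similar paragraph."""
    question_vector = vectorizer.transform([question])
    scores = cosine_similarity(question_vector, paragraph_vectors)[0]
    return paragraphs[int(scores.argmax())]

print(search("What is your return policy?"))
# -> "Our return policy allows refunds within 30 days of purchase."
```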

Vectorizing knowledge

OpenAI offers an API for vectorizing content. With the proper implementation, organizations can even retrieve the supporting documentation to verify answers, making this methodology ideal for mission-critical deployments in everything from regulatory compliance to data privacy. “Imagine you’re doing your compliance research and you’re looking at regulations,” Aasman mentioned. “You can ask a question, but you really want to know what are the paragraphs where the answers came from.”
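
A hedged sketch of that attribution pattern follows, assuming the official openai Python client, an API key in the environment, and one of OpenAI's published embedding models; the regulation text is invented for illustration. The key design choice is keeping every vector tied to its source paragraph, so the system can hand back the passages an answer came from:

```python
# Vectorize content with OpenAI's embeddings endpoint while keeping each
# vector tied to its source paragraph, so answers can be traced back to
# "the paragraphs where the answers came from."
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(
        model="text-embedding-3-small",  # one of OpenAI's embedding models
        input=texts,
    )
    return np.array([item.embedding for item in response.data])

# Illustrative compliance text, chunked by paragraph.
regulation_paragraphs = [
    "Records must be retained for seven years after account closure.",
    "Personal data may be processed only with documented consent.",
]
paragraph_vectors = embed(regulation_paragraphs)

def supporting_paragraphs(question: str, k: int = 1) -> list[str]:
    """Return the k source paragraphs most similar to the question."""
    q = embed([question])[0]
    # Cosine similarity between the question and every stored paragraph.
    scores = paragraph_vectors @ q / (
        np.linalg.norm(paragraph_vectors, axis=1) * np.linalg.norm(q)
    )
    top = np.argsort(scores)[::-1][:k]
    return [regulation_paragraphs[i] for i in top]

print(supporting_paragraphs("How long must records be kept?"))
```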

In fact, depending on how content is vectorized, searchers talking to their documents can get back the specific sentences containing their answers, whole paragraphs, entire documents, chapters of a book, or any other unit of codification. “For compliance documents, you have to make a choice: Do you want to do it on a whole sentence basis?” Aasman revealed. “But when you ask ChatGPT to answer a question, the question might be in a whole paragraph. So, maybe it’s better to do whole paragraphs. This is one of the things people are still trying to figure out: How do I chop up my documents so that ChatGPT has the highest chance of getting the right answer?” This question lies at the heart of having a useful conversation with your content.
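
The trade-off is easy to see in code. The sketch below chunks the same invented two-paragraph document both ways, using a naive regex splitter purely for illustration; production systems typically use a proper sentence tokenizer:

```python
# Two chunking granularities for the same document: whole sentences
# versus whole paragraphs.
import re

document = (
    "Data must be encrypted at rest. Keys rotate quarterly.\n\n"
    "Access requires two-factor authentication. Logs are kept for a year."
)

# Sentence-level chunks: answers point to a precise span in the source.
sentences = re.split(r"(?<=[.!?])\s+", document.replace("\n\n", " "))

# Paragraph-level chunks: more surrounding context per chunk, which often
# helps the model, at the cost of a coarser pointer into the source.
paragraphs = [p.strip() for p in document.split("\n\n")]

print(sentences)   # four sentence chunks
print(paragraphs)  # two paragraph chunks
```

Sentence-level chunks yield precise citations, while paragraph-level chunks give the model more context to work with; as Aasman notes, which granularity most often produces the right answer is still being worked out.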
