
The future of knowledge management: Talking to documents with generative AI

Overcoming hallucinations

There are numerous approaches organizations can take to mitigate the tendency of language models to provide contrived, inaccurate responses, popularly termed hallucinations. Here are some of the more compelling ones:

Validation measures: According to Riewerts, “Whether it’s through OpenAI or AWS and some of the generative services they’re providing, immediately behind those services there are complementary services that are in place for sensitivity validation.” The latter services are designed to protect consumers from models simply making up erroneous responses that may seem plausible. “The same type of data that’s used to train these language models cannot be trained just for natural response purposes, but can also be trained to look for these types of patterns as well, whether it be sensitivity scores or sentiment scores based on the response itself,” Riewerts explained.
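As a rough illustration of this pattern, the sketch below runs a generated response through a separate classifier before it is shown to the user. The Hugging Face sentiment pipeline stands in for the "complementary services" Riewerts describes; the threshold and the decision rule are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of a post-generation validation step, assuming a Hugging Face
# text classifier stands in for the complementary validation services described
# above. The threshold and decision rule are illustrative only.
from transformers import pipeline

validator = pipeline("sentiment-analysis")  # any text-classification model could be swapped in

def passes_validation(response: str, min_confidence: float = 0.8) -> bool:
    """Return True if the response passes the (illustrative) validation check."""
    result = validator(response)[0]  # e.g. {"label": "NEGATIVE", "score": 0.97}
    # Flag strongly negative, high-confidence classifications for human review
    # instead of returning the response directly to the user.
    return not (result["label"] == "NEGATIVE" and result["score"] >= min_confidence)

answer = "The contract terminates automatically after 30 days."
print(answer if passes_validation(answer) else "Response withheld pending review.")
```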

Limiting model responses: Restricting model responses to information contained in a vector database reduces both the frequency and the severity of irrelevant and inaccurate responses. “What we are saying is give a command to the language model that you are AI and you should respond from that particular legal document, and you shouldn’t go beyond that,” Gupta specified. “Second thing, if you don’t have any answer or any response, just say so.”
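A minimal sketch of this approach follows, assuming a generic vector store and model interface: retrieve only passages from the indexed document, then instruct the model to answer from those passages or admit it doesn't know. The `vector_db` and `llm` objects are placeholders, not a specific product's API.

```python
# Sketch of retrieval-constrained answering: the model is told to answer only
# from passages retrieved from a vector database of the source document.
# `vector_db` and `llm` are placeholder interfaces, not a specific library.

def answer_from_document(question: str, vector_db, llm, top_k: int = 3) -> str:
    # Retrieve the passages most similar to the question from the indexed document.
    passages = vector_db.search(question, top_k=top_k)
    context = "\n\n".join(p.text for p in passages)

    prompt = (
        "You are an AI assistant. Answer ONLY from the document excerpts below.\n"
        "If the excerpts do not contain the answer, reply exactly: \"I don't know.\"\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm.generate(prompt)
```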

Prompt engineering: Prompt engineering substantially narrows the scope of the responses language models give to questions, which increases the pertinence of the content they generate. By successfully engineering prompts, organizations can “guide the LLM and say just look here, don’t look here,” Sglavo noted. “That reduces the size of the LLM to a smaller solution space, which then also makes the computations cheaper.”
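A short sketch of what "just look here, don't look here" can look like in practice: a system message that scopes the model to a single source and tells it what to ignore. It uses the OpenAI Python client; the model name and the handbook scenario are illustrative assumptions.

```python
# Illustrative prompt-engineering sketch: a system message narrows the model
# to one source and explicitly rules out everything else.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": ("You answer questions about the 2023 employee handbook only. "
                 "Ignore general knowledge, prior policies, and outside sources. "
                 "If the handbook does not cover the topic, say so.")},
    {"role": "user", "content": "How many remote-work days are allowed per week?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```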

Knowledge graphs: Positioning the knowledge graph framework, and its ontological foundation, between language models and data sources boosts the accuracy of the content the models generate. “The way that knowledge management systems and knowledge graphs help language models is they help ground them,” Aref indicated. “They help them give you accurate answers to questions that are very important, where being off by a couple digits is not acceptable.”
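To make the grounding idea concrete, the sketch below pulls an exact figure from a knowledge graph (queried with SPARQL via rdflib) and hands it to the model as context, rather than letting the model recall it. The ontology, file name, and property IRIs are illustrative assumptions.

```python
# Minimal sketch of grounding an answer in a knowledge graph: the precise fact
# comes from the graph, not from the language model's memory.
from rdflib import Graph

g = Graph()
g.parse("company_facts.ttl", format="turtle")  # hypothetical knowledge graph file

query = """
PREFIX ex: <http://example.org/ontology#>
SELECT ?revenue WHERE {
    ex:AcmeCorp ex:fiscalYear2023Revenue ?revenue .
}
"""
for row in g.query(query):
    # Pass the exact figure to the LLM as context so that "being off by a
    # couple digits" cannot happen.
    grounded_fact = f"Acme Corp's FY2023 revenue was {row.revenue}."
    print(grounded_fact)
```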

Fact-checking: It’s vital for organizations to fact-check the information they get back from language models, particularly models that rely on publicly available data (such as ChatGPT). Tools like WebChatGPT can obtain hyperlinks to webpages that contain information pertaining to models’ answers to questions. Users can then have a language model read through that information to see whether it verifies the original responses. “You never can trust anything you get back from an LLM, so you have to do secondary and tertiary steps to check on the answers,” Aasman said.
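The sketch below outlines that secondary step: gather candidate sources for the question, then ask a second pass whether they support the original answer. The `search_web` and `llm` objects are placeholder interfaces, not WebChatGPT's or any other tool's actual API.

```python
# Sketch of a secondary fact-checking pass over an LLM answer.
# `search_web` and `llm` are placeholder interfaces, not a specific product's API.

def fact_check(question: str, answer: str, search_web, llm) -> str:
    # Gather candidate sources that should contain the relevant facts.
    pages = search_web(question, max_results=3)
    sources = "\n\n".join(f"[{p.url}]\n{p.text}" for p in pages)

    verdict_prompt = (
        "Given the sources below, state whether the proposed answer is "
        "SUPPORTED, CONTRADICTED, or NOT FOUND, and cite the relevant source.\n\n"
        f"Question: {question}\nProposed answer: {answer}\n\nSources:\n{sources}"
    )
    return llm.generate(verdict_prompt)
```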
