Using Generative AI for real-world KM solutions
Nearly 18 months after ChatGPT initially captivated consumers and organizations alike, generative AI (GenAI) is still fostering expectations—and prognostications—as lofty as those of any technology, if not loftier. Amid the hype, companies are looking for ways to put the technology to use in real-world KM solutions.
Vendors are hyping generative machine learning models to create entire semantic layers for transforming data, rectifying differences in schema, and harmonizing data for singular queries across sources. Many are feverishly working on, along with nonstop marketing of, copilot assistants for automating core processes of domain-specific workloads, such as intelligent document processing (IDP). Others are touting a total revamping of the customer experience, in which customers’ interactions with brands are almost entirely guided by the generative capabilities of AI entities that know their history, address their concerns, and simultaneously upsell and cross-sell them.
Specific to KM, there are a number of use cases for deploying foundation models that broaden the amount of domain knowledge users can access, reduce the time required to do so, and automate the requisite curation steps to apply this information across business functions.
These applications are not “on the road map” or part of a product development strategy. Instead, they are in various stages of implementation across areas of case management, regulatory compliance and litigation, federal and state government, and industry verticals. The most prevalent deployments consistently address vector search, question-answering, summarization, natural language querying (NLQ), IDP, digital agents, and more.
The overarching utility of GenAI capabilities depends on organizations’ ability to reduce redundancy, minimize inaccuracies, monitor outputs, and trace responses back to the underlying data sources from which they are produced.
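Tracing responses back to their sources is the most concrete of these requirements. A minimal sketch of what that provenance record might look like—the `Chunk` structure and field names here are illustrative assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A retrieved piece of content plus the record it came from.
    'source' is a hypothetical identifier (e.g., a document or case ID)."""
    text: str
    source: str

def answer_with_provenance(answer_text: str, supporting_chunks: list[Chunk]) -> dict:
    """Pair a generated answer with the distinct records it drew on,
    so reviewers can trace the response back to underlying sources."""
    return {
        "answer": answer_text,
        "sources": sorted({c.source for c in supporting_chunks}),
    }

result = answer_with_provenance(
    "Travel must be pre-approved by a manager.",
    [Chunk("Travel requires manager pre-approval.", "policy-042"),
     Chunk("Pre-approval form is in the HR portal.", "policy-042"),
     Chunk("Managers review requests weekly.", "memo-117")],
)
```

Deduplicating and sorting the source IDs keeps the citation list stable and auditable, which is what makes monitoring outputs practical at scale.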
Guarding against hallucinations
The quintessential KM application of generative machine learning (ML) models involves vector search for question-answering, natural language generation, summarization, and recommendations. Savvy solutions encapsulate all of these capabilities for users within the confines of a secure, regulatory-compliant data fabric that is replete with a vector database and myriad models for embedding any array of content. Medhat Galal, Appian SVP of engineering, referenced a “turn-key” records management solution for real-time information retrieval across all enterprise content repositories in which “there’s no coding required and no design, except thinking about which record you want to use, then it’s available in the interface immediately. You’re not doing prompt engineering or anything you would normally try to do with GPT-4 or customized GPTs.”
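The retrieval step underneath such a system can be sketched in a few lines. This is a toy illustration only: real deployments use neural embedding models and a vector database, whereas the `embed` function below is a simple bag-of-words stand-in, and the sample records are invented:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (a real system would use a neural model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def vector_search(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]

records = [
    "Expense policy: travel reimbursement rules for employees",
    "Onboarding checklist for new hires",
    "Data retention schedule for case files",
]
hits = vector_search("travel reimbursement for a conference", records)
```

In a production fabric, the same lookup runs against precomputed embeddings in a vector database rather than re-embedding every record per query—but the ranking-by-similarity logic is the same.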
The system furnishes natural language question-answering from one or multiple records, including summarization and recommendations. However, organizations should be aware of the tendency of generative models to produce plausible-sounding but nonfactual responses—what many term “hallucinations”—when using this technology. “This is an area of exploration that no one has gotten right,” Galal explained. “You cannot prevent, 100%, the LLMs [large language models] from hallucinating. But we’ve got safety and guardrails in place, such as an addition to the prompt, to say if you can’t answer the question, don’t make up the answer.” Retrieval augmented generation is employed because models search organizations’ records across their data fabric sources, including documents, cases, notes, comments, and more. The sheer scale at which the system operates—a federal agency is currently employing it to find all applicable laws for specific job positions—is transformative.
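The guardrail Galal describes—an instruction appended to the prompt telling the model not to invent an answer—can be sketched as a prompt-assembly step in a retrieval augmented generation pipeline. The function and wording below are illustrative assumptions, not Appian’s actual implementation:

```python
# Hypothetical anti-hallucination instruction appended to every prompt,
# in the spirit of the guardrail described above.
GUARDRAIL = ("If the retrieved records do not contain the answer, "
             "say you cannot answer. Do not make up an answer.")

def build_rag_prompt(question: str, retrieved_records: list[str]) -> str:
    """Assemble a retrieval-augmented prompt: numbered context records first,
    then the user question, then the guardrail instruction."""
    context = "\n".join(f"[{i + 1}] {r}" for i, r in enumerate(retrieved_records))
    return (f"Context records:\n{context}\n\n"
            f"Question: {question}\n\n"
            f"Instructions: Answer only from the context records above. {GUARDRAIL}")

prompt = build_rag_prompt(
    "Which statutes govern hiring for this position?",
    ["Statute 12: hiring rules for federal positions"],
)
```

The resulting string would then be sent to the LLM; because the model is told to answer only from records retrieved out of the organization’s own data fabric, responses stay traceable to those sources.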