Technology to connect people and knowledge

The generative capabilities of these models aren’t limited to taxonomies and subject area models; they extend to the content of documents themselves. Certain smart document management platforms integrate directly with ChatGPT so users can generate documents with that LLM. “By issuing a prompt through the document management platform to the language model, users can say, ‘Here’s an outline; create for me a press release from the outline,’ and it will get you the press release,” Deedee Kato, Foxit VP of corporate marketing, maintained.
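As a rough illustration of that workflow, the sketch below sends an outline to a chat-based LLM through the OpenAI Python client and asks for a press release. The client, model name, product name, and outline contents are all assumptions for illustration; an actual document management integration would wrap this behind its own interface.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical outline; in a document management platform this would come
# from the document the user has open.
outline = """\
Product: Acme DocFlow 3.0
Key points:
- AI-assisted document summarization
- Native PDF-to-Word conversion
- Available Q3
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a corporate communications writer."},
        {"role": "user", "content": f"Here is an outline; create a press release from it:\n\n{outline}"},
    ],
)

print(response.choices[0].message.content)
```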

The same process applies to templates for contracts, technical documentation, and almost any other written content. In fact, users can manipulate their documents extensively within these platforms. Examples include the ability to rotate pages, merge content from different sources, and convert PDFs to Microsoft Word. “You can also organize different pages,” Kato remarked. “You can move 10 pages from one spot to another.”
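Foxit’s platform exposes these page-level operations natively; purely to illustrate the same ideas, the sketch below merges two PDFs, rotates a page, and reorders pages with the open source pypdf library. The file names are placeholders.

```python
from pypdf import PdfReader, PdfWriter

writer = PdfWriter()

# Merge content from two sources into one document
for source in ("contract_draft.pdf", "appendix.pdf"):  # placeholder file names
    for page in PdfReader(source).pages:
        writer.add_page(page)

# Rotate the first page 90 degrees clockwise
writer.pages[0].rotate(90)

# Move the last page to the front (simple reordering)
pages = list(writer.pages)
reordered = PdfWriter()
for page in [pages[-1]] + pages[:-1]:
    reordered.add_page(page)

with open("merged_reordered.pdf", "wb") as f:
    reordered.write(f)
```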

Content summarization

One of the more ubiquitous ways in which language models enable people to access knowledge is by summarizing documents. However, as is the case with all responses from an LLM, organizations must carefully assess the accuracy of these responses since, according to Aasman, “You can never trust it.” The capacity for a language model to rapidly parse and summarize hundreds, if not thousands, of pages of documentation has undeniable business value. And, according to Nivala, “Summarization is one of the better suited tasks of the LLM, which is less prone to problems like hallucinations. When we ask the LLM to do a specific thing, like ‘Take this document and produce a summary,’ it is very reliable.”
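A minimal sketch of that summarization pattern, assuming the OpenAI Python client: for a document longer than the model’s context window, each chunk is summarized and the partial summaries are then combined (a simple map-reduce approach). The model name, chunk size, and file name are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return response.choices[0].message.content

with open("long_report.txt") as f:  # placeholder document
    document = f.read()

# Split the document into chunks the model can handle, summarize each,
# then summarize the summaries.
chunk_size = 8000  # characters; tune to the model's context window
chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
partials = [summarize(c, "Take this document excerpt and produce a short summary:") for c in chunks]
final = summarize("\n\n".join(partials), "Combine these partial summaries into one overall summary:")
print(final)
```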

Nevertheless, there’s still a degree of caution around these deployments, particularly for mission-critical applications. “A lot of our customers are still playing around with it,” Kato admitted. “We’ve talked to IDC and Forrester and other market analysts, and as far as actually monetizing on this, a lot of vendors are not really making a lot of money on this yet.” Ultimately, the suitability of a summarization task may depend directly on the cost of errors, which should be weighed against the scale of the summarization task itself.

“People say if the price of making a mistake is not high, and you’re doing something that’s very repetitive, like reading 1,000 restaurant reviews, ... that’s the best way to use an LLM,” Aasman noted. It’s even possible to include such summaries as part of the domain knowledge to teach AI-powered digital agents to become subject matter experts. Gu discussed a tool that “allows you to put in a URL and it will automatically crawl the [page], summarize the content, and be able to use that as domain knowledge that you can converse with right away.”
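The tool Gu described is proprietary, but the underlying idea can be sketched in a few lines: fetch a page, strip it to text, summarize it once, and reuse the summary as grounding context for follow-up questions. The URL, model name, and choice of libraries (requests, BeautifulSoup, and the OpenAI client) are assumptions for illustration.

```python
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

client = OpenAI()

url = "https://example.com/product-docs"  # placeholder URL
html = requests.get(url, timeout=30).text
page_text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)

# Summarize the page once, then reuse that summary as domain knowledge
# for follow-up questions to the agent.
summary = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": f"Summarize this page as reference notes:\n\n{page_text}"}],
).choices[0].message.content

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using this domain knowledge:\n{summary}"},
        {"role": "user", "content": "What does the product do?"},
    ],
).choices[0].message.content
print(answer)
```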

Collaborative architectures

People are directly connected to knowledge through distributed architectures that rely on various technologies to centralize organizational content. Data fabric technology is renowned for its capacity to interconnect all enterprise resources within a single framework with a universal point of access. Logical data fabrics rely on data virtualization and query federation capabilities, while a physical data fabric can enable organizations to link different departments, along with their respective definitions, business concepts, and terminologies, within a knowledge graph. Thus, organizations can effectively collocate content from distributed sources, maintain its individual value for their business units, and still make it accessible across the enterprise. Language models are a critical interface for these applications.
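As a toy illustration of how a knowledge graph can link departmental terminologies to a shared concept, the sketch below uses the open source rdflib library; the namespace, class names, and labels are hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/fabric/")  # hypothetical namespace
g = Graph()

# One shared concept, with each department's own terminology linked to it,
# so content keeps its local meaning while staying accessible enterprise-wide.
g.add((EX.Customer, RDF.type, RDFS.Class))
g.add((EX.Customer, RDFS.label, Literal("customer")))
g.add((EX.SalesAccount, RDFS.subClassOf, EX.Customer))
g.add((EX.SalesAccount, RDFS.label, Literal("account (sales)")))
g.add((EX.SupportTicketHolder, RDFS.subClassOf, EX.Customer))
g.add((EX.SupportTicketHolder, RDFS.label, Literal("ticket holder (support)")))

# A single query surfaces every departmental term for the shared concept.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label WHERE {
    ?cls rdfs:subClassOf* <http://example.org/fabric/Customer> ;
         rdfs:label ?label .
}
"""
for row in g.query(query):
    print(row.label)
```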
