
Where GenAI fits and where it doesn’t: Examining implementation with Lucidworks

Deeply researching and evaluating how the latest AI trends can serve your organization will differentiate those who adopt AI for AI’s sake from those who achieve greater efficiencies. Certain trends—namely the use of large language models (LLMs) and retrieval-augmented generation (RAG)—have the capacity to radically transform the way enterprise knowledge is governed, from prompt optimization to improved relevancy.

Phil Ryan, VP strategy and innovation at Lucidworks, and Eric Redman, senior director, data science and analytics at Lucidworks, joined KMWorld’s webinar, Governing Enterprise Knowledge in the AI Era: Enhancing Relevance with LLMs and RAG, to examine a variety of methods that employ LLMs and RAG to revolutionize business knowledge management—including enhanced tagging, categorization, summarization, and more.

Lucidworks lives at the intersection of search and AI, explained Ryan, where “GenAI has been perfect for some of the specific challenges that our customers traditionally face in the search space.”

Accelerating relevance in every search experience with AI-powered capabilities is the focus of Lucidworks’ platform, shortening the searcher’s journey to find information. Yet, “you can’t have good GenAI without good information retrieval—without good orchestration capabilities,” emphasized Ryan.

Redman offered guidance on the GenAI path to success, which moves through the following stages:

  1. Define use case.
  2. Select model type.
  3. Integrate data sources.
  4. Manage trust and security.
  5. Define access levels.
  6. Implement cost controls.
  7. Track business case.

Redman cautioned that this journey, though seemingly simple, requires a thorough understanding of each of its stages; a simple checklist sketch of the journey appears below.
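As a rough way to operationalize that caution, the snippet below encodes the seven stages as a reviewable checklist. The structure, field names, and status report are assumptions made purely for illustration; they are not a Lucidworks artifact or a prescribed format.

```python
# A minimal checklist sketch of the seven-stage GenAI journey described in
# the webinar. Everything beyond the stage names is an illustrative assumption.
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    owner: str = "unassigned"  # who is accountable for this stage
    complete: bool = False     # flipped as the governance review progresses


GENAI_JOURNEY = [
    Stage("Define use case"),
    Stage("Select model type"),
    Stage("Integrate data sources"),
    Stage("Manage trust and security"),
    Stage("Define access levels"),
    Stage("Implement cost controls"),
    Stage("Track business case"),
]


def journey_status(stages: list[Stage]) -> str:
    """Summarize how far along the journey an initiative is."""
    done = sum(s.complete for s in stages)
    return f"{done}/{len(stages)} stages complete"


if __name__ == "__main__":
    GENAI_JOURNEY[0].complete = True  # e.g., the use case has been defined
    print(journey_status(GENAI_JOURNEY))  # -> "1/7 stages complete"
```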

When defining the appropriate use case, Redman encouraged viewers to start with one that has high information friction, to ensure that the AI implementation delivers tangible value. Enterprises should also incorporate source citations and paths to trusted resources, and make users clearly aware that they are speaking to AI, according to Redman.
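As a loose illustration of that guidance, the sketch below shows one way a generated answer might carry source citations and resource paths back to the user; the data structure and field names here are assumptions for the example, not part of Lucidworks’ product.

```python
# Illustrative only: package a generated answer with the passages that
# grounded it, so users can trace claims back to trusted resources and
# can see that the response came from AI.
from dataclasses import dataclass


@dataclass
class Passage:
    doc_id: str  # identifier of the source document
    path: str    # path or URL to the trusted resource
    text: str    # retrieved snippet used as grounding


def build_cited_answer(generated_text: str, passages: list[Passage]) -> dict:
    """Attach explicit citations and AI provenance to the model's output."""
    return {
        "answer": generated_text,
        "generated_by_ai": True,  # make the AI provenance explicit to users
        "citations": [{"doc_id": p.doc_id, "path": p.path} for p in passages],
    }


if __name__ == "__main__":
    grounding = [Passage("hr-042", "https://intranet.example/hr/pto-policy",
                         "PTO accrues monthly at a rate of...")]
    print(build_cited_answer("Employees accrue PTO monthly.", grounding))
```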

Ultimately, GenAI succeeds in an ensemble, where a contextually aware experience is built from a myriad of components—including semantic document chunking, post-query intent analysis, reference document ranking, summarization, and more. Examining where GenAI fits and where it doesn’t, especially with regard to cost efficiency, is vital.

“It’s easy to get excited about the shiniest new tool and then lose sight of the fact that there’s a lot of elements in these systems, and you don’t need GenAI for every piece,” said Ryan. “Sometimes you can get there with a simple embeddings model, or simple rules or heuristics.”
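To make the ensemble idea concrete, the sketch below wires together deliberately simple stand-ins for each stage (keyword heuristics and word overlap rather than models), underscoring Ryan’s point that not every piece needs GenAI. It is an illustration under those assumptions, not a reflection of how Lucidworks builds its pipeline; in practice, the summarization slot is where a generative model would typically sit.

```python
# Illustrative ensemble pipeline: each stage is a cheap placeholder chosen to
# show the shape of the system. In a real deployment, some stages might use
# an embeddings model or an LLM, and others simple rules or heuristics.

def chunk_document(text: str, max_words: int = 60) -> list[str]:
    """Stand-in for semantic chunking: split on paragraphs, cap chunk size."""
    chunks = []
    for para in text.split("\n\n"):
        words = para.split()
        for i in range(0, len(words), max_words):
            chunks.append(" ".join(words[i:i + max_words]))
    return chunks


def analyze_intent(query: str) -> str:
    """Stand-in for post-query intent analysis: plain rules, no model."""
    return "question" if query.lower().startswith(("how", "why", "what")) else "lookup"


def rank_chunks(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Stand-in for reference document ranking: simple word overlap."""
    q_terms = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_terms & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]


def summarize(chunks: list[str], max_sentences: int = 2) -> str:
    """Stand-in for summarization: extract leading sentences of top chunks."""
    sentences = " ".join(chunks).split(". ")
    return ". ".join(sentences[:max_sentences])


def answer(query: str, document: str) -> dict:
    """Orchestrate the stages; only summarization would typically call GenAI."""
    chunks = chunk_document(document)
    top = rank_chunks(query, chunks)
    return {
        "intent": analyze_intent(query),
        "summary": summarize(top),
        "supporting_chunks": top,
    }


if __name__ == "__main__":
    doc = ("Expense reports are due monthly. Submit receipts within 30 days.\n\n"
           "Travel must be pre-approved by a manager.")
    print(answer("How do I submit an expense report?", doc))
```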

Furthermore, disparate data sources come with varying characteristics, making consistent data quality more difficult to attain. Yet this poses a unique opportunity for GenAI, which can mitigate the differences between data sources to deliver a more consistent integration at query time, according to Redman. Large and complex documents need special attention, and different data sources may require different approaches to extract meaning.
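As a loose sketch of that last point, the snippet below routes documents to source-specific extraction before chunking and indexing. The source types and the handlers attached to them are assumptions invented for this example, not a list of real connectors.

```python
# Illustrative dispatch of extraction strategies by source type. The source
# names and handlers are assumptions for this sketch, not real connectors.

def extract_wiki(raw: str) -> str:
    """Wiki pages: strip simple heading markup before indexing."""
    return raw.replace("==", "").strip()


def extract_ticket(raw: str) -> str:
    """Support tickets: drop boilerplate header lines, keep the description."""
    lines = [ln for ln in raw.splitlines()
             if not ln.startswith(("Ticket-ID:", "Status:"))]
    return " ".join(lines).strip()


def extract_pdf_text(raw: str) -> str:
    """Long extracted PDF text: collapse hard line breaks into paragraphs."""
    return " ".join(raw.split())


EXTRACTORS = {
    "wiki": extract_wiki,
    "ticket": extract_ticket,
    "pdf": extract_pdf_text,
}


def normalize(source_type: str, raw: str) -> str:
    """Route each document to a source-appropriate extractor, with a safe default."""
    return EXTRACTORS.get(source_type, str.strip)(raw)


if __name__ == "__main__":
    raw_ticket = "Ticket-ID: 7\nStatus: open\nVPN drops every hour on the hour."
    print(normalize("ticket", raw_ticket))  # -> "VPN drops every hour on the hour."
```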

Ryan and Redman continued their discussion of successful GenAI implementation, covering additional topics such as the relationship between models and partners, the necessity of models respecting access controls, cost impacts, and new approaches to measuring GenAI.

You can view an archived version of the full, in-depth webinar here.
