Establishing trustworthy AI for successful implementation
Generative AI (GenAI) paves the way for a variety of opportunities in the world of productivity, from process automation to optimization and more. Yet, despite its potential, the technology carries significant challenges that can hurt a business rather than help it: hallucinations, or seemingly plausible but incorrect outputs, can make for risky implementations.
Ville Somppi, senior vice president of industry solutions at M-Files, joined KMWorld’s webinar, Achieving Trustworthy AI: Unlocking the Power of Generative and Extractive Models, to offer his expertise on grounding AI and explore the ways that AI can be positioned for success through trust.
We live in a new era for human-machine interaction, where, due to emerging technologies like large language models (LLMs), machines know better than ever how to communicate with people, noted Somppi. This has presented a world of new possibilities, leading to use cases such as live transcription, content generation, research assistance, and more.
Although it's easy to get swept up in the GenAI hype, there is a fundamental question to consider: Who is responsible for validating sources and results? Assessing the trustworthiness of the information GenAI produces is fundamental to ensuring that the technology is as valuable as it is impressive.
“The modern large language models have been trained in a way where they want to answer, even if they…[don’t] know the answer,” said Somppi. This can lead to some obvious knowledge disasters, especially if the GenAI platform is operating in a highly regulated, highly critical industry.
Somppi then introduced the distinction between generated and extracted results: a generated result is something new conjured from the AI’s imagination, while an extracted result is pulled from existing information with AI assistance. Extracting information relies on knowing the sources the AI is using and, if applicable, limiting the AI to organizational knowledge rather than public information.
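To make the extractive pattern concrete, here is a minimal sketch in Python. The in-memory document store, the naive keyword retrieval, and the `llm_complete` function are all illustrative assumptions rather than any particular product's API; the point is that the prompt constrains the model to retrieved sources, and that those sources are listed back to the user.

```python
# Minimal sketch of extractive (grounded) Q&A: the model may only draw on
# retrieved organizational documents, and every answer reports its sources.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    return "Answer based strictly on the supplied sources."

def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.text.lower().split())))
    return scored[:k]

def grounded_answer(query: str, docs: list[Document]) -> str:
    sources = retrieve(query, docs)
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in sources)
    prompt = (
        "Answer ONLY from the sources below. If they do not contain the "
        f"answer, say so. Cite source ids.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    answer = llm_complete(prompt)
    cited = ", ".join(d.doc_id for d in sources)
    return f"{answer}\n(Sources consulted: {cited})"

corpus = [
    Document("contract-12", "The renewal term for the Acme contract is 24 months."),
    Document("memo-3", "Office plants are watered on Fridays."),
]
print(grounded_answer("What is the renewal term for the Acme contract?", corpus))
```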
Can we trust AI? Yes, according to Somppi, though it’s not without its challenges.
There are three main obstacles that stand in the way of trustworthy AI:
- Connectivity: AI needs access to the right information resources to provide real value.
- Confidentiality: AI must comply with the organization's information security policy and must not disclose information to which the user is not entitled.
- Curation: AI should only access relevant and up-to-date information to ensure the accuracy of its responses.
Uniting inherently disparate systems is fundamental to building effective GenAI, explained Somppi. Eliminating information silos, from systems to departments, is critical. Enterprises should examine what needs to be connected in their unique infrastructures to successfully enable AI.
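One common way to approach connectivity, sketched below under simple assumptions, is to place each siloed system behind a shared search interface so the AI layer can query all of them uniformly. The connector classes here are hypothetical stand-ins for real system integrations.

```python
# Minimal sketch of bridging information silos behind one search interface.
# The connector classes are hypothetical stand-ins for real system APIs.

from typing import Protocol

class Connector(Protocol):
    def search(self, query: str) -> list[str]: ...

class CrmConnector:
    def search(self, query: str) -> list[str]:
        return [f"CRM record matching '{query}'"]            # would call the CRM API

class FileShareConnector:
    def search(self, query: str) -> list[str]:
        return [f"File-share document matching '{query}'"]   # would call the file share

def federated_search(query: str, connectors: list[Connector]) -> list[str]:
    """Query every connected system and merge the results for the AI layer."""
    results: list[str] = []
    for connector in connectors:
        results.extend(connector.search(query))
    return results

print(federated_search("renewal terms", [CrmConnector(), FileShareConnector()]))
```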
Maintaining information security is core to trustworthy AI, as clever prompt engineering on the part of an end user may cause the GenAI to reveal sensitive information. Somppi explained that enterprises need to consider identity management, general IT access control, and the access intricacies of interdependent users, teams, departments, business units, and projects.
“In order to overcome the obstacle of confidentiality, the real question is, ‘Within our business, what permission models make sense?’” posed Somppi. “How do we control security; how do we control permissions of users—and therefore the AI—to information?”
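One way to answer that question, shown in the minimal sketch below, is to enforce permissions at retrieval time: documents the user is not entitled to read are filtered out before anything reaches the model's context. The group-based permission model is an illustrative assumption.

```python
# Minimal sketch of permission-aware retrieval, assuming each document
# carries a set of groups entitled to read it. The filter runs BEFORE
# retrieval, so restricted content never reaches the model's context.

from dataclasses import dataclass, field

@dataclass
class SecuredDocument:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)

def visible_to(user_groups: set[str], docs: list[SecuredDocument]) -> list[SecuredDocument]:
    """Keep only documents at least one of the user's groups may read."""
    return [d for d in docs if d.allowed_groups & user_groups]

corpus = [
    SecuredDocument("hr-001", "Salary bands for 2024 ...", {"hr"}),
    SecuredDocument("eng-042", "Deployment runbook ...", {"engineering"}),
]

# An engineering user's queries can never surface HR content,
# regardless of how cleverly the prompt is phrased.
candidates = visible_to({"engineering"}, corpus)
assert [d.doc_id for d in candidates] == ["eng-042"]
```

Filtering before retrieval, rather than merely instructing the model not to disclose, means confidentiality does not depend on the model following instructions.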
The curation challenge relates to the organization, cleanliness, and management of information, ensuring that the GenAI’s outputs are accurate and fresh. Within your business, consider the following questions (a sketch of how such curation checks might be automated follows the list):
- Do we make copies of information?
- Do we know what version is official?
- Do we archive old data?
- Do we automate classification?
- Do we tag content with context?
- Do we tag sensitive documents?
- Can we exclude private content?
- Do we leverage public sources?
- Can we list the used sources?
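As a minimal illustration of how several of these questions can be enforced in practice, the sketch below assumes documents are tagged with an official-version flag, a last-reviewed date, and a sensitivity marker; the corpus is then filtered down to the content eligible for AI retrieval. The metadata fields and policy thresholds are illustrative assumptions.

```python
# Minimal sketch of curation filters over tagged documents. Only official,
# fresh, non-sensitive content is made eligible for AI retrieval.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TaggedDocument:
    doc_id: str
    text: str
    is_official_version: bool
    last_reviewed: date
    sensitive: bool

def curated(docs: list[TaggedDocument], today: date, max_age_days: int = 365):
    """Yield only documents that pass the curation policy."""
    cutoff = today - timedelta(days=max_age_days)
    for d in docs:
        if d.is_official_version and not d.sensitive and d.last_reviewed >= cutoff:
            yield d

today = date.today()
corpus = [
    TaggedDocument("pol-7", "Travel policy v3 ...", True, today - timedelta(days=30), False),
    TaggedDocument("pol-7-draft", "Travel policy v4 DRAFT ...", False, today - timedelta(days=5), False),
    TaggedDocument("hr-9", "Disciplinary records ...", True, today - timedelta(days=60), True),
]

eligible = [d.doc_id for d in curated(corpus, today)]
print(eligible)  # ['pol-7']: official, non-sensitive, recently reviewed
```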
For the full, in-depth webinar, featuring use cases, an M-Files demo, and more, you can view an archived version here.