What’s next in KM: All roads lead to AI
“It’s important to remember that GenAI is not a panacea,” cautioned Stradtman. “There have been a lot of false promises in the community that GenAI alone will solve every KM problem, and that is not the case. However, with GenAI we are likely entering a new phase.” Companies are thinking more about the importance of well-structured and accurate enterprise content. “A year from now, good KM platforms will have the ability to identify and eliminate redundant and outdated information automatically,” Stradtman predicted, “and to organize unstructured knowledge for better ingestion into GenAI large language models (LLMs).” That capability should yield more accurate answers when users query a large number of sources.
The value of established KM technologies should not be overlooked, though, Stradtman pointed out. “Onboarding is an unsung hero with respect to KM. One consumer goods company showed a 20%–30% increase in efficiency by having the data they needed readily available. At a rate of hiring 165 people a year, that was over 4,000 hours saved, or almost $1 million.” These initiatives should continue, along with emerging ones.
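The back-of-the-envelope math behind those figures can be checked. The per-hire savings and the implied hourly rate below are assumptions inferred from the quoted numbers, not values stated in the source:

```python
# Rough consistency check of the onboarding ROI figures quoted above.
# The 25 hours saved per new hire is an assumed value chosen so that
# 165 hires per year yields the "over 4,000 hours" cited; it is not
# a number given in the article.
hires_per_year = 165
hours_saved_per_hire = 25  # assumption

total_hours_saved = hires_per_year * hours_saved_per_hire
print(total_hours_saved)   # 4125, consistent with "over 4,000 hours"

# Implied fully loaded labor rate if ~4,125 hours equates to ~$1 million.
implied_hourly_rate = 1_000_000 / total_hours_saved
print(round(implied_hourly_rate))  # ~242 dollars/hour
```

At roughly $240 per hour fully loaded, the “almost $1 million” claim is plausible for knowledge workers, which is the sanity check the sketch performs.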
Governance in the age of AI
With respect to governance, AI is both a blessing and a curse. On one hand, it can help locate, categorize, and manage information, ensuring privacy and appropriate use. On the other hand, AI itself must be governed to ensure ethical use, which adds another layer of accountability, such as the need to conduct an AI inventory to understand where AI is being used across the enterprise.
The emergence of GenAI has put new pressure on data governance. “AI is data hungry,” said Blake Brannon, chief product and strategy officer at OneTrust. “Data is the fuel and AI is the engine. AI needs lots of data, and organizations are collecting a lot of it. But this can lead to a massive data sprawl problem, especially with unstructured data.” Organizations frequently do not know what data they have or where it is. “It’s hard to operationalize AI governance without data governance, but you do ultimately need to do both,” he added.
Data for any purpose should be collected responsibly and its intended use made clear, with consent or permission made explicit. “Security and data controls have historically focused on protecting data, especially from bad actors,” Brannon noted. “But now, it’s not just about protecting the data. It’s about proactively ensuring responsible use of that data.”
One of the functions of OneTrust is to help organizations discover data and AI in their business to understand the context of data: where it’s stored, what type of data it is, and how it is being used. “Govern these assets, identify risk, and prevent any use of the data until its use has been authorized and approved,” Brannon advised. “In addition, understanding third-party AI risk is a critical component of AI governance. An organization is still liable and accountable for issues stemming from AI in the supply chain,” he added.
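The default-deny pattern Brannon describes, blocking any use of a data asset until governance has reviewed and approved it, can be sketched in a few lines. The asset fields, approval flow, and names below are illustrative assumptions, not OneTrust’s actual data model or API:

```python
# Minimal sketch of a "no use until authorized" governance gate,
# assuming a simple asset record with location, type, and intended use.
from dataclasses import dataclass


@dataclass
class DataAsset:
    name: str
    location: str           # where the data is stored
    category: str           # what type of data it is (e.g., PII)
    intended_use: str       # the purpose consented to at collection
    approved: bool = False  # default-deny until governance signs off


def authorize(asset: DataAsset) -> None:
    """Mark an asset as approved after risk review (review itself stubbed out)."""
    asset.approved = True


def request_use(asset: DataAsset, purpose: str) -> str:
    """Release an asset only if it is approved AND the purpose matches."""
    if not asset.approved:
        raise PermissionError(f"{asset.name}: blocked pending governance approval")
    if purpose != asset.intended_use:
        raise PermissionError(f"{asset.name}: purpose '{purpose}' not authorized")
    return f"{asset.name} released for {purpose}"


# Hypothetical usage: use is refused until the asset is reviewed and approved.
crm = DataAsset("customer-emails", "warehouse/eu-west", "PII", "support-analytics")
try:
    request_use(crm, "support-analytics")
except PermissionError as err:
    print(err)              # blocked: not yet approved

authorize(crm)
print(request_use(crm, "support-analytics"))  # now released
```

The two checks mirror the article’s point: approval gates the asset itself, while the purpose check enforces the consented use, so even an approved dataset cannot quietly be repurposed, say, for model training.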
Regulatory complexity is another challenge for governance in the age of AI. The EU AI Act, published in July 2024, provides a legal framework for the regulation of AI systems. It focuses on the lifecycle of high-risk AI systems (biometric identification, critical infrastructure, and other categories), including requirements for training data and data governance, technical documentation, recordkeeping, technical robustness, transparency, human oversight, and cybersecurity. Most of its provisions are scheduled to take effect in 2026.