As GenAI opens up new possibilities, it’s important to remember that human intelligence needs to remain part of the KM system. It’s humans who can spot an error in an AI-created document, who can assess the viability of a suggested course of action, who can hold opinions about a topic, who can bring tacit knowledge to the forefront, and who can explain why a decision is likely to provoke an emotional response. As any GenAI chatbot will quickly tell you, it can’t form an opinion, and it doesn’t have emotions. At the heart of AI are mathematics and predictive analytics, not human understanding.
Another very human trait is trust. Right now, trust in AI technologies is at a low point. While people are happy to experiment with GenAI to create new knowledge, skepticism remains about whether that new knowledge is accurate and trustworthy. Metrigy’s study of more than 500 U.S.-based people, “Customer Experience Optimization: 2023–24: Consumer Perspective” (https://metrigy.com/product/customer-experience-optimization-2023-24-consumer-perspective), found that “Just shy of one-third have no trust whatsoever in AI. There is a significant change by age group, where 43% of those 45 or older don’t trust it compared to 18% of those younger than 45.” Trust is likely to remain an issue over the next few years. Until people trust the AI embedded in KM systems, a totally AI-centric workplace will remain a fantasy.
The future of KM will be built on collaborative work habits, fueled by technology that encourages knowledge sharing, enhances productivity, and helps employees maintain a healthy work life, along with an acceptance that not every aspect of knowledge management is technology-reliant.