
Bringing adult supervision to machine learning and AI

Other important steps

  1. You’ll need a knowledge-vetting process. Just as with machines, even the very best human experts can be wrong. They can also go rogue. Whether human or machine, it’s important to keep bad habits, biases, and outdated conclusions from creeping in.

Warning: This requires actually admitting and learning from faults and mistakes, which in turn requires a great deal of honesty and trust. Strive to achieve as close to full transparency as possible.

  2. Be error-seeking as well as goal-seeking. Know what has the greatest potential negative impact versus the probability of its occurrence. Use good old-fashioned scenario planning to identify and mitigate possible unintended consequences (a simple ranking sketch follows this list).
  3. If you don’t already have them, start appointing metadata owners. Eventually, you’ll need to train, nurture, and grow them into full ontology owners. In the world of big data, the notion of database ownership is rapidly becoming obsolete. Ontology is the key to cutting big data down to size (recall “Big opportunities in small data,” The future of the future, April 2016).
  4. Don’t forget accessibility (see “No one left behind,” The future of the future, March 2018). The only thing worse than bias is outright exclusion, especially if someone is denied access to knowledge because your system doesn’t adequately accommodate users with disabilities.
  5. Make sure knowledge governance doesn’t become its own isolated silo. Incorporate any existing corporate governance models your organization employs, including IT, information, and others.
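
To make the impact-versus-probability exercise in step 2 concrete, here is a minimal sketch in Python. The scenario names and scores are hypothetical placeholders for whatever your own scenario-planning sessions produce; the point is simply to make the trade-off visible so mitigation effort goes where it matters most.

```python
# Minimal sketch: rank hypothetical risk scenarios by expected negative impact.
# Scenario names, impact scores (1-10), and probabilities are illustrative only.
scenarios = [
    {"name": "Model drifts after a data source changes",   "impact": 9,  "probability": 0.6},
    {"name": "Training data encodes a hidden bias",        "impact": 10, "probability": 0.3},
    {"name": "Ontology owner leaves, metadata goes stale",  "impact": 6,  "probability": 0.8},
]

# Sort by impact x probability, highest first, so the riskiest scenarios surface.
for s in sorted(scenarios, key=lambda s: s["impact"] * s["probability"], reverse=True):
    print(f'{s["impact"] * s["probability"]:5.1f}  {s["name"]}')
```

A ranking this crude is not the end of the analysis, but it is enough to start the conversation about which unintended consequences deserve mitigation first.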

Something old, something new

Cyclomatic complexity was introduced in 1976 by mathematician Thomas J. McCabe as a means of reducing software errors by measuring, and then limiting, the number of independent paths through a program. The McCabe complexity metric, as it came to be known, proved particularly useful in addressing the Y2K problem: the approach was to identify all the possible date-related paths that needed to be validated, ensuring there were no hidden branches that could result in a system failure.
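
To make the metric concrete, here is a minimal sketch that approximates McCabe complexity for a snippet of Python by counting decision points and adding one. The counting rules are simplified compared with full-featured analysis tools, and the example function is purely illustrative.

```python
import ast

# Node types treated as decision points (simplified McCabe counting rules).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + the number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

# Illustrative snippet with three branches -> complexity 4,
# i.e., four independent paths that each need to be validated.
snippet = """
def classify(score):
    if score > 0.9:
        return "high"
    elif score > 0.5:
        return "medium"
    elif score > 0.1:
        return "low"
    return "none"
"""
print(cyclomatic_complexity(snippet))  # 4
```

The number itself is the discipline: the higher it climbs, the more independent paths must be tested before the code can be trusted.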

Today’s AI software presents the same challenge, only on a much larger scale. The good news is that this tried-and-proven methodology is well suited to evaluating the opaque, “black box” world of machine learning. That is especially true for neural networks, which can easily exhibit runaway complexity and, with it, a heightened risk of errors.

As for something new, the Cyberlearning and Future Learning Technologies Program of the U.S. National Science Foundation (NSF) provides funding to develop new ways for humans and machines to learn in a technology-rich world. The goal is to improve our capacity to solve “wicked” problems involving multiple, interlinked complex systems. Such problems include climate change, crime, communicable diseases, transportation, and others. More importantly, the program involves the entire learning spectrum from K–12 school districts to universities to the workplace.

A related NSF-funded program is led by Lehigh University researchers Ting Wang and Eric Baumer. As with many software programs, machine learning algorithms are often pieced together from existing models, many of which are open source or downloadable at little or no cost. The problem is that any latent vulnerabilities are inherited by the new, aggregated model. This has serious implications for a wide variety of critical applications, including smart cities, autonomous vehicles, medical diagnoses, high-frequency securities trading, and others. Wang and Baumer are addressing this challenge by developing methods for building trusted machine learning algorithms from untrusted models.
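
Their methods are beyond the scope of this column, but the underlying concern is easy to illustrate: never fold a downloaded model component into a critical pipeline without first verifying it. The sketch below shows one of the most basic checks, comparing a file’s checksum against a value published by a source you trust. The file name and expected digest are placeholders, and this is ordinary supply-chain hygiene rather than the Lehigh team’s technique.

```python
# Minimal, illustrative check before reusing a downloaded model component:
# verify its checksum against a value published by a trusted source.
# The file path and expected digest below are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder

if sha256_of("pretrained_model.bin") != EXPECTED:
    raise RuntimeError("Model file does not match the published checksum; do not load it.")
```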

The right balance

Human and machine knowledge governance has many moving parts. No governance means leaving things to chance. Too much governance means clogging up the system and slowing things down to a crawl. The trick is achieving the right balance based on your organization’s size, goals, strategy, and risk profile.

As with any self-organizing, complex system, knowledge governance needs time and nurturing to properly emerge, grow, and evolve. Such notions are familiar to us KMers. Once again, it’s time to step up and lead the way.

This is something that can’t be left to chance. Start putting these basic principles to work in your KM practice now, and you will learn every step of the way.
