Bringing adult supervision to machine learning and AI

You’ve probably heard about Tay, the infamous chatbot developed by Microsoft. It even had its own Twitter account. Unfortunately, it barely lasted a day before it had to be shut down for producing a stream of hateful, racist comments.

You’re probably also familiar with the Turing Test, which assesses machine intelligence by how well it mimics human intelligence. If he were alive today, Alan Turing might be inclined to revise his criteria. Tay certainly exhibited human-like behavior, but the behavior it learned was the worst that humans have to offer. IBM had a similar experience when it added the Urban Dictionary to Watson’s input data stream.

The good news is that these isolated incidents provide valuable lessons that hopefully will be remembered far down the road. We’ll need those lessons and more as the rate of growth in machine intelligence continues to outpace that of human intelligence.

We’re certainly enjoying many benefits from AI. Machine learning platforms are valuable tools for discovering hidden patterns, anomalies, and opportunities. But these tools rarely provide insights into how they derive their knowledge, or when a model or algorithm is no longer valid. This greatly increases risk, as Microsoft, Boeing, and other automation-intensive organizations are finding out.

Consider how many of your own business decisions are automated. Do you even know how many machine learning algorithms and business rules your organization relies on? And if you do, how do you determine whether they’re still valid?
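As a starting point, even a simple inventory helps. The following Python sketch (the fields, example entries, and the 0.75 accuracy floor are illustrative assumptions, not a prescribed standard) flags automated decision assets whose recent performance suggests they need revalidation:

    # Minimal sketch of an inventory of automated decision logic.
    # The fields and the 0.75 accuracy floor are illustrative
    # assumptions, not a prescribed standard.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DecisionAsset:
        name: str               # model or business rule
        owner: str              # accountable human
        last_validated: date
        recent_accuracy: float  # measured against fresh outcome data

    ACCURACY_FLOOR = 0.75  # assumed revalidation trigger

    inventory = [
        DecisionAsset("credit-limit-model", "risk team", date(2024, 11, 1), 0.83),
        DecisionAsset("reorder-point-rule", "supply team", date(2023, 2, 15), 0.64),
    ]

    for asset in inventory:
        if asset.recent_accuracy < ACCURACY_FLOOR:
            print(f"REVALIDATE: {asset.name} (owner: {asset.owner}, "
                  f"accuracy {asset.recent_accuracy:.2f})")

Even a list this crude answers the two questions above: it tells you how many automated decision-makers you have, and it gives you a trigger for asking whether each one is still valid.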

One way to get a grip on this seemingly runaway trend is to start incorporating knowledge governance into your organization. Whether a piece of critical knowledge is inside a person’s head or an artificial neural network, you need insight (and oversight) into what exactly enters that “brain,” what happens inside it, what comes out, and why.
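In software terms, that oversight starts with an audit trail. Here is a minimal Python sketch (the field names and the example decision are assumptions for illustration, not an established schema) that records what enters an automated “brain,” what comes out, and why:

    # Sketch of a decision audit log: what entered the "brain",
    # what came out, and why. Field names are illustrative.
    import json
    from datetime import datetime, timezone

    def log_decision(model_name, inputs, output, rationale,
                     log_file="decisions.jsonl"):
        """Append one auditable record per automated decision."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model": model_name,
            "inputs": inputs,        # what entered the brain
            "output": output,        # what came out
            "rationale": rationale,  # why (e.g., rule fired, top features)
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical example of one logged decision:
    log_decision(
        "loan-approval-v3",
        inputs={"income": 72000, "debt_ratio": 0.31},
        output="approved",
        rationale="debt_ratio below 0.35 threshold",
    )

The same record works whether the “brain” is a neural network or a person following a business rule; what matters for governance is that the inputs, the outcome, and the stated reasoning are all captured somewhere reviewable.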

Getting started

The notion of knowledge governance shouldn’t conjure up images of some lofty tribunal seated in a grand hall, resolving disputes. Rather, it’s a well-defined process for getting everybody in your organization actively engaged in using human and machine knowledge to legally, morally, and ethically achieve mutually agreed-upon goals.

Think of all the moving parts that need to be operating in harmony: strategy, enterprise architecture, security, legal, regulatory, finance, public relations, supply web, IT, KM, HR, and so on. In fact, you’d be hard-pressed to find a block on your organization chart that isn’t affected in some way by both human- and machine-generated knowledge. As such, your knowledge governance group needs to be more akin to a close-knit collaborative community than a traditional board of governors.

The first step is assembling this group. Similar to many governing boards, a knowledge governance board must be granted decision-making authority. One of the board’s first actions should be to formulate a set of core values for the organization regarding human and machine knowledge. Begin with a clearly stated purpose, followed by a set of well-defined and deeply internalized values.

For example, Google’s stated principles with regard to AI include being socially beneficial; avoiding unfair bias; ensuring safety, accountability, and privacy; and upholding high standards of scientific excellence (www.blog.google/technology/ai/ai-principles). Google makes it clear that these values can change as technology and society adapt and evolve.

It’s equally important to state what your organization will not pursue. In Google’s own words, such areas include “technologies that cause or are likely to cause overall harm,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”

Another important step to take early on is to examine all the critical decisions in your organization. If a decision is highly computational in nature, assign it to an automated system, but keep it under human oversight. If the decision is highly cognitive, intuitive, and experiential, make it people-centered, supported by computational validation.
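A rough sketch of that triage rule, assuming simple 0-to-1 ratings of how computational versus cognitive a decision is (the scoring scheme is purely illustrative, not a formal methodology):

    # Sketch of the triage rule described above. The scoring
    # scheme is an illustrative assumption.
    def triage_decision(computational_score, cognitive_score):
        """Route a decision based on rough 0-1 ratings of how
        computational vs. cognitive/intuitive it is."""
        if computational_score > cognitive_score:
            return "automate, with human oversight"
        return "people-centered, with computational validation"

    print(triage_decision(0.9, 0.2))  # e.g., a reorder-point calculation
    print(triage_decision(0.3, 0.8))  # e.g., a key-hire decision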

Bridgewater hedge fund founder Ray Dalio has used this human-machine interplay to generate billions in profits for himself and his clients. In his best-selling book, Principles, he wrote that “rather than blindly following the computer’s recommendations, I would have the computer work in parallel with my own analysis and then compare the two. When the computer’s decision was different from mine, I would examine why. Most of the time, it was because I overlooked something. In those cases, the computer taught me. But sometimes … I would teach the computer. We helped each other.”
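Dalio’s parallel loop is straightforward to sketch. In the hypothetical Python snippet below, agreement lets a decision proceed, while disagreement triggers the mutual-teaching review he describes (the decision values and the review action are illustrative assumptions):

    # Sketch of the human-machine parallel comparison Dalio describes:
    # run both, compare, and escalate disagreements for review.
    def compare_decisions(human_decision, machine_decision, case_id):
        if human_decision == machine_decision:
            return {"case": case_id, "status": "agree", "action": "proceed"}
        # Disagreement: either the human overlooked something
        # (the computer teaches) or the model is missing context
        # (the human teaches), so flag the case for joint review.
        return {"case": case_id, "status": "disagree", "action": "joint review"}

    print(compare_decisions("buy", "buy", case_id="T-1042"))
    print(compare_decisions("hold", "buy", case_id="T-1043"))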
