
Using AI in an uncertain world

Like anything else in life except death and taxes (and even the particulars of those are uncertain), uncertainty is something that humans deal with every day. From relying on the weather report for umbrella advice to getting to work on time, everyday actions are fraught with uncertainty, and we have all learned how to navigate an unpredictable world. As AI becomes widely deployed, it simply adds a new dimension of unpredictability. Perhaps, however, instead of trying to stuff the genie back in the bottle, we can develop some realistic guidelines for its use.

Our expectations for AI, and for computers in general, have always been unrealistic. The fact is that software is buggy and that algorithms are crafted by humans who hold certain biases about how systems and the world work, biases that may not match your own. Furthermore, no data set is unbiased, and we train AI systems on data sets with built-in biases or with holes in the data. Those systems are, by their very nature, biased or lacking in information. If we depend on those systems to be perfect, we are letting ourselves in for errors, mistakes and even disasters.

However, relying on biased systems is no different from asking a friend who shares your worldview for information that may serve to bolster that view rather than balance it. And we do that all the time. Finding balanced, reliable, reputable information is hard and sometimes impossible. Anyone navigating an uncertain world tries to make decisions based on balanced information. The import of the decision governs (or should govern) the effort we make in hunting for reliable but differing sources. The speed with which a decision must be made often interferes with that effort. And we need to accept that our decisions will be imperfect or even outright wrong, because no one can amass, and correctly interpret, everything there is to know.

Perfect partnership

Where might AI systems fit into the information picture? We know that neither humans nor systems are infallible in their decision-making. Adding the input of a well-crafted, well-tested system, one based on a large volume of reputable data, to human decision-making can speed and improve the outcome. There are good reasons for that. Human thinking balances AI systems; they can plug each other's blind spots. Humans make judgments based on their worldview. They are capable of understanding priorities, ethics, values, justice and beauty. Machines can't. But machines can crunch vast volumes of data. They don't get embarrassed. They may find patterns we wouldn't think to look for. And humans can decide whether to use that information. That makes a perfect partnership in which one of the partners won't be insulted if its input is ignored.

Adding AI into the physical world, where snap decisions are required, raises additional design and ethical issues that we are ill-equipped to resolve today. Self-driving cars are a good example. In the abstract and at a high level, it has been shown that most accidents and fatalities are due to human error, so self-driving cars may help us save lives. Now we come down to the individual level. Suppose we have a sober, skilled, experienced driver who would recognize a danger she has never seen before. Suppose that we have a self-driving car that isn't trained on that particular hazard. Should the driver or the system be in charge? I would opt for an AI-assisted system with override from a sober, experienced driver.

On the other hand, devices with embedded cognition can be a boon that changes someone's world. One project at IBM Research is developing self-driving buses to assist the elderly or the disabled in living their lives independently. Like Alexa or Siri on a smaller scale, that could change lives. We come back to the matter of context, use and value. There is no single answer to human questions of "should."

Conditioning considerations

That brings us to the question of trust. Given that several layers of bias will inevitably be built into the cognitive systems we interact with, and given that the behaviors those systems produce are occasionally unpredictable, what kind of trust should we place in AI-based systems, and under what circumstances? That depends on:

  • The impact of wrong or misleading information—Poor decisions? Physical harm? Momentary annoyance?
  • The amount and reliability of the data that feeds the system.
  • The goals of the system designers—Are they trying to convince you of something? Mislead you? Profit from your actions?
  • The quality of the question/query.

Underlying those four important conditioning considerations is a fundamental challenge: In many of the outcomes from systems involving machine learning, particularly in the current applications of deep learning, it can be exceedingly difficult for the human decision-maker to analyze how the computer came up with its report, output or recommendation(s).

At this point in the development of the technology, we can't simply ask the system, as we could a human collaborator, how it reached the answer or recommendation it is proposing. Most systems today are not designed to be self-explanatory. Your algorithms may not be forthcoming about the routes they took through the data to develop their answer for you. In many applications where solutions are well-contextualized, the system's answers won't raise questions or eyebrows, and you, the user, will be able to say, for example, "Yes, this is an image of a nascent tumor." But in other applications, the context may be murkier, and the human user will be left to wonder whether to trust the recommendation coming back from the machine. "Should I really be shorting Amazon shares at this level? What makes you think so?"
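
That said, tooling does exist for interrogating a trained model after the fact, even if it falls short of a real explanation. Here is a minimal sketch, assuming a scikit-learn classifier on a stock dataset (both are illustrative placeholders, not anything discussed above), that uses permutation importance to surface which inputs the model leaned on most:

```python
# Minimal sketch (illustrative assumptions throughout): probe which inputs
# a trained model relied on, using scikit-learn's permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any tabular dataset and estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a large drop means the model depended heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

A ranking of accuracy drops is a far cry from an answer to "what makes you think so," but it at least narrows the question of where to direct human scrutiny.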

Is there some way to design systems so that they become an integral part of our thinking process, including helping us develop better questions, focus our problem statements and reveal how reliable their recommendations are? Can we design systems that are transparent? Can we design systems that help people understand the vagaries of probabilistic output? Will we be able to collaborate with an AI about whether an umbrella or a full foul-weather outfit is the better choice for today's conditions? Insightful application design will remain the key, always taking direction from the context of the use and the intention of the user.
