What happens when AI meets a pandemic?

In an opinion piece in The New York Times written just as the novel coronavirus pandemic was taking off in New York City, author David Brooks used the phrase “plague eyes” to describe the radically new perspective taking hold in the consciousness of everyone facing the unprecedented health threat.

One of the things worth turning those eyes toward is the challenge of accessing trustworthy information at a time when new problems call for fast and far-reaching decisions from government politicians and bureaucrats, enterprise executives, healthcare leaders, small business owners, and individual families. At the time Brooks’ column was written, confusing and contradictory information was coming out of the healthcare agencies, the Trump administration, the Chinese government, the mainstream media, the European media, and the media of the right, to name a few sources of real and alternative facts.

What is clear

This is what we can see clearly after some months of reading, watching, and listening to the pronouncements on the crisis from around the globe: Content challenges continue to dog AI. For example, should AI be able to create its own channel of authoritative information so that both fake-news people and alternative-facts people can access “unvarnished” truths? There has been more than enough time to realize that even the most sophisticated AI simply cannot offer the kind of discrimination that would make such an automatic, authoritative channel possible. That is partly because AI systems cannot reliably monitor and correct their own algorithms’ behavior over time, and partly because they have no ability to anticipate novel facts and events, such as those posed by this virus, and weave them into a coherent narrative context.

One of the prominent stories illustrating the AI content problem involves Facebook. The story highlights the fraught balance that internet giants must strike between the limited but proven abilities of humans in the world of digital content and the impressive but wild capabilities of the machine learning programs designed to support or back up those human workers.

Human judgment

In the information-poor early days of the virus’s spread in the U.S., Facebook decided to send hundreds of its content monitors—those humans whose job it is to review the posts and pictures on its site for objectionable content—home to shelter in place for safety reasons. Wired magazine estimates that Facebook and its third-party contractors employ some 15,000 human content monitors, and that many of their key functions cannot be performed in a work-from-home environment for security and other reasons.

From an operations strategy point of view, Facebook had decided that it would rely on its machine learning-based content-reviewing algorithms to take up the slack for the dramatic reduction in human resources—and is this not the kind of future that many AI proponents have been embracing? We shouldn’t need so many humans making routine decisions that can be carried out by machines, should we? Shouldn’t machine learning be as quick and accurate as humans in reading posts and comments?
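
The answer turns on how such classifiers actually learn. The minimal sketch below, using the scikit-learn library with hypothetical posts and labels, shows the kind of text classifier that underpins automated content review and why it stumbles on a novel event: a model trained on pre-pandemic posts has no representation of the crisis’s vocabulary, so its judgment of virus-related content rests on incidental word overlaps rather than on meaning.

```python
# A minimal sketch of a machine learning content classifier (hypothetical
# data; assumes the scikit-learn library). The point: a model trained on
# pre-crisis posts has never seen the crisis's vocabulary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Pre-pandemic posts labeled by human reviewers (1 = remove, 0 = allow).
train_posts = [
    "miracle pill cures cancer overnight",       # health misinformation
    "click here to claim your free prize now",   # spam
    "community bake sale this saturday",         # benign
    "photos from our hiking trip last weekend",  # benign
]
train_labels = [1, 1, 0, 0]

# Bag-of-words features feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

# A dangerous post about the novel virus: nearly every meaningful word is
# out-of-vocabulary, so the prediction rests on incidental overlaps
# (e.g., "from") and the training base rate, not on the post's meaning.
new_post = ["drinking bleach protects you from the coronavirus"]
print(model.predict_proba(new_post))
```

A human moderator recognizes the danger in that post instantly; the model, never having seen the words, cannot. Retraining closes the gap only after humans have labeled enough of the new content, which is precisely the resource Facebook had just sent home.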
