
What happens when AI meets a pandemic?

Technology’s limitations

As Wired reported, the first sign of trouble with this safety-first shift to machine learning-based monitoring appeared the day after Facebook sent most of its human monitors home. The move from a human-forward approach to content analysis to a machine-forward one brought unintended consequences. Specifically, Facebook’s spam filter rules, once handed over to the machines, suddenly began generating a flood of false positives: legitimate posts from reliable sources were being judged objectionable. Users noticed that content from mainstream media outlets such as The Atlantic, USA Today, the Times of Israel, and BuzzFeed was being taken down from Facebook for violating the platform’s spam rules.

Facebook itself apparently understood the problem well when it noted in a November 2019 enforcement report on objectionable content:

“While instrumental in our efforts, technology has limitations. We’re still a long way off from it being effective for all types of violations. Our software is built with machine learning to recognize patterns, based on the violation type and local language. In some cases, our software hasn’t been sufficiently trained to automatically detect violations at scale. Some violation types, such as bullying and harassment, require us to understand more context than others, and therefore require review by our trained teams.”

Continuous interaction

Technology’s limitations are now baked into our online operations like layers in a layer cake. Or, perhaps more suggestively, human and technology operations are coming to form a kind of double helix that plays a continuously interacting role in areas like content monitoring. Humans generally create content (although, through natural language generation, AI increasingly does, too). Machine learning systems then review the published or posted content and make initial decisions about its character. Humans are called upon to review complex cases. Algorithms, in turn, present the content decisions of both people and machines.
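
To make that loop concrete, here is a minimal, hypothetical sketch of such a human-in-the-loop moderation pipeline. Everything in it is an assumption for illustration: the crude keyword heuristic stands in for a trained classifier, and the function names, thresholds, and verdicts are invented, not a description of Facebook’s actual system.

```python
# Hypothetical sketch of the human/machine moderation loop described above.
# All names and thresholds are illustrative, not any platform's real system.

from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    REMOVE = auto()
    NEEDS_HUMAN_REVIEW = auto()


@dataclass
class Post:
    author: str
    text: str


def machine_review(post: Post) -> tuple[Verdict, float]:
    """Stand-in for an ML classifier: returns a verdict and a confidence.

    A crude keyword heuristic plays the role of the trained model here;
    a real system would score the post with a learned classifier.
    """
    spam_markers = ("free money", "click here", "miracle cure")
    hits = sum(marker in post.text.lower() for marker in spam_markers)
    confidence = min(1.0, 0.4 * hits)
    if confidence >= 0.8:
        return Verdict.REMOVE, confidence
    if confidence >= 0.4:
        return Verdict.NEEDS_HUMAN_REVIEW, confidence
    return Verdict.ALLOW, confidence


def human_review(post: Post) -> Verdict:
    """Stand-in for the trained human team handling context-heavy cases."""
    print(f"Escalated to a human reviewer: {post.text!r}")
    return Verdict.ALLOW  # placeholder decision


def moderate(post: Post) -> Verdict:
    verdict, confidence = machine_review(post)
    # Low-confidence, context-dependent calls go to people, as the
    # enforcement report quoted above describes.
    if verdict is Verdict.NEEDS_HUMAN_REVIEW:
        verdict = human_review(post)
    print(f"{post.author}: {verdict.name} (machine confidence {confidence:.1f})")
    return verdict


if __name__ == "__main__":
    moderate(Post("newsroom", "Our latest report on the pandemic response."))
    moderate(Post("spammer", "Free money! Click here for a miracle cure!"))
    moderate(Post("reader", "Click here to read The Atlantic's new feature."))
```

Note how, in this sketch, a legitimate post containing a single spam marker lands in the gray zone and is escalated to a person: exactly the kind of context-dependent judgment the enforcement report says automated systems handle poorly.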

We now inhabit a transition space where human traditions and judgments about the character and quality of various kinds of content are displayed side by side with machine-made judgments whose only traditions are to be found in their training sets. Six months into the pandemic, seen through plague eyes, we can see clearly that AI is currently in no shape to run our information supply from an undisclosed online location. The turning point in the development of AI technologies will not come when the “singularity” arrives and human intelligence suddenly becomes irrelevant. It will come when we find a way to leverage the machines for the things they do uniquely well, retain our fundamental human understandings, and build the crucial bridge of trust that will allow us to interact with those algorithms with insight and integrity.
