
Truth, lies, and large language models

As of early 2023, when you searched Google for the Dutch artist Johannes Vermeer, the top image was his famous “Girl with a Pearl Earring” … except that it was, in fact, a highly polished AI-generated image that looked like a perfume ad’s attempt to re-create the original. As Maggie Harrison points out in her June 5, 2023, article in Futurism (futurism.com/top-google-result-johannes-vermeer-ai-generated-knockoff), any doubts about the image’s authenticity are confirmed by the fact that the girl’s earrings are a source of light, centuries before light bulbs, much less LEDs, were invented.

Another post by Harrison, referencing an article in The New York Times (www.nytimes.com/2023/08/05/travel/amazon-guidebooks-artificial-intelligence.html), reveals that this degrading of the internet by generative AI can now be seen in other important outposts: Amazon, for example, is selling a series of travel guides generated by AI that contain errors and “hallucinations,” which is a nice way of saying “made-up stuff,” which, in turn, is a nice way of saying “BS,” which is a nice way of saying, well, never mind.

Generative AI, in the form of both ChatGPT and Google’s Bard, is responsible for another response that is both wrong and illogical: When asked to name an African country that starts with a “K,” Bard reported that none do, adding, “The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.” Yes, you read that correctly (www.itpro.com/technology/artificial-intelligence/ai-will-kill-google-search-if-we-arent-careful).

Gary Marcus, a leading voice in the conversation about the uses and abuses of AI, writes at the end of his Substack post on “What Google Should Really Be Worried About” (https://garymarcus.substack.com/p/what-google-should-really-be-worried): “Cesspools of automatically-generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces.”

Long-standing problems with fakes

We, of course, have long had the problem of fakes on the internet. They’re one of the prices of having an open, unmoderated, unmanaged, global distribution system. By automating the production of fakes, AI plays directly into weaknesses of the internet that are also its strengths: the internet’s connectivity makes it far easier to spread lies than to remove them, and its scale makes it infeasible for humans to moderate it effectively. The internet is the unfortunately perfect delivery system for AI’s hallucinations.

In truth, AI’s chatbots don’t hallucinate only sometimes. They always hallucinate; they always make stuff up. It’s just that, most of the time, their hallucinations are accurate. This accuracy is surprising given that the large language models (LLMs) that power AI chatbots know nothing about the world. Literally nothing. An LLM knows only which words typically follow other words in what we have written.
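To see how little that involves, consider a toy sketch of next-word prediction: a bigram model that counts which words follow which in a scrap of text and then generates fluent-sounding output from those counts alone. The training text and every name below are invented for illustration; real LLMs are neural networks trained on vast corpora, but the core move of predicting the next word is the same.

```python
# Toy bigram "language model": it knows nothing about the world,
# only which words tend to follow which in its training text.
from collections import Counter, defaultdict
import random

# Hypothetical training text, invented for illustration.
TRAINING_TEXT = (
    "the girl with a pearl earring is a painting "
    "the girl in the painting wears a pearl earring"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = TRAINING_TEXT.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Pick a plausible next word, weighted by observed frequency."""
    candidates = follows.get(word)
    if not candidates:
        return None
    choices, counts = zip(*candidates.items())
    return random.choices(choices, weights=counts)[0]

# Generate a fluent-sounding continuation, one word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

The output can sound perfectly plausible while meaning nothing: the model has no concept of girls, pearls, or paintings, only co-occurrence statistics, which is exactly why fluency is no guarantee of truth.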

The problem is not just that AI chatbots hallucinate, but that we’ve been hornswoggled into expecting them to be reliable sources of knowledge. They were built to put together human-like strings of words, and at this they are a flabbergasting, jaw-dropping success. The creators of the current crop of AI chatbots should have realized from the beginning that a system designed to create sentences that sound like knowledge has a special obligation not to let us fall for the very trick it was built to pull off: sounding like a human being who sounds like they know what they are talking about.
