
On Chat AI and BS


I prefer “hallucinations” to describe the times that chat AI makes up falsehoods, because a hallucination is an experience you think is veridical but is not. Someone in a panic pointing out a hallucinatory deer in the road isn’t BS-ing. They actually believe it. But—and it’s a big but—every word out of an LLM’s mouth is a hallucination. It’s just that most of what today’s chatbot AI engines say happens to be true. An LLM is like a person who is constantly hallucinating oases in the desert, except that sometimes there happen to be oases there.

Now, it’s important to ask why most of what LLMs hallucinate just “happens” to be true. It’s no accident. Most of what humans have written contains truth about the world. Even novels contain lots of truth: Don Quixote fictitiously traveled in Spain, which is a real place. He encountered windmills, which are real things. He tilted at them, which is a real human behavior. None of that knowledge is inscribed in the LLM as such, but it shapes how frequently words occur and how close together they tend to appear, and that is what lets the LLM assemble sentences that usually hallucinate a correct response.
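To make that concrete, here is a minimal sketch in Python, using an invented four-line corpus and a crude bigram model. This is not how any real LLM is built; it is the same principle in miniature: the program only tracks which words tend to follow which, yet its output is mostly “true” because the text it learned from mostly is.

import random
from collections import defaultdict, Counter

# A toy corpus (invented for illustration): mostly true statements.
corpus = [
    "don quixote traveled in spain",
    "spain is a real place",
    "windmills are real things",
    "don quixote tilted at windmills",
]

# Count which word follows which: a crude stand-in for "frequencies of
# words and their distances from one another."
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def hallucinate(start, max_words=4):
    # Every output is generated the same way, by sampling the next word
    # from observed frequencies, whether the result turns out true or false.
    out = [start]
    for _ in range(max_words):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(hallucinate("don"))    # "don quixote traveled in spain" or "don quixote tilted at windmills": both happen to be true
print(hallucinate("spain"))  # sometimes "spain is a real place", sometimes "spain is a real things": same mechanism, now false

The point is not the toy model’s quality; it is that the true outputs and the false ones come from exactly the same process.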

“Hallucinations” is an apt word

There are other words we could use, but none of them is as precise as “hallucination.” For example, when an LLM goes wrong, it’s not because it fabricated an answer, for when it says something true, that answer is fabricated also, and in precisely the same way that the false hallucinations are. It’s not lying, because it has no view of the world that it knows to be true, and it has no intent to deceive because it doesn’t have any intentions at all. Is it erring? Making a mistake? Well, if you choose your answers on a true-false test by flipping a coin and someone asks, “But how could you have gotten #7 wrong? It’s so easy!” saying you made a mistake would be very misleading.

So I’m sticking with “hallucinations” for all of chat AI’s statements, true or false.

But that leaves us with a question: Why isn’t there a word that perfectly expresses this situation? The answer is easy: LLMs are doing something genuinely new in our history. Our lack of a perfectly apt verb proves it.
