There are other words we could use, but none of them are as precise as “hallucination.” For example, when an LLM goes wrong, we can’t say it fabricated an answer, because when it says something true, that answer is fabricated too, in precisely the same way the false ones are. It isn’t lying, because it has no view of the world that it knows to be true, and it has no intent to deceive because it doesn’t have any intentions at all. Is it erring? Making a mistake? Well, if you choose answers on a true-false test by flipping a coin, and someone asks, “But how could you have gotten #7 wrong? It’s so easy!”, saying you made a mistake would be very misleading.
So I’m sticking with “hallucinations” for all of chat AI’s statements, true or false.
But that leaves us with a question: Why isn’t there a word that perfectly expresses this situation? The answer is easy: LLMs are doing something genuinely new in our history. Our lack of a perfectly apt verb proves it.