
On Chat AI and BS

Ethics and Information Technology ran an article in June 2024 titled “ChatGPT Is Bullsh*t” (doi.org/10.1007/s10676-024-09775-5) except without the asterisk. Let’s just call it BS.

I disagree with this thoughtful article written by three philosopher researchers at the University of Glasgow—Michael Townsen Hicks, James Humphries, and Joe Slater. While large language models (LLMs) like the one that powers ChatGPT definitely have a problem with the truth, as do human BS-ers, it’s not exactly the same problem. Particularly in these beginning days of the Age of AI, I think it’s worth being as nit-pickingly accurate as we can.

The paper begins by saying that the authors mean the term in the sense in which the philosopher Harry Frankfurt used it in his popular 2005 book, On Bullshit (Princeton University Press). They write: “Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.”

I’m not convinced that BS-ers have no concern for the truth. Think about your best example of one. Is it someone with no concern for the truth? For example, take the proverbial used car salesman who, to close a deal, tells you the car you’re interested in has been fully inspected and is in tip-top shape. It even has brand new brake linings. Except the salesguy has no idea whether any of that is true. He has no regard for the truth, and thus he’s BS-ing you, right? It sure seems that way to me. (PS: Pardon the gender stereotyping.)

But this proverbial salesman can do something that an LLM cannot: tell the difference between speaking the truth and BS-ing. Maybe some BS-ers honestly can’t tell that they’re just making stuff up, but that seems like the line in the BS pool between the shallow end and the totally delusional end.

Knowing truth from falsehoods

I agree with the Glasgow paper that LLMs swim in that deep end. I just don’t think that that’s what we generally mean by BS. The important point is that LLMs never know when they’re telling the truth and when they’re making stuff up. They’re just algorithmically composing sentences based on the statistical relationships that they’ve discovered among the words in whatever they were trained on. They therefore have no idea whether anything they ever say is true or not. That’s not a “disregard” for the truth, because to be able to disregard the truth, you also have to be able to regard it.

LLMs have no access to truth or to reality. An LLM’s map of the world, so to speak, is a map of how we’ve put words together. In fact, it doesn’t even have words, just tokens: arbitrary numbers assigned to each distinct word (or part of a word). The map is like a table of relationships among those tokens. There are no sentences, true or false, to be found in it. And it doesn’t even always give the most likely response to an input, so that it sounds more like a human, since we humans are full of small surprises.
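For readers who like to see the machinery, here is a minimal, purely illustrative sketch in Python. The vocabulary, the probabilities, and the function names are all invented for this example; they don’t come from any real model. The point is only that words become arbitrary numbers, the “map” is a table of how likely one token is to follow others, and the next token is sampled from that table rather than always being the single most likely choice.

```python
import random

# Hypothetical tiny vocabulary: each word is just an assigned number (a token ID).
vocab = {"the": 0, "car": 1, "has": 2, "new": 3, "brakes": 4, "rust": 5}
id_to_word = {i: w for w, i in vocab.items()}

# Made-up "learned" statistics: for one context, a probability for each
# possible next token. Nothing here encodes whether a sentence is true.
next_token_probs = {
    (vocab["the"], vocab["car"], vocab["has"]): {
        vocab["new"]: 0.6,
        vocab["rust"]: 0.4,
    },
}

def sample_next(context, temperature=1.0):
    # Sample the next token ID from the table; a higher temperature
    # flattens the odds, producing more of those "small surprises."
    probs = next_token_probs[tuple(context)]
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

context = [vocab["the"], vocab["car"], vocab["has"]]
print(id_to_word[sample_next(context)])  # usually "new", sometimes "rust"
```

Nothing in that table knows or cares whether “the car has new brakes” is true; it only records which tokens tend to follow which.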
