
The ChatGPT ways of knowledge

Endless information

Remember when we were thrown onto the internet, where every voice was pretty much equal, and websites had links popping off in every direction? That gave us the accurate sense that there is an endless set of related points of view. Every link was a temptation to see the world from someone else’s point of view.

Even better, once you clicked, you were reminded that knowledge is something produced by humans in conversation and argument with others. You saw with your own eyes that no topic exists on which everyone agrees. You were thrown into a world similar to that of formal scholarship, which has long existed as a network of conversation and perpetual disagreement, although, of course, these ad hoc networks lack a scholarly discipline’s expertise and exclusivity.

AI-based chatbots cover all that up. A chatbot gives you answers as if it were just reading them out of the Big Book of All Knowledge. That knowing is a contentious human practice becomes far less obvious.

The majority report

What’s worse, AI-based chatbots give the majoritarian view, because majorities dominate the statistics. Chat AI’s smooth answers may soothe us by suppressing all those pesky other cultures and subcultures, with their values and critiques of our beliefs: How dare they! Of course, AI-based chatbots could be engineered to present a fuller swath of what the world thinks, but that is in the hands of the Tech Giants, who decide what to train their LLMs on and which sources to weight heavily in the training set.

Weighting Wikipedia’s text heavily is a good idea, but even Wikipedia represents particular cultural values. For example, the French and English versions of its article on the history of aviation disagree about who invented the airplane. And Wikipedia is value-laden in its decisions about which topics and people are worth an article. Still, those values skew Wikipedia in a direction that makes it useful to its intended audience but contentious for other audiences. Flat-Earthers don’t like Wikipedia, and I’m OK with that.

Likewise, if you ask Bard why it says the 2020 presidential election in the U.S. wasn’t “stolen,” it will tell you that it considers Fox News to be an unreliable source of information. I agree, but that’s not the point. By choosing what they think are reliable sources, these AI-based chatbot engines are making important, value-laden assumptions without telling us. They present many answers as if they’re certain. This teaches us exactly the wrong lessons about knowledge.

AI-based chatbots can change, however. And there are many other apps that LLMs can enable, including writing tools that challenge what you’re writing and raise viewpoints you otherwise would never have encountered. Let’s hope that such tools emerge soon, so we don’t get accustomed to thinking of knowledge as a free vending machine that dispenses neatly packaged truth that seems never to have touched human hands, ready to be swallowed and digested like a Twinkie.
