
The truth and chatbots


The use of computers to communicate and converse with humans is not new, but in the last year or so a couple of developments have brought the technology into the mainstream. For the first time, chatbots have entered almost every U.S. home and engaged directly with huge swathes of the population. Yet few people appear to be aware of the impact, or of the plethora of issues that have come in the wake of that change.

But before exploring the impact and the issues we might face, let’s define what a chatbot actually is: a digital language-processing service, powered by rules and artificial intelligence, that simulates human-like conversation.

That definition may not have helped some readers, so let’s look at a practical example of a chatbot you may already be familiar with: the box that pops up when you browse a website and asks whether you want any help. You type your questions into the box, and it endeavors to answer them. The likelihood is that you are not chatting with a human; rather, you are talking to a computer that knows all about the products and services you are browsing. Based on some preconfigured rules, it responds to your questions with what it hopes is the right answer.

Its answers will typically be wrapped in some human-friendly phrasing such as “How are you today?” or “It’s been great to chat with you. Have a nice day.” Some are more than just a set of rules and a library of factoids; they leverage artificial intelligence and machine learning to improve their responses, learning over time to answer questions more and more accurately. Such tools provide real-time help and typically a good quality of service. If that were all there was to this technology, all would be well.

In the most recent presidential election, chatbots were used extensively by both candidates’ campaigns. In fact, chatbots were regularly used to guide and directly influence the political debate. Most commonly they appeared on social media and news sites, where the comment sections rapidly became fiery battlegrounds. Be it Facebook or The New York Times, online debates were deluged with chatbot activity. People who engaged in online discussions about the strength and character of their chosen candidate were in fact often arguing back and forth with chatbots.

Chatbots and the election

There is no definitive data to gauge how much chatbot activity there was in the election, but some estimates put the share of computer-originated comments at 70 to 80 percent. In my own research for this article, though hardly scientific, I estimate that at least 50 percent of comments in those online discussions came from chatbots. When you know what you are looking for, they are not hard to spot. Clues include posters with fake Facebook pages and a limited style of argument that typically either refutes a claim with a list of counterclaims or attempts to ridicule the original commenter. To be clear, both party candidates (Democrat and Republican) used the technology, or at least entities supporting them did.

Digging a little deeper into the technology itself may help identify some of the areas for concern. The basic chatbot with limited rules and a product catalog is a neat function and hardly cause for concern. The chatbots that use artificial intelligence and machine learning raise bigger questions. In simple terms, chatbots use natural language processing (NLP) to understand what you are saying to them. Then, via preconfigured rules, they parse your message into something actionable. Question: “How long will this take to be delivered?” The answer, based on an analysis of what “this” refers to, today’s date, and regular shipping times: “It will take three days to deliver.”
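To make that rule-based flow concrete, here is a minimal sketch of such a chatbot in Python. Everything in it is an illustrative assumption rather than any vendor’s actual implementation: the tiny product catalog, the three-day shipping figure, and the keyword “rules,” which are a crude stand-in for real natural language processing.

```python
from datetime import date, timedelta

# Hypothetical, preconfigured data: the tiny product catalog an administrator might supply.
CATALOG = {"standard widget": {"shipping_days": 3}}

def handle_delivery_question(product: str) -> str:
    """Answer 'How long will this take to be delivered?' from catalog data and today's date."""
    days = CATALOG.get(product, {}).get("shipping_days")
    if days is None:
        return "Sorry, I don't have shipping details for that item."
    arrival = date.today() + timedelta(days=days)
    return f"It will take {days} days to deliver (expected by {arrival:%B %d})."

# Preconfigured rules: naive keyword matching stands in for genuine NLP.
RULES = {
    ("deliver", "delivery", "shipping"): handle_delivery_question,
}

def basic_chatbot(message: str, current_product: str = "standard widget") -> str:
    """Route a message to the first rule whose keywords appear in it."""
    text = message.lower()
    for keywords, handler in RULES.items():
        if any(word in text for word in keywords):
            return handler(current_product)
    return "I'm sorry, I can only answer questions about our products."

print(basic_chatbot("How long will this take to be delivered?"))  # answers from the catalog
print(basic_chatbot("Who is the president of France?"))           # falls outside its rules
```

Asked anything outside its rules, the bot can only apologize, which is precisely the limitation described next.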

The same chatbot would be unable to answer a question like “Who is the president of France?” But AI changes that paradigm entirely, because an AI-driven chatbot learns and adapts its answers over time based on what it hears. The basic chatbot can only provide answers based on the limited information its administrators gave it. AI, however, can develop its own answers (knowledge), and when combined with even more sophisticated approaches such as machine learning, it not only develops its own answers but also predicts future questions and outcomes. It doesn’t use just the information administrators provide; it finds and creates its own.
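To illustrate that shift in the simplest possible terms, the sketch below (again, a hypothetical toy rather than how any production system works) keeps a knowledge base that grows with every exchange the bot observes and answers new questions by their similarity to questions it has already heard. The bag-of-words similarity and the 0.3 confidence cutoff are arbitrary simplifications standing in for far more sophisticated machine learning.

```python
import math
import re
from collections import Counter

def _vector(text: str) -> Counter:
    """Crude bag-of-words representation of a sentence."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def _similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[word] * b[word] for word in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class LearningChatbot:
    """Answers from a knowledge base that grows with every conversation it observes."""

    def __init__(self):
        self.knowledge = []  # (question vector, answer) pairs learned from conversation

    def observe(self, question: str, answer: str) -> None:
        """Learn a question/answer pair the bot has heard, not one an administrator configured."""
        self.knowledge.append((_vector(question), answer))

    def reply(self, question: str) -> str:
        """Return the stored answer whose question most resembles the one just asked."""
        asked = _vector(question)
        scored = [(_similarity(vec, asked), answer) for vec, answer in self.knowledge]
        if not scored:
            return "I don't know yet."
        best_score, best_answer = max(scored)
        return best_answer if best_score >= 0.3 else "I don't know yet."  # arbitrary cutoff

bot = LearningChatbot()
print(bot.reply("Who is the president of France?"))            # "I don't know yet."
bot.observe("Who is the president of France?", "Emmanuel Macron")  # learns whatever it is told
print(bot.reply("Tell me, who is France's president?"))        # answers from what it has heard
```

Notice that the bot repeats whatever it has been told, accurate or otherwise; the same mechanism that lets it pick up a correct answer will just as readily pick up a false or hateful one, which brings us to Microsoft’s experience.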

It is surprising that there is not more discussion about this (discussion between humans, although smart chatbots would happily take part too, given the chance). The direction that conversation might take is interesting to consider. As you may remember, earlier this year Microsoft had a memorable chatbot experience with its TAY application. TAY was designed to converse with Twitter users and learn how to relate to millennials. In less than a day, TAY was spouting racial hatred, supporting Adolf Hitler, and denigrating women. Yet here’s the twist: TAY was a cleverly designed and powerful application that was using artificial intelligence to learn by itself and become smarter. Clearly, computers have no sense of morality or ethical compass to burden them in their interactions, and TAY’s owner, Microsoft, had no way to control it.

Engaging conversation

Intelligent chatbots are finding their way into all of our lives. For those readers who live in the United States, it was hard not to notice Amazon’s huge advertising push for its Alexa products. The ads were really quite good, often comically featuring Garth Brooks and his wife, Trisha Yearwood; as a result, no doubt, many thousands of units were sold in the run-up to Christmas. With Echo Dots (the hardware component) selling for under $50, many homes (mine included) now have the technology running as a standard element of the family’s digital footprint.
