The truth and chatbots
Amazon Echo leverages the same technology: Alexa listens and, via natural language processing, interprets and responds to your questions. “Alexa, what will the weather be tomorrow?” “Alexa, what time is the next train?” and so on. Alexa is listening all the time to everything said within its hearing range. It is also constantly learning, and soon it will be doing much more than responding to basic commands and questions. Hence, by default Alexa has you under constant surveillance; it almost seems too obvious to say the words “Big Brother.” At the time of writing, Amazon was offering a prize of $2.5 million to the developer who could build an application able to “converse coherently and engagingly with humans on popular topics for 20 minutes.”
Clearly, many ethical and even legal issues arise from the use of such applications; they not only affect our privacy but may also lead us to question what is real and what is not. For knowledge management professionals, they raise a more fundamental question as well: what kind of information does a chatbot actually produce? We also need to ask what we should do with the information and knowledge it creates, and how it should be handled, particularly where it may affect our customers, employees and citizens. Automatically assembling data into a conversational response challenges our core notions of what knowledge and information are and of what they are worth.
To date, information and knowledge have been relatively tangible things to manage, but that is no longer the case. Today we may see chatbots used to sell political ideas and commercial products, but soon AI-driven conversation will be pervasive across both our personal and working lives. I don’t believe we are anywhere near ready to deal with the impact, and it is time for us to start that work.