What are your chatbot’s pronouns?
We need a new pronoun.
I can almost see you rolling your eyes.
I understand that some people resent being asked—sometimes, demanded—to state their pronouns when they believe that choice is theirs to make and it's their prerogative to use them or not. I personally have no mixed feelings about this, except for the slight awkwardness of using "they" to refer to a singular person. But the more rational part of me thinks that this awkwardness serves the positive purpose of making the issue more conspicuous, which, at this point, feels necessary. So, yes, I include my pronouns—he/him—in my email signature as a sign of alignment, although I am aware that it can come across as mere virtue signaling, since I am not at any risk with regard to my gender or orientation. (With regard to my religion, it's different.)
My point, as it relates to tech, is that whether you agree or disagree with the specific use of "they" and "theirs," pronouns matter because how we are classified—what we are—matters to us.
New pronouns needed when chatting with AI
That’s why I think we need a new word to use as a pronoun when chatting with AI. And, most important, the AI needs a new pronoun when referring to itself. This goes beyond the gender-affirming “he/him” that I use to the pronouns of “I” and “you” that we currently use in conversations among humans and are now starting to use with conversational AI. It will no doubt be distracting at first, but that is also a form of consciousness-raising.
For example, this morning I chatted with Google Bard about how large language models (LLMs) are learning new languages in what’s called “zero-shot” learning. The term means you can give an LLM a bunch of sample texts in a language it has never encountered before, and it will learn that language without being given any translations. It doesn’t seem possible, but Bard did its best to explain it to me.
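To make that concrete, here is a minimal sketch of the kind of prompt involved. The `ask_llm` function is a hypothetical stand-in for whatever chat API you happen to use (Bard, in my case); only the shape of the prompt matters.

```python
# Sketch of a "zero-shot" language prompt: monolingual samples only,
# no translation pairs. ask_llm() is a hypothetical stand-in for a
# real chat-model API; swap in your provider's client call.

SAMPLES = "<paste sample sentences in the unfamiliar language here>"
TARGET = "<a sentence in the unfamiliar language to translate>"

prompt = (
    "Below are sample sentences in a language you may not have seen "
    "before. No translations are provided.\n\n"
    f"{SAMPLES}\n\n"
    "Based only on these samples, translate this sentence into "
    f"English: {TARGET}"
)

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat-model API."""
    raise NotImplementedError("Replace with your provider's API call.")

# reply = ask_llm(prompt)
# print(reply)
```

The point of the sketch is what’s missing: there are no translation pairs anywhere in the prompt, which is what makes the “zero-shot” claim so surprising.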
At the end, it began to refer me to nonexistent articles for further research. After looking them up, I said to Bard: “You’re hallucinating. Those articles don’t exist.” It replied, “I’m sorry. I apologize for the error. The links I provided were actually two articles about hallucinations.” Ironically, the articles were not about hallucinations, so it was hallucinating its own hallucinations. But that’s not the point.
The point is that not only was I referring to a machine with a pronoun that implied it was sentient, and not only was it referring to itself as a creature capable of feeling remorse, but neither of us had any choice about it. How else could we have carried on the conversation, assuming we’re willing to pretend it actually was a conversation?
It’s a little easier on my side because I’m the Human in Charge, and thus can ask it a question or command it to do my bidding without using a pronoun: “How does zero-shot learning work?” “Give me a summary of this post.” But it gets harder if I’m responding to a response: “You said zero-shot learning doesn’t work, but it does.” What can I say except “you”?
And how can the computer refer to itself except by saying something like: “I made a mistake about zero-shot learning. I now have better information for you.” What is the alternative to “I”?