There is another entire dimension to the predictability that enables the success of large language models: the commonality and predictability of what we humans are interested in. Presumably, though I don't know, these models do better in conversations about, say, the weather and movies than about topics we rarely discuss, such as where to sit at a luau where the main course is annotated quarks burbling introspective derbies. After all, these systems learn from the relative proximity of the words in the texts they're trained on. If the words in question rarely appear near one another in that training data, the model's predictions are going to be less accurate. Or so it seems to me, a liberal arts major who knows much less than he thinks he does.
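As a toy sketch of that intuition only, and not a claim about how LaMDA or GPT-3 are actually built, here is a tiny word-pair counter: a model that predicts the next word from how often pairs of words appeared together in its training text will be confident about common pairings and have essentially nothing to say about rare ones.

```python
# Toy illustration: predictions from word-pair counts.
# This is a deliberately simplified stand-in, not how large language models work.
from collections import Counter, defaultdict


def train_bigrams(corpus: list[str]) -> dict:
    """Count how often each word follows each other word in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts


def next_word_probability(counts: dict, prev: str, nxt: str) -> float:
    """Fraction of the time `nxt` followed `prev` in the training text."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0


# A made-up, hypothetical training corpus about topics people discuss often.
corpus = [
    "the weather is nice today",
    "the weather is terrible today",
    "the weather today was gloomy",
]
model = train_bigrams(corpus)

# Common pairing: "is" has followed "weather" in most of the training text.
print(next_word_probability(model, "weather", "is"))        # ~0.67
# Rare pairing: "annotated quarks" never appears, so the model has no basis
# for a prediction at all.
print(next_word_probability(model, "annotated", "quarks"))  # 0.0
```

The point of the sketch is only the asymmetry: frequently co-occurring words yield usable predictions, while words that almost never appear together leave the model guessing.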
So, there are indeed lessons to be learned from chatting with a system like LaMDA or GPT-3. But that machines have become sentient seems to me to be entirely the wrong conclusion to draw.