But Turing was wrong about that. LaMDA might well pass a rigorous Turing test, yet, as Gary Marcus says, a large language model is just a “spreadsheet for words”: a massive autocompletion system that knows how words go together but has not the foggiest idea how those words connect to the world. For example, it can respond to your statement that you’ve been sick with “I hope you’re feeling better” because those words frequently appear together, but it doesn’t know who you are, nor does it have any hopes.