We know that a critical strength of generative AI is also one of its most significant weaknesses: it is designed to produce excellent, plausible-sounding answers based on statistical patterns rather than verified facts, and as a result it gets things wrong. In theory, you can ask a system such as ChatGPT to cite the sources behind its answers. In practice, there are two problems: few users would bother or know how to do so, and ChatGPT has already been shown to invent sources. We can only hope that future narrow, business-focused LLMs will be designed with accuracy as a priority, and the only way to achieve that is human supervision of the data collection. OpenAI's models, by contrast, were trained largely without human supervision on open web sources such as Wikipedia and even Reddit.