When AI’s eyes are smiling
But then when you say that, yes, you’d like that image, it punks out, saying Gemini temporarily isn’t generating images of people. I assume this is because there have been, let’s say, “issues.” Nevertheless, even after it has explained that the meaning of a smile is culturally and historically based, it would have been willing to put a smile where it does not belong … if Google had not sidelined requests for human images.
But of course, GenAI doesn’t actually know that the image it offers to construct would be inaccurate and inconsistent, because it doesn’t know anything. It has no context, because these systems are on-demand answer-flingers that forget what they said faster than the person who took your order at the drive-through.
Smile for the camera
There’s actually an even more dispiriting explanation: These systems are just trying to make us happy, which means providing the response that would satisfy the most people without violating extrinsically imposed rules. In this case, the system “knows” what a selfie is statistically supposed to look like: people looking up at a camera and smiling. So that’s what it offers us.
In case we were in any doubt, I think this shows that GenAI’s “knowledge” isn’t anything like what we humans consider knowledge to be.
First, we expect knowledge to have some persistence in our minds. If we know that goldfish need water to live, we don’t then take them out of the bowl for some fresh air.
Second, we expect knowledge to be connected to other knowledge. “Americans smile upon meeting someone new” is a generalization that only makes sense in a context of knowledge about America, about the social function and meaning of smiles, about the fact that smiles aren’t the only gestures that express friendliness, and, ultimately, in the context of everything else we know. Knowledge is a framework that gives meaning to any particular piece of knowledge.
Third, we expect that framework of knowledge to silently condition our experience and understanding and to speak up when something doesn’t fit.
Fourth, while GenAI can generate statements of knowledge (“Smiles are culturally relative”), it doesn’t have a framework of knowledge that would keep it from smearing smiles across faces from cultures where they’d be inappropriate. As of now, GenAI doesn’t learn from the knowledge it creates any more than a paint-mixing machine learns more about colors every time it’s used. Yet.