doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
The AI can generate a picture of cows dancing with roombas on the moon. Do you think it was trained on images of cows dancing with roombas on the moon?
Individually, yes. Thousands of cows, thousands of "dancing"s, thousands of roombas, and thousands of "on the moon"s.
And they all typed Shakespeare!
But a living human artist also learned to draw dancing cows and roombas on the moon that way. It just didn't take thousands of examples.