doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… being sued for possession of CSAM? It’s clearly in their training datasets.
Right. I get you, and I agree, and I don’t think Buffalox was contradicting you by essentially saying “even if they technically aren’t the same, your government may still count it as the same.”
Yeah, and I think Buffalox agrees as well. We were simply talking past each other. Even they used the term “depictions of CSAM,” which is the same as the “simulated CSAM” term I was using myself.