doesn’t it follow that AI-generated CSAM can only exist if the AI was trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic, etc. sued for possession of CSAM? By that logic, it’s clearly in their training datasets.
My question is why this logic doesn’t apply to anybody who learns something and then uses that knowledge in their work without explicit permission. For example, authors generally learn to be good authors by reading the work of other good authors. Do they morally owe all past authors a share of whatever money they make?
End-stage capitalism of the brain: all your ideas are ours, and you owe us money for thinking them.
What a great idea there bud