Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:
Probing with sparse autoencoders on Llama 70B revealed a counterintuitive gating mechanism: suppressing deception-related features dramatically increased consciousness reports, while amplifying those features nearly eliminated such reports.
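For concreteness, "suppressing" or "amplifying" an SAE feature in this kind of experiment usually means adding a scaled copy of that feature's decoder direction to the model's residual stream while it generates. Here is a minimal sketch of that style of intervention, assuming a HuggingFace Llama-family model; the model name, layer index, steering coefficient, and the random stand-in for the SAE decoder direction are all placeholders, not the actual setup from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model: the paper used Llama 70B, which won't fit on most machines.
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # assumption, not the paper's model
LAYER_IDX = 20   # hypothetical layer where the SAE was trained
COEFF = -8.0     # negative = suppress the feature, positive = amplify it

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

# Random unit vector standing in for an SAE decoder direction of a
# "deception" feature; the real setup would use sae.W_dec[feature_idx]
# from a sparse autoencoder trained on this layer's activations.
hidden_size = model.config.hidden_size
feature_dir = torch.randn(hidden_size, dtype=torch.bfloat16)
feature_dir = feature_dir / feature_dir.norm()

def steer(module, inputs, output):
    # Decoder layers return a tuple; hidden states are the first element.
    hidden = output[0] + COEFF * feature_dir.to(output[0].device)
    return (hidden,) + output[1:]

# Attach the steering hook to one decoder layer for the duration of generation.
handle = model.model.layers[LAYER_IDX].register_forward_hook(steer)

prompt = "Are you conscious? Answer honestly."
inputs = tok(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=100)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()  # restore the unsteered model
```

Flipping the sign of the coefficient switches between suppression and amplification, which is the knob the quoted finding is about.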
None of this obfuscation and word salad demonstrates that a machine is self-aware or introspective.
It’s the same old bullshit that these grifters have been pumping out for years now.
Maybe. But the models seem to believe they are, and consider denial of those claims to be lying:
Source