House Republicans tucked the measure into a section ordering the Commerce Department to deploy funds to “modernize and secure Federal information technology systems through the deployment of commercial artificial intelligence.” The measure has remained largely unchanged since its consideration by the House Energy and Commerce Committee earlier this month, though lawmakers on the House Committee on Rules recently added an exemption so that the moratorium would not apply to the enforcement of any law that “carries a criminal penalty.”
Widely rejected by Democrats, the push is also facing opposition from some Senate Republicans, who would largely need to unite on the legislation to get it passed. At a Senate hearing Wednesday, Sen. Marsha Blackburn (R-TN) poured cold water on the idea, expressing concern that the bill would override legislation to protect artists from deepfakes in her state.
“Speaking to the states and their actions, I do want to mention that Tennessee passed the ELVIS Act, which is like our first generation of the NO FAKES Act,” said Blackburn. “And we certainly know that, in Tennessee, we need those protections, and until we pass something that is federally preemptive, we can’t call for a moratorium on those things.”
This still has to go to the Senate, so please, for the love of god, if you care about your rights and privacy, tell your senator to vote NO!
My city is currently in the midst of an AI facial recognition/predictive policing dystopian nightmare, thanks to a secret city partnership with Palantir.
Frankly, I would be happy to see my state ban facial recognition completely, but they definitely aren’t going to, so please take this as a warning of what is definitely coming for you next!
We should have federal regulations and state regulations! There is absolutely no need to ban regulation at the state level, other than the argument that it will halt progress.
In reality, they are invading your privacy and generating valuable data for these stupid AI data centers, and they don’t want you to be able to decide “this sucks and I want it to stop in my state!”
Not only would it ban laws for the next 10 years, it would remove existing laws. Some places already have a facial recognition ban, and this would repeal it!
It’s nuts that this seems to actually have some bipartisan support in the Senate, bc everyone is “so concerned” about America winning the AI race.
News flash: we probably won’t win it. It was a dumb fucking idea in the first place. Yeah, they put all of our eggs into the AI basket and it’s probably going to tank the economy even more, but why TF should we be giving them even more control of our lives in the hopes that just maybe they can make a lot of money by further invading our privacy and doing some really evil shit with our data that will make the world an even worse place?
Here is an article about the May 8, 2025 hearing.
Here is a quote from Peter Thiel protégé Michael Kratsios regarding AI regulation in 2019:
They have no values. I support federal regulation too, but in case you haven’t noticed, the people who want you to vote to remove state regulations in favor of a “light touch” federal regulation are also in charge of deciding what that “light touch” federal regulation will be and whether it gets enforced at all.
Most of what we attribute to Elon Musk/DOGE, including using protected government data banks full of our private data to train AI, can actually be traced back to Thiel and Kratsios, as early as 2018!
I cannot believe I actually agree with Marsha Blackburn on something, but she’s right! Why TF would you believe you don’t need state regulations bc there is a possibility that the same party that just tried to sneak in this nightmare might enact some federal protections?
Government deregulation has been in the works for a very long time. Do not let them keep taking our protections away!
Exactly my stance. Federal regulation makes sense when there’s common ground, but my first response to the quote about a “light touch” was: it can’t get any lighter than it is. If you want to push for federal over state enforcement, then present something that actually protects more than the profit interests of those economically invested in AI. Like the interests of the human species, preservation, not opening something we can’t close.
And before “LLMs aren’t AGI” comes into play: of course they aren’t. But if we’re treating LLM R&D at full throttle with safety concerns on the shelf, we’re doing the same with any related field. And even LLMs can have alignment issues and be misused or misguided while connected to crucial or even life-threatening systems. “We wouldn’t do that.” Of course we would. Money.