In January, President Trump announced that “It is the policy of the United States to sustain and enhance America’s global AI dominance to promote human flourishing, economic competitiveness, and national security,” and instructed that an action plan to achieve this policy be prepared and submitted within 180 days.
Although 180 days have not yet passed and the action plan has not been submitted, the issue of AI regulation is already a contentious element of the “Big, Beautiful Bill” currently before the Senate. The House’s version of the reconciliation bill includes a provision that would prevent states from regulating AI for 10 years.
There are a few exceptions to the moratorium, all of which are aimed at making AI adoption easier and at absolving manufacturers of liability.
If the White House’s AI action plan does not include strategies for addressing, and swiftly dealing with, the numerous problems that come with unregulated AI adoption, the provision should be eliminated.
Among the reasons some oppose the provision is the potential for job losses due to AI-powered automation. Rep. Marjorie Taylor Greene (R-GA) said she didn’t realize the AI provision was included in the bill when she voted on it, and that she would not have voted yes had she known it was there. As it stands, there is a fair chance the Senate won’t pass the bill without changes, meaning it would need to return to the House for final passage. In an appearance on OAN on Tuesday, Greene vowed to vote against the bill unless the “poison pill” moratorium on regulating AI is removed, citing those potential job losses.
CONGRESSWOMAN MTG SAYS SHE WILL NOT VOTE YES ON BIG BEAUTIFUL BILL UNLESS THE ‘POISON PILL’ MORATORIUM ON REGULATING AI IS REMOVED: “HUMANITY IS IN DANGER.”
Georgia Congresswoman @RepMTG tells @MattGaetz that “humanity is in danger…AI is projected to replace so many people’s… pic.twitter.com/zhjLt3dy1C
— One America News (@OANN) June 24, 2025
Additionally, the moratorium would prevent states from passing laws to protect creatives whose work is taken without payment by AI companies seeking to train their models. What does that mean, exactly?
Let’s take the example of Meta’s AI model, Llama 3. The company was under pressure to quickly train the program to compete with more established models like ChatGPT, and, according to court filings in a related lawsuit, the senior manager for the project emphasized that they needed books, not web data, to properly train their product. Internal documents reported on by The Atlantic show that Meta employees believed the process of properly licensing books and research papers would be too slow and expensive, so they got permission from “MZ” (likely Mark Zuckerberg) to use a huge database of pirated books called Library Genesis, or LibGen. Free and fast, and built on stolen intellectual property.