Trump’s Budget Would Ban States From Regulating AI for 10 Years. That Could Be a Problem for Everyday Americans
President Trump’s latest budget reconciliation bill, titled the “One Big Beautiful Bill Act of 2025,” has sparked intense debate over a controversial amendment that could reshape the future of artificial intelligence regulation in the U.S. The amendment, tucked away on pages 278-279 of the bill, proposes a 10-year moratorium on state-level regulation of AI development and deployment. This sweeping provision has drawn criticism from both sides of the aisle, even from some of Trump’s most loyal supporters.
The amendment effectively strips states of their authority to enact or enforce laws and regulations governing AI technologies for a decade. While the move aligns with Trump’s broader deregulatory agenda, it has caught some lawmakers off guard. Republican Congresswoman Marjorie Taylor Greene of Georgia, a staunch Trump ally, expressed her outrage on the social media platform X. She said she was unaware of the provision when she voted for the bill and declared herself “adamantly OPPOSED” to it, calling it “a violation of state rights.” Greene added that she would have voted against the bill had she known about the amendment.
The moratorium on state-level AI regulation marks a major victory for Silicon Valley’s lobbying efforts in Washington. Samantha Gordon, chief program officer at TechEquity, a policy organization focused on tech-related issues, highlighted the unprecedented nature of this provision. Gordon noted that even Section 230’s liability protections for internet companies, a cornerstone of tech policy for decades, pale in comparison to the scope of this AI preemption. The move underscores the growing influence of the tech industry in shaping federal policy under the Trump administration.
This deregulatory approach to AI aligns with the administration’s broader goals. Since taking office, Trump’s administration has rescinded the previous executive order aimed at ensuring the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In its place, the administration has adopted a new policy titled “Removing Barriers to American Leadership in Artificial Intelligence,” which prioritizes innovation over regulatory oversight. Critics argue this shift could have far-reaching consequences for everyday Americans as AI technologies become increasingly integrated into daily life.
The federal preemption comes at a time when states have been actively developing their own AI regulatory frameworks. For example, Utah has enacted several AI regulations, including amendments to its AI Policy Act that require disclosure when consumers interact with AI in regulated professions. Recent legislation in the state has expanded these protections, focusing on “high-risk” interactions involving health, financial, or biometric data. Similarly, West Virginia has established a task force to identify economic opportunities related to AI while developing best practices for public sector AI use and protecting individual rights and consumer data.
Critics warn that without state-level regulatory oversight, AI technologies could be deployed with minimal guardrails, potentially affecting everyday Americans in significant ways. The absence of state regulations for a decade could allow AI to reshape various aspects of society, from employment and healthcare to education and transportation, with limited accountability mechanisms in place. This lack of oversight could exacerbate existing inequalities and create new challenges for consumers, workers, and communities across the country.
The proposal represents one of the most significant deregulatory actions in recent memory, highlighting the growing tension between promoting technological innovation and ensuring appropriate safeguards are in place. As AI becomes increasingly integrated into daily life, the debate over how to regulate it, whether at the federal or state level, will likely intensify, with the interests of everyday Americans hanging in the balance.
The Ongoing Debate Over AI Regulation and Its Far-Reaching Implications
The inclusion of the AI regulation moratorium in President Trump’s budget bill has ignited a fierce debate across the political spectrum, with many questioning the long-term consequences of such a sweeping policy. While supporters argue that the measure will unleash innovation and cement America’s leadership in AI, critics warn of a potential regulatory vacuum that could leave consumers and workers vulnerable.
One of the most contentious aspects of the amendment is its implications for state-level governance. States like Utah and West Virginia have already taken proactive steps to regulate AI, implementing measures to protect consumer rights and ensure ethical deployment. Utah’s AI Policy Act, for instance, mandates transparency when AI is used in regulated professions, while West Virginia has established a task force to explore AI’s economic potential and safeguard individual rights. These efforts reflect a growing recognition among state legislatures of the need for tailored, localized regulations to address the unique challenges posed by AI.
However, the federal preemption clause in Trump’s bill would effectively nullify these state-level initiatives, centralizing regulatory authority in Washington. Critics argue that this approach disregards the diversity of AI’s impact across different regions and industries, potentially leading to a one-size-fits-all policy that fails to account for local concerns. For example, AI’s role in agriculture, a critical sector in states like Iowa, may require different oversight mechanisms compared to its application in California’s tech-driven economy.
The administration’s deregulatory approach to AI has also drawn scrutiny from tech ethicists and consumer advocacy groups. They point to the rescission of the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” executive order as a troubling sign of the government’s priorities. The new policy, “Removing Barriers to American Leadership in Artificial Intelligence,” has been criticized for prioritizing innovation over accountability, potentially allowing AI systems to be deployed without adequate safeguards against privacy violations, algorithmic bias, and safety risks.
As the debate over AI regulation intensifies, lawmakers and experts are urging a more balanced approach. This could involve federal guidelines that set minimum standards for AI safety and ethics while allowing states to implement additional protections based on their specific needs. Such a framework would aim to foster innovation while mitigating risks to society.
The political fallout from the amendment continues to grow, with even some of Trump’s allies expressing concern. Congresswoman Marjorie Taylor Greene’s public opposition to the measure has highlighted fissures within the Republican Party over the issue. Her stance reflects a broader unease among conservatives who view the preemption of state authority as an overreach of federal power.
Meanwhile, Silicon Valley’s influence in shaping the amendment has not gone unnoticed. Critics argue that the tech industry’s lobbying efforts have resulted in a policy that benefits corporate interests at the expense of public welfare. Samantha Gordon of TechEquity has warned that the preemption clause could embolden tech companies to deploy AI systems without sufficient regard for their societal impact, exacerbating issues like algorithmic bias and job displacement.
As the “One Big Beautiful Bill Act of 2025” moves through Congress, the fate of AI regulation remains uncertain. The amendment’s inclusion has galvanized opposition from a diverse coalition of lawmakers, advocacy groups, and state officials, all of whom are calling for a more nuanced approach to governing this transformative technology. Whether the final version of the bill retains the AI preemption clause will have profound implications for the future of AI in America—and for the everyday Americans who will be most affected by its deployment.
Conclusion
President Trump’s proposed 10-year moratorium on state-level AI regulation has sparked a heated debate, highlighting the delicate balance between fostering innovation and ensuring public safeguards. While the measure aligns with the administration’s deregulatory agenda and Silicon Valley’s interests, critics warn of a regulatory void that could leave consumers and workers vulnerable. The preemption of state authority, particularly in states like Utah and West Virginia that have already enacted AI regulations, raises concerns about whether a single federal policy can account for AI’s varied impact across regions and industries. As AI becomes increasingly integrated into daily life, the case for a balanced approach that combines federal guidelines with state-level protections grows ever more pressing. The outcome of this policy fight will significantly shape the future of AI in America, and everyday Americans will be the ones most affected by its deployment.
Frequently Asked Questions
What is the proposed moratorium on AI regulation?
President Trump’s budget bill includes a 10-year moratorium that would prevent states from regulating AI development and deployment, centralizing regulatory authority at the federal level.
Why are states like Utah and West Virginia mentioned?
Utah and West Virginia have already enacted AI regulations, such as Utah’s AI Policy Act requiring transparency in AI use, and West Virginia’s task force for AI-related economic opportunities and consumer protections. These state-level initiatives could be nullified by the federal preemption clause.
What is the role of Silicon Valley in this amendment?
Silicon Valley’s lobbying efforts have significantly influenced the amendment, which aligns with the tech industry’s interests by reducing regulatory barriers and promoting innovation, though critics argue this may come at the expense of public welfare.
What are the criticisms of the deregulatory approach to AI?
Critics argue that the preemption clause could allow AI deployment without adequate safeguards, leading to issues like algorithmic bias, job displacement, and privacy concerns, potentially exacerbating inequalities and harming everyday Americans.
What’s next for the AI regulation debate?
The debate continues as the bill moves through Congress, with opposition from a diverse coalition urging a more nuanced approach that balances innovation with necessary safeguards. The final version of the bill will determine the future of AI regulation in America.