On December 11, 2025, President Donald Trump signed an Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence” (the “EO”) intended to curtail state regulation of artificial intelligence (“AI”). Citing the “race” for “supremacy” of the AI “technological revolution,” the EO states, “To win, United States AI companies must be free to innovate without cumbersome regulation.” And yet, even as the EO seeks to preempt state law-making, a wave of AI regulation will take effect beginning next month. Having failed to quell state AI regulation through a Congressional moratorium, it seems questionable whether the Administration will be able to do so by executive action.
The EO relies on three main policy goals. First, it seeks to prevent a fragmented and complex regulatory environment, asserting that “State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes.” Second, it invokes the Commerce Clause, stating “State laws sometimes impermissibly regulate beyond State borders, impinging on interstate commerce.” Third, in line with the President’s focus on eliminating “woke AI,” it alleges that state laws that regulate “algorithmic discrimination,” such as the Colorado AI Act, “embed ideological bias within models” and “force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”
To counter these perceived threats to innovation, the EO declares, “It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”
However, the EO provides little guidance on which state laws are “onerous” and conflict with the Administration’s policy principles. Given that the EO is likely to face challenges of interpretation and implementation, and possibly litigation, in the coming weeks and months, businesses should continue to prepare for upcoming state AI regulation and monitor developments in this space.
Key Elements of the EO
In July, a proposed federal “AI moratorium” originally embedded in the “One Big Beautiful Bill Act” failed to pass through Congress. Without such explicit legislative authority to directly preempt state law, the EO advances several indirect methods for challenging existing state AI laws and impeding future ones:
- AI Litigation Task Force. The EO directs the Attorney General to create, within 30 days, an AI Litigation Task Force dedicated to challenging state AI laws in court where the Task Force deems such laws to be unconstitutional, preempted by other federal regulations, or otherwise unlawful.
- Evaluation of State Laws. The EO directs the Commerce Secretary within 90 days to identify and publish a list of “onerous” State AI laws that conflict with federal AI policy, which “at minimum” include “laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”
- Funding Restrictions. The EO directs the Commerce Secretary within 90 days to issue a policy notice that prohibits states with AI laws identified as “onerous” from receiving their remaining allotted funding under the Broadband Equity, Access, and Deployment (“BEAD”) Program. The EO also directs other federal agencies to condition access to discretionary grants on not enacting, or committing not to enforce, “onerous” AI laws.
- Future Policymaking and Legislative Recommendations. The EO directs the Federal Trade Commission (“FTC”) to issue a policy statement on the application of the FTC Act to AI models, which “must explain the circumstances under which state laws that require alterations to the truthful outputs of AI models are preempted by the FTC’s prohibition on engaging in deceptive acts or practices affecting commerce.” The EO also directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to “jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI that preempts State AI laws” and directs the Federal Communications Commission (“FCC”) “to adopt a Federal reporting and disclosure standard for AI models that preempts conflicting State laws.”
The EO notes, “My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order.” While offering few details of any such future framework, the EO does make clear that certain areas of state law would not be preempted, including those relating to (i) child safety protections; (ii) AI compute and data center infrastructure; (iii) State government procurement and use of AI; and (iv) “other topics as shall be determined.”
Uncertainty Persists As State Legislative Priorities Clash with the Limits of Federal Authority
Throughout the first year of the Trump Administration, the President’s ambition, outlined most fully in a July 2025 AI Action Plan, to make American AI go faster, higher, stronger has grated against state efforts to regulate AI. As the promises and risks of AI have dominated headlines, states have proposed hundreds of measures – and passed dozens of laws – focused on various facets of AI, from automated decisionmaking and algorithmic pricing to AI companions, deepfakes, use of AI in regulated sectors, and training data transparency. These measures have created a complex and fractured legal landscape that is set to intensify in 2026 as key frameworks such as the Colorado AI Act, the California Consumer Privacy Act (CCPA) automated decisionmaking (ADMT) regulations, the Texas Responsible AI Governance Act (TRAIGA), and the California Training Data Transparency Act and Transparency in Frontier Artificial Intelligence Act, among others, come into force.
This complex landscape has drawn repeated criticism from Administration officials. For example, David Sacks, the President’s AI advisor, recently tweeted: “When an AI model is developed in state A, trained in state B, inferenced in state C, and delivered over the internet through national telecommunications infrastructure, that is clearly interstate commerce, and exactly the type of economic activity that the Framers of the Constitution intended to reserve for the federal government to regulate.”
While the Administration has touted the benefits of a federal standard, Congress has not acted to regulate – or even de-regulate – AI, effectively relinquishing the helm to the states. Several factors indicate that using Executive Order authority to preempt state AI regulation, rather than pursuing Congressional legislation, will face substantial legal and political hurdles:
- Legal Challenges. The EO raises important legal questions about the extent of federal power to preempt state regulation in the absence of a comprehensive federal framework. Even through legislative action, it is not clear that the federal government can preempt state laws without “occupying the field” it seeks to regulate, that is, without passing federal AI legislation. States generally are free to regulate commerce – and even interstate commerce – so long as regulations do not discriminate against out-of-state actors. There is no indication that the state AI laws the Administration seeks to block treat out-of-state actors differently than in-state actors. If anything, California – home to most of the leading tech and AI companies in the world – could be seen as tightening the bolts on its own domestic businesses, which currently lead the AI industry in the U.S. Consequently, expect to see states mounting challenges to federal efforts to block state laws.
- Political Challenges. So far, efforts to pass a federal AI moratorium, much less an actual AI law, have met significant political resistance. Congress’s attempt to ban state AI laws was struck from the One Big Beautiful Bill Act by a vote of 99 to 1 in the Senate. The EO’s use of funding mandates (such as the threat to withdraw BEAD funding) could similarly lead to political opposition, even from the President’s own party. In particular, some of the states at the vanguard of AI regulation are solidly red states, such as Texas, Utah, Montana and others.
- Practical Challenges. Even under the most expedient timelines set out in the EO, state AI laws and regulations will come into force and will shape legal and business norms within the AI industry. The pace of efforts to dismantle state AI laws may not keep up with the decisions business leaders need to make based on the current legal state of play. And even if state AI laws falter, courts will continue to apply existing statutes and legal principles to novel AI harms.
In the coming months, the Administration will need to articulate a vision for effective yet “minimally burdensome” AI regulation. In the meantime, in the absence of a clear national standard, states – in Justice Brandeis’s phrase, the “laboratories of democracy” – are likely to continue to experiment.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts
- Bethany P. Withers, Partner and Chair, AI & Machine Learning
- Omer Tene, Partner
- Raphael Kohler, Associate