Alert
March 24, 2026

Trump Administration Unveils Its AI Legislative Agenda: Calling for Preemption While Leaving Gaps

Last Friday, the Trump administration released its much-anticipated National Policy Framework for Artificial Intelligence (the "Framework"), unveiling its views on how AI should be regulated – or deregulated. The Framework, which was previewed by the White House's executive order on AI from December 2025, calls for a federally preemptive law that would focus on kids’ online safety, balancing intellectual property rights with AI development, removing barriers to innovation, and enhancing AI literacy in the American workforce. It calls on Congress not to create "any new federal rulemaking body to regulate AI."

Presented as a “comprehensive national legislative framework,” the Framework will still need to be adopted through detailed legislation. Its provisions on federal preemption of state AI regulation are arguably its most consequential and most contentious. In a press release, the Trump administration stated, “Importantly, this framework can succeed only if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.”

The administration calls on Congress to preempt state AI laws that “impose undue burdens,” replacing them with a single national standard. The Framework softens this by carving out certain areas where states would retain authority: traditional police powers to enforce laws of general applicability, zoning laws governing data center placement, and requirements governing states’ own use of AI in procurement, law enforcement, and public education.

But the preemption language goes quite far in asserting that states should not be permitted to regulate AI development at all and should not penalize AI developers for third parties’ unlawful conduct involving their models. It further provides that states should not unduly burden the use of AI for activity that would be lawful if performed without AI, purporting to prevent states from imposing any AI-specific requirements on deployed applications.

This language will inevitably raise interpretive questions. For example, it isn’t clear how “laws of general applicability,” which the Framework would grandfather in, differ from laws that “regulate AI development,” which the Framework would forbid. Does merely mentioning the term "AI" in a law render it an “AI-specific requirement”? These days, most technology license contracts and even corporate acquisition agreements define and reference AI. And the Framework itself calls for imposing what appear to be AI-specific requirements, for example, in the context of child safety.

The political viability of the Framework remains an open question. Federal preemption of AI regulation has been a flashpoint within the Republican caucus itself. Tech-aligned members and libertarians favor a deregulatory approach, while states' rights advocates and members representing districts with active state-level AI initiatives have resisted ceding ground to Washington. Last year, the Administration failed to include AI preemption in even its filibuster-proof budget reconciliation bill, underscoring these divisions. Indeed, the Framework could be seen as conceding that state laws can only be preempted by Congressional action. The Executive Order, in contrast, called for non-legislative measures against states, including by creating a litigation task force charged with challenging state AI laws and tasking executive agencies with restricting funding to states that stray from the Administration’s approach.

Meanwhile, states are not waiting around. With more than 100 bills pending on AI chatbots alone and dozens more addressing AI governance, safety, and industry-specific applications, the state-level regulatory apparatus is building momentum that will be increasingly difficult to reverse. Not just “blue states” like California, Colorado, and Illinois, but also “red states” like Montana, Texas, and Utah, have already enacted significant AI-related legislation, and more states are poised to follow. The longer Congress takes to act on the Framework, the more entrenched this patchwork becomes.

Beyond what the Framework does cover, certain central AI policy questions go unmentioned entirely. For example, the Framework does not address national security, cybersecurity, AI governance, or high-risk AI. Regardless of where they stand on regulation, policymakers around the globe agree that these issues present critical AI risk vectors. It is surprising, then, that the Framework omits any recognition of these key fields.

The Framework is divided into seven parts, the last of which concerns federal preemption of AI regulation. The other six parts are focused on:

  • Protecting Children and Empowering Parents. The Framework calls on Congress to strengthen protection of kids online, including with measures to prevent tech addiction, empower parents to oversee kids’ use of technology, and impose age assurance requirements. Notably, these ideas have been featured in numerous legislative initiatives over the past few years and do not seem specific to AI. Indeed, judged by the Administration’s own rationale for AI-specific legislation, it’s unclear how AI differs in this respect from other disruptive technologies, such as mobile devices, gaming, or social media.
  • Safeguarding and Strengthening American Communities. Under this heading, the Administration supports streamlining federal rules permitting data center construction, protecting residential ratepayers from increased electricity costs, combating AI-enabled fraud targeting seniors, and providing AI resources to small businesses. The Framework also calls for national security agencies to develop sufficient technical capacity to understand frontier AI models. The energy provisions are notable given the growing tension between AI’s voracious demand for electricity and the Administration’s broader energy agenda. 
  • Respecting Intellectual Property Rights and Supporting Creators. Perhaps the most closely watched section of the Framework, the IP provisions attempt to thread a needle between supporting AI innovation and protecting property rights. The Administration takes the position that training AI models on copyrighted material does not violate copyright laws, but at the same time states that “American creators, publishers, and innovators should be protected from AI-generated outputs.” The Framework supports letting the courts resolve the issue, urging Congress to not intervene in the judiciary’s consideration of whether training on copyrighted material constitutes fair use. At the same time, the Framework suggests Congress consider enabling licensing frameworks for rights holders to negotiate compensation from AI providers while not mandating when such licensing is required. The Framework also calls for a federal right of publicity to protect individuals from unauthorized AI-generated digital replicas of their voice or likeness, with carve-outs for parody, satire, and news reporting. This balancing act will likely satisfy neither the creative community nor AI developers, both of whom have staked out hardline positions. 
  • Preventing Censorship and Protecting Free Speech. This section reflects the Administration’s broader agenda against “woke AI.” The Framework calls on Congress to prevent the federal government from coercing AI providers to ban, compel, or alter content based on partisan or ideological agendas. It invites legislation to provide a means for Americans to seek redress from government censorship on AI platforms. This section too is light on AI-specific substance, reading as an extension of the Administration’s longtime grievances with content moderation practices. 
  • Enabling Innovation and Ensuring American AI Dominance. This section captures the Framework’s pro-innovation core. It posits the US “must lead the world in AI by removing barriers to innovation, accelerating deployment of AI applications across sectors, and ensuring broad access to the testing environments.” It calls for regulatory sandboxes, open access to federal datasets for AI training, and a firm directive that Congress should not create any new federal rulemaking body to regulate AI. Instead, the Administration favors sector-specific oversight through existing regulators and industry-led standards. 
  • Educating Americans and Developing an AI-Ready Workforce. The Framework calls for integrating AI training into existing education and workforce programs and studying AI-driven workforce displacement at the task level. These provisions are largely uncontroversial, though critics will note that the recommendations are vague and lack the funding commitments that would be necessary to effect meaningful change.

On the whole, the Framework is best understood as a statement of principles rather than a detailed legislative blueprint. It signals the Administration’s clear preference for light-touch, pro-innovation regulation at the federal level, with the AI industry largely setting the pace through self-governance and industry standards. Whether this approach will prove adequate to address the real-world harms that AI is already causing, from disinformation and algorithmic discrimination to energy demands and labor market displacement, is a question the Framework largely sidesteps. The hard work of translating these principles into legislation that can command a majority in Congress has only just begun.

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.