Alert
April 21, 2026

Client Alert: The White House Makes a Cyber and AI Policy Push

The Trump administration (the Administration) is making a renewed push to shape the direction of national cybersecurity and artificial intelligence (AI) policy. In March 2026, the Administration released both its National Policy Framework for Artificial Intelligence (the AI Framework) and its Cyber Strategy for America (the Cyber Strategy). Together, these documents provide insight into the Administration’s approach to AI and cybersecurity. Although both are high-level, they point to several emerging themes with implications for businesses.

Key Elements

Both the AI Framework and the Cyber Strategy are structured around key objectives and policy pillars. The AI Framework focuses on: (1) protecting children and empowering parents; (2) safeguarding and strengthening American communities; (3) respecting intellectual property and supporting creators; (4) preventing censorship and protecting free speech; (5) enabling innovation and ensuring American AI dominance; (6) educating Americans and developing an AI-ready workforce; and (7) establishing a federal policy framework that preempts state AI laws. The Cyber Strategy emphasizes: (1) shaping adversary behavior; (2) promoting “common sense” regulation; (3) modernizing and securing federal government networks; (4) securing critical infrastructure; (5) sustaining superiority in critical and emerging technologies; and (6) building talent and capacity.

1. “Light-Touch” Regulation Does Not Equal a Low Compliance Burden

Both the Cyber Strategy and the AI Framework emphasize industry-led standards and private sector risk management over new, prescriptive federal regulation. Although the AI Framework encourages certain measures related to children’s online protection and age verification, it largely reflects the Administration’s view that AI should be governed through existing regulatory bodies and industry-led standards rather than a new federal regime. Similarly, the Cyber Strategy promotes “common sense regulation,” encouraging cyber defenses to be developed in a manner that will “reduce compliance burdens” and help the private sector maintain “the agility necessary to keep pace with rapidly evolving threats.”

These documents signal that the federal government is stepping back from detailed regulation and shifting responsibility to companies for identifying, managing, and governing AI and cybersecurity risks.

However, businesses should not confuse a light-touch regulatory posture with a low-risk threat environment. Investors, consumers, and regulators increasingly scrutinize AI and cyber practices, vendor relationships, and public disclosures, and expect defensible governance efforts. This dynamic is consistent with recent regulatory and industry developments. In December 2025, the National Institute of Standards and Technology released a preliminary draft of its forthcoming Cyber AI Profile, designed to help organizations leverage its Cybersecurity Framework to navigate emerging AI risks. Other regulators, including the New York State Department of Financial Services and the Securities and Exchange Commission, have signaled that businesses should incorporate AI risk into cybersecurity risk management plans and vice versa. Companies should remain attentive to these developments and work with legal counsel and security professionals to strengthen both the security of AI systems and the use of AI to enhance cybersecurity capabilities.

2. Offensive Cyber Capabilities Are in the Spotlight as AI Impacts the Threat Landscape

Offensive cyber capabilities are a long-standing topic of interest in cybersecurity circles but have burst into the public conversation with the recent announcements of highly capable next-generation AI models, such as Anthropic’s Claude Mythos Preview and OpenAI’s GPT 5.4-Cyber. The Cyber Strategy contemplates a greater role for the private sector “by creating incentives to identify and disrupt adversary networks and scale our national capabilities,” but provides little detail on how those incentives may work or any associated liability protections that may be introduced. Subsequent comments from National Cyber Director Sean Cairncross suggest that this effort may focus on increased information sharing with the federal government.

At the same time, the legal and institutional framework to support this collaboration remains uncertain. The Cybersecurity Information Sharing Act of 2015 (CISA 2015), which creates pathways for cybersecurity intelligence exchanges and liability protections for entities that share threat indicators, has repeatedly lapsed and been temporarily reauthorized,[1] creating uncertainty regarding its durability. Ongoing changes at the Cybersecurity and Infrastructure Security Agency further complicate the landscape for coordination of cyber resilience.

These developments are critically important as advanced uses of AI begin to dramatically expand the potential vulnerabilities and tools for cyber defenders and adversaries. The Cyber Strategy commits to securing the AI technology stack, including data center development, and to utilizing emerging technologies, such as agentic AI, for both defensive and offensive cyber operations. Likewise, the AI Framework encourages Congress to invest in strengthening national security agencies’ ability to understand, plan for, and mitigate risks associated with frontier AI models. Taken together, these developments point to a more complex threat environment, with AI-enhanced tools on both sides of cyber operations and with expectations around information sharing and the domain of offensive cyber continuing to evolve.

3. AI Preemption Is Still on the Radar, but Do Not Unwind Your AI Governance Plans

The AI Framework continues the Administration’s push for preemption of state-level AI regulation in favor of a “minimally burdensome national standard.” Throughout President Trump’s second term, the Administration has promoted rapid AI development and commercialization, discouraged prescriptive safety measures, and criticized state-level AI legislation:

  • In January 2025, President Trump revoked, via Executive Order, the Biden administration’s prior directive on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  • In June 2025, the White House issued an Executive Order emphasizing acceleration in AI and cybersecurity policy.
  • In July 2025, the Administration pushed for a formal AI regulatory moratorium through Congress’s One Big Beautiful Bill Act but was defeated. Around the same time, the Administration released an AI Action Plan prioritizing accelerated AI development as a national security imperative.
  • In December 2025, the President signed an order on “Ensuring a National Policy Framework for Artificial Intelligence,” directing federal agencies to establish a uniform federal policy framework that includes preemption.

These efforts have had limited impact at the state level, where dozens of AI-related laws have been enacted and hundreds more introduced. As state-level AI laws proliferate, companies have begun to navigate a complex AI governance patchwork. For further context surrounding the AI preemption discussion, see our coverage here. In the meantime, businesses should monitor these developments carefully while remaining prepared to comply with the existing patchwork of AI laws and governance standards.

Key Takeaways

The AI Framework and Cyber Strategy place primary responsibility for governance on the private sector, emphasizing industry-led standards and risk management over prescriptive federal regulation. Even under this light-touch federal approach, companies face uncertainty around information sharing, cyber liability protections, and the evolving role of agencies such as CISA. The recent policy push underscores the interconnection of AI and cybersecurity, requiring companies to integrate risk management and governance plans accordingly.


  [1] CISA 2015’s information sharing provisions are currently in effect through September 30, 2026.

This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.