Alert
July 31, 2025

America’s AI Action Plan Emphasizes Governance and Risk Management to Promote the Secure and Safe Adoption of AI Tools

On July 23, 2025, the Trump administration released its AI Action Plan (the plan), a long-anticipated road map for the federal government’s approach to artificial intelligence (AI) governance that presents a number of implications for businesses globally. While Goodwin has covered the plan and its three pillars in depth in “The Trump Administration AI Action Plan: Faster, Higher, Stronger,” this alert discusses how the plan aims to promote rapid adoption of AI tools supported by strong governance and risk management practices, especially those related to safety and security.

The AI Action Plan Both Calls for and Constitutes AI Governance and Risk Management

The plan is based on three core pillars — innovation, infrastructure, and international leadership — and contains many of the key components that would constitute a governance and risk management framework for AI adoption. It recommends policy actions to manage the government’s AI approach (i.e., governance[1]), while also identifying key risks and recommending policy actions to mitigate them (i.e., risk management[2]). In this alert, we highlight the components of the AI Action Plan most likely to be relevant for companies, consolidating them in the manner we would expect a governance and risk management framework to use. Although the plan touts innovation and speed, the path forward it charts rests on core risk management principles.

The AI Action Plan, through its pillars, forms a layered governance structure, blending regulatory updates, procurement policy, and diplomacy to accelerate AI development while addressing systemic risks across technical, institutional, and global domains.

Enable AI Adoption — How? Governance and Risk Management Over Regulatory Bottlenecks

The plan emphasizes a governance and risk management approach to AI safety, trust, and security over a strict regulatory mechanism to address AI risks and harms. As the plan’s first pillar states:

Today, the bottleneck to harnessing AI’s full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated Federal effort would be beneficial in establishing a dynamic, “try-first” culture for AI across American industry. (emphasis added)

Accelerating AI Adoption

The plan builds upon the earlier governance frameworks for federal agency AI use that the White House Office of Management and Budget (OMB) outlined in previous memoranda, including OMB M-25-21 on managing risks in agency AI deployments and OMB M-25-22 on procurement and oversight of federal AI systems, which we previously covered in our April 18, 2025, audio insight “Trump Administration Sets New Direction for AI Policy.” Businesses — especially those working with or selling into the federal ecosystem — should prepare for heightened expectations around transparency, vendor accountability, and AI risk controls.

1. Oversight — Chief AI Officer Council

A new Chief AI Officer Council will coordinate AI efforts across federal agencies, creating and implementing new governance frameworks. This signals a shift toward centralized oversight within the federal government, with downstream implications for federal contractors and vendors expected to align with emerging safety, fairness, and transparency benchmarks.

2. Identification of Key Applications for AI and AI Strategy

The plan identifies priority AI applications in areas like scientific research[3] and national security, which will receive early regulatory attention and infrastructure investment. Clients in adjacent sectors — especially defense, biotech, and data services — should expect increased scrutiny alongside expanded partnership opportunities.

3. Identification of Infrastructure Need and AI Tools

The plan emphasizes the need for:

  • Compute resources, including the development of a healthy financial market for compute, with potential impact on financial services businesses
  • High-quality datasets, as poor data governance undermines performance, product, and compliance
  • Open-source and open-weight models, which promote innovation but raise traceability and misuse concerns

4. Assess Risks

Federal efforts will focus on risks such as:

  • Interpretability: Opaque AI systems causing mistrust, biased outcomes, and unintended harmful decisions due to lack of clear explanations
  • Cybersecurity: Adversarial attacks, data breaches, deepfakes, and compromise of critical AI infrastructure disrupting essential services
  • National Security: Misuse of AI by hostile actors for cyber warfare, espionage, and destabilizing technologies
  • Job Displacement: Workforce disruption, economic inequality, and challenges from AI-driven automation of human tasks

To mitigate these risks, the plan proposes implementation of an AI “evaluations ecosystem” guided by the National Institute of Standards and Technology and its Center for AI Standards and Innovation, stronger intellectual property protections, and new initiatives, including enforcement of the TAKE IT DOWN Act (passed on May 19, 2025) to combat synthetic media such as deepfakes — which may carry important implications for businesses across industries such as tech, media, and politics. It also reinforces AI workforce upskilling, with planned programs to “expand AI literacy and skills development, continuously evaluate AI’s impact on the labor market, and pilot new innovations to rapidly retrain and help workers thrive in an AI-driven economy.” Notably, the plan also calls for the development of high-security data centers that meet federal cybersecurity standards, support sensitive workloads, and are resistant to “the most determined and capable nation-state actors.”

5. Watch the Supply Chain (Vendor Risk Management)

Procurement reform is central to the plan. Agencies must now evaluate AI vendors on safety, neutrality, and transparency — not just cost and performance. This elevates vendor risk management and will likely drive updates to contracting standards and compliance expectations across industries. Beyond direct acquisition, the plan emphasizes securing the entire AI value chain, from semiconductor fabrication and compute resources to data center infrastructure and software supply chains. Agencies are instructed to streamline permitting for critical facilities, increase domestic chip production, and strengthen supply chain visibility for essential hardware and services. These infrastructure efforts aim to reduce dependence on foreign suppliers and ensure reliable access to trusted computing and tooling.

Safety and Security

The plan places significant emphasis on securing AI systems from attack and misuse. It also highlights the need to protect critical infrastructure, promote “secure by design” AI technologies, and develop tailored AI incident response frameworks. In addition, the plan includes biosecurity screening for models that could contribute to the development of biological threats, extending cybersecurity principles to AI risks. Together, alongside the administration’s executive order on cybersecurity released in June (which Goodwin covered in our alert “The Devil’s in the Details: Executive Order on Cybersecurity Reveals Administration’s Focus on AI-Cyber Convergence, Secure Software Development, and Foreign Threats”), the plan demonstrates an awareness of the critical risks and harms that can result when AI is adopted without the management of cybersecurity and safety considerations.

Conclusion

Overall, the AI Action Plan serves as an example of AI governance and risk management in the context of governing a nation. While the application of governance and risk management in the private sector should be tailored to an organization’s needs, the AI Action Plan (as well as governance measures coming out of other jurisdictions such as the EU) illustrates that 1) having such a framework in place supports accelerated innovation and 2) attending to security goes hand in hand with prudent AI adoption.

  [1] Governance: The systems, processes, and structures by which organizations or groups make and implement decisions, exercise authority, and are held accountable for achieving their objectives.
  [2] Risk management: The systematic process of identifying, assessing, and controlling risks, which are potential events or situations that could negatively or positively impact the achievement of objectives.
  [3] Goodwin’s inaugural AI & Drug Discovery Symposium took place on June 16, 2025. Check out our panel on AI governance and security, “AI Strategy for Success: AI Governance and AI Security.”


This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.